WorldWideScience

Sample records for network reconstruction methods

  1. Quartet-based methods to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng

    2014-02-20

    Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performances and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to recently emerging H7N9 low pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and reassortment of influenza viruses.

  2. Efficient parsimony-based methods for phylogenetic network reconstruction.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-15

    Phylogenies--the evolutionary histories of groups of organisms--play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets (Nakhleh et al., 2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed-parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.
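
    As background for the parsimony criterion this record builds on, the sketch below scores a single character on a fixed tree with the classic Fitch algorithm, the tree-based building block that the network criterion generalizes. It is an illustrative toy, not the authors' fixed-parameter algorithm; the tree encoding and function name are our own.

```python
# Minimal Fitch small-parsimony sketch: scores one character column on a
# fixed rooted binary tree (hedged toy, not the record's network algorithm).
def fitch_score(tree, states):
    """tree: nested (left, right) tuples with leaf names at the tips;
    states: dict mapping each leaf name to its character state."""
    changes = 0
    def post(node):
        nonlocal changes
        if isinstance(node, str):           # leaf: return its state set
            return {states[node]}
        left, right = (post(child) for child in node)
        common = left & right
        if common:
            return common
        changes += 1                         # a state change is required here
        return left | right
    post(tree)
    return changes

# Example: ((A,B),(C,D)) with one site; two leaves carry 'G', two carry 'T'.
tree = (("A", "B"), ("C", "D"))
print(fitch_score(tree, {"A": "G", "B": "G", "C": "T", "D": "T"}))  # -> 1
```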

  3. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Science.gov (United States)

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  4. A method of reconstructing the spatial measurement network by mobile measurement transmitter for shipbuilding

    International Nuclear Information System (INIS)

    Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin

    2017-01-01

    The workshop Measurement Position System (wMPS) is a distributed measurement system suitable for large-scale metrology. However, some measurement problems are unavoidable in the shipbuilding industry, such as obstruction by obstacles and a limited measurement range. To deal with these factors, this paper presents a method of reconstructing the spatial measurement network with a mobile transmitter. A high-precision coordinate control network with more than six target points is established. The mobile measuring transmitter can be added into the measurement network using this coordinate control network with the spatial resection method. This method reconstructs the measurement network and broadens the measurement scope efficiently. To verify this method, two comparison experiments are designed with a laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length is better than 0.4 mm and the accuracy of coordinate measurement is better than 0.6 mm. (paper)
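
    The core computation when adding a mobile station to a control network is recovering its pose from observed control points. The sketch below shows a generic least-squares rigid-transform fit (Kabsch/SVD) to matched points on synthetic data; it illustrates the idea only and is not the paper's wMPS spatial-resection algorithm, which works from transmitter angle observations.

```python
import numpy as np

# Hedged sketch: estimate the pose (R, t) of a moved station from n >= 3
# matched control points via the Kabsch/SVD method.
def rigid_transform(P, Q):
    """P, Q: (n, 3) point arrays in two frames; returns R, t with Q ~ P R^T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(0)
P = rng.uniform(0, 10, (6, 3))                    # six control points, as in the record
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # ensure a proper rotation
Q = P @ R_true.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_transform(P, Q)
print(np.allclose(Q, P @ R.T + t))                # True
```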

  5. Quartet-net: a quartet-based method to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng

    2013-05-01

    Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events, and that it has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used in constructing phylogenetic networks from simple evolutionary events involving a few reticulate events to complex evolutionary histories involving a large number of reticulate events. A software package called "Quartet-Net" has been implemented and is available at http://sysbio.cvm.msstate.edu/QuartetNet/.
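
    To make the point that quartets carry more information than pairwise distances concrete, the sketch below infers the topology of each four-taxon subset from a distance matrix via the four-point condition. This shows only the quartet-extraction step under our own toy encoding; it is not the Quartet-Net construction algorithm itself.

```python
from itertools import combinations

# Hedged sketch: infer the unrooted split of every 4-taxon subset from
# pairwise distances using the four-point condition.
def quartet_topologies(taxa, d):
    """d[a][b]: pairwise distance. Yields splits like (('A','B'),('C','D'))."""
    for a, b, c, e in combinations(taxa, 4):
        sums = {
            ((a, b), (c, e)): d[a][b] + d[c][e],
            ((a, c), (b, e)): d[a][c] + d[b][e],
            ((a, e), (b, c)): d[a][e] + d[b][c],
        }
        # The split with the smallest within-pair distance sum is supported.
        yield min(sums, key=sums.get)

taxa = ["A", "B", "C", "D"]
d = {"A": {"B": 2, "C": 6, "D": 6},
     "B": {"A": 2, "C": 6, "D": 6},
     "C": {"A": 6, "B": 6, "D": 2},
     "D": {"A": 6, "B": 6, "C": 2}}
print(list(quartet_topologies(taxa, d)))  # [(('A', 'B'), ('C', 'D'))]
```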

  6. Neural network CT image reconstruction method for small amount of projection data

    CERN Document Server

    Ma, X F; Takeda, T

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction by using a multi-layer neural network. Though the conventionally used objective function for such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network which can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available in comparison with the well-developed medical applications.
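
    The distinctive idea in this record is the objective: squared residuals of the discretized line-integral (projection) equation rather than output-versus-target error. The toy below applies that residual objective with plain gradient descent on a pixel vector; the paper optimizes a multi-layer network instead, and the projection matrix here is synthetic.

```python
import numpy as np

# Hedged toy: fit the image by minimizing squared residuals of the
# discretized line-integral equation A @ x = p (few projections available).
rng = np.random.default_rng(1)
n_pix, n_rays = 16, 8                     # deliberately few projection rays
A = rng.random((n_rays, n_pix))           # stand-in ray/pixel intersection lengths
x_true = rng.random(n_pix)
p = A @ x_true                            # "measured" projections

x = np.zeros(n_pix)
lr = 0.01
for _ in range(5000):
    r = A @ x - p                         # residual of the integral equation
    x -= lr * (A.T @ r)                   # gradient step on 0.5*||A x - p||^2
print(float(np.linalg.norm(A @ x - p)))   # residual driven toward zero
```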

  7. Neural network CT image reconstruction method for small amount of projection data

    International Nuclear Information System (INIS)

    Ma, X.F.; Fukuhara, M.; Takeda, T.

    2000-01-01

    This paper presents a new method for two-dimensional image reconstruction by using a multi-layer neural network. Though the conventionally used objective function for such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network which can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available in comparison with the well-developed medical applications.

  8. Stability indicators in network reconstruction.

    Directory of Open Access Journals (Sweden)

    Michele Filosi

    Full Text Available The number of available algorithms to infer a biological network from a dataset of high-throughput measurements is overwhelming and keeps growing. However, evaluating their performance is unfeasible unless a 'gold standard' is available to measure how close the reconstructed network is to the ground truth. One measure of this is the stability of these predictions to data resampling approaches. We introduce NetSI, a family of Network Stability Indicators, to assess quantitatively the stability of a reconstructed network in terms of inference variability due to data subsampling. In order to evaluate network stability, the main NetSI methods use a global/local network metric in combination with a resampling (bootstrap or cross-validation) procedure. In addition, we provide two normalized variability scores over data resampling to measure edge weight stability and node degree stability, and then introduce a stability ranking for edges and nodes. A complete implementation of the NetSI indicators, including the Hamming-Ipsen-Mikhailov (HIM) network distance adopted in this paper, is available with the R package nettools. We demonstrate the use of the NetSI family by measuring network stability on four datasets against alternative network reconstruction methods. First, the effect of sample size on the stability of inferred networks is studied in a gold-standard framework on yeast-like data from the GeneNetWeaver simulator. We also consider the impact of varying modularity on a set of structurally different networks (50 nodes, from 2 to 10 modules), and then of complex feature covariance structure, showing the different behaviours of standard reconstruction methods based on Pearson correlation, Maximum Information Coefficient (MIC) and a False Discovery Rate (FDR) strategy. Finally, we demonstrate a strong combined effect of different reconstruction methods and phenotype subgroups on a hepatocellular carcinoma miRNA microarray dataset (240 subjects), and we...
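
    A minimal illustration of the resampling idea behind NetSI: bootstrap the samples, re-infer a simple correlation network each time, and report how often each edge reappears. Thresholded Pearson correlation stands in for the inference algorithms compared in the paper, and nothing here reproduces the HIM distance from the nettools package.

```python
import numpy as np

# Hedged sketch of edge-level stability under bootstrap resampling.
def edge_stability(X, threshold=0.5, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    n_samples, n_genes = X.shape
    counts = np.zeros((n_genes, n_genes))
    for _ in range(n_boot):
        idx = rng.integers(0, n_samples, n_samples)   # bootstrap resample
        C = np.corrcoef(X[idx].T)                     # re-infer the network
        counts += (np.abs(C) > threshold)
    freq = counts / n_boot
    np.fill_diagonal(freq, 0.0)
    return freq                                       # 1.0 = edge always inferred

rng = np.random.default_rng(2)
z = rng.normal(size=(100, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(100, 2)),   # two strongly coupled genes
               rng.normal(size=(100, 3))])            # three independent genes
print(edge_stability(X)[0, 1].round(2))               # close to 1.0 (stable edge)
```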

  9. Indian-ink perfusion based method for reconstructing continuous vascular networks in whole mouse brain.

    Directory of Open Access Journals (Sweden)

    Songchao Xue

    Full Text Available The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of the vascular markers and imaging methods, studies on cerebral vascular structure now mainly focus on either observation of the macro vessels in a whole brain or imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins and capillaries have not been achieved in the whole brain of mammals. Here, we have combined an improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography for imaging the vessel network of an entire mouse brain. With 17 days of work, an integral dataset for the entire cerebral vasculature was acquired. The voxel resolution is 0.35×0.4×2.0 µm³ for the whole brain. Besides the observations of fine and complex vascular networks in the reconstructed slices and entire brain views, a representative continuous vascular tracking has been demonstrated in the deep thalamus. This study provided an effective method for studying the entire macro and micro vascular networks of mouse brain simultaneously.

  10. Craniofacial Reconstruction Evaluation by Geodesic Network

    OpenAIRE

    Zhao, Junli; Liu, Cuiting; Wu, Zhongke; Duan, Fuqing; Wang, Kang; Jia, Taorui; Liu, Quansheng

    2014-01-01

    Craniofacial reconstruction estimates an individual's face model from the skull. It has widespread applications in forensic medicine, archeology, medical cosmetic surgery, and so forth. However, little attention has been paid to the evaluation of craniofacial reconstruction. This paper proposes an objective method to evaluate, globally and locally, reconstructed craniofacial faces based on the geodesic network. Firstly, the geodesic networks of the reconstructed craniofacial face and the or...

  11. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of selecting an element of the set-valued map in the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.
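
    The abstract combines two ingredients: a smoothing approximation of the nonsmooth objective and a projected gradient flow. The sketch below emulates them in discrete time for basis pursuit: a Huber-smoothed l1 objective minimized by gradient steps projected onto the affine set {x : Ax = b}. It is a stand-in for the paper's continuous-time circuit network, with all sizes and constants invented.

```python
import numpy as np

def huber_grad(x, mu):
    return np.clip(x / mu, -1.0, 1.0)           # gradient of the smoothed |x|

rng = np.random.default_rng(3)
m, n = 30, 60
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = rng.normal(size=4)   # sparse signal
b = A @ x_true

P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)
x = A.T @ np.linalg.solve(A @ A.T, b)               # feasible starting point
for _ in range(5000):
    x = x - 0.005 * (P @ huber_grad(x, mu=0.01))    # projected gradient step
support = np.abs(x_true) > 0
print(np.round(x[support], 2), np.round(x_true[support], 2))   # near-match
```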

  12. A fast and efficient gene-network reconstruction method from multiple over-expression experiments

    Directory of Open Access Journals (Sweden)

    Thurner Stefan

    2009-08-01

    Full Text Available Abstract Background Reverse engineering of gene regulatory networks presents one of the big challenges in systems biology. Gene regulatory networks are usually inferred from a set of single-gene over-expression and/or knockout experiments. Functional relationships between genes are retrieved either from the steady-state gene expressions or from the respective time series. Results We present a novel algorithm for gene network reconstruction on the basis of steady-state gene-chip data from over-expression experiments. The algorithm is based on a straightforward solution of a linear gene-dynamics equation, where experimental data are fed in as a first predictor for the solution. We compare the algorithm's performance with the NIR algorithm, both on the well-known E. coli experimental data and on in-silico experiments. Conclusion We show the superiority of the proposed algorithm in the number of correctly reconstructed links and discuss computational time and robustness. The proposed algorithm is not limited by combinatorial explosion problems and can in principle be used for large networks.
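
    A hedged sketch of the linear steady-state idea: with dynamics dx/dt = A x + p, each over-expression experiment i yields a steady state x_i with A x_i + p_i = 0. Stacking experiments as columns (X, P) gives A X = -P, solved here by least squares on noise-free synthetic data. This is a simplified stand-in for the record's predictor-based solution, not its exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8                                         # number of genes
A_true = -np.eye(n) + 0.3 * rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.2)
P = np.eye(n)                                 # experiment i over-expresses gene i
X = np.linalg.solve(A_true, -P)               # steady states, one per column

# Solve A X = -P for the connectivity matrix A by least squares.
A_hat = np.linalg.lstsq(X.T, -P.T, rcond=None)[0].T
print(np.allclose(A_hat, A_true, atol=1e-8))  # True in the noise-free case
```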

  13. Network reconstruction via graph blending

    Science.gov (United States)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  14. A homologous mapping method for three-dimensional reconstruction of protein networks reveals disease-associated mutations.

    Science.gov (United States)

    Huang, Sing-Han; Lo, Yu-Shu; Luo, Yong-Chun; Tseng, Yu-Yao; Yang, Jinn-Moon

    2018-03-19

    One of the crucial steps toward understanding the associations among molecular interactions, pathways, and diseases in a cell is to investigate detailed atomic protein-protein interactions (PPIs) in the structural interactome. Despite the availability of large-scale methods for analyzing PPI networks, these methods have often focused on PPI networks using genome-scale data and/or known experimental PPIs. However, they are unable to provide structurally resolved interaction residues and their conservation in PPI networks. Here, we reconstructed a human three-dimensional (3D) structural PPI network (hDiSNet) with detailed atomic binding models and disease-associated mutations by enhancing our PPI families and 3D-domain interologs from 60,618 structural complexes and a complete genome database with 6,352,363 protein sequences across 2274 species. hDiSNet is a scale-free network (γ = 2.05), which consists of 5177 proteins and 19,239 PPIs with 5843 mutations. These 19,239 structurally resolved PPIs not only expand the number of PPIs compared to the present structural PPI network, but also achieve higher agreement with gene ontology similarities and higher co-expression correlation than the 181,868 experimental PPIs recorded in public databases. Among the 5843 mutations, 1653 and 790 mutations involved in interacting domains and contacting residues, respectively, are highly related to diseases. hDiSNet can provide detailed atomic interactions of human diseases and their associated proteins with mutations. Our results show that disease-related mutations are often located at contacting residues forming hydrogen bonds or conserved in the PPI family. In addition, hDiSNet provides insights into the FGFR (EGFR)-MAPK pathway for interpreting the mechanisms of breast cancer and into the ErbB signaling pathway in brain cancer. Our results demonstrate that hDiSNet can explore structure-based interaction insights for understanding the mechanisms of disease.

  15. Craniofacial Reconstruction Evaluation by Geodesic Network

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Craniofacial reconstruction estimates an individual's face model from the skull. It has widespread applications in forensic medicine, archeology, medical cosmetic surgery, and so forth. However, little attention has been paid to the evaluation of craniofacial reconstruction. This paper proposes an objective method to evaluate, globally and locally, reconstructed craniofacial faces based on the geodesic network. Firstly, the geodesic networks of the reconstructed craniofacial face and the original face are built, respectively, by geodesics and isogeodesics, whose intersections are network vertices. Then, the absolute value of the correlation coefficient of the features of all corresponding geodesic network vertices between the two models is taken as the holistic similarity, where the weighted average of the shape index values in a neighborhood is defined as the feature of each network vertex. Moreover, the geodesic network vertices of each model are divided into six subareas, that is, forehead, eyes, nose, mouth, cheeks, and chin, and the local similarity is measured for each subarea. Experiments using 100 pairs of reconstructed craniofacial faces and their corresponding original faces show that the evaluation by our method is roughly consistent with the subjective evaluation derived from thirty-five persons in five groups.

  16. Mass reconstruction with a neural network

    International Nuclear Information System (INIS)

    Loennblad, L.; Peterson, C.; Roegnvaldsson, T.

    1992-01-01

    A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W→qq̄, where W bosons are produced in pp̄ reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using 'intelligent' variables when the number of training examples is limited. (orig.)

  17. Study on Reverse Reconstruction Method of Vehicle Group Situation in Urban Road Network Based on Driver-Vehicle Feature Evolution

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2017-01-01

    Full Text Available Vehicle group situation is the status and situation of the dynamic permutation composed of a target vehicle and the neighboring traffic entities. It is a concept that is frequently involved in research on traffic flow theory, especially active vehicle safety. Studying the vehicle group situation in depth is of great significance for traffic safety. A three-lane condition was taken as an example; the characteristics of the target vehicle and its neighboring vehicles were synthetically considered to reconstruct the vehicle group situation in this paper. Gamma distribution theory was used to identify the vehicle group situation when the target vehicle arrived at the end of the study area. From the perspective of driver-vehicle feature evolution, a reverse reconstruction method for the vehicle group situation in the urban road network was proposed. Results of actual driving, virtual driving, and simulation experiments showed that the model established in this paper is reasonable and feasible.

  18. Network Reconstruction of Dynamic Biological Systems

    OpenAIRE

    Asadi, Behrang

    2013-01-01

    Inference of network topology from experimental data is a central endeavor in biology, since knowledge of the underlying signaling mechanisms is a requirement for understanding biological phenomena. As one of the most important tools in the bioinformatics area, development of methods to reconstruct biological networks has attracted remarkable attention in the current decade. Integration of different data types can lead to remarkable improvements in our ability to identify the connectivity of differe...

  19. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, R.; Pentia, M.

    1997-01-01

    In experimental particle physics, pattern recognition problems, specifically for neural network methods, occur frequently in track finding or feature extraction. Track finding is a combinatorial optimization problem. Given a set of points in Euclidean space, one tries the reconstruction of particle trajectories, subject to smoothness constraints. The basic ingredients in a neural network are the N binary neurons and the synaptic strengths connecting them. In our case the neurons are the segments connecting all possible point pairs. The dynamics of the neural network is given by a local updating rule which evaluates for each neuron the sign of the 'upstream activity'. An updating rule in the form of a sigmoid function is given. The synaptic strengths are defined in terms of the angle between the segments and the lengths of the segments implied in the track reconstruction. An algorithm based on the Hopfield neural network has been developed and tested on track coordinates measured by a silicon microstrip tracking system
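
    A toy rendering of the ingredients listed above: neurons are candidate segments between hit pairs, synaptic strengths reward nearly collinear continuations (an angle-and-length term), and a sigmoid mean-field rule updates each neuron. All constants, the global constraint term, and the tiny synthetic hit set are ours, not the experiment's tuned implementation.

```python
import numpy as np

# Toy Hopfield/MFT track finder on five 2D hits (three lie on one track).
hits = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.15], [0.0, 5.0], [1.5, 7.0]])
segs = [(i, j) for i in range(len(hits)) for j in range(i + 1, len(hits))]

def coupling(s1, s2):
    if len(set(s1) & set(s2)) != 1:        # segments must share exactly one hit
        return 0.0
    a = hits[s1[1]] - hits[s1[0]]
    b = hits[s2[1]] - hits[s2[0]]
    cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos ** 5 / (np.linalg.norm(a) + np.linalg.norm(b))  # angle/length term

W = np.array([[0.0 if s == t else coupling(s, t) for t in segs] for s in segs])
v = np.full(len(segs), 0.5)                # neuron activations in [0, 1]
alpha, T = 2.0, 0.05                       # constraint weight and 'temperature'
for _ in range(200):
    for k in range(len(v)):                # asynchronous mean-field updates
        field = W[k] @ v - alpha * (v.sum() - 2.0)
        v[k] = 0.5 * (1.0 + np.tanh(field / T))
print([s for s, a in zip(segs, v) if a > 0.5])  # strongly active segments lie on the track
```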

  20. Reconstruction of network topology using status-time-series data

    Science.gov (United States)

    Pandey, Pradumn Kumar; Badarla, Venkataramana

    2018-01-01

    Uncovering the heterogeneous connection pattern of a networked system from the available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network. The dependency between the diffusion dynamics and the structure of the network can be utilized to retrieve the connection pattern from the diffusion data. Information about the network structure can also help devise control of the dynamics on the network. In this paper, we consider the problem of network reconstruction from the available status-time-series (STS) data using matrix analysis. The proposed method of network reconstruction from STS data is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method. Our proposed method outperforms a compressed sensing theory (CST) based method of network reconstruction using STS data. Further, the same procedure of network reconstruction is applied to weighted networks, where the ordering of the edges is identified with high accuracy.
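
    As a simplified stand-in for the record's matrix-analysis approach, the sketch below simulates SIS status-time-series on a known graph and then regresses each node's new infections (while susceptible) on all nodes' previous states; large coefficients point to true neighbours. The generative model, threshold, and reseeding hack are ours.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, beta, mu = 20, 5000, 0.2, 0.2
A = np.triu((rng.random((n, n)) < 0.2).astype(int), 1)
A = A + A.T                                           # random undirected graph
s = np.zeros((T, n), dtype=int)
s[0, :5] = 1                                          # seed infections
for t in range(T - 1):
    p_inf = 1 - (1 - beta) ** (s[t] @ A)              # per-node infection prob.
    s[t + 1] = np.where(s[t] == 1,
                        rng.random(n) > mu,           # stay infected unless recovered
                        rng.random(n) < p_inf).astype(int)
    if s[t + 1].sum() == 0:                           # reseed if the epidemic dies
        s[t + 1, rng.integers(n)] = 1

scores = np.zeros((n, n))
for i in range(n):
    sus = s[:-1, i] == 0                              # steps where i was susceptible
    coef, *_ = np.linalg.lstsq(s[:-1][sus], s[1:, i][sus], rcond=None)
    scores[i] = coef
np.fill_diagonal(scores, 0.0)
pred = ((scores + scores.T) / 2) > beta / 2           # symmetrize and threshold
print(float((pred == A.astype(bool)).mean()))         # fraction of pairs correct
```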

  1. [Reconstructive methods after Fournier gangrene].

    Science.gov (United States)

    Wallner, C; Behr, B; Ring, A; Mikhail, B D; Lehnhardt, M; Daigeler, A

    2016-04-01

    Fournier's gangrene is a variant of necrotizing fasciitis restricted to the perineal and genital region. It presents as an acute life-threatening disease and demands rapid surgical debridement, resulting in large soft tissue defects. Various reconstructive methods have to be applied to reconstitute functionality and aesthetics. The objective of this work is to identify different reconstructive methods in the literature and compare them with our current concepts for reconstructing defects caused by Fournier gangrene. Analysis of the current literature and of our reconstructive methods for Fournier gangrene. Fournier gangrene is an emergency requiring rapid, calculated antibiotic treatment and radical surgical debridement. After the acute phase of the disease, appropriate reconstructive methods are indicated. Planning the reconstruction of the defect depends on many factors, especially functional and aesthetic demands. Scrotal reconstruction requires a higher aesthetic and functional reconstructive degree than perineal cutaneous wounds. In general, thorough wound hygiene, proper preoperative planning, and careful consideration of the patient's demands are essential for successful reconstruction. In the literature, various methods for reconstruction after Fournier gangrene are described. Reconstruction with a flap is required for a good functional result in complex regions such as the scrotum and penis, while cutaneous wounds can be managed by skin grafting. Patient compliance and tissue demand are crucial factors in the decision-making process.

  2. Tomographic image reconstruction using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Paschalis, P.; Giokaris, N.D.; Karabarbounis, A.; Loudos, G.K.; Maintas, D.; Papanicolas, C.N.; Spanoudaki, V.; Tsoumpas, Ch.; Stiliaris, E.

    2004-01-01

    A new image reconstruction technique based on the use of an Artificial Neural Network (ANN) is presented. The most crucial factor in designing such a reconstruction system is the network architecture and the number of input projections needed to reconstruct the image. Although the training phase requires a large number of input samples and considerable CPU time, the trained network is characterized by simplicity and quick response. The performance of this ANN is tested using several image patterns. It is intended to be used together with a phantom rotating table and the γ-camera of IASA for SPECT image reconstruction

  3. Reconstructing phylogenetic networks using maximum parsimony.

    Science.gov (United States)

    Nakhleh, Luay; Jin, Guohua; Zhao, Fengmei; Mellor-Crummey, John

    2005-01-01

    Phylogenies - the evolutionary histories of groups of organisms - are one of the most widely used tools throughout the life sciences, as well as objects of research within systematics, evolutionary biology, epidemiology, etc. Almost every tool devised to date to reconstruct phylogenies produces trees; yet it is widely understood and accepted that trees oversimplify the evolutionary histories of many groups of organisms, most prominently bacteria (because of horizontal gene transfer) and plants (because of hybrid speciation). Various methods and criteria have been introduced for phylogenetic tree reconstruction. Parsimony is one of the most widely used and studied criteria, and various accurate and efficient heuristics for reconstructing trees based on parsimony have been devised. Jotun Hein suggested a straightforward extension of the parsimony criterion to phylogenetic networks. In this paper we formalize this concept, and provide the first experimental study of the quality of parsimony as a criterion for constructing and evaluating phylogenetic networks. Our results show that, when extended to phylogenetic networks, the parsimony criterion produces promising results. In a great majority of the cases in our experiments, the parsimony criterion accurately predicts the numbers and placements of non-tree events.

  4. Pseudo-proxy evaluation of climate field reconstruction methods of North Atlantic climate based on an annually resolved marine proxy network

    Directory of Open Access Journals (Sweden)

    M. Pyrina

    2017-10-01

    Full Text Available Two statistical methods are tested to reconstruct the interannual variations in past sea surface temperatures (SSTs) of the North Atlantic (NA) Ocean over the past millennium based on annually resolved and absolutely dated marine proxy records of the bivalve mollusk Arctica islandica. The methods are tested in a pseudo-proxy experiment (PPE) setup using state-of-the-art climate models (CMIP5 Earth system models) and reanalysis data from the COBE2 SST data set. The methods were applied in the virtual reality provided by global climate simulations and reanalysis data to reconstruct the past NA SSTs using pseudo-proxy records that mimic the statistical characteristics and network of Arctica islandica. The multivariate linear regression methods evaluated here are principal component regression and canonical correlation analysis. Differences in the skill of the climate field reconstruction (CFR) are assessed according to different calibration periods and different proxy locations within the NA basin. The choice of the climate model used as a surrogate reality in the PPE has a more profound effect on the CFR skill than the calibration period and the statistical reconstruction method. The differences between the two methods are clearer for the MPI-ESM model due to its higher spatial resolution in the NA basin. The pseudo-proxy results of the CCSM4 model are closer to the pseudo-proxy results based on the reanalysis data set COBE2. Conducting PPEs using noise-contaminated pseudo-proxies instead of noise-free pseudo-proxies is important for the evaluation of the methods, as more spatial differences in the reconstruction skill are revealed. Both methods are appropriate for the reconstruction of the temporal evolution of the NA SSTs, even though they lead to a great loss of variance away from the proxy sites. Under reasonable assumptions about the characteristics of the non-climate noise in the proxy records, our results show that the marine network of Arctica...
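
    A toy version of the principal component regression variant evaluated in the record: fit a map from a handful of proxy series to the leading modes of the field over a calibration window, then reconstruct the field everywhere. A synthetic field replaces the CMIP5/COBE2 surrogate climates, and proxy noise is simply additive white noise.

```python
import numpy as np

rng = np.random.default_rng(6)
T, G, k, n_proxy = 300, 50, 3, 8             # years, grid cells, PCs, proxies
modes = rng.normal(size=(G, k))              # spatial EOF-like patterns
pcs = rng.normal(size=(T, k))                # their time evolution
field = pcs @ modes.T + 0.1 * rng.normal(size=(T, G))
proxy_ix = rng.choice(G, n_proxy, replace=False)
proxies = field[:, proxy_ix] + 0.3 * rng.normal(size=(T, n_proxy))  # noisy proxies

cal = slice(0, 150)                          # calibration period
anom = field[cal] - field[cal].mean(0)
_, _, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt[:k]                                # leading spatial modes
pc_cal = anom @ eofs.T                       # their calibration-period amplitudes
B, *_ = np.linalg.lstsq(proxies[cal], pc_cal, rcond=None)  # proxies -> PCs
recon = (proxies @ B) @ eofs + field[cal].mean(0)          # full-period field
err = np.sqrt(((recon[150:] - field[150:]) ** 2).mean())
print(round(float(err), 3))                  # verification-period RMSE
```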

  5. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, Raluca; Pentia, Mircea

    1996-01-01

    This work uses a neural network technique (the Hopfield method) to reconstruct particle tracks starting from a data set obtained with a coordinate detector system placed around a high-energy particle interaction region. A learning algorithm for finding the optimal connection of the signal points has been elaborated and tested. We used a single-layer neural network with constraints in order to obtain the particle tracks drawn through the detected signal points. The dynamics of the system is given by the MFT equations which determine the system's evolution to a minimum of the energy function. We developed a computing program that has been tested on a lot of Monte Carlo simulated data. With this program we obtained good results even for a noise/signal ratio of 200. (authors)

  6. Reconstructible phylogenetic networks: do not distinguish the indistinguishable.

    Science.gov (United States)

    Pardi, Fabio; Scornavacca, Celine

    2015-04-01

    Phylogenetic networks represent the evolution of organisms that have undergone reticulate events, such as recombination, hybrid speciation or lateral gene transfer. An important way to interpret a phylogenetic network is in terms of the trees it displays, which represent all the possible histories of the characters carried by the organisms in the network. Interestingly, however, different networks may display exactly the same set of trees, an observation that poses a problem for network reconstruction: from the perspective of many inference methods such networks are "indistinguishable". This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. This identifiability problem is partially solved by accounting for branch lengths, although this merely reduces the frequency of the problem. Here we propose that network inference methods should only attempt to reconstruct what they can uniquely identify. To this end, we introduce a novel definition of what constitutes a uniquely reconstructible network. For any given set of indistinguishable networks, we define a canonical network that, under mild assumptions, is unique and thus representative of the entire set. Given data that underwent reticulate evolution, only the canonical form of the underlying phylogenetic network can be uniquely reconstructed. While on the methodological side this will imply a drastic reduction of the solution space in network inference, for the study of reticulate evolution this is a fundamental limitation that will require an important change of perspective when interpreting phylogenetic networks.

  7. Reconstructible phylogenetic networks: do not distinguish the indistinguishable.

    Directory of Open Access Journals (Sweden)

    Fabio Pardi

    2015-04-01

    Full Text Available Phylogenetic networks represent the evolution of organisms that have undergone reticulate events, such as recombination, hybrid speciation or lateral gene transfer. An important way to interpret a phylogenetic network is in terms of the trees it displays, which represent all the possible histories of the characters carried by the organisms in the network. Interestingly, however, different networks may display exactly the same set of trees, an observation that poses a problem for network reconstruction: from the perspective of many inference methods such networks are "indistinguishable". This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. This identifiability problem is partially solved by accounting for branch lengths, although this merely reduces the frequency of the problem. Here we propose that network inference methods should only attempt to reconstruct what they can uniquely identify. To this end, we introduce a novel definition of what constitutes a uniquely reconstructible network. For any given set of indistinguishable networks, we define a canonical network that, under mild assumptions, is unique and thus representative of the entire set. Given data that underwent reticulate evolution, only the canonical form of the underlying phylogenetic network can be uniquely reconstructed. While on the methodological side this will imply a drastic reduction of the solution space in network inference, for the study of reticulate evolution this is a fundamental limitation that will require an important change of perspective when interpreting phylogenetic networks.

  8. ASME method for particle reconstruction

    International Nuclear Information System (INIS)

    Ierusalimov, A.P.

    2009-01-01

    The method of approximate solution of the motion equation (ASME) was used to reconstruct the parameters of charged particles. It provides good precision for the momentum, angular and space parameters of particles in coordinate detectors. The application of the method to the CBM, HADES and MPD/NICA setups is discussed

  9. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of the signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
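
    A compact rendering of the two-stage strategy described above: gradient descent on the frequency of a single sine-transfer neuron (with amplitudes re-fit at each step), followed by a linear solve for the Fourier coefficients once the period is known. The signal, step sizes, and single-harmonic restriction are our simplifications, not the paper's full algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 500)
w_true = 2.3
y = 1.5 * np.sin(w_true * t) + 0.8 * np.cos(w_true * t) \
    + 0.05 * rng.normal(size=t.size)

w = 2.1                                        # initial guess near the true period
for _ in range(5000):                          # stage 1: fit the frequency
    a = np.array([np.sin(w * t), np.cos(w * t)]).T
    c, *_ = np.linalg.lstsq(a, y, rcond=None)  # best amplitudes at current w
    r = a @ c - y                              # residual of the sine neuron
    dr_dw = t * (c[0] * np.cos(w * t) - c[1] * np.sin(w * t))
    w -= 1e-4 * 2 * (r @ dr_dw) / t.size       # gradient of mean-squared error
print(round(w, 3))                             # ~2.3

a = np.array([np.sin(w * t), np.cos(w * t)]).T # stage 2: Fourier coefficients
c, *_ = np.linalg.lstsq(a, y, rcond=None)
print(c.round(2))                              # ~[1.5, 0.8]
```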

  10. Genetic Network Programming with Reconstructed Individuals

    Science.gov (United States)

    Ye, Fengming; Mabu, Shingo; Wang, Lutao; Eto, Shinji; Hirasawa, Kotaro

    A lot of research on evolutionary computation has been done and some significant classical methods such as Genetic Algorithm (GA), Genetic Programming (GP), Evolutionary Programming (EP), and Evolution Strategies (ES) have been studied. Recently, a new approach named Genetic Network Programming (GNP) has been proposed. GNP can evolve itself and find the optimal solution. It is based on the idea of Genetic Algorithms and uses the data structure of directed graphs. Many papers have demonstrated that GNP can deal with complex problems in dynamic environments very efficiently and effectively. As a result, GNP has recently received more and more attention and is used in many different areas such as data mining, extracting trading rules for stock markets, elevator supervisory control systems, etc., and has obtained some outstanding results. Focusing on GNP's distinguished expression ability of the graph structure, this paper proposes a method named Genetic Network Programming with Reconstructed Individuals (GNP-RI). The aim of GNP-RI is to balance the exploitation and exploration of GNP, that is, to strengthen the exploitation ability by using the exploited information extensively during the evolution process of GNP, and finally to obtain better performance than GNP. In the proposed method, the worse individuals are reconstructed and enhanced by the elite information before undergoing genetic operations (mutation and crossover). The enhancement of worse individuals mimics the maturing phenomenon in nature, where bad individuals can become smarter after receiving a good education. In this paper, GNP-RI is applied to the tile-world problem, which is an excellent benchmark for evaluating the proposed architecture. The performance of GNP-RI is compared with that of the conventional GNP. The simulation results show some advantages of GNP-RI, demonstrating its superiority over the conventional GNP.

  11. On the Complexity of Reconstructing Chemical Reaction Networks

    DEFF Research Database (Denmark)

    Fagerberg, Rolf; Flamm, Christoph; Merkle, Daniel

    2013-01-01

    The analysis of the structure of chemical reaction networks is crucial for a better understanding of chemical processes. Such networks are well described as hypergraphs. However, due to the available methods, analyses regarding network properties are typically made on standard graphs derived from the full hypergraph description, e.g. on the so-called species and reaction graphs. However, a reconstruction of the underlying hypergraph from these graphs is not necessarily unique. In this paper, we address the problem of reconstructing a hypergraph from its species and reaction graph and show NP...

  12. Phylogenetic reconstruction methods: an overview.

    Science.gov (United States)

    De Bruyn, Alexandre; Martin, Darren P; Lefeuvre, Pierre

    2014-01-01

    Initially designed to infer evolutionary relationships based on morphological and physiological characters, phylogenetic reconstruction methods have greatly benefited from recent developments in molecular biology and sequencing technologies with a number of powerful methods having been developed specifically to infer phylogenies from macromolecular data. This chapter, while presenting an overview of basic concepts and methods used in phylogenetic reconstruction, is primarily intended as a simplified step-by-step guide to the construction of phylogenetic trees from nucleotide sequences using fairly up-to-date maximum likelihood methods implemented in freely available computer programs. While the analysis of chloroplast sequences from various Vanilla species is used as an illustrative example, the techniques covered here are relevant to the comparative analysis of homologous sequences datasets sampled from any group of organisms.

  13. Reconstruction of neutron spectra through neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2003-01-01

    A neural network has been used to reconstruct neutron spectra starting from the counting rates of the detectors of a Bonner sphere spectrometer system. A group of 56 neutron spectra was selected to calculate the counting rates that they would produce in a Bonner sphere system; with these data and the spectra, the neural network was trained. To test the performance of the network, 12 spectra were used: 6 were taken from the group used for training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. Comparing the original spectra with those reconstructed by the network, we find that the network performs poorly when reconstructing monoenergetic spectra, which we attribute to the characteristics of the spectra used to train the neural network; for the other groups of spectra, the results of the network agree with expectations. (Author)

  14. A neural network image reconstruction technique for electrical impedance tomography

    International Nuclear Information System (INIS)

    Adler, A.; Guardo, R.

    1994-01-01

    Reconstruction of Images in Electrical Impedance Tomography requires the solution of a nonlinear inverse problem on noisy data. This problem is typically ill-conditioned and requires either simplifying assumptions or regularization based on a priori knowledge. This paper presents a reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem. This inverse is adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training. Results show good conductivity reconstruction where measurement SNR is similar to the training conditions. The advantages of this method are its conceptual simplicity and ease of implementation, and the ability to control the compromise between the noise performance and resolution of the image reconstruction
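
    The record's core idea, learning a linear approximation of the inverse directly from forward simulations, can be illustrated without any EIT machinery: generate many (conductivity, measurement) pairs from a forward model, then fit a reconstruction matrix by ridge regression. Here a random linear operator stands in for the finite element simulations, and the regression plays the role of network training; all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n_meas, n_elem, n_train = 32, 64, 2000
F = rng.normal(size=(n_meas, n_elem))            # stand-in forward operator
sigma = rng.random((n_train, n_elem))            # training conductivities
V = sigma @ F.T + 0.05 * rng.normal(size=(n_train, n_meas))  # noisy measurements

lam = 1.0                                        # regularization matched to noise
W = np.linalg.solve(V.T @ V + lam * np.eye(n_meas), V.T @ sigma)  # ridge fit

sigma_test = rng.random(n_elem)
v_test = F @ sigma_test + 0.05 * rng.normal(size=n_meas)
recon = v_test @ W                               # one matrix multiply at runtime
print(round(float(np.corrcoef(recon, sigma_test)[0, 1]), 2))  # positive correlation
```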

  15. Enhanced reconstruction of weighted networks from strengths and degrees

    International Nuclear Information System (INIS)

    Mastrandrea, Rossana; Fagiolo, Giorgio; Squartini, Tiziano; Garlaschelli, Diego

    2014-01-01

    Network topology plays a key role in many phenomena, from the spreading of diseases to that of financial crises. Whenever the whole structure of a network is unknown, one must resort to reconstruction methods that identify the least biased ensemble of networks consistent with the partial information available. A challenging case, frequently encountered due to privacy issues in the analysis of interbank flows and Big Data, is when there is only local (node-specific) aggregate information available. For binary networks, the relevant ensemble is one where the degree (number of links) of each node is constrained to its observed value. However, for weighted networks the problem is much more complicated. While the naïve approach prescribes to constrain the strengths (total link weights) of all nodes, recent counter-intuitive results suggest that in weighted networks the degrees are often more informative than the strengths. This implies that the reconstruction of weighted networks would be significantly enhanced by the specification of both strengths and degrees, a computationally hard and bias-prone procedure. Here we solve this problem by introducing an analytical and unbiased maximum-entropy method that works in the shortest possible time and does not require the explicit generation of reconstructed samples. We consider several real-world examples and show that, while the strengths alone give poor results, the additional knowledge of the degrees yields accurately reconstructed networks. Information-theoretic criteria rigorously confirm that the degree sequence, as soon as it is non-trivial, is irreducible to the strength sequence. Our results have strong implications for the analysis of motifs and communities and whenever the reconstructed ensemble is required as a null model to detect higher-order patterns
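
    The sketch below solves only the binary half of the problem named above: a maximum-entropy configuration model whose link probabilities p_ij = x_i x_j / (1 + x_i x_j) are tuned by fixed-point iteration until expected degrees match observed ones. The paper's enhanced method additionally constrains the strengths; the degree sequence here is a small made-up example.

```python
import numpy as np

# Hedged sketch of the binary configuration model behind maximum-entropy
# reconstruction: find fitnesses x_i matching the observed degree sequence.
def fit_config_model(degrees, n_iter=2000):
    k = np.asarray(degrees, float)
    x = k / np.sqrt(k.sum())                       # standard starting guess
    for _ in range(n_iter):
        M = x[None, :] / (1.0 + np.outer(x, x))    # M[i, j] = x_j / (1 + x_i x_j)
        denom = M.sum(axis=1) - x / (1.0 + x * x)  # exclude the self-loop term
        x = 0.5 * x + 0.5 * k / denom              # damped fixed-point step
    return x

degrees = [1, 2, 2, 3, 2]
x = fit_config_model(degrees)
P = np.outer(x, x) / (1.0 + np.outer(x, x))
np.fill_diagonal(P, 0.0)
print(P.sum(axis=1).round(3))                      # ~ [1, 2, 2, 3, 2]
```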

  16. Magnetic flux reconstruction methods for shaped tokamaks

    International Nuclear Information System (INIS)

    Tsui, Chi-Wa.

    1993-12-01

    The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high-speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's functions provides a robust method of magnetic reconstruction. The matching of poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied to compare with the ASEQ and EFIT data. The results are promising

  17. Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions

    DEFF Research Database (Denmark)

    Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.

    2015-01-01

    ... by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass components in a metabolic model. These autocatalytic sets are well-conserved across all domains of life, and their identification in specific genome-scale reconstructions allows us to draw conclusions about potential inconsistencies in these models. The method is capable of detecting inconsistencies, which ... The method we report represents a powerful tool to identify inconsistencies in large-scale metabolic networks. AVAILABILITY AND IMPLEMENTATION: The method is available as source code on http://users.minet.uni-jena.de/~m3kach/ASBIG/ASBIG.zip. CONTACT: christoph.kaleta@uni-jena.de SUPPLEMENTARY...

  18. Reconstruction of coupling architecture of neural field networks from vector time series

    Science.gov (United States)

    Sysoev, Ilya V.; Ponomarenko, Vladimir I.; Pikovsky, Arkady

    2018-04-01

    We propose a method of reconstruction of the network coupling matrix for a basic voltage-model of the neural field dynamics. Assuming that the multivariate time series of observations from all nodes are available, we describe a technique to find coupling constants which is unbiased in the limit of long observations. Furthermore, the method is generalized for reconstruction of networks with time-delayed coupling, including the reconstruction of unknown time delays. The approach is compared with other recently proposed techniques.
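
    For the simplest case this record covers (all nodes observed, no time delays), coupling estimation reduces to a linear least-squares problem, which is what makes the estimate unbiased for long recordings. The sketch below demonstrates that reduction on a toy linear network; the paper's neural-field voltage model and the reconstruction of unknown delays are beyond it.

```python
import numpy as np

rng = np.random.default_rng(9)
n, T = 6, 5000
C_true = 0.2 * rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.4)  # sparse couplings
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = x[t] @ C_true.T + rng.normal(scale=0.1, size=n)  # noise-driven dynamics

# Least-squares fit of x(t+1) ~ C x(t) from the full multivariate time series.
C_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
print(float(np.abs(C_hat - C_true).max()))      # small estimation error
```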

  19. Virtual resistive network and conductivity reconstruction with Faraday's law

    International Nuclear Information System (INIS)

    Lee, Min Gi; Ko, Min-Su; Kim, Yong-Jung

    2014-01-01

    A network-based conductivity reconstruction method is introduced using the third Maxwell equation, or Faraday's law, for a static case. The usual choice in electrical impedance tomography is the divergence-free equation for the electrical current density. However, if the electrical current density is given, the curl-free equation for the electrical field gives a direct relation between the current and the conductivity and this relation is used in this paper. Mimetic discretization is applied to the equation, which gives the virtual resistive network system. Properties of the numerical schemes introduced are investigated and their advantages over other conductivity reconstruction methods are discussed. Numerically simulated results, with an analysis of noise propagation, are presented. (paper)

  20. MR fingerprinting Deep RecOnstruction NEtwork (DRONE).

    Science.gov (United States)

    Cohen, Ouri; Zhu, Bo; Rosen, Matthew S

    2018-09-01

    Demonstrate a novel fast method for reconstruction of multi-dimensional MR fingerprinting (MRF) data using deep learning methods. A neural network (NN) is defined using the TensorFlow framework and trained on simulated MRF data computed with the extended phase graph formalism. The NN reconstruction accuracy for noiseless and noisy data is compared to conventional MRF template matching as a function of training data size and is quantified in simulated numerical brain phantom data and International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom data measured on 1.5T and 3T scanners with an optimized MRF EPI and MRF fast imaging with steady state precession (FISP) sequences with spiral readout. The utility of the method is demonstrated in a healthy subject in vivo at 1.5T. Network training required 10 to 74 minutes; once trained, data reconstruction required approximately 10 ms for the MRF EPI and 76 ms for the MRF FISP sequence. Reconstruction of simulated, noiseless brain data using the NN resulted in an RMS error (RMSE) of 2.6 ms for T1 and 1.9 ms for T2. The reconstruction error in the presence of noise was less than 10% for both T1 and T2 for SNR greater than 25 dB. Phantom measurements yielded good agreement (R² = 0.99/0.99 for MRF EPI T1/T2 and 0.94/0.98 for MRF FISP T1/T2) between the T1 and T2 estimated by the NN and reference values from the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology phantom. Reconstruction of MRF data with a NN is accurate, 300- to 5000-fold faster, and more robust to noise and dictionary undersampling than conventional MRF dictionary-matching. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Image reconstruction using Monte Carlo simulation and artificial neural networks

    International Nuclear Information System (INIS)

    Emert, F.; Missimner, J.; Blass, W.; Rodriguez, A.

    1997-01-01

    PET data sets are subject to two types of distortions during acquisition: the imperfect response of the scanner and attenuation and scattering in the active distribution. In addition, the reconstruction of voxel images from the line projections composing a data set can introduce artifacts. Monte Carlo simulation provides a means for modeling the distortions and artificial neural networks a method for correcting for them as well as minimizing artifacts. (author) figs., tab., refs

  2. Method for positron emission mammography image reconstruction

    Science.gov (United States)

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
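
    The iterative branch described above is the classic MLEM update, sketched below on a synthetic system matrix: the image is repeatedly rescaled by back-projected ratios of measured to estimated counts. In a real PEM geometry the matrix entries would come from the Siddon-style ray tracing the record mentions; everything here is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(10)
n_lor, n_pix = 200, 64                     # lines of response, image pixels
A = rng.random((n_lor, n_pix))             # stand-in system (ray/pixel) matrix
x_true = rng.random(n_pix)
counts = rng.poisson(100 * A @ x_true)     # measured coincidences per LOR

x = np.ones(n_pix)                         # flat initial image
sens = A.sum(axis=0)                       # sensitivity (normalization) image
for _ in range(100):
    est = A @ x                            # forward-projected estimate
    ratio = counts / np.clip(est, 1e-12, None)
    x *= (A.T @ ratio) / sens              # classic MLEM multiplicative update
print(round(float(np.corrcoef(x, x_true)[0, 1]), 2))  # close to 1
```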

  3. Reconstructing Causal Biological Networks through Active Learning.

    Directory of Open Access Journals (Sweden)

    Hyunghoon Cho

    Full Text Available Reverse-engineering of biological networks is a central problem in systems biology. Intervention data, such as gene knockouts or knockdowns, are typically used for teasing apart causal relationships among genes. Under time or resource constraints, one needs to carefully choose which intervention experiments to carry out. Previous approaches for selecting the most informative interventions have largely focused on discrete Bayesian networks. However, continuous Bayesian networks are of great practical interest, especially in the study of complex biological systems and their quantitative properties. In this work, we present an efficient, information-theoretic active learning algorithm for Gaussian Bayesian networks (GBNs), which serve as important models for gene regulatory networks. In addition to providing linear-algebraic insights unique to GBNs, leading to significant runtime improvements, we demonstrate the effectiveness of our method on data simulated with GBNs and on the DREAM4 network inference challenge data sets. Our method generally leads to faster recovery of the underlying network structure and faster convergence to the final distribution of confidence scores over candidate graph structures using the full data, in comparison to random selection of intervention experiments.

  4. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    porosity in chalk in the form of foraminifer shells. A hybrid reconstruction technique that initializes the simulated annealing reconstruction with input generated using the Gaussian random field method has also been introduced. The technique was found to significantly accelerate the rate of convergence of the simulated annealing method. This finding is important because the main advantage of the simulated annealing method, namely its ability to impose a variety of reconstruction constraints, is usually compromised by its very slow rate of convergence. Absolute permeability, formation factor and mercury-air capillary pressure are computed from simple network models. The input parameters for the network models were extracted from a reconstructed chalk sample. The computed permeability, formation factor and mercury-air capillary pressure correspond well with the experimental data. The predictive power of a network model for chalk is further extended by incorporating important pore-level displacement phenomena and a realistic description of pore space geometry and topology. Limited results show that the model may be used to compute absolute and relative permeabilities, capillary pressure, formation factor, resistivity index and saturation exponent. The above findings suggest that the network modeling technique may be used for prediction of petrophysical and reservoir engineering properties of chalk. Further work is necessary, and an outline is given in considerable detail. Two 2D models, one 3D model and a dual-porosity fractured reservoir model have been developed, and an imbibition process involving water displacing oil is simulated at various injection rates and with different oil-to-water viscosity ratios using four widely used conventional upscaling techniques. The upscaling techniques are those of Kyte and Berry, Pore Volume Weighted, Weighted Relative Permeability, and Stone. The results suggest that upscaling of fractured reservoirs may be possible using the conventional

  5. The Reconstruction and Analysis of Gene Regulatory Networks.

    Science.gov (United States)

    Zheng, Guangyong; Huang, Tao

    2018-01-01

    In the post-genomic era, an important task is to explore the function of individual biological molecules (i.e., genes, noncoding RNAs, proteins, metabolites) and their organization in living cells. To this end, gene regulatory networks (GRNs) are constructed to show relationships between biological molecules, in which the vertices of the network denote biological molecules and the edges represent connections between nodes (Strogatz, Nature 410:268-276, 2001; Bray, Science 301:1864-1865, 2003). Biologists can understand not only the function of biological molecules but also the organization of the components of living cells by interpreting GRNs, since a gene regulatory network is a comprehensive physiological map of living cells and reflects the influence of genetic and epigenetic factors (Strogatz, Nature 410:268-276, 2001; Bray, Science 301:1864-1865, 2003). In this paper, we review inference methods for GRN reconstruction and approaches for analyzing network structure. As a powerful tool for studying complex diseases and biological processes, the applications of the network method in pathway analysis and disease gene identification will be introduced.

  6. Neural network algorithm for image reconstruction using the grid friendly projections

    International Nuclear Information System (INIS)

    Cierniak, R.

    2011-01-01

    Full text: The presented paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the 'grid-friendly' angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum likelihood methodology. The reconstruction algorithm proposed in this work is subsequently adapted to the more practical discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way improves the obtained results and outperforms conventional methods in reconstructed image quality. (author)

  7. A practical algorithm for reconstructing level-1 phylogenetic networks

    NARCIS (Netherlands)

    Huber, K.T.; Iersel, van L.J.J.; Kelk, S.M.; Suchecki, R.

    2011-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here, we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of network

  8. Apparatus and method for reconstructing data

    International Nuclear Information System (INIS)

    1981-01-01

    A method and apparatus are described for constructing a two-dimensional picture of an object slice from linear projections of radiation not absorbed or scattered by the object, using convolution methods of data reconstruction, useful in the fields of medical radiology, microscopy, and non-destructive testing. (U.K.)

  9. Genome-scale reconstruction of the Saccharomyces cerevisiae metabolic network

    DEFF Research Database (Denmark)

    Förster, Jochen; Famili, I.; Fu, P.

    2003-01-01

    The metabolic network in the yeast Saccharomyces cerevisiae was reconstructed using currently available genomic, biochemical, and physiological information. The metabolic reactions were compartmentalized between the cytosol and the mitochondria, and transport steps between the compartments...

  10. Reconstructing the Hopfield network as an inverse Ising problem

    International Nuclear Information System (INIS)

    Huang Haiping

    2010-01-01

    We test four fast mean-field-type algorithms on Hopfield networks as an inverse Ising problem. The equilibrium behavior of Hopfield networks is simulated through Glauber dynamics. In the low-temperature regime, the simulated annealing technique is adopted. Although the performance of these network reconstruction algorithms on simulated networks of spiking neurons has recently been studied extensively, a corresponding analysis of Hopfield networks has been lacking so far. For the Hopfield network, we found that, in the retrieval phase, favored when the network retrieves one of the stored patterns, all the reconstruction algorithms fail to extract interactions within a desired accuracy, and the same failure occurs in the spin-glass phase, where spurious minima show up, while in the paramagnetic phase, albeit unfavored during the retrieval dynamics, the algorithms work well to reconstruct the network itself. This implies that, as an inverse problem, the paramagnetic phase is conversely useful for reconstructing the network, while the retrieval phase loses all the information about interactions in the network except for the case where only one pattern is stored. The performance of the algorithms is studied with respect to system size, memory load, and temperature; sample-to-sample fluctuations are also considered.
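
    The abstract does not name the four mean-field algorithms; the simplest member of that family, naive mean-field inversion, reconstructs couplings from sampled configurations by inverting the connected correlation matrix, as in this generic sketch:

    ```python
    # Naive mean-field inverse Ising: J ≈ -(C^{-1}) off-diagonal, where C
    # is the connected correlation matrix of sampled +/-1 spin states.
    # A textbook estimator, not necessarily one of the four algorithms
    # tested in the record above.
    import numpy as np

    def nmf_couplings(samples):
        """samples: (n_samples, n_spins) array of +/-1 states."""
        C = np.cov(samples, rowvar=False)   # connected correlations
        J = -np.linalg.inv(C)               # nMF inversion
        np.fill_diagonal(J, 0.0)            # no self-couplings
        return J

    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(5000, 20))
    J_est = nmf_couplings(spins)
    ```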

  11. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through sets of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm originally suggested by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  12. Image reconstruction methods in positron tomography

    International Nuclear Information System (INIS)

    Townsend, D.W.; Defrise, M.

    1993-01-01

    In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes. This report reviews the historical and physical basis of medical imaging techniques using positron-emitting radioisotopes. Mathematical methods which enable three-dimensional distributions of radioisotopes to be reconstructed from projection data (sinograms) acquired by detectors suitably positioned around the patient are discussed. The extension of conventional two-dimensional tomographic reconstruction algorithms to fully three-dimensional reconstruction is described in detail. (orig.)

  13. New method for initial density reconstruction

    Science.gov (United States)

    Shi, Yanlong; Cautun, Marius; Li, Baojiu

    2018-01-01

    A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution provided a late-time density field. This is a long-standing question with a revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, which is based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. This is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation, and does not assume any specific cosmological model. Our test shows that it has a performance comparable to that of state-of-the-art algorithms that were very recently put forward in the literature, with the reconstructed density field over ~80% (50%) correlated with the initial condition at k ≲ 0.6 h/Mpc (1.0 h/Mpc). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction.

  14. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of

  15. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    Science.gov (United States)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.
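
    The fitness-model class named at the end of the abstract admits a compact illustration: link probabilities of the form p_ij = z x_i y_j / (1 + z x_i y_j), with z calibrated so that the expected number of links matches an observed total. The sketch below is this generic construction, not the ECAPM calibration itself; the fitness values and link total are synthetic.

    ```python
    # Generic fitness-model reconstruction for a bipartite network: node
    # "fitnesses" x (rows) and y (columns) stand in for disclosed aggregate
    # totals; z is tuned so the expected link count equals L.
    import numpy as np
    from scipy.optimize import brentq

    def calibrate_z(x, y, L):
        """Find z such that sum_ij z*x_i*y_j / (1 + z*x_i*y_j) = L."""
        xy = np.outer(x, y)
        f = lambda z: (z * xy / (1.0 + z * xy)).sum() - L
        return brentq(f, 1e-12, 1e12)

    rng = np.random.default_rng(1)
    x, y = rng.lognormal(size=40), rng.lognormal(size=60)
    z = calibrate_z(x, y, L=300)
    P = z * np.outer(x, y) / (1.0 + z * np.outer(x, y))  # link probabilities
    A = rng.random(P.shape) < P                          # one sampled topology
    ```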

  16. On an image reconstruction method for ECT

    Science.gov (United States)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method, which restored a sharp discrete multiple-hole image from data in which multiple holes interfered. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original has been reconstructed.
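
    A regularized (Wiener-style) inverse filter is one standard way to realize the PSF deconvolution described above; the sketch below uses a synthetic PSF and flaw image and is not the authors' exact processing chain.

    ```python
    # Divide out the PSF in Fourier space with regularization k, which
    # suppresses noise amplification where the PSF response is weak.
    import numpy as np

    def wiener_deconvolve(blurred, psf, k=1e-2):
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F = np.conj(H) * G / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(F))

    psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0   # crude 5x5 blur
    flaw = np.zeros((64, 64)); flaw[30:34, 20:44] = 1.0  # synthetic line flaw
    blurred = np.real(np.fft.ifft2(np.fft.fft2(flaw) *
                                   np.fft.fft2(psf, s=flaw.shape)))
    restored = wiener_deconvolve(blurred, psf)
    ```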

  17. Plasmid Flux in Escherichia coli ST131 Sublineages, Analyzed by Plasmid Constellation Network (PLACNET), a New Method for Plasmid Reconstruction from Whole Genome Sequences

    Science.gov (United States)

    Garcillán-Barcia, M. Pilar; Mora, Azucena; Blanco, Jorge; Coque, Teresa M.; de la Cruz, Fernando

    2014-01-01

    Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with the ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids are in flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages. PMID:25522143

  18. Plasmid flux in Escherichia coli ST131 sublineages, analyzed by plasmid constellation network (PLACNET), a new method for plasmid reconstruction from whole genome sequences.

    Science.gov (United States)

    Lanza, Val F; de Toro, María; Garcillán-Barcia, M Pilar; Mora, Azucena; Blanco, Jorge; Coque, Teresa M; de la Cruz, Fernando

    2014-12-01

    Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with the ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids are in flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages.

  19. Plasmid flux in Escherichia coli ST131 sublineages, analyzed by plasmid constellation network (PLACNET), a new method for plasmid reconstruction from whole genome sequences.

    Directory of Open Access Journals (Sweden)

    Val F Lanza

    2014-12-01

    Full Text Available Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with the ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids are in flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages.

  20. Reconstruction of certain phylogenetic networks from their tree-average distances.

    Science.gov (United States)

    Willson, Stephen J

    2013-10-01

    Trees are commonly utilized to describe the evolutionary history of a collection of biological species, in which case the trees are called phylogenetic trees. Often these are reconstructed from data by making use of distances between extant species corresponding to the leaves of the tree. Because of increased recognition of the possibility of hybridization events, more attention is being given to the use of phylogenetic networks that are not necessarily trees. This paper describes the reconstruction of certain such networks from the tree-average distances between the leaves. For a certain class of phylogenetic networks, a polynomial-time method is presented to reconstruct the network from the tree-average distances. The method is proved to work if there is a single reticulation cycle.

  1. Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Sinisa Pajevic

    2009-01-01

    Full Text Available Cascading activity is commonly found in complex systems with directed interactions such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.

  2. Strategy on energy saving reconstruction of distribution networks based on life cycle cost

    Science.gov (United States)

    Chen, Xiaofei; Qiu, Zejing; Xu, Zhaoyang; Xiao, Chupeng

    2017-08-01

    Because the funds for actual distribution network reconstruction projects are often limited, cost-benefit models and decision-making methods are crucial for energy-saving reconstruction projects in distribution networks. From the perspective of life cycle cost (LCC), the research life cycle is first determined for the energy-saving reconstruction of distribution networks with multiple devices. Then, a new life cycle cost-benefit model for energy-saving reconstruction of distribution networks is developed, in which the modification schemes include distribution transformer replacement, line replacement and reactive power compensation. For the operation loss cost and maintenance cost, an operation cost model considering the influence of seasonal load characteristics and a segmented maintenance cost model for transformers are proposed. Finally, aiming at the highest energy-saving profit per LCC, a decision-making method is developed while also considering financial and technical constraints. The model and method are applied to a real distribution network reconstruction, and the results prove that they are effective.

  3. Apparatus and method for reconstructing data

    International Nuclear Information System (INIS)

    Pavkovich, J.M.

    1977-01-01

    The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)

  4. Statistical inference approach to structural reconstruction of complex networks from binary time series

    Science.gov (United States)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, fully reconstructing the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data-based reverse engineering of complex networked systems.
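
    The separation of "actual" from "nonexistent" link probabilities can be illustrated schematically with a two-component mixture EM over the estimated values; this is a stand-in for, not a reproduction of, the authors' full EM formulation for binary-state dynamics.

    ```python
    # Fit a two-component 1-D Gaussian mixture by EM to estimated link
    # probabilities and classify by posterior responsibility.
    import numpy as np

    def em_two_gaussians(p, iters=200):
        mu = np.array([p.min(), p.max()])
        sd = np.array([p.std(), p.std()]) + 1e-6
        w = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibilities of the two components.
            lik = w * np.exp(-0.5 * ((p[:, None] - mu) / sd) ** 2) / sd
            r = lik / lik.sum(axis=1, keepdims=True)
            # M-step: update weights, means, standard deviations.
            w = r.mean(axis=0)
            mu = (r * p[:, None]).sum(axis=0) / r.sum(axis=0)
            sd = np.sqrt((r * (p[:, None] - mu) ** 2).sum(axis=0)
                         / r.sum(axis=0)) + 1e-6
        return mu, sd, w, r

    rng = np.random.default_rng(2)
    p_est = np.concatenate([rng.normal(0.05, 0.02, 300),  # nonexistent links
                            rng.normal(0.60, 0.10, 60)])  # actual links
    mu, sd, w, r = em_two_gaussians(p_est)
    links = r[:, np.argmax(mu)] > 0.5   # classify by posterior
    ```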

  5. Perturbation methods for power and reactivity reconstruction

    International Nuclear Information System (INIS)

    Palmiotti, G.; Salvatores, M.; Estiot, J.C.; Broccoli, U.; Bruna, G.; Gomit, J.M.

    1987-01-01

    This paper deals with recent developments and applications in perturbation methods. Two types of methods are used. The first is an explicit method, which allows the explicit reconstruction of a perturbed flux using a linear combination of a library of functions; in our application, these functions are the harmonics (i.e., the high-order eigenfunctions of the system). The second type is based on the Generalized Perturbation Theory (GPT) and needs the calculation of an importance function for each integral parameter of interest. Recent developments of a particularly useful high-order formulation allow satisfactory results to be obtained even for very large perturbations.

  6. Supersampling and Network Reconstruction of Urban Mobility.

    Directory of Open Access Journals (Sweden)

    Oleguer Sagarra

    Full Text Available Understanding human mobility is of vital importance for urban planning, epidemiology, and many other fields that draw policies from the activities of humans in space. Despite the recent availability of large-scale data sets of GPS traces or mobile phone records capturing human mobility, typically only a subsample of the population of interest is represented, giving a possibly incomplete picture of the entire system under study. Methods to reliably extract mobility information from such reduced data and to assess their sampling biases are lacking. To that end, we analyzed a data set of millions of taxi movements in New York City. We first show that, once they are appropriately transformed, mobility patterns are highly stable over long time scales. Based on this observation, we develop a supersampling methodology to reliably extrapolate mobility records from a reduced sample based on an entropy maximization procedure, and we propose a number of network-based metrics to assess the accuracy of the predicted vehicle flows. Our approach provides a well founded way to exploit temporal patterns to save effort in recording mobility data, and opens the possibility to scale up data from limited records when information on the full system is required.

  7. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    Science.gov (United States)

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification, wherein negative data sampling is still an open problem. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile, for most existing computational methods, rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions. In this work, we propose a novel negative data sampling method based on the one-class SVM (support vector machine) to predict proteome-wide protein interactions between the HTLV retrovirus and Homo sapiens, wherein the one-class SVM is used to choose reliable and representative negative data, and a two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that the one-class SVM is better suited to negative data sampling than a two-class PPI predictor, and that the predictive-feedback-constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of the HTLV retrovirus.
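
    A minimal scikit-learn sketch of the sampling idea, with synthetic features standing in for protein-pair representations and the selection threshold chosen arbitrarily:

    ```python
    # Train a one-class SVM on known positives, keep as "reliable negatives"
    # the unlabeled pairs scored as most unlike the positives, then train an
    # ordinary two-class SVM on the result.
    import numpy as np
    from sklearn.svm import OneClassSVM, SVC

    rng = np.random.default_rng(3)
    X_pos = rng.normal(1.0, 1.0, size=(200, 10))     # known interactions
    X_unlab = rng.normal(0.0, 1.5, size=(2000, 10))  # unlabeled pairs

    ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_pos)
    scores = ocsvm.decision_function(X_unlab)        # low = unlike positives

    # Keep the lowest-scoring unlabeled pairs as representative negatives.
    X_neg = X_unlab[np.argsort(scores)[:200]]

    X = np.vstack([X_pos, X_neg])
    y = np.array([1] * len(X_pos) + [0] * len(X_neg))
    clf = SVC(kernel="rbf", gamma="scale", probability=True).fit(X, y)
    ```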

  8. Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2014-01-01

    Full Text Available The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method utilizes preprocessing technology to convert time-domain vertical vibration signals acquired by a wireless sensor network into space signals. A modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, image fusion processing technology based on spectrum thresholding and node color labeling is proposed to reduce the noise and blank out the periodic impact signals caused by rail joints and locomotive running gear. This method can convert the aperiodic impact signals caused by rail defects into partially periodic impact signals, and locate the rail defects. An application indicates that the two-dimensional impact reconstruction method displays the impact caused by rail defects clearly, and is an effective on-line rail defect inspection method.

  9. Reconstructing transcriptional regulatory networks through genomics data

    OpenAIRE

    Sun, Ning; Zhao, Hongyu

    2009-01-01

    One central problem in biology is to understand how gene expression is regulated under different conditions. Microarray gene expression data and other high throughput data have made it possible to dissect transcriptional regulatory networks at the genomics level. Owing to the very large number of genes that need to be studied, the relatively small number of data sets available, the noise in the data and the different natures of the distinct data types, network inference presents great challenges...

  10. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  11. A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone

    Science.gov (United States)

    Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2010-02-01

    Esthetic appearance is one of the most important factors in reconstructive surgery. The current practice of maxillary reconstruction uses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and the palate based on diagnostic volumetric computed tomography (CT) images. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit at the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi-palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 +/- 0.16 mm

  12. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  13. Orthotropic conductivity reconstruction with virtual-resistive network and Faraday's law

    KAUST Repository

    Lee, Min-Gi

    2015-06-01

    We obtain existence and uniqueness simultaneously in the reconstruction of orthotropic conductivity in two space dimensions by using two sets of internal current densities and boundary conductivity. The curl-free equation of Faraday's law is taken instead of the elliptic equation in divergence form that is typically used in electrical impedance tomography. A reconstruction method based on a layered bricks-type virtual-resistive network is developed to reconstruct orthotropic conductivity with up to 40% multiplicative noise.

  14. Reconstruction of financial networks for robust estimation of systemic risk

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-01-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks

  15. Reconstruction of financial networks for robust estimation of systemic risk

    Science.gov (United States)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-03-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.

  16. Research of ART method in CT image reconstruction

    International Nuclear Information System (INIS)

    Li Zhipeng; Cong Peng; Wu Haifeng

    2005-01-01

    This paper studies the Algebraic Reconstruction Technique (ART) in CT image reconstruction and discusses the influence of the number of rays on image quality. By adopting a smoothing method, high-quality CT images were obtained. (authors)
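
    The classic ART update (the Kaczmarz projection on which such methods are built) can be sketched as follows, with a synthetic system matrix standing in for the CT ray geometry:

    ```python
    # Cycle over the measured rays and project the current image estimate
    # onto the hyperplane of each ray equation a_i . x = b_i.
    import numpy as np

    def art(A, b, n_iter=50, relax=0.5):
        x = np.zeros(A.shape[1])
        row_norms = (A ** 2).sum(axis=1)
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    # Relaxed projection onto the i-th ray's hyperplane.
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    rng = np.random.default_rng(4)
    x_true = rng.random(16)       # 4x4 image, flattened
    A = rng.random((40, 16))      # 40 synthetic rays through the image
    b = A @ x_true                # noiseless projections
    x_rec = art(A, b)
    ```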

  17. Method of reconstructing a moving pulse

    Energy Technology Data Exchange (ETDEWEB)

    Howard, S J; Horton, R D; Hwang, D Q; Evans, R W; Brockington, S J; Johnson, J [UC Davis Department of Applied Science, Livermore, CA, 94551 (United States)

    2007-11-15

    We present a method of analyzing a set of N time signals f_i(t) that consist of local measurements of the same physical observable taken at N sequential locations Z_i along the length of an experimental device. The result is an algorithm for reconstructing an approximation F(z,t) of the field f(z,t) in the inaccessible regions between the points of measurement. We also explore the conditions needed for this approximation to hold, and test the algorithm under a variety of conditions. We apply this method to analyze the magnetic field measurements taken on the Compact Toroid Injection eXperiment (CTIX) plasma accelerator, providing a direct means of visualizing experimental data, quantifying global properties, and benchmarking simulation.

  18. Reconstruction of gastric slow wave from finger photoplethysmographic signal using radial basis function neural network.

    Science.gov (United States)

    Mohamed Yacin, S; Srinivasa Chakravarthy, V; Manivannan, M

    2011-11-01

    Extraction of extra-cardiac information from the photoplethysmography (PPG) signal is a challenging research problem with significant clinical applications. In this study, a radial basis function neural network (RBFNN) is used to reconstruct the gastric myoelectric activity (GMA) slow wave from the finger PPG signal. Finger PPG and GMA (measured using the electrogastrogram, EGG) signals were acquired simultaneously at a sampling rate of 100 Hz from ten healthy subjects. The discrete wavelet transform (DWT) was used to extract the slow wave (0-0.1953 Hz) component from the finger PPG signal; this slow wave PPG was used to reconstruct the EGG. An RBFNN was trained on signals obtained from six subjects in both fasting and postprandial conditions, and the trained network was tested on data obtained from the remaining four subjects. In an earlier study, we showed the presence of GMA information in the finger PPG signal using DWT and a cross-correlation method. In this study, we explicitly reconstruct the gastric slow wave from the finger PPG signal by the proposed RBFNN-based method. It was found that the network-reconstructed slow wave provided a significantly higher correlation with the EGG slow wave than the correlation obtained (≈0.7) between the PPG slow wave from DWT and the EGG slow wave. Our results showed that a simple finger PPG signal can be used to reconstruct the gastric slow wave using the RBFNN method.
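
    A generic RBF-network regression sketch (random centers, ridge-solved output weights; synthetic signals in place of the PPG/EGG data, and not the authors' training configuration):

    ```python
    # Gaussian features centered on a subset of training inputs, with
    # linear output weights fit by ridge-regularized least squares.
    import numpy as np

    def rbf_features(X, centers, width):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 8))   # e.g. windows of a slow-wave signal
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)  # synthetic target

    centers = X[rng.choice(len(X), 40, replace=False)]  # 40 random centers
    Phi = rbf_features(X, centers, width=1.0)
    # Ridge solve for the output weights: (Phi^T Phi + a I) w = Phi^T y
    w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ y)
    y_hat = rbf_features(X, centers, 1.0) @ w
    ```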

  19. Track and vertex reconstruction: From classical to adaptive methods

    International Nuclear Information System (INIS)

    Strandlie, Are; Fruehwirth, Rudolf

    2010-01-01

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  20. HAWC Energy Reconstruction via Neural Network

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  1. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, and that the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)

  2. Image-reconstruction methods in positron tomography

    CERN Document Server

    Townsend, David W; CERN. Geneva

    1993-01-01

    Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...

  3. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    International Nuclear Information System (INIS)

    Wang, Y.

    2013-01-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, through cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach that uses high resolution earthquake data sets at largely different scales and considers individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists of fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit are comparable to the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  4. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discretely distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is performed for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
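
    The LMS algorithm used above for parameter identification has a standard form: adapt FIR weights so the model output tracks a desired signal. A generic sketch with synthetic signals follows.

    ```python
    # Generic LMS (least mean squares) identification of an FIR model:
    # adapt weights w so that w . x(n) tracks the desired signal d(n).
    import numpy as np

    def lms_identify(x, d, n_taps=4, mu=0.05):
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(x)):
            xn = x[n - n_taps + 1:n + 1][::-1]  # most recent samples first
            e = d[n] - w @ xn                   # instantaneous error
            w += mu * e * xn                    # LMS weight update
        return w

    rng = np.random.default_rng(6)
    x = rng.normal(size=5000)
    true_w = np.array([0.8, -0.3, 0.2, 0.05])
    d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.normal(size=len(x))
    w_est = lms_identify(x, d)                  # should approach true_w
    ```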

  5. Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks

    Science.gov (United States)

    Flekova, L.; Schott, M.

    2017-10-01

    Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly suitable for precision tracking and triggering in high rate environments. Given their relatively low production costs, MPGDs are an exemplary candidate for the next generation of particle detectors. Having acknowledged these advantages, both the ATLAS and CMS collaborations at the LHC are exploiting these new technologies in their detector upgrade programs in the coming years. When MPGDs are utilized for triggering purposes, the measured signals need to be precisely reconstructed within less than 200 ns, which can be achieved by the usage of FPGAs. In this work, we present a novel approach to identify reconstructed signals, their timing and the corresponding spatial position on the detector. In particular, we study the effect of noise and dead readout strips on the reconstruction performance. Our approach leverages the potential of convolutional neural networks (CNNs), which have recently demonstrated outstanding performance in a range of modeling tasks. The proposed CNN architecture is designed to be simple enough that it can be implemented directly on an FPGA and thus provide precise information on reconstructed signals already at the trigger level.

  6. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Science.gov (United States)

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances and small weights to inappropriate ones. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. We then reconstructed hominoid phylogenetic trees from mitochondrial DNA using the new methods, and found that the levels of bootstrap support were significantly increased by both the modified Tajima-Takezaki and the modified least-squares methods.
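    The exact weight formulas are in the paper; as a generic illustration of the principle (down-weighting noisy loci when pooling distances), the sketch below uses inverse-variance weights, which is an assumption standing in for the modified Tajima-Takezaki and least-squares weights.

```python
import numpy as np

def pool_distances(dists, variances):
    """Pool per-locus evolutionary distances for one species pair.

    dists     : estimated distances at each locus
    variances : their estimated sampling variances
    Inverse-variance weighting is a generic stand-in for the
    weighting schemes developed in the paper.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, dists))

# three loci: the noisy third locus (variance 0.09) gets a small weight
print(pool_distances([0.10, 0.12, 0.30], [0.01, 0.01, 0.09]))
```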

  7. Speech reconstruction using a deep partially supervised neural network.

    Science.gov (United States)

    McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R

    2017-08-01

    Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.

  8. Applying Bayesian neural networks to event reconstruction in reactor neutrino experiments

    International Nuclear Information System (INIS)

    Xu Ye; Xu Weiwei; Meng Yixiong; Zhu Kaien; Xu Wei

    2008-01-01

    In this paper, a toy detector was designed to simulate the central detectors in reactor neutrino experiments. Electron samples from a Monte-Carlo simulation of the toy detector were reconstructed with the method of Bayesian neural networks (BNNs) and with the standard algorithm, a maximum likelihood method (MLD), respectively, and the results of the two event reconstructions were compared. Compared to the MLD, the uncertainties of the electron vertex are not improved, but the energy resolutions are significantly improved using the BNN, and the improvement is more pronounced for high-energy electrons than for low-energy ones.

  9. [Development and current situation of reconstruction methods following total sacrectomy].

    Science.gov (United States)

    Huang, Siyi; Ji, Tao; Guo, Wei

    2018-05-01

    This article reviews the development of reconstruction methods following total sacrectomy in order to provide a reference for finding better reconstruction methods. Case reports and biomechanical and finite element studies of reconstruction following total sacrectomy, both domestic and international, were surveyed, and their development and current status summarized. After nearly 30 years of development, great progress has been made in reconstruction concepts and fixation techniques. The fixation methods can be summarized as three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF can improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the direction of development for achieving mechanical stability, and how to obtain biological fixation to improve long-term stability is an urgent problem to be solved.

  10. Signal reconstruction in wireless sensor networks based on a cubature Kalman particle filter

    International Nuclear Information System (INIS)

    Huang Jin-Wang; Feng Jiu-Chao

    2014-01-01

    To address the reconstruction of nonlinear, non-Gaussian signals in wireless sensor networks (WSNs), a new signal reconstruction algorithm based on a cubature Kalman particle filter (CKPF) is proposed in this paper. We first model the reconstruction signal and then use the CKPF to estimate it. The CKPF uses a cubature Kalman filter (CKF) to generate the importance proposal distribution of the particle filter and integrates the latest observation, which approximates the true posterior distribution better and improves the estimation accuracy. The CKPF uses fewer cubature points than the unscented Kalman particle filter (UKPF) and has lower computational overhead. Meanwhile, the CKPF iterates on the square root of the error covariance and is therefore more stable and accurate than its UKPF counterpart. Simulation results show that the algorithm can reconstruct the observed signals quickly and effectively, while consuming less computational time and achieving higher accuracy than the UKPF-based method.
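    The cubature rule itself is well defined: a CKF propagates 2n equally weighted points placed at plus/minus sqrt(n) times the columns of the covariance square root. A minimal NumPy sketch of the point generation and prediction step, with illustrative toy dynamics:

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points of the CKF for mean x and
    covariance P (spherical-radial rule, equal weights 1/2n)."""
    n = len(x)
    S = np.linalg.cholesky(P)                              # covariance square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # unit directions
    return x[:, None] + S @ xi                             # shape (n, 2n)

def ckf_predict(x, P, f, Q):
    """Propagate mean and covariance through nonlinear dynamics f."""
    pts = cubature_points(x, P)
    prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    x_pred = prop.mean(axis=1)
    d = prop - x_pred[:, None]
    return x_pred, d @ d.T / pts.shape[1] + Q

# toy usage: mildly nonlinear 2-state model
f = lambda s: np.array([s[0] + 0.1 * s[1], 0.9 * s[1] + 0.05 * np.sin(s[0])])
print(ckf_predict(np.zeros(2), np.eye(2), f, Q=0.01 * np.eye(2)))
```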

  11. Gene expression network reconstruction by convex feature selection when incorporating genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Benjamin A Logsdon

    Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithm designed for network discovery that makes use of directed probabilistic graphs requires perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression quantitative trait loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1) and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population-level genetic and gene expression data.

  12. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    DEFF Research Database (Denmark)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell

    2007-01-01

    In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR), (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points.

  13. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner, and those that process all blocks simultaneously. We use a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable.
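    For reference, the basic ART (Kaczmarz) sweep that these block methods build on fits in a few lines; this is the textbook row-action iteration, not the paper's block implementation:

```python
import numpy as np

def art(A, b, n_sweeps=100, relax=1.0):
    """Basic ART (Kaczmarz): cyclically project the iterate onto the
    hyperplane defined by each row of A x = b."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy consistent system standing in for a tomographic projection matrix
rng = np.random.default_rng(1)
A = rng.random((40, 10)) + 0.1
x_true = rng.random(10)
x_hat = art(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))   # small error after 100 sweeps
```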

  14. Evaluation of proxy-based millennial reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Terry C.K.; Tsao, Min [University of Victoria, Department of Mathematics and Statistics, Victoria, BC (Canada); Zwiers, Francis W. [Environment Canada, Climate Research Division, Toronto, ON (Canada)

    2008-08-15

    A range of existing statistical approaches for reconstructing historical temperature variations from proxy data are compared using both climate model data and real-world paleoclimate proxy data. We also propose a new method for reconstruction that is based on a state-space time series model and Kalman filter algorithm. The state-space modelling approach and the recently developed RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing surface air temperature variability on decadal time scales. An advantage of the new method is that it can incorporate additional, non-temperature, information into the reconstruction, such as the estimated response to external forcing, thereby permitting a simultaneous reconstruction and detection analysis as well as future projection. An application of these extensions is also demonstrated in the paper.

  15. Reconstruction of a ring applicator using CT imaging: impact of the reconstruction method and applicator orientation

    International Nuclear Information System (INIS)

    Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell; Knutsen, Bjoern Helge; Roeislien, Jo; Olsen, Dag Rune

    2007-01-01

    The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR), (2) reconstruction in multiplanar reconstructed images (MPR), and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy.

  16. Integrated Approach to Reconstruction of Microbial Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rodionov, Dmitry A [Sanford-Burnham Medical Research Institute; Novichkov, Pavel S [Lawrence Berkeley National Laboratory

    2013-11-04

    This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were to: (1) develop an integrated platform for genome-scale regulon reconstruction; (2) infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and (3) develop a KnowledgeBase on microbial transcriptional regulation.

  17. Software Architecture Reconstruction Method, a Survey

    OpenAIRE

    Zainab Nayyar; Nazish Rafique

    2014-01-01

    Architecture reconstruction is a reverse-engineering process in which we move from the code to the architecture level in order to reconstruct the architecture. Software architectures are the blueprints of projects, depicting an external overview of the software system. Maintenance and testing often cause the software to deviate from its original architecture, because enhancing the functionality of a system sometimes makes the software deviate from its documented specifications; some new modules a...

  18. Reconstruction of Daily Sea Surface Temperature Based on Radial Basis Function Networks

    Directory of Open Access Journals (Sweden)

    Zhihong Liao

    2017-11-01

    A radial basis function network (RBFN) method is proposed to reconstruct daily sea surface temperatures (SSTs) with limited SST samples. For the purpose of evaluating the SSTs using this method, non-biased SST samples in the Pacific Ocean (10°N–30°N, 115°E–135°E) are selected at the time the tropical storm Hagibis arrived in June 2014, and these SST samples are obtained from the Reynolds optimum interpolation (OI) v2 daily 0.25° SST (OISST) products according to the distribution of AVHRR L2p SST and in-situ SST data. Furthermore, an improved nearest neighbor cluster (INNC) algorithm is designed to search for the optimal hidden knots for RBFNs from both the SST samples and the background fields. Then, the SSTs reconstructed by the RBFN method are compared with the results from the OI method. The statistical results show that the RBFN method performs better at reconstructing SST than the OI method in this study: the average RMSE is 0.48 °C for the RBFN method, considerably smaller than the 0.69 °C of the OI method. Additionally, RBFN methods with different basis functions and clustering algorithms are tested, and we find that the INNC algorithm with the multi-quadric function is well suited for the RBFN method to reconstruct SSTs when the SST samples are sparsely distributed.
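    The reconstruction step itself is standard RBFN fitting: basis functions centered on the hidden knots, with output weights obtained by least squares. The sketch below uses the multi-quadric basis favored in the abstract but picks centers at random as a crude stand-in for the INNC clustering:

```python
import numpy as np

def fit_rbfn(centers, X, y, c=1.0):
    """Fit RBFN output weights by least squares with the multi-quadric
    basis phi(r) = sqrt(r^2 + c^2)."""
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.sqrt(r ** 2 + c ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbfn_predict(centers, w, X, c=1.0):
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.sqrt(r ** 2 + c ** 2) @ w

# toy usage: reconstruct a smooth SST-like field from sparse samples
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 2))                          # sample lon/lat
y = 25 + 2 * np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])   # "SST" samples
centers = X[rng.choice(200, 20, replace=False)]          # stand-in for INNC knots
w = fit_rbfn(centers, X, y)
print(rbfn_predict(centers, w, rng.uniform(0, 1, (5, 2))))
```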

  19. Communication Network Analysis Methods.

    Science.gov (United States)

    Farace, Richard V.; Mabee, Timothy

    This paper reviews a variety of analytic procedures that can be applied to network data, discussing the assumptions and usefulness of each procedure when applied to the complexity of human communication. Special attention is paid to the network properties measured or implied by each procedure. Factor analysis and multidimensional scaling are among…

  20. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    Science.gov (United States)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of two di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and a Beyond Standard Model sample, we see that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.

  1. SCENERY: a web application for (causal) network reconstruction from cytometry data

    KAUST Repository

    Papoutsoglou, Georgios

    2017-05-08

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/.

  2. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed to not only integrate this omics data but to also use this data to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
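    At its core, FBA is a linear program: maximize an objective flux subject to the steady-state mass balance Sv = 0 and flux bounds; expression data typically enters by tightening the bounds of reactions whose genes are lowly expressed. A toy example with scipy.optimize.linprog on an illustrative three-reaction chain:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize flux through a "biomass" reaction subject to
# steady state S v = 0 and flux bounds. The network and bounds are
# invented for illustration, not a curated reconstruction.
#              v0 (uptake)  v1 (conversion)  v2 (biomass)
S = np.array([[1, -1,  0],    # metabolite A balance
              [0,  1, -1]])   # metabolite B balance
c = np.array([0, 0, -1.0])    # linprog minimizes, so negate biomass
bounds = [(0, 10), (0, 8), (0, None)]   # expression data could shrink these
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution; biomass capped at 8 by v1
```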

  3. P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.

    Science.gov (United States)

    Young-Rae Cho; Yanan Xin; Speegle, Greg

    2015-01-01

    Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as the preprocessing of all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs which indicate the link patterns more frequently appearing in a PPI network than in a random graph. The last algorithm uses the information propagation technique which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.

  4. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    To solve the problem of reconstructing an original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
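    A generic sketch of such an alternating minimization, assuming an ISTA-style soft-thresholding update for the sparse codes and a least-squares update with column normalization for the dictionary (the paper's exact update rules may differ):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def blind_cs(Y, n_atoms=32, n_iters=50, lam=0.1, step=0.1):
    """Alternate between sparse codes A and dictionary D so that Y ~ D A."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iters):
        A = soft(A + step * D.T @ (Y - D @ A), lam * step)  # ISTA code update
        D = Y @ np.linalg.pinv(A)                           # dictionary update
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)                # normalize atoms
    return D, A

# random data just to exercise the loop; real inputs would be image patches
Y = np.random.default_rng(3).standard_normal((64, 100))
D, A = blind_cs(Y)
print(np.linalg.norm(Y - D @ A) / np.linalg.norm(Y))        # relative residual
```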

  5. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.

  6. Limb reconstruction with the Ilizarov method

    NARCIS (Netherlands)

    Oostenbroek, H.J.

    2014-01-01

    In chapter 1, the background and origins of this study are explained. The aims of the study are defined. In chapter 2, an analysis of the complications rate of limb reconstruction in a cohort of 37 consecutive growing children was done. Several patient and deformity factors were investigated by

  7. Occluded object reconstruction for first responders with augmented reality glasses using conditional generative adversarial networks

    OpenAIRE

    Yun, Kyongsik; Lu, Thomas; Chow, Edward

    2018-01-01

    Firefighters suffer a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposures to hazardous substances. Support for reducing these risks is important. We built a partially occluded object reconstruction method on augmented reality glasses for first responders. We used a deep learning based on conditional generative adversarial networks to train associations between the various images of flammable and hazardous objects and their partially occluded counterparts....

  8. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...

  9. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) methods. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  10. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  11. Geometric reconstruction methods for electron tomography

    International Nuclear Information System (INIS)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand

  12. Assessing the Accuracy of Ancestral Protein Reconstruction Methods

    OpenAIRE

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-01-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolu...

  13. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    Science.gov (United States)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution highly increases the difficulty level in dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated based on the two selected matching cost methods are comparable with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve the dense matching.

  14. DENSE MATCHING COMPARISON BETWEEN CENSUS AND A CONVOLUTIONAL NEURAL NETWORK ALGORITHM FOR PLANT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Y. Xia

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution highly increases the difficulty level in dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated based on the two selected matching cost methods are comparable with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve the dense matching.

  15. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
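    The appeal of the kernel method is that the ordinary ML-EM update carries over unchanged once the image is parameterized as x = Kα, with K built from anatomical features. A small NumPy sketch of this kernelized EM update, using random matrices as stand-ins for the real system geometry and kernel:

```python
import numpy as np

def kernel_mlem(A, K, y, n_iters=50):
    """Kernelized ML-EM: the image is x = K @ alpha and the standard
    EM multiplicative update is applied to the coefficients alpha.
    A is the system matrix, y the measured sinogram counts."""
    AK = A @ K
    alpha = np.ones(K.shape[1])
    sens = AK.T @ np.ones(A.shape[0])            # sensitivity term K^T A^T 1
    for _ in range(n_iters):
        ratio = y / np.maximum(AK @ alpha, 1e-12)
        alpha *= (AK.T @ ratio) / np.maximum(sens, 1e-12)
    return K @ alpha                             # reconstructed image

# toy usage with random stand-ins for system matrix and kernel
rng = np.random.default_rng(4)
A, K = rng.random((30, 20)), rng.random((20, 20))
y = rng.poisson(A @ rng.random(20))
print(kernel_mlem(A, K, y)[:5])
```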

  16. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in the two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining by a weighted majority voting rule the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two

  17. A comparison of ancestral state reconstruction methods for quantitative characters.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Didier, Gilles

    2016-09-07

    Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performances on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling

    NARCIS (Netherlands)

    Liu, D.

    2013-01-01

    The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our

  19. Reconstruction of three-dimensional porous media using generative adversarial neural networks

    Science.gov (United States)

    Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.

    2017-10-01

    To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern x-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
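    A key design point mentioned above is the fully convolutional generator: with no dense layer, feeding a larger latent grid yields a larger output volume from the same learned filters. A minimal PyTorch sketch (layer sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Fully convolutional 3D generator: because there is no dense
    layer, a larger latent grid yields a larger output volume."""
    def __init__(self, z_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_ch, 32, 4, stride=2, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
            nn.Tanh(),   # void/solid indicator scaled to [-1, 1]
        )

    def forward(self, z):   # z: (batch, z_ch, depth, height, width)
        return self.net(z)

G = Generator3D()
print(G(torch.randn(1, 64, 4, 4, 4)).shape)   # (1, 1, 32, 32, 32)
print(G(torch.randn(1, 64, 8, 8, 8)).shape)   # (1, 1, 64, 64, 64)
```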

  20. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, robustly reconstructing the complex 3D geometries of automobile castings is a challenging task: 3D scanning data are usually corrupted by noise and the scanning resolution is low, which normally leads to incomplete matching and drift. To solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around a key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which accesses the same global key-matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse geometric castings where initial matching fails, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
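    The TDF representation is straightforward to compute: voxelize a patch around a keypoint and store each voxel's distance to the nearest surface point, truncated at a fixed radius. A sketch using SciPy's KD-tree, with illustrative patch size and truncation values:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_tdf(points, center, voxel=0.01, grid=16, trunc=0.05):
    """Truncated distance field on a grid^3 patch around a keypoint:
    each voxel stores its distance to the nearest surface point,
    clipped at `trunc` (all sizes here are illustrative)."""
    tree = cKDTree(points)
    half = grid * voxel / 2.0
    axes = np.linspace(-half + voxel / 2, half - voxel / 2, grid)
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
    voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + center
    d, _ = tree.query(voxels)                       # nearest-surface distances
    return np.minimum(d, trunc).reshape(grid, grid, grid)

# toy usage: TDF patch around a point on a synthetic sphere surface
rng = np.random.default_rng(5)
pts = rng.standard_normal((2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # unit sphere point cloud
patch = local_tdf(pts, center=np.array([1.0, 0.0, 0.0]))
print(patch.shape, patch.min(), patch.max())
```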

  1. Alternative method for reconstruction of antihydrogen annihilation vertices

    CERN Document Server

    Amole, C; Andresen , G B; Baquero-Ruiz, M; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jonsell, S; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki,Y

    2012-01-01

    The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three layered silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with a good resolution and is more efficient than the standard method currently used for the same purpose.

  2. Alternative method for reconstruction of antihydrogen annihilation vertices

    Energy Technology Data Exchange (ETDEWEB)

    Amole, C., E-mail: chanpreet.amole@cern.ch [York University, Department of Physics and Astronomy (Canada); Ashkezari, M. D. [Simon Fraser University, Department of Physics (Canada); Andresen, G. B. [Aarhus University, Department of Physics and Astronomy (Denmark); Baquero-Ruiz, M. [University of California, Department of Physics (United States); Bertsche, W. [Swansea University, Department of Physics (United Kingdom); Bowe, P. D. [Aarhus University, Department of Physics and Astronomy (Denmark); Butler, E. [CERN, Physics Department (Switzerland); Cesar, C. L. [Universidade Federal do Rio de Janeiro, Instituto de Fisica (Brazil); Chapman, S. [University of California, Department of Physics (United States); Charlton, M.; Deller, A.; Eriksson, S. [Swansea University, Department of Physics (United Kingdom); Fajans, J. [University of California, Department of Physics (United States); Friesen, T.; Fujiwara, M. C. [University of Calgary, Department of Physics and Astronomy (Canada); Gill, D. R. [TRIUMF (Canada); Gutierrez, A. [University of British Columbia, Department of Physics and Astronomy (Canada); Hangst, J. S. [Aarhus University, Department of Physics and Astronomy (Denmark); Hardy, W. N. [University of British Columbia, Department of Physics and Astronomy (Canada); Hayano, R. S. [University of Tokyo, Department of Physics (Japan); Collaboration: ALPHA Collaboration; and others

    2012-12-15

    The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three layered silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with a good resolution and is more efficient than the standard method currently used for the same purpose.

  3. A dynamic evolutionary clustering perspective: Community detection in signed networks by reconstructing neighbor sets

    Science.gov (United States)

    Chen, Jianrui; Wang, Hua; Wang, Lina; Liu, Weiwei

    2016-04-01

    Community detection in social networks has been intensively studied in recent years. In this paper, a novel similarity measurement is defined according to social balance theory for signed networks. Inter-community positive links are found and deleted due to their low similarity, and the positive neighbor sets are reconstructed by this method. Then, differential equations are proposed to imitate the constantly changing states of nodes: each node updates its state based on the difference between its state and the average state of its positive neighbors, so that nodes in the same community evolve together over time while nodes in different communities evolve apart. Communities are detected once the states of the nodes are stable. Experiments on real-world and synthetic networks are carried out to verify the detection performance, and thorough comparisons demonstrate that the presented method is more efficient than two well-established algorithms.
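    The node-state dynamics can be illustrated with a generic consensus ODE over the reconstructed positive neighbor sets; the sketch below is a stand-in for the paper's exact differential equations:

```python
import numpy as np

def evolve_states(pos_neighbors, n_steps=200, dt=0.05, seed=0):
    """Euler-integrate dx_i/dt = mean_{j in N+(i)} x_j - x_i over the
    positive neighbor sets; nodes in the same community converge to a
    common state value, separating the communities."""
    n = len(pos_neighbors)
    x = np.random.default_rng(seed).random(n)
    for _ in range(n_steps):
        avg = np.array([x[list(nb)].mean() if nb else x[i]
                        for i, nb in enumerate(pos_neighbors)])
        x += dt * (avg - x)
    return x

# two 3-node communities with positive links only inside each community
neighbors = [{1, 2}, {0, 2}, {0, 1}, {4, 5}, {3, 5}, {3, 4}]
print(np.round(evolve_states(neighbors), 3))   # two distinct state levels
```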

  4. Reconstruction methods for phase-contrast tomography

    Energy Technology Data Exchange (ETDEWEB)

    Raven, C.

    1997-02-01

    Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r ≪ d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. Here the image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam with a beam diffracted by the object or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility of obtaining three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction are different.

  5. Reconstruction and analysis of nutrient-induced phosphorylation networks in Arabidopsis thaliana.

    Directory of Open Access Journals (Sweden)

    Guangyou eDuan

    2013-12-01

    Elucidating the dynamics of molecular processes in living organisms in response to external perturbations is a central goal in modern systems biology. We investigated the dynamics of protein phosphorylation events in Arabidopsis thaliana exposed to changing nutrient conditions. Phosphopeptide expression levels were detected at five consecutive time points over a time interval of 30 minutes after nutrient resupply following prior starvation. The three tested inorganic, ionic nutrients (NH₄⁺, NO₃⁻, PO₄³⁻) elicited similar phosphosignaling responses that were distinguishable from those invoked by the sugars (mannitol, sucrose). When embedded in the protein-protein interaction network of Arabidopsis thaliana, phosphoproteins were found to exhibit a higher degree compared to average proteins. Based on the time-series data, we reconstructed a network of regulatory interactions mediated by phosphorylation. The performance of different network inference methods was evaluated by the observed likelihood of physical interactions within and across different subcellular compartments and based on gene ontology semantic similarity. The dynamic phosphorylation network was then reconstructed using a Pearson correlation method with added directionality based on partial variance differences. The topology of the inferred integrated network corresponds to an information dissemination architecture, in which the phosphorylation signal is passed on to an increasing number of phosphoproteins stratified into an initiation, processing, and effector layer. Specific phosphorylation peptide motifs associated with the distinct layers were identified, indicating the action of layer-specific kinases. Despite the limited temporal resolution, combined with information on subcellular location, the available time-series data proved useful for reconstructing the dynamics of the molecular signaling cascade in response to nutrient stress conditions in the plant Arabidopsis thaliana.
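    As a loose sketch of the final reconstruction step, the following builds a thresholded Pearson-correlation network and orients each edge by a simple variance-difference rule; that direction rule is an assumption standing in for the paper's partial-variance criterion:

```python
import numpy as np

def corr_network(X, names, thr=0.8):
    """Keep edges where |Pearson r| across time points exceeds `thr`;
    orient each edge from the less variable to the more variable
    phosphoprotein (an assumed stand-in for the partial-variance rule)."""
    C = np.corrcoef(X)              # X: (proteins, time points)
    var = X.var(axis=1)
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(C[i, j]) >= thr:
                src, dst = (i, j) if var[i] < var[j] else (j, i)
                edges.append((names[src], names[dst], round(C[i, j], 2)))
    return edges

# five time points for three hypothetical phosphoproteins
X = np.array([[0.1, 0.5, 0.9, 1.0, 1.1],
              [0.0, 0.4, 1.0, 1.3, 1.2],
              [1.0, 0.9, 0.2, 0.1, 0.0]])
print(corr_network(X, ["pA", "pB", "pC"]))
```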

  6. A consensus yeast metabolic network reconstruction obtained from a community approach to systems biology

    NARCIS (Netherlands)

    Herrgård, Markus J.; Swainston, Neil; Dobson, Paul; Dunn, Warwick B.; Arga, K. Yalçin; Arvas, Mikko; Blüthgen, Nils; Borger, Simon; Costenoble, Roeland; Heinemann, Matthias; Hucka, Michael; Novère, Nicolas Le; Li, Peter; Liebermeister, Wolfram; Mo, Monica L.; Oliveira, Ana Paula; Petranovic, Dina; Pettifer, Stephen; Simeonidis, Evangelos; Smallbone, Kieran; Spasić, Irena; Weichart, Dieter; Brent, Roger; Broomhead, David S.; Westerhoff, Hans V.; Kırdar, Betül; Penttilä, Merja; Klipp, Edda; Palsson, Bernhard Ø.; Sauer, Uwe; Oliver, Stephen G.; Mendes, Pedro; Nielsen, Jens; Kell, Douglas B.

    2008-01-01

    Genomic data allow the large-scale manual or semi-automated assembly of metabolic network reconstructions, which provide highly curated organism-specific knowledge bases. Although several genome-scale network reconstructions describe Saccharomyces cerevisiae metabolism, they differ in scope and

  7. Reconstruction of networks from one-step data by matching positions

    Science.gov (United States)

    Wu, Jianshe; Dang, Ni; Jiao, Yang

    2018-05-01

    Estimating the topology of a network from short time series data is a challenge. In this paper, a matching-positions method is developed to reconstruct the topology of a network from only one-step data. We consider a general network model of coupled agents, in which the phase transformation of each node is determined by its neighbors. From the phase transformation information from one step to the next, the connections of the tail vertices are first reconstructed by matching positions. By removing the already reconstructed vertices and repeatedly reconstructing the connections of the tail vertices, the topology of the entire network is reconstructed. For sparse scale-free networks with more than ten thousand nodes, we recover almost the entire actual topology using only the one-step data in simulations.

  8. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  9. Filter-based reconstruction methods for tomography

    NARCIS (Netherlands)

    Pelt, D.M.

    2016-01-01

    In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally

  10. Comparison of Force Reconstruction Methods for a Lumped Mass Beam

    Directory of Open Access Journals (Sweden)

    Vesta I. Bateman

    1997-01-01

    Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT), are presented in this article. SWAT requires the use of the structure's elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid-body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input). The second technique uses the free-decay time response of the structure with the rigid-body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time-eliminated elastic modes). All three methods are used to reconstruct forces for a simple structure.
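
    The essence of SWAT is a weighting vector chosen so that the elastic modes cancel out of the weighted sum of measured accelerations, leaving only the rigid-body (net force) contribution. The sketch below illustrates that idea under simplifying assumptions (one rigid-body mode, known mode-shape matrices); the function names and array shapes are ours, not the article's.

    ```python
    import numpy as np

    def swat_weights(phi_rigid, phi_elastic, total_mass):
        """Weights w with w @ phi_elastic = 0 (elastic modes eliminated)
        and w @ phi_rigid = total_mass (rigid-body force scaling).
        phi_rigid: (n_sensors, 1); phi_elastic: (n_sensors, n_elastic)."""
        A = np.hstack([phi_elastic, phi_rigid]).T
        b = np.concatenate([np.zeros(phi_elastic.shape[1]), [total_mass]])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w

    def reconstruct_force(w, accelerations):
        """Force time history as the weighted sum of measured accelerations;
        accelerations is (n_sensors, n_samples)."""
        return w @ accelerations
    ```

    The two new techniques in the record differ only in how the weighting vector is calibrated (a measured force input for SWAT-CAL, the free-decay response for SWAT-TEEM), not in this final weighted-sum step.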

  11. Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection.

    Science.gov (United States)

    Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne

    2005-04-15

    The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse-engineering strategy aimed at the reconstruction of this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series). The number of clusters was optimized using various evaluation criteria. For each cluster, a representative gene with a high fuzzy membership was chosen in accordance with available physiological knowledge. Hypothetical network structures were then identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD) based methods were compared with a newly introduced heuristic network generation method. It turned out that the proposed novel method could find sparser networks and gave better fits to the experimental data. Reinhard.Guthke@hki-jena.de.

  12. Choosing the best ancestral character state reconstruction method.

    Science.gov (United States)

    Royer-Carenzi, Manuela; Pontarotti, Pierre; Didier, Gilles

    2013-03-01

    Despite its intrinsic difficulty, ancestral character state reconstruction is an essential tool for testing evolutionary hypotheses. Two major classes of approaches to this question can be distinguished: parsimony- or likelihood-based approaches. We focus here on the second class of methods, more specifically on approaches based on continuous-time Markov modeling of character evolution. Among them, we consider the most-likely-ancestor reconstruction, the posterior-probability reconstruction, the likelihood-ratio method, and the Bayesian approach. We discuss and compare the above-mentioned methods over several phylogenetic trees, adding the maximum-parsimony method's performance to the comparison. Under the assumption that the character evolves according to a continuous-time Markov process, we compute and compare the expectations of success of each method for a broad range of model parameter values. Moreover, we show how knowledge of the evolution model parameters allows one to compute upper bounds on reconstruction performance, which are provided as references. The results of all these reconstruction methods are quite close to one another, and the expectations of success are not so far from their theoretical upper bounds. But the performance ranking heavily depends on the topology of the studied tree, on the ancestral node that is to be inferred, and on the parameter values. Consequently, we propose a protocol providing, for each parameter value, the best method in terms of expectation of success, with regard to the phylogenetic tree and the ancestral node to infer. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.

    2013-07-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes, by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes; a deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based on pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach that uses high-resolution earthquake data sets at largely different scales and considers individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. The method consists of fitting a given set of hypocenters with an increasing number of finite planes until the residuals of the fit are comparable to the location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and the Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  14. Assessing the accuracy of ancestral protein reconstruction methods.

    Directory of Open Access Journals (Sweden)

    Paul D Williams

    2006-06-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.

  15. Assessing the accuracy of ancestral protein reconstruction methods.

    Science.gov (United States)

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-06-23

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.

  16. Deep Learning Methods for Particle Reconstruction in the HGCal

    CERN Document Server

    Arzi, Ofir

    2017-01-01

    The High Granularity end-cap Calorimeter is part of the phase-2 CMS upgrade [Contardo:2020886]. Its goal is to provide measurements of high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the task of particle and trajectory reconstruction, identification and energy estimation, during my participation in the CERN Summer Students Program.

  17. Deep Learning Methods for Particle Reconstruction in the HGCal

    CERN Document Server

    Arzi, Ofir

    2017-01-01

    The High Granularity end-cap Calorimeter is part of the phase-2 CMS upgrade (see Figure 1) [1]. Its goal is to provide measurements of high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the task of particle and trajectory reconstruction, identification and energy estimation, during my participation in the CERN Summer Students Program.

  18. Multiple network interface core apparatus and method

    Science.gov (United States)

    Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  19. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data after being tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  20. Image reconstruction in computerized tomography using the convolution method

    International Nuclear Information System (INIS)

    Oliveira Rebelo, A.M. de.

    1984-03-01

    In the present work an algorithm was derived, using the analytical convolution method (filtered back-projection), for two-dimensional or three-dimensional image reconstruction in computerized tomography applied to non-destructive testing and to medical use. This mathematical model is based on the analytical Fourier transform method for image reconstruction. The model consists of a discrete system formed by an NxN array of cells (pixels). The attenuation in the object under study of a collimated gamma-ray beam has been determined for various positions and incidence angles (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function W ij, which was used for simulated tests. Simulated tests using standard objects with attenuation coefficients in the range of 0.2 to 0.7 cm⁻¹ were carried out using cell arrays of up to 25x25. One application was carried out in the medical area, simulating image reconstruction of an arm phantom with attenuation coefficients in the range of 0.2 to 0.5 cm⁻¹ using cell arrays of 41x41. The simulated results show that, in objects with a great number of interfaces and great variations of attenuation coefficients at these interfaces, a good reconstruction is obtained when the number of projections equals the reconstruction matrix dimension. Otherwise, a good reconstruction is obtained with fewer projections.
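
    As a point of reference for the convolution (filtered back-projection) approach this record describes, here is a minimal, generic sketch: ramp-filter each projection in the Fourier domain, then back-project with nearest-neighbour interpolation. The geometry conventions are assumed; this is not the author's code.

    ```python
    import numpy as np

    def fbp_reconstruct(sinogram, thetas_deg):
        """sinogram: (n_angles, n_bins), one row per projection angle."""
        n_angles, n_bins = sinogram.shape
        # Ram-Lak (ramp) filter applied per projection in the Fourier domain
        ramp = np.abs(np.fft.fftfreq(n_bins))
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                       axis=1))
        # Back-project each filtered view over the image grid
        mid = n_bins // 2
        xs = np.arange(n_bins) - mid
        X, Y = np.meshgrid(xs, xs)
        image = np.zeros((n_bins, n_bins))
        for view, theta in zip(filtered, np.deg2rad(thetas_deg)):
            t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate
            idx = np.clip(np.round(t).astype(int) + mid, 0, n_bins - 1)
            image += view[idx]
        return image * np.pi / (2 * n_angles)
    ```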

  1. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2013-01-01

    Image reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior

  2. A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Oktay Büyükaşık

    2010-12-01

    Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken by upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases reconstructed without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly with omega esophagojejunostomy (EJ) and least with Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch after total gastrectomy remains a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)

  3. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    The swipe sensor is one of many biometric authentication sensor types widely applied in embedded devices. The sensor produces an overlap on every pixel block of the image, so the image requires a reconstruction process before feature extraction. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing resources. In this paper, image reconstruction using a predictive overlap method is proposed, which determines the image block shift from the previous set of change data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 x 8 pixels, where each image has an overlap in each block. The results reveal that computational performance can improve by up to 86.44% compared with conventional methods, while accuracy decreases by only 0.008% on average.

  4. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
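
    The matrix-based Maximum Likelihood Estimator idea, reconstruction from an explicit system matrix without inverting it, survives today as the MLEM algorithm. A minimal sketch of the multiplicative update (our notation, not the report's):

    ```python
    import numpy as np

    def mlem(A, counts, n_iter=50):
        """A: (n_detectors, n_pixels) system matrix; counts: measured data.
        Each iteration rescales the image by back-projected data ratios;
        no matrix inversion is needed."""
        x = np.ones(A.shape[1])          # flat initial image
        sensitivity = A.sum(axis=0)      # A^T 1
        for _ in range(n_iter):
            proj = A @ x                                  # forward projection
            ratio = counts / np.maximum(proj, 1e-12)      # data / estimate
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x
    ```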

  5. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    An experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the stability requirements on the installation. A major role in phase-information reconstruction by such methods is played by a set of spatial intensity distributions, recorded as the recording matrix is moved along the optical axis. The obtained data are used successively for wavefront reconstruction in an iterative procedure, in the course of which the wavefront is numerically propagated between the planes. Thus, phase information of the wavefront is stored in every plane, and the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. The holographic method is shown to be the best among those considered for reconstructing the object's complex amplitude as the object amplitude is reduced.
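
    The second iterative method described (angular-spectrum propagation with amplitude replacement at each measured plane) can be sketched as below. This is a generic multi-plane phase-retrieval loop, not the authors' implementation; equally spaced planes and square sampling are assumed.

    ```python
    import numpy as np

    def angular_spectrum(field, dz, wavelength, dx):
        """Propagate a sampled complex field by dz (angular spectrum method;
        evanescent components are simply suppressed here)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = np.maximum(1.0 - (wavelength * FX) ** 2
                             - (wavelength * FY) ** 2, 0.0)
        kz = 2 * np.pi / wavelength * np.sqrt(arg)
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    def multiplane_retrieval(intensities, dz, wavelength, dx, n_sweeps=20):
        """Keep the evolving phase, replace the amplitude with the measured
        one at every plane, and sweep through the plane stack repeatedly."""
        field = np.sqrt(intensities[0]).astype(complex)
        for _ in range(n_sweeps):
            for I in intensities[1:]:
                field = angular_spectrum(field, dz, wavelength, dx)
                field = np.sqrt(I) * np.exp(1j * np.angle(field))
            field = angular_spectrum(field, -dz * (len(intensities) - 1),
                                     wavelength, dx)
            field = np.sqrt(intensities[0]) * np.exp(1j * np.angle(field))
        return field
    ```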

  6. EnzDP: improved enzyme annotation for metabolic network reconstruction based on domain composition profiles.

    Science.gov (United States)

    Nguyen, Nam-Ninh; Srihari, Sriganesh; Leong, Hon Wai; Chong, Ket-Fah

    2015-10-01

    Determining the entire complement of enzymes and their enzymatic functions is a fundamental step in reconstructing the metabolic network of a cell. High-quality enzyme annotation helps to enhance metabolic networks reconstructed from the genome, especially by reducing gaps and increasing enzyme coverage. Currently, structure-based and network-based approaches can only cover a limited number of enzyme families, and the accuracy of homology-based approaches can be further improved. The bottom-up homology-based approach improves coverage by rebuilding Hidden Markov Model (HMM) profiles for all known enzymes. However, its clustering procedure relies heavily on BLAST similarity scores, ignores protein domains/patterns, and is sensitive to changes in cut-off thresholds. Here, we use functional domain architecture to score the association between domain families and enzyme families (Domain-Enzyme Association Scoring, DEAS). The DEAS score is used to calculate the similarity between proteins, which is then used in the clustering procedure instead of the sequence similarity score. We improve the enzyme annotation protocol using a stringent classification procedure, by choosing optimal threshold settings, and by checking for active sites. Our analysis shows that our stringent protocol EnzDP can cover up to 90% of enzyme families available in Swiss-Prot. It achieves a high accuracy of 94.5% based on five-fold cross-validation. EnzDP outperforms existing methods across several testing scenarios. Thus, EnzDP serves as a reliable automated tool for enzyme annotation and metabolic network reconstruction. Available at: www.comp.nus.edu.sg/~nguyennn/EnzDP.

  7. Network modelling methods for FMRI.

    Science.gov (United States)

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
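
    One of the approaches the study ranks among the most sensitive, partial correlation, is easy to state concretely: normalise the off-diagonal entries of the inverse covariance (precision) matrix. A minimal sketch follows; in practice one would regularise the inversion, as in the regularised inverse covariance methods the record also mentions.

    ```python
    import numpy as np

    def partial_correlation(timeseries):
        """timeseries: (n_timepoints, n_nodes). Returns the matrix of
        partial correlations between node pairs, conditioning on all
        remaining nodes."""
        cov = np.cov(timeseries, rowvar=False)
        theta = np.linalg.inv(cov)               # precision matrix
        d = np.sqrt(np.diag(theta))
        pcorr = -theta / np.outer(d, d)
        np.fill_diagonal(pcorr, 1.0)
        return pcorr
    ```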

  8. Probability Density Function Method for Observing Reconstructed Attractor Structure

    Institute of Scientific and Technical Information of China (English)

    陆宏伟; 陈亚珠; 卫青

    2004-01-01

    A probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor when computing the correlation dimensions of the RR intervals of ten healthy elderly men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time the PDF method has been put forward for the analysis of reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy elderly men are complex dynamical systems of dimension about 6-6.5. It is found that the PDF is not symmetrically distributed when the time delay is small, while it satisfies a Gaussian distribution when the time delay is large enough. A cluster-effect mechanism is presented to explain this phenomenon. Studying the shape of the PDFs clearly indicates that the time delay plays a more important role in the reconstruction than the embedding dimension. The results demonstrate that the PDF method is a promising numerical approach for observing the structure of reconstructed attractors and may provide more information and new diagnostic potential for the analyzed cardiac system.
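
    To make the procedure concrete, a minimal sketch of a delay embedding plus a histogram estimate of the phase-point distribution is given below. What exactly to histogram (here, pairwise distances between phase points) is our illustrative assumption; the paper's precise PDF construction may differ.

    ```python
    import numpy as np

    def delay_embed(x, dim, tau):
        """Time-delay embedding of a scalar series into dim dimensions."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    def attractor_pdf(x, dim=6, tau=10, bins=50, n_pairs=5000, seed=0):
        """Histogram-estimated PDF of distances between random pairs of
        phase points, one summary of their spatial distribution."""
        pts = delay_embed(np.asarray(x, dtype=float), dim, tau)
        rng = np.random.default_rng(seed)
        i = rng.integers(0, len(pts), n_pairs)
        j = rng.integers(0, len(pts), n_pairs)
        dists = np.linalg.norm(pts[i] - pts[j], axis=1)
        pdf, edges = np.histogram(dists, bins=bins, density=True)
        return pdf, edges
    ```

    Recomputing the PDF for several values of tau reproduces the kind of delay-dependence study the abstract describes.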

  9. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  10. Network Forensics Method Based on Evidence Graph and Vulnerability Reasoning

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2016-11-01

    As the Internet becomes larger in scale, more complex in structure and more diversified in traffic, the number of crimes that utilize computer technologies is also increasing at a phenomenal rate. To react to the increasing number of computer crimes, the field of computer and network forensics has emerged. The general purpose of network forensics is to find malicious users or activities by gathering and dissecting firm evidence of computer crimes, e.g., hacking. However, due to the large volume of Internet traffic, not all the traffic captured and analyzed is valuable for investigation or confirmation. After analyzing some existing network forensics methods to identify common shortcomings, we propose in this paper a new network forensics method that combines network vulnerability information with a network evidence graph. In our proposed method, we use vulnerability evidence and a reasoning algorithm to reconstruct attack scenarios and then backtrack the network packets to find the original evidence. Our proposed method can reconstruct attack scenarios effectively and then identify multi-staged attacks through evidential reasoning. Experimental results show that the evidence graph constructed using our method is more complete and credible while also providing reasoning capability.

  11. Least Squares Methods for Equidistant Tree Reconstruction

    OpenAIRE

    Fahey, Conor; Hosten, Serkan; Krieger, Nathan; Timpe, Leslie

    2008-01-01

    UPGMA is a heuristic method for identifying the least squares equidistant phylogenetic tree given empirical distance data among $n$ taxa. We study this classic algorithm using the geometry of the space of all equidistant trees with $n$ leaves, also known as the Bergman complex of the graphical matroid for the complete graph $K_n$. We show that UPGMA performs an orthogonal projection of the data onto a maximal cell of the Bergman complex. We also show that the equidistant tree with the least (Eucl...
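
    Operationally, UPGMA is average-linkage hierarchical clustering on the distance matrix, so its output can be obtained with standard tools. A minimal sketch, with an illustrative four-taxon distance matrix of our own:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree
    from scipy.spatial.distance import squareform

    # Illustrative pairwise distances among four taxa (symmetric, zero diagonal)
    D = np.array([[0., 2., 6., 6.],
                  [2., 0., 6., 6.],
                  [6., 6., 0., 4.],
                  [6., 6., 4., 0.]])

    # UPGMA = average linkage on the condensed distance matrix; the merge
    # heights in Z define the (ultrametric) equidistant tree.
    Z = linkage(squareform(D), method='average')
    root = to_tree(Z)
    print(Z)
    ```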

  12. Reconstruction of sparse connectivity in neural networks from spike train covariances

    International Nuclear Information System (INIS)

    Pernice, Volker; Rotter, Stefan

    2013-01-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model.
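
    A flavour of the L1 idea can be given with an ordinary Lasso regression. The stand-in model below assumes a linear relation between zero-lag and one-lag covariances (C1 ≈ W C0) and penalises the L1 norm of the couplings row by row; the paper's actual constraint set, built on the covariance functions of the linear coupling-kernel model, is more involved.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_couplings(C0, C1, alpha=0.05):
        """Estimate a sparse coupling matrix W from covariances under the
        illustrative linear model C1 ≈ W C0 (C0 symmetric)."""
        n = C0.shape[0]
        W = np.zeros((n, n))
        for i in range(n):
            lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
            lasso.fit(C0, C1[i])      # L1-penalised fit of row i of W
            W[i] = lasso.coef_
        return W
    ```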

  13. Comparison of four surgical methods for eyebrow reconstruction

    Directory of Open Access Journals (Sweden)

    Omranifard Mahmood

    2007-01-01

    Background: The eyebrow plays an important role in facial harmony and eye protection. Eyebrows can be injured by burns, trauma, tumours, tattooing and alopecia. Eyebrow reconstruction has been performed via several techniques. Here, our experience with a fairly new method for eyebrow reconstruction is presented. Materials and Methods: This is a descriptive-analytical study of 76 patients at the Al-Zahra and Imam Mousa Kazem hospitals of Isfahan University of Medical Sciences, Isfahan, Iran, from 1994 to 2004. In total, 86 eyebrows were reconstructed. All patients were examined before and after the operation. Methods commonly applied in eyebrow reconstruction are as follows: 1. Superficial Temporal Artery Flap (Island), 2. Interpolation Scalp Flap, 3. Graft. Our method, named the Forehead Facial Island Flap with an inferior pedicle, provides an easier approach for the surgeon and a more ideal hair-growth direction for the patient. Results: Significantly lower complication rates along with greater patient satisfaction were obtained with the Forehead Facial Island Flap. Conclusions: According to the acquired results, this method seems to be more technically practical and aesthetically favourable when compared to the others.

  14. Quantitative Efficiency Evaluation Method for Transportation Networks

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2014-11-01

    An effective evaluation of transportation network efficiency/performance is essential to the establishment of sustainable development in any transportation system. Based on a redefinition of transportation network efficiency, a quantitative efficiency evaluation method for transportation network is proposed, which could reflect the effects of network structure, traffic demands, travel choice, and travel costs on network efficiency. Furthermore, the efficiency-oriented importance measure for network components is presented, which can be used to help engineers identify the critical nodes and links in the network. The numerical examples show that, compared with existing efficiency evaluation methods, the network efficiency value calculated by the method proposed in this paper can portray the real operation situation of the transportation network as well as the effects of main factors on network efficiency. We also find that the network efficiency and the importance values of the network components both are functions of demands and network structure in the transportation network.

  15. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on the quantitative assay and non-destructive measurement of radioactive waste, we have developed a unique program based on Bayesian theory for the reconstruction of transmission computed tomography (TCT) images. The reconstruction of cross-section images in CT usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method, and it iteratively improves the image with every measurement step. Namely, this method can promptly display a cross-section image corresponding to the projection data from each measurement angle. Hence, it is possible to observe an improved cross-section view reflecting each projection in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied not only to CT of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program for cross-section images in the CT of ...

  16. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution, so the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
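
    For concreteness, a minimal TV-regularised reconstruction from undersampled Fourier (k-space) data is sketched below, using plain gradient descent on a smoothed TV penalty. The record's authors instead solve the TV problem with the fixed point iteration of Vogel and Oman; this simpler scheme only illustrates the objective being minimised.

    ```python
    import numpy as np

    def tv_grad(x, eps=1e-8):
        """Gradient of a smoothed isotropic total-variation penalty."""
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return -div

    def tv_recon(kspace, mask, lam=0.01, step=0.5, n_iter=200):
        """Minimise ||M F x - y||^2 + lam * TV(x) by gradient descent.
        kspace: measured data y (zeros where unsampled); mask: sampling M."""
        x = np.abs(np.fft.ifft2(kspace * mask))       # zero-filled start
        for _ in range(n_iter):
            resid = mask * (np.fft.fft2(x) - kspace)
            grad_data = np.real(np.fft.ifft2(resid))
            x = x - step * (grad_data + lam * tv_grad(x))
        return x
    ```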

  17. Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.

    Science.gov (United States)

    Benazzi, Stefano; Senck, Sascha

    2011-04-01

    In the present project, the virtual reconstruction of digitally osteotomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best-fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (i.e., when the hemifaces are asymmetrical). In the present pilot study, we have verified that best-fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Recurrent neural network based hybrid model for reconstructing gene regulatory network.

    Science.gov (United States)

    Raza, Khalid; Alam, Mansaf

    2016-10-01

    One of the exciting problems in systems biology research is to decipher how the genome controls the development of complex biological systems. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information related to the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including the identification of disease-related pathways that provide novel tentative drug targets, the prediction of disease response, and assistance in diagnosing various diseases including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight updates in the backpropagation-through-time training algorithm. The RNN is a complex neural network that gives a better compromise between biological closeness and mathematical flexibility for modeling GRNs; it is also able to capture complex, non-linear and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well for estimation problems even on noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight updates during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. A comparison of our results with other state-of-the-art techniques shows the superiority of our proposed model. Further, 5% Gaussian noise was induced in the dataset, and the results show a negligible effect of the noise, demonstrating the noise-tolerance capability of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. DG TOMO: A new method for tomographic reconstruction

    International Nuclear Information System (INIS)

    Freitas, D. de; Feschet, F.; Cachin, F.; Geissler, B.; Bapt, A.; Karidioula, I.; Martin, C.; Kelly, A.; Mestas, D.; Gerard, Y.; Reveilles, J.P.; Maublant, J.

    2006-01-01

    Aim: FBP and OSEM are the most popular tomographic reconstruction methods in scintigraphy. FBP is a simple method, but it generates reconstruction artifacts whose correction degrades the spatial resolution. OSEM takes statistical fluctuations into account, but noise increases strongly after a certain number of iterations. We compare a new method of tomographic reconstruction based on discrete geometry (DG TOMO) to FBP and OSEM. Materials and methods: Acquisitions were performed on a three-head gamma-camera (Philips) with a NEMA phantom containing six spheres with inner diameters from 10 to 37 mm, filled with around 325 MBq/l of technetium-99m. The spheres were positioned in water containing 3 MBq/l of technetium-99m. Acquisitions were realized during a 180° rotation around the phantom in 25-s steps. DG TOMO was developed in our laboratory in order to minimize the number of projections at acquisition. Tomographic reconstructions utilizing 32 and 16 projections with FBP, OSEM and DG TOMO were performed and transverse slices were compared. Results: FBP with 32 projections detects the activity only in the three largest spheres (diameter ≥22 mm). With 16 projections, the star effect is predominant and the contrast of the third sphere is very low. OSEM with 32 projections provides a better image, but the three smallest spheres (diameter ≤17 mm) are difficult to distinguish. With 16 projections, the three smallest spheres are not detectable. The results of DG TOMO are similar to OSEM. Conclusion: Since the parameters of DG TOMO can be further optimized, this method appears to be a promising alternative for tomoscintigraphic reconstruction.

  20. Parallel MR image reconstruction using augmented Lagrangian methods.

    Science.gov (United States)

    Ramani, Sathish; Fessler, Jeffrey A

    2011-03-01

    Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data--SENSE-reconstruction--using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE-reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., l(1)-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.

  1. lpNet: a linear programming approach to reconstruct signal transduction networks.

    Science.gov (United States)

    Matos, Marta R A; Knapp, Bettina; Kaderali, Lars

    2015-10-01

    With the widespread availability of high-throughput experimental technologies, it has become possible to study hundreds to thousands of cellular factors simultaneously, such as coding or non-coding mRNA or protein concentrations. Still, extracting information about the underlying regulatory or signaling interactions from these data remains a difficult challenge. We present a flexible approach to network inference based on linear programming. Our method reconstructs the interactions of factors from a combination of perturbation/non-perturbation and steady-state/time-series data. We show on both simulated and real data that our methods are able to reconstruct the underlying networks quickly and efficiently, thus shedding new light on biological processes and, in particular, on disease mechanisms of action. We have implemented the approach as an R package available through Bioconductor. This R package is freely available under the GNU Public License (GPL-3) from bioconductor.org (http://bioconductor.org/packages/release/bioc/html/lpNet.html) and is compatible with most operating systems (Windows, Linux, Mac OS) and hardware architectures. bettina.knapp@helmholtz-muenchen.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
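
    To illustrate the linear-programming flavour of such network inference (a simplified stand-in, not the lpNet formulation itself), one can split each edge weight into positive and negative parts, require each perturbed steady state to approximately satisfy the linear network equations, and minimise the total absolute edge weight:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def lp_network(X, P, eps=0.1):
        """X: (n_experiments, n_genes) steady states; P: the matching
        perturbations. Find W minimising sum |W_ij| subject to
        |W @ x_k + p_k| <= eps for every experiment k."""
        n_exp, n = X.shape
        n_var = 2 * n * n                    # w+ and w- for each entry of W
        c = np.ones(n_var)                   # L1 objective: sum(w+) + sum(w-)
        A_ub, b_ub = [], []
        for k in range(n_exp):
            for i in range(n):
                row = np.zeros(n_var)
                row[i * n:(i + 1) * n] = X[k]                    # w+ block
                row[n * n + i * n: n * n + (i + 1) * n] = -X[k]  # w- block
                A_ub.append(row);  b_ub.append(eps - P[k, i])
                A_ub.append(-row); b_ub.append(eps + P[k, i])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * n_var, method='highs')
        w = res.x[:n * n] - res.x[n * n:]
        return w.reshape(n, n)
    ```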

  2. Methods for reconstruction of the density distribution of nuclear power

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2015-01-01

    Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly smaller in the inner area of the core. Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values, namely the four average partial currents on the surfaces of the node, the average flux in the node, and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node, an analytical solution is employed, which uses the average fluxes on the surfaces of the node as boundary conditions and incorporates the discontinuities in the corners. The polynomial and analytical solutions for the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside an FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, form functions of power are used. The results show that the methods have good accuracy when compared with reference values and

  3. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.

  4. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither previous camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for the automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  5. The gridding method for image reconstruction by Fourier transformation

    International Nuclear Information System (INIS)

    Schomberg, H.; Timmer, J.

    1995-01-01

    This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function w and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = w·f is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
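
    A one-dimensional toy version of the three steps makes the procedure concrete. A Gaussian window stands in for the optimised kernels used in practice, and normalisation constants, oversampling and window truncation are all omitted:

    ```python
    import numpy as np

    def gridding_1d(sample_freqs, sample_values, n=256, sigma=0.01):
        """sample_freqs: nonuniform frequencies (cycles/sample, fftfreq
        convention); sample_values: samples of the Fourier transform."""
        ks = np.fft.fftfreq(n)
        grid = np.zeros(n, dtype=complex)
        for f0, v in zip(sample_freqs, sample_values):
            # step 1: convolve the samples with the window onto the grid
            grid += v * np.exp(-0.5 * ((ks - f0) / sigma) ** 2)
        g = np.fft.ifft(grid)                 # step 2: g = w . f
        x = np.arange(n)
        x[x > n // 2] -= n                    # signed sample coordinate
        w = np.exp(-2.0 * (np.pi * sigma * x) ** 2)  # spatial window (up to scale)
        return g / w                          # step 3: divide out the window
    ```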

  6. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ε. It is proved that, for any given ε that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ε. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ε of the half-squared residual.
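
    The superiorization idea, interleaving small objective-reducing perturbations with the steps of an iterative solver, can be sketched as follows. For brevity, the solver direction is reset after every perturbation, which reduces CG to a restarted gradient step on the normal equations; the S-CG variants in the paper retain proper conjugate directions.

    ```python
    import numpy as np

    def tv_grad(x, eps=1e-8):
        """Gradient of a smoothed total-variation penalty (2D)."""
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        div = ((gx / mag) - np.roll(gx / mag, 1, axis=0)
               + (gy / mag) - np.roll(gy / mag, 1, axis=1))
        return -div

    def superiorized_solve(A, b, shape, n_iter=50, beta0=1.0, kappa=0.9):
        """Least-squares solve of A x = b with TV-reducing perturbations
        of geometrically shrinking size beta0 * kappa**k."""
        x = np.zeros(A.shape[1])
        beta = beta0
        for _ in range(n_iter):
            g = tv_grad(x.reshape(shape)).ravel()     # superiorization step
            gn = np.linalg.norm(g)
            if gn > 0:
                x -= beta * g / gn
            beta *= kappa
            r = A.T @ (b - A @ x)                     # solver step (restarted)
            Ar = A.T @ (A @ r)
            denom = r @ Ar
            if denom > 0:
                x += (r @ r) / denom * r
        return x.reshape(shape)
    ```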

  7. Reconstruction and validation of RefRec: a global model for the yeast molecular interaction network.

    Directory of Open Access Journals (Sweden)

    Tommi Aho

    2010-05-01

    Molecular interaction networks establish all cell biological processes. The networks are under intensive research that is facilitated by new high-throughput measurement techniques for the detection, quantification, and characterization of molecules and their physical interactions. For the common model organism yeast Saccharomyces cerevisiae, public databases store a significant part of the accumulated information and, on the way to better understanding of the cellular processes, there is a need to integrate this information into a consistent reconstruction of the molecular interaction network. This work presents and validates RefRec, the most comprehensive molecular interaction network reconstruction currently available for yeast. The reconstruction integrates protein synthesis pathways, a metabolic network, and a protein-protein interaction network from major biological databases. The core of the reconstruction is based on a reference object approach in which genes, transcripts, and proteins are identified using their primary sequences. This enables their unambiguous identification and non-redundant integration. The obtained total number of different molecular species and their connecting interactions is approximately 67,000. In order to demonstrate the capacity of RefRec for functional predictions, it was used for simulating the gene knockout damage propagation in the molecular interaction network in approximately 590,000 experimentally validated mutant strains. Based on the simulation results, a statistical classifier was subsequently able to correctly predict the viability of most of the strains. The results also showed that the usage of different types of molecular species in the reconstruction is important for accurate phenotype prediction. In general, the findings demonstrate the benefits of global reconstructions of molecular interaction networks. With all the molecular species and their physical interactions explicitly modeled, our

  8. Reconstruction and signal propagation analysis of the Syk signaling network in breast cancer cells.

    Directory of Open Access Journals (Sweden)

    Aurélien Naldi

    2017-03-01

    Full Text Available The ability to build in-depth cell signaling networks from vast experimental data is a key objective of computational biology. The spleen tyrosine kinase (Syk) protein, a well-characterized key player in immune cell signaling, was surprisingly first shown by our group to exhibit an onco-suppressive function in mammary epithelial cells, a finding corroborated by many other studies, but the molecular mechanisms of this function remain largely unsolved. Based on existing proteomic data, we report here the generation of an interaction-based network of signaling pathways controlled by Syk in breast cancer cells. Pathway enrichment of the Syk targets previously identified by quantitative phospho-proteomics indicated that Syk is engaged in cell adhesion, motility, growth and death. Using the components and interactions of these pathways, we bootstrapped the reconstruction of a comprehensive network covering Syk signaling in breast cancer cells. To generate in silico hypotheses on Syk signaling propagation, we developed a method allowing us to rank paths between Syk and its targets. We first annotated the network according to experimental datasets. We then combined shortest path computation with random walk processes to estimate the importance of individual interactions and selected biologically relevant pathways in the network. Molecular and cell biology experiments allowed us to distinguish candidate mechanisms that underlie the impact of Syk on the regulation of cortactin and ezrin, both involved in actin-mediated cell adhesion and motility. The Syk network was further completed with the results of our biological validation experiments. The resulting Syk signaling sub-networks can be explored via an online visualization platform.
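    One plausible reading of the path-ranking step, sketched with networkx: score candidate paths between Syk and a target by the random-walk (personalized PageRank) weight of their interior nodes. The toy edges and the scoring rule are illustrative assumptions, not the authors' exact algorithm.

```python
import networkx as nx

# Hypothetical signaling edges around Syk (CTTN is cortactin).
G = nx.DiGraph([("SYK", "PI3K"), ("PI3K", "AKT"), ("SYK", "VAV1"),
                ("VAV1", "RAC1"), ("RAC1", "CTTN"), ("AKT", "CTTN")])

# Random-walk importance of nodes, restarting at the Syk source.
pr = nx.pagerank(G, personalization={"SYK": 1.0})

def rank_paths(graph, source, target, cutoff=4):
    """Score each simple path by the random-walk weight of its interior nodes."""
    paths = nx.all_simple_paths(graph, source, target, cutoff=cutoff)
    scored = [(sum(pr[n] for n in p[1:-1]), p) for p in paths]
    return sorted(scored, reverse=True)

for score, path in rank_paths(G, "SYK", "CTTN"):
    print(round(score, 3), " -> ".join(path))
```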

  9. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control

    Directory of Open Access Journals (Sweden)

    Logsdon Benjamin A

    2012-04-01

    Full Text Available Abstract Background We propose a novel variational Bayes network reconstruction algorithm to extract the most relevant disease factors from high-throughput genomic datasets. Our algorithm is the only scalable method for regularized network recovery that employs Bayesian model averaging and that can internally estimate an appropriate level of sparsity to ensure that few false positives enter the model, without the need for cross-validation or a model selection criterion. We use our algorithm to characterize the effect of genetic markers and liver gene expression traits on mouse obesity-related phenotypes, including weight, cholesterol, glucose, and free fatty acid levels, in an experiment previously used for discovery and validation of network connections: an F2 intercross between the C57BL/6J and C3H/HeJ mouse strains on an apolipoprotein E-null background. Results We identified eleven genes, Gch1, Zfp69, Dlgap1, Gna14, Yy1, Gabarapl1, Folr2, Fdft1, Cnr2, Slc24a3, and Ccl19, and a quantitative trait locus directly connected to weight, glucose, cholesterol, or free fatty acid levels in our network. None of these genes were identified by other network analyses of this mouse intercross dataset, but all have been previously associated with obesity or related pathologies in independent studies. In addition, through both simulations and data analysis we demonstrate that our algorithm achieves superior power and type I error control compared with other network recovery algorithms that use the lasso and have bounds on type I error control. Conclusions Our final network contains 118 previously associated and novel genes affecting weight, cholesterol, glucose, and free fatty acid levels that are excellent obesity risk candidates.
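    For orientation, the lasso-based baselines the paper compares against amount to per-phenotype sparse regression over markers and expression traits, with cross-validation choosing the sparsity level (exactly the step the variational Bayes algorithm replaces by an internal estimate). A minimal sketch with placeholder data:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 500))      # markers + expression traits (placeholder)
weight = X[:, [3, 42]] @ [1.0, -0.8] + rng.normal(scale=0.5, size=300)

fit = LassoCV(cv=5).fit(X, weight)   # cross-validation picks the sparsity level
edges = np.flatnonzero(fit.coef_)    # predictors connected to the phenotype
print(edges)
```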

  10. Filtering of SPECT reconstructions made using Bellini's attenuation correction method

    International Nuclear Information System (INIS)

    Glick, S.J.; Penney, B.C.; King, M.A.

    1991-01-01

    This paper evaluates a three-dimensional (3D) Wiener filter used to restore SPECT reconstructions made using Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm which accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of the filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location within the liver. Projection sets for ten tumor locations were computed and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were: normalized mean square error (NMSE), cold spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, which were noticeably higher than that obtained with 1D Butterworth smoothing
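    A 2D Wiener filter of the kind compared in this study is available off the shelf; the snippet below merely demonstrates usage on a synthetic noisy slice, not the study's restoration pipeline, which was built around a measured SPECT blurring model.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(1)
slice_ = np.zeros((64, 64))
slice_[20:40, 25:45] = 100.0                 # crude "organ" region
noisy = rng.poisson(slice_).astype(float)    # Poisson counting noise

restored = wiener(noisy, mysize=5)           # local adaptive Wiener estimate
print(noisy.std(), restored.std())           # noise is visibly suppressed
```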

  11. Image reconstruction methods for the PBX-M pinhole camera

    International Nuclear Information System (INIS)

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs
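    The first method reduces to a small linear least squares problem. A minimal sketch with a made-up projection geometry (the matrix A standing in for the pinhole camera's line-of-sight weights):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_coeffs = 200, 30
A = rng.random((n_pixels, n_coeffs))     # placeholder line-of-sight weights
profile_true = rng.random(n_coeffs)
image = A @ profile_true + rng.normal(scale=0.01, size=n_pixels)

# Least squares emission profile from the recorded image (overdetermined fit).
profile, *_ = np.linalg.lstsq(A, image, rcond=None)
print(np.abs(profile - profile_true).max())  # small reconstruction error
```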

  12. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
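    Schematically, the two-way criterion combines a data-fit term with one penalty per direction. The rendering below is a plausible form for illustration; the exact penalties and weighting in the paper may differ:

```latex
\min_{S}\; \|Y - G S\|_F^2
  \;+\; \lambda_1 \sum_{i} \|s_i\|_2
  \;+\; \lambda_2 \sum_{i} \|D s_i\|_2^2,
```

    where Y is the sensor data, G the lead field matrix, s_i the time course of source i, the middle term induces spatial focality (few active sources), and D is a temporal differencing operator enforcing smoothness.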

  14. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    NARCIS (Netherlands)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; Van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-01-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are

  15. Bayesian Models for Streamflow and River Network Reconstruction using Tree Rings

    Science.gov (United States)

    Ravindranath, A.; Devineni, N.

    2016-12-01

    Water systems face non-stationary, dynamically shifting risks due to shifting societal conditions and systematic long-term variations in climate manifesting as quasi-periodic behavior on multi-decadal time scales. Water systems are thus vulnerable to long periods of wet or dry hydroclimatic conditions. Streamflow is a major component of water systems and a primary means by which water is transported to serve ecosystems' and human needs. Thus, our concern is in understanding streamflow variability. Climate variability and impacts on water resources are crucial factors affecting streamflow, and multi-scale variability increases risk to water sustainability and systems. Dam operations are necessary for collecting water brought by streamflow while maintaining downstream ecological health. Rules governing dam operations are based on streamflow records that are woefully short compared to periods of systematic variation present in the climatic factors driving streamflow variability and non-stationarity. We use hierarchical Bayesian regression methods in order to reconstruct paleo-streamflow records for dams within a basin using paleoclimate proxies (e.g. tree rings) to guide the reconstructions. The riverine flow network for the entire basin is subsequently modeled hierarchically using feeder stream and tributary flows. This is a starting point in analyzing streamflow variability and risks to water systems, and developing a scientifically-informed dynamic risk management framework for formulating dam operations and water policies to best hedge such risks. We will apply this work to the Missouri and Delaware River Basins (DRB). Preliminary results of streamflow reconstructions for eight dams in the upper DRB using standard Gaussian regression with regional tree ring chronologies give streamflow records that now span two to two and a half centuries, and modestly smoothed versions of these reconstructed flows indicate physically-justifiable trends in the time series.

  16. The association between reconstructed phase space and Artificial Neural Networks for vectorcardiographic recognition of myocardial infarction.

    Science.gov (United States)

    Costa, Cecília M; Silva, Ittalo S; de Sousa, Rafael D; Hortegal, Renato A; Regis, Carlos Danilo M

    Myocardial infarction is one of the leading causes of death worldwide. Because it is life threatening, it requires immediate and precise treatment, which has driven a growing amount of research and innovation in the field of biomedical signal processing. This paper proposes the association of reconstructed phase space and artificial neural networks for vectorcardiography myocardial infarction recognition. The algorithm achieves its best results for a box size of 10 × 10 and the combination of four parameters: box counting (Vx), box counting (Vz), self-similarity method (Vx) and self-similarity method (Vy), with sensitivity = 92%, specificity = 96% and accuracy = 94%. The topographic diagnosis showed different performance for different types of infarction, with better results for anterior wall infarctions and less accurate results for inferior infarctions. Copyright © 2018 Elsevier Inc. All rights reserved.
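    The two ingredients named here, delay-coordinate phase-space reconstruction and box counting on a 10 × 10 grid, are compact to sketch. The delay, grid size and toy signal below are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, tau=5):
    """Two-dimensional reconstructed phase space (x(t), x(t + tau))."""
    return np.column_stack([x[:-tau], x[tau:]])

def box_count(points, n_boxes=10):
    """Number of occupied cells on an n_boxes x n_boxes grid (box counting)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - mins) / (maxs - mins + 1e-12) * n_boxes).astype(int)
    idx = np.clip(idx, 0, n_boxes - 1)
    return len({tuple(i) for i in idx})

t = np.linspace(0, 10 * np.pi, 2000)
lead = np.sin(t) + 0.1 * np.sin(7 * t)   # stand-in for one VCG lead
print(box_count(delay_embed(lead)))      # the Vx-style box-counting feature
```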

  17. Sediment core and glacial environment reconstruction - a method review

    Science.gov (United States)

    Bakke, Jostein; Paasche, Øyvind

    2010-05-01

    Alpine glaciers are often located in remote and high-altitude regions of the world, areas that are only rarely covered by instrumental records. Reconstructions of glaciers have therefore proven useful for understanding past climate dynamics on both shorter and longer time-scales. One major drawback with glacier reconstructions based solely on moraine chronologies - by far the most common - is that, due to the selective preservation of moraine ridges, such records do not exclude the possibility of multiple Holocene glacier advances. This problem holds regardless of whether cosmogenic isotopes or lichenometry have been used to date the moraines, or radiocarbon dating of megafossils buried in till or underneath the moraines themselves. To overcome this problem, Karlén (1976) initially suggested that glacial erosion and the associated production of rock flour deposited in downstream lakes could provide a continuous record of glacial fluctuations, hence overcoming the problem of incomplete reconstructions. We want to discuss the methods used to reconstruct past glacier activity based on sediments deposited in distal glacier-fed lakes. By quantifying physical properties of glacial and extra-glacial sediments deposited in catchments, and in downstream lakes and fjords, it is possible to isolate and identify past glacier activity - size and production rate - which subsequently can be used to reconstruct changing environmental shifts and trends. Changes in average sediment evacuation from alpine glaciers are mainly governed by glacier size and the mass turnover gradient, which determines the deformation rate at any given time. The amount of solid precipitation (mainly winter accumulation) versus loss due to melting during the ablation season (mainly summer temperature) determines whether the mass turnover gradient is positive or negative. A prevailing positive net balance will lead to higher sedimentation rates and vice versa, which in turn can be recorded in downstream

  18. Skin sparing mastectomy: Technique and suggested methods of reconstruction

    International Nuclear Information System (INIS)

    Farahat, A.M.; Hashim, T.; Soliman, H.O.; Manie, T.M.; Soliman, O.M.

    2014-01-01

    To demonstrate the feasibility and accessibility of performing adequate mastectomy to extirpate the breast tissue, along with en bloc formal axillary dissection performed from within the same incision. We also compared different methods of immediate breast reconstruction used to fill the skin envelope to achieve the best aesthetic results. Methods: 38 patients with breast cancer underwent skin-sparing mastectomy with formal axillary clearance through a circum-areolar incision. Immediate breast reconstruction was performed using different techniques to fill the skin envelope. Two reconstruction groups were assigned; group 1: autologous tissue transfer only (n = 24), and group 2: implant augmentation (n = 14). Autologous tissue transfer: the techniques used included filling the skin envelope with an extended latissimus dorsi (LD) flap (18 patients) or a pedicled TRAM flap (6 patients). Augmentation with implants: subpectoral implants (4 patients), a rounded implant placed under the pectoralis major muscle to augment an LD-reconstructed breast; LD pocket (10 patients), an anatomical implant placed over the pectoralis major muscle within a pocket created by the LD flap. No contra-lateral procedure was performed in any of the cases to achieve symmetry. Results: All cases underwent adequate excision of the breast tissue along with en bloc complete axillary clearance (when indicated), without the need for an additional axillary incision. Eighteen patients underwent reconstruction using extended LD flaps only, six had TRAM flaps, four had augmentation using implants placed below the pectoralis muscle along with LD flaps, and ten had implants placed within the LD pocket. Breast shape, volume and contour were successfully restored in all patients. An adequate degree of ptosis was achieved to ensure maximal symmetry. Conclusions: Skin-sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian

  19. Use of regularized algebraic methods in tomographic reconstruction

    International Nuclear Information System (INIS)

    Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.

    1997-01-01

    The algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. Using a phantom, we have tested regularization (the a priori introduction of information) as well as the modeling of spatial resolution variation with depth (SRVD). We compared the performance of two back-projection filtering (BPF) methods and two algebraic methods (AM) in terms of FWHM (measured with a point source), reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom, and reconstruction speed (time unit = BPF). The BPF methods use a ramp filter (maximal resolution, no noise treatment), alone or combined with a Hann low-pass filter (fc = 0.4), as well as an attenuation correction. The AM, which embody attenuation and scattering corrections, are, on one side, OS-EM (Ordered Subsets Expectation Maximization, with partitioning and rearranging of the projection matrix) without regularization or SRVD correction, and, on the other side, OS-MAP-EM (Maximum a posteriori), regularized and embodying the SRVD correction. A table gives the values of FWHM, σ/m and time for each method (ramp, Hann, OS-EM and OS-MAP-EM). One can observe that the OS-MAP-EM algebraic method improves both the resolution, by taking the SRVD into account in the reconstruction process, and the noise treatment, by regularization. In addition, thanks to the OS technique, the reconstruction times remain acceptable
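    For reference, the EM iteration underlying OS-EM has a compact multiplicative form; OS-EM applies the same update over ordered subsets of the projections to accelerate convergence. Below is plain MLEM on a toy system with a placeholder projection matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((120, 64))               # placeholder system (projection) matrix
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50)        # noisy projection counts

x = np.ones(64)                         # EM needs a strictly positive start
sens = A.sum(axis=0)                    # sensitivity term A^T 1
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens           # multiplicative MLEM update
print(np.corrcoef(x, x_true)[0, 1])     # close to 1 up to a global scale
```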

  20. A New Method for Coronal Magnetic Field Reconstruction

    Science.gov (United States)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise way of reconstructing (extrapolating) the coronal magnetic field is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed and are available to researchers nowadays, but each has its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary, to avoid overspecifying the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a numerical instability that occasionally arises in codes using A. In real reconstruction problems, the information for the lateral and top boundaries is absent; the arbitrariness of the boundary conditions imposed there, as well as various preprocessing choices, brings about a diversity of resulting solutions. We impose a source-surface condition at the top boundary to accommodate the flux imbalance that always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method-type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observations show the sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their shackling is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between
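    The divergence-freeness guaranteed by a vector-potential formulation is easy to check numerically: a field B = ∇ × A built from commuting finite-difference operators has zero discrete divergence, independent of A. A quick numpy verification on a random toy potential (not the authors' code):

```python
import numpy as np

def curl(Ax, Ay, Az, h=1.0):
    """B = curl A via finite differences (array axes are x, y, z)."""
    dAz_dy = np.gradient(Az, h, axis=1); dAy_dz = np.gradient(Ay, h, axis=2)
    dAx_dz = np.gradient(Ax, h, axis=2); dAz_dx = np.gradient(Az, h, axis=0)
    dAy_dx = np.gradient(Ay, h, axis=0); dAx_dy = np.gradient(Ax, h, axis=1)
    return dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy

rng = np.random.default_rng(4)
Ax, Ay, Az = rng.random((3, 16, 16, 16))   # arbitrary toy vector potential
Bx, By, Bz = curl(Ax, Ay, Az)
div_B = (np.gradient(Bx, axis=0) + np.gradient(By, axis=1)
         + np.gradient(Bz, axis=2))
print(np.abs(div_B).max())   # ~1e-15: divergence-free to machine precision
```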

  1. Reconstruction of Banknote Fragments Based on Keypoint Matching Method.

    Science.gov (United States)

    Gwo, Chih-Ying; Wei, Chia-Hung; Li, Yue; Chiu, Nan-Hsing

    2015-07-01

    Banknotes may be shredded by a scrap machine, ripped up by hand, or damaged in accidents. This study proposes an image registration method for the reconstruction of multiple sheets of banknotes. The proposed method first constructs different scale spaces to identify keypoints in the underlying banknote fragments. Next, the features of those keypoints are extracted to represent the local patterns around them. Then, similarity is computed to find keypoint pairs between each fragment and the reference banknote, from which the fragment's coordinates are determined and its orientation corrected. Finally, an assembly strategy is proposed to piece multiple sheets of banknote fragments together. Experimental results show that the proposed method causes, on average, a deviation of 0.12457 ± 0.12810° per fragment, whereas the SIFT method deviates by 1.16893 ± 2.35254° on average. The proposed method not only reconstructs the banknotes but also decreases the computing cost. Furthermore, it estimates relatively precisely the orientation of the banknote fragments to be assembled. © 2015 American Academy of Forensic Sciences.
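    The general keypoint-registration recipe (detect, describe, match against a reference, then estimate each fragment's pose) can be sketched with OpenCV. ORB stands in for the paper's custom scale-space features, and the file names are assumed, so this is an analogous pipeline rather than the authors' method:

```python
import cv2
import numpy as np

# Assumed input files: a reference note and one fragment, both grayscale.
reference = cv2.imread("banknote_reference.png", cv2.IMREAD_GRAYSCALE)
fragment = cv2.imread("fragment_01.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frag, des_frag = orb.detectAndCompute(fragment, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_frag, des_ref), key=lambda m: m.distance)[:50]

src = np.float32([kp_frag[m.queryIdx].pt for m in matches])
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])

# Similarity transform (rotation + translation + scale) placing the fragment.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"fragment rotated by {angle:.2f} degrees relative to the reference")
```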

  3. Reconstruction of biological networks based on life science data integration.

    Science.gov (United States)

    Kormeier, Benjamin; Hippe, Klaus; Arrigo, Patrizio; Töpel, Thoralf; Janowski, Sebastian; Hofestädt, Ralf

    2010-10-27

    For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we present the applications BioDWH, an integration toolkit for building life science data warehouses; CardioVINEdb, an information system for biological data on cardiovascular disease; and VANESA, a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease-related gene-regulated biological network is also presented.

  4. Neural network and area method interpretation of pulsed experiments

    Energy Technology Data Exchange (ETDEWEB)

    Dulla, S.; Picca, P.; Ravetto, P. [Politecnico di Torino, Dipartimento di Energetica, Corso Duca degli Abruzzi, 24 - 10129 Torino (Italy)]; Canepa, S. [Lab of Reactor Physics and Systems Behaviour LRS, Paul Scherrer Inst., 5232 Villigen (Switzerland)]

    2012-07-01

    The determination of the subcriticality level is an important issue in accelerator-driven system technology. The area method, originally introduced by N. G. Sjoestrand, is a classical technique for interpreting flux measurements from pulsed experiments in order to reconstruct the reactivity value. In recent times other methods have also been developed to account for spatial and spectral effects, which are not included in the area method since it is based on the point kinetics model. The artificial neural network approach can be an efficient technique for inferring reactivities from pulsed experiments. In the present work, some comparisons between the two methods are carried out and discussed. (authors)
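    The area method itself is a one-line formula once the pulse response is separated into its prompt and delayed parts: the reactivity in dollars is minus the ratio of the prompt area to the delayed area. A toy sketch with a synthetic pulse and assumed decay constants:

```python
import numpy as np

t = np.linspace(0.0, 0.1, 5000)              # time after the pulse, in seconds
dt = t[1] - t[0]
prompt = 1.0e5 * np.exp(-t / 2.0e-3)         # fast prompt decay (assumed)
delayed = 3.0e3 * np.ones_like(t)            # quasi-constant delayed plateau
counts = prompt + delayed                    # idealized detector response

area_delayed = delayed.sum() * dt            # in practice fitted from the tail
area_prompt = (counts - delayed).sum() * dt

rho_dollars = -area_prompt / area_delayed    # Sjoestrand area-ratio formula
print(f"reactivity = {rho_dollars:.2f} $")   # about -0.67 $ for these numbers
```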

  5. Dynamic Regulatory Network Reconstruction for Alzheimer’s Disease Based on Matrix Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Wei Kong

    2014-01-01

    Full Text Available Alzheimer's disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks of the progressive deterioration of AD would represent a significant advance in discovering its pathogenesis. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information on TFs, the network component analysis (NCA) algorithm is applied to determining the TF activities and regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. On this basis, the dynamic gene regulatory networks of the deteriorative course of AD were reconstructed. To select significant genes that are differentially expressed in the different stages of AD, independent component analysis (ICA) was used; it performs better than traditional clustering methods and can successfully group one gene into several meaningful biological processes. The molecular biological analysis showed that changes in TF activities and in the interactions of signaling proteins in mitosis, cell cycle, immune response, and inflammation play an important role in the deterioration of AD.
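    The ICA-based gene selection step can be sketched as decomposing the expression matrix into independent components and keeping genes with extreme loadings. The shapes, the 3-sigma threshold and the random stand-in data are assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
expr = rng.normal(size=(5000, 30))          # genes x arrays (placeholder data)

ica = FastICA(n_components=5, random_state=0, max_iter=500)
S = ica.fit_transform(expr)                 # gene loadings on each component

# Genes loading beyond 3 sigma on any component are taken as significant.
z = (S - S.mean(axis=0)) / S.std(axis=0)
significant = np.flatnonzero((np.abs(z) > 3).any(axis=1))
print(len(significant), "candidate genes")
```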

  6. Artificial Neural Networks for SCADA Data based Load Reconstruction (poster)

    NARCIS (Netherlands)

    Hofemann, C.; Van Bussel, G.J.W.; Veldkamp, H.

    2011-01-01

    If at least one reference wind turbine is available, which provides sufficient information about the wind turbine loads, the loads acting on the neighbouring wind turbines can be predicted via an artificial neural network (ANN). This research explores the possibilities to apply such a network not

  7. Reconstruction of neutron spectra through neural networks; Reconstruccion de espectros de neutrones mediante redes neuronales

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Cuerpo Academico de Radiobiologia, Estudios Nucleares, Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)] e-mail: rvega@cantera.reduaz.mx [and others]

    2003-07-01

    A neural network has been used to reconstruct neutron spectra from the counting rates of the detectors of a Bonner sphere spectrometric system. A group of 56 neutron spectra was selected, and the counting rates they would produce in a Bonner sphere system were calculated; with these data and the spectra, the neural network was trained. To test the performance of the network, 12 spectra were used: 6 were taken from the group used for training, 3 were obtained from mathematical functions, and the other 3 correspond to real spectra. Comparing the original spectra with those reconstructed by the network, we find that the network performs poorly when reconstructing monoenergetic spectra, which we attribute to the characteristics of the spectra used for training the neural network; for the other groups of spectra, the results of the network agree with expectations. (Author)
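    The mapping being learned (a handful of Bonner sphere count rates in, a binned spectrum out) fits a small feed-forward regressor. Below is a schematic stand-in with a synthetic response matrix, not the authors' trained network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n_spheres, n_bins = 7, 31
R = rng.random((n_spheres, n_bins))      # placeholder sphere response matrix
spectra = rng.random((56, n_bins))       # 56 training spectra (toy data)
rates = spectra @ R.T                    # count rates each spectrum produces

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
net.fit(rates, spectra)                  # learn: count rates -> spectrum

test = rng.random((1, n_bins))
print(net.predict(test @ R.T).shape)     # (1, 31) reconstructed spectrum
```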

  8. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it remains a quite challenging task because of the large variability in expressions and gestures and the presence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, is learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalizable, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  9. A Short-Circuit Method for Networks.

    Science.gov (United States)

    Ong, P. P.

    1983-01-01

    Describes a method of network analysis that allows avoidance of Kirchhoff's laws (provided the network is symmetrical) by reduction to simple series/parallel resistances. The method can be extended to symmetrical alternating current, capacitance or inductance networks if the corresponding theorems are used. A symmetric cubic network serves as an example. (JM)
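    As a numerical cross-check of such symmetry arguments (not the article's pencil-and-paper method), the effective resistance of the symmetric cubic network can be computed from the graph Laplacian pseudo-inverse; across the space diagonal of a cube of 1 Ω resistors the classical answer is 5/6 Ω:

```python
import numpy as np
from itertools import product

# Cube graph: vertices are 3-bit tuples, edges join vertices differing in one bit.
nodes = list(product((0, 1), repeat=3))
L = np.zeros((8, 8))
for i, u in enumerate(nodes):
    for j, v in enumerate(nodes):
        if sum(a != b for a, b in zip(u, v)) == 1:  # one unit resistor per edge
            L[i, j] = -1.0
np.fill_diagonal(L, -L.sum(axis=1))              # node degrees on the diagonal

Lp = np.linalg.pinv(L)                           # Laplacian pseudo-inverse
a, b = nodes.index((0, 0, 0)), nodes.index((1, 1, 1))
r_eff = Lp[a, a] + Lp[b, b] - 2 * Lp[a, b]       # effective resistance formula
print(r_eff)                                     # 0.8333... = 5/6 ohm
```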

  10. Reconstruction, visualization and explorative analysis of human pluripotency network

    Directory of Open Access Journals (Sweden)

    Priyanka Narad

    2017-09-01

    Full Text Available Identification of genes/proteins involved in pluripotency and of their inter-relationships is important for understanding the induction, loss and maintenance of pluripotency. With a large volume of data on the interaction/regulation of pluripotency scattered across a large number of biological databases and hundreds of scientific journals, a systematic integration of the data is required to create a complete view of the pluripotency network. Describing and interpreting such a network of interactions and regulation (i.e., stimulation and inhibition links) are essential tasks of computational biology and an important first step in a systems-level understanding of the underlying mechanisms of pluripotency. To address this, we have assembled a network of 166 molecular interactions, stimulations and inhibitions, based on a collection of research data from 147 publications, involving 122 human genes/proteins, all in a standard electronic format, enabling analyses by readily available software such as Cytoscape and its Apps (formerly called "Plugins"). The network includes the core circuit of OCT4 (POU5F1), SOX2 and NANOG, its periphery (such as STAT3, KLF4, UTF1, ZIC3, and c-MYC), connections to upstream signaling pathways (such as ACTIVIN, WNT, FGF, and BMP), and epigenetic regulators (such as L1TD1, LSD1 and PRC2). We describe the general properties of the network and compare it with other literature-based networks. Gene Ontology (GO) analysis was performed to find over-represented GO terms in the network. We use several expression datasets to condense the network to a set of network links that identify the key players (genes/proteins) and the pathways involved in the transition from one state of pluripotency to another (i.e., naive to primed, primed to non-pluripotent, and pluripotent to non-pluripotent).

  11. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.

    Science.gov (United States)

    Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan

    2010-12-01

    Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical use has been limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve this problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.

  12. Optical wedge method for spatial reconstruction of particle trajectories

    International Nuclear Information System (INIS)

    Asatiani, T.L.; Alchudzhyan, S.V.; Gazaryan, K.A.; Zograbyan, D.Sh.; Kozliner, L.I.; Krishchyan, V.M.; Martirosyan, G.S.; Ter-Antonyan, S.V.

    1978-01-01

    A technique of optical wedges allowing the full reconstruction of pictures of events in space is considered. The technique is used for the detection of particle tracks in optical wide-gap spark chambers by photographing in one projection. The optical wedges are refracting right-angle plastic prisms positioned between the camera and the spark chamber so that through them both ends of the track are photographed. A method for calibrating measurements is given, and an estimate made of the accuracy of the determination of the second projection with the help of the optical wedges

  13. An Optimized Method for Terrain Reconstruction Based on Descent Images

    Directory of Open Access Journals (Sweden)

    Xu Xinchao

    2016-02-01

    Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang'e III. First, feature matching is conducted using geometric model constraints. The initial terrain is then obtained, and the initial normal vector of each point is computed from it. A set of new vectors is generated by perturbing each initial normal vector in small steps. Combining these vectors with the directions of light and camera, equations are set up on the basis of a surface reflection model, and a series of gray values is derived by solving them. The optimized vector is the one whose computed gray value is closest to the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using laboratory images and descent images of Chang'e III. The results showed that the proposed method performs better than the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent lunar exploration missions.

  15. Track reconstruction in discrete detectors by neural networks

    International Nuclear Information System (INIS)

    Glazov, A.A.; Kisel', I.V.; Konotopskaya, E.V.; Neskoromnyj, V.N.; Ososkov, G.A.

    1992-01-01

    On the basis of applying neural networks to the track recognition problem, investigations are made into the specific properties of such discrete detectors as multiwire proportional chambers. These investigations result in a modification of the so-called rotor model of a neural network. The energy function of the network in this modification contains only one cost term, which speeds up calculations considerably. The reduction of the energy function is done by neuron selection with the help of simple geometrical and energy criteria. Besides, cellular automata were applied to a preliminary selection of the data, which made it possible to create an initial network configuration with an energy closer to its global minimum. The algorithm was tested on 10^4 real three-prong events obtained from the ARES spectrometer. The results are satisfactory, including noise robustness and good resolution of nearby tracks. 12 refs.; 10 figs

  16. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    Directory of Open Access Journals (Sweden)

    Heavner Benjamin D

    2012-06-01

    Full Text Available Abstract Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network, and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction, have continued since the first stoichiometrically constrained yeast genome-scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model predictions of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model's predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3.
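    The distinction drawn above between a reconstruction and a stoichiometrically constrained model becomes concrete once the reconstruction is exported as SBML: it can then be loaded and interrogated with standard flux balance analysis tooling. A sketch using the cobrapy library (the file name and the oxygen exchange reaction id are assumptions):

```python
import cobra

# Load the SBML export of the reconstruction (file name is a placeholder).
model = cobra.io.read_sbml_model("yeast_5.xml")

# Simulate aerobic growth: maximize the biomass objective with FBA.
solution = model.optimize()
print("aerobic growth rate:", solution.objective_value)

# Crude anaerobic shift: close the oxygen exchange reaction (id assumed).
model.reactions.get_by_id("EX_o2_e").lower_bound = 0.0
print("anaerobic growth rate:", model.optimize().objective_value)
```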

  17. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge for accurately simulating such flows with standard fixed-grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed-grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by the (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, LFRM predicts the droplet collisions better than other fixed-grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models predicts the collision dynamics much better than the traditional methods.

  18. A swarm intelligence framework for reconstructing gene networks: searching for biologically plausible architectures.

    Science.gov (United States)

    Kentzoglanakis, Kyriakos; Poole, Matthew

    2012-01-01

    In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
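    Of the two swarm components, PSO, used here to fit the continuous RNN parameters of a fixed candidate topology, is the easier to sketch. Below is a generic PSO loop minimizing a stand-in model-fit error; the swarm size, inertia and acceleration coefficients, and the toy objective are assumptions:

```python
import numpy as np

def fit_error(w):
    """Stand-in for the RNN model-fit error on expression time series."""
    return np.sum((w - 0.3) ** 2)

rng = np.random.default_rng(7)
n_particles, dim = 30, 10
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([fit_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # inertia + cognitive pull (personal best) + social pull (global best)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    err = np.array([fit_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print(fit_error(gbest))   # approaches 0 as the swarm converges on w = 0.3
```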

  19. Comparative genomic reconstruction of transcriptional networks controlling central metabolism in the Shewanella genus

    Directory of Open Access Journals (Sweden)

    Kovaleva Galina

    2011-06-01

    Full Text Available Abstract Background Genome-scale prediction of gene regulation and reconstruction of transcriptional regulatory networks in bacteria is one of the critical tasks of modern genomics. The Shewanella genus is comprised of metabolically versatile gamma-proteobacteria, whose lifestyles and natural environments are substantially different from Escherichia coli and other model bacterial species. Comparative genomics approaches and computational identification of regulatory sites are useful for the in silico reconstruction of transcriptional regulatory networks in bacteria. Results To explore conservation and variation in the Shewanella transcriptional networks, we analyzed the repertoire of transcription factors and performed genomics-based reconstruction and comparative analysis of regulons in 16 Shewanella genomes. The inferred regulatory network includes 82 transcription factors and their DNA binding sites, 8 riboswitches and 6 translational attenuators. Forty-five regulons were newly inferred from genome context analysis, whereas others were propagated from previously characterized regulons in the Enterobacteria and Pseudomonas spp. Multiple variations in regulatory strategies between the Shewanella spp. and E. coli include regulon contraction and expansion (as in the case of PdhR, HexR and FadR), numerous cases of recruiting non-orthologous regulators to control equivalent pathways (e.g. PsrA for fatty acid degradation) and, conversely, orthologous regulators to control distinct pathways (e.g. TyrR, ArgR, Crp). Conclusions We tentatively defined the first reference collection of ~100 transcriptional regulons in 16 Shewanella genomes. The resulting regulatory network contains ~600 regulated genes per genome that are mostly involved in the metabolism of carbohydrates, amino acids, fatty acids, vitamins, metals, and stress responses. Several reconstructed regulons including NagR for N-acetylglucosamine catabolism were experimentally validated in S

  20. Using a neural network approach for muon reconstruction and triggering

    CERN Document Server

    Etzion, E; Abramowicz, H; Benhammou, Ya; Horn, D; Levinson, L; Livneh, R

    2004-01-01

    The extremely high rate of events that will be produced at the future Large Hadron Collider requires the triggering mechanism to take precise decisions within a few nanoseconds. We present a study which used an artificial neural network triggering algorithm and compared its performance to that of a dedicated electronic muon triggering system. A relatively simple architecture was used to solve a complicated inverse problem. A comparison with a realistic example of the ATLAS first-level trigger simulation was in favour of the neural network. A similar architecture trained on the simulated output of the first electronics trigger stage showed further background rejection.

  1. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  2. The parallel implementation of a backpropagation neural network and its applicability to SPECT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, John Patrick [Iowa State Univ., Ames, IA (United States)]

    1992-01-01

    The objective of this study was to determine the feasibility of using an artificial neural network (ANN), in particular a backpropagation ANN, to improve the speed and quality of the reconstruction of three-dimensional SPECT (single photon emission computed tomography) images. In addition, since the processing elements (PEs) in each layer of an ANN are independent of each other, the speed and efficiency of the neural network architecture could be better optimized by implementing the ANN on a massively parallel computer. The specific goals of this research were: to implement a fully interconnected backpropagation neural network on a serial computer and a SIMD parallel computer, to identify any reduction in the time required to train these networks on the parallel machine versus the serial machine, to determine whether these neural networks can learn to recognize SPECT data by training them on a section of an actual SPECT image, and to determine from the knowledge obtained in this research whether full SPECT image reconstruction by an ANN implemented on a parallel computer is feasible, both in the time required to train the network and in the quality of the images reconstructed.

  3. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross-section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40%, and studies show potential impact on the νμ disappearance analysis.

  4. Iterative reconstruction methods for Thermo-acoustic Tomography

    International Nuclear Information System (INIS)

    Marinesque, Sebastien

    2012-01-01

    We define, study and implement various iterative reconstruction methods for Thermo-acoustic Tomography (TAT): the Back and Forth Nudging (BFN), easy to implement and to use; a variational technique (VT); the more sophisticated Back and Forth SEEK (BF-SEEK); and a coupling method between Kalman filter (KF) and Time Reversal (TR). A unified formulation is given for the aforementioned sequential techniques, defining a new class of inverse problem methods: the Back and Forth Filters (BFF). In addition to existence and uniqueness (particularly for backward solutions), we study many frameworks that ensure and characterize the convergence of the algorithms. We thus give a general theoretical framework in which the BFN is a well-posed problem. Then, in application to TAT, existence and uniqueness of its solutions and geometrical convergence of the algorithm are proved, an explicit convergence rate is given, and its numerical behaviour is described. Next, theoretical and numerical studies of more general and realistic frameworks are carried out, namely different objects, speeds (with or without trapping), various sensor configurations and samplings, attenuated equations, and external sources. Optimal control and best-estimate tools are then used to characterize BFN convergence and converging feedbacks for BFF, under observability assumptions. Finally, we compare the most flexible and efficient current techniques (TR and an iterative variant) with our various BFF and the VT in several experiments. Robust, flexible, and available at different levels of complexity, the methods that we propose are very interesting reconstruction techniques, particularly in TAT and when observations are degraded. (author) [fr]

  6. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems.

    Science.gov (United States)

    Salleh, Faridah Hani Mohamed; Zainudin, Suhaila; Arif, Shereena M

    2017-01-01

    Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. A cascade error is the wrong prediction of a cascade motif, where an indirect interaction is misinterpreted as a direct interaction. Despite active research on various GRN prediction methods, discussion of specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted in past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid cascade errors. Hence, this research proposes Multiple Linear Regression (MLR) to infer GRNs from gene expression data while avoiding the inference of an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experimental datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade errors, using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was very small. Apart from that, the Belsley collinearity test proved that multicollinearity greatly affected the datasets used in this experiment. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.
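    The core regression step is simple to sketch: each gene's expression is regressed on all the other genes, and large coefficients are read as candidate regulatory edges. The toy data and coefficient cutoff below are assumptions; the paper adds the subnetwork extraction and cascade-error procedure on top of this step:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n_samples, n_genes = 100, 8
X = rng.normal(size=(n_samples, n_genes))
X[:, 2] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n_samples)   # plant edge 0 -> 2

edges = []
for target in range(n_genes):
    others = [g for g in range(n_genes) if g != target]
    fit = LinearRegression().fit(X[:, others], X[:, target])
    for g, coef in zip(others, fit.coef_):
        if abs(coef) > 0.5:                                   # assumed cutoff
            edges.append((g, target, round(coef, 2)))

print(edges)   # recovers (0, 2, ...) and its symmetric counterpart (2, 0, ...)
```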

  7. Reconstructing missing daily precipitation data using regression trees and artificial neural networks

    Science.gov (United States)

    Incomplete meteorological data has been a problem in environmental modeling studies. The objective of this work was to develop a technique to reconstruct missing daily precipitation data in the central part of Chesapeake Bay Watershed using regression trees (RT) and artificial neural networks (ANN)....
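
    The gap-filling idea can be sketched with scikit-learn stand-ins for the RT and ANN models; the neighbour-station data, gap rate and model settings below are invented for illustration and do not reproduce the study's predictor set:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic daily precipitation at three neighbour stations (features)
# and a target station with gaps to reconstruct (all values invented).
days = 1000
neighbours = rng.gamma(shape=0.5, scale=4.0, size=(days, 3))
target = neighbours.mean(axis=1) * rng.uniform(0.8, 1.2, size=days)

missing = rng.random(days) < 0.1                # 10% of days are missing
X_train, y_train = neighbours[~missing], target[~missing]

rt = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)

# Fill the gaps with each model's prediction from the neighbour stations.
filled_rt = rt.predict(neighbours[missing])
filled_ann = ann.predict(neighbours[missing])
```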

  8. Overview of the neural network based technique for monitoring of road condition via reconstructed road profiles

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2008-07-01

    Full Text Available ... on the road and driver to assess the integrity of road and vehicle infrastructure. In this paper, vehicle vibration data are applied to an artificial neural network to reconstruct the corresponding road surface profiles. The results show that the technique...

  9. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2010-04-01

    Full Text Available Journal of Terramechanics, Volume 47, Issue 2, April 2010, Pages 97-111. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation. H.M. Ngwangwa, P.S. Heyns, F...

  10. A new method for depth profiling reconstruction in confocal microscopy

    Science.gov (United States)

    Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe

    2018-05-01

    Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave, so that the captured image represents a small volume around the desired illuminated focal point within the specimen rather than the focal point itself. The size of this small volume increases with depth, causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles of homogeneous samples. In this paper, this theoretical approach has been adapted to describe the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitter planes belonging to the illuminated volume, weighted by the emitter concentration. The true distribution of the emitter concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.
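
    The flavour of the reconstruction can be conveyed by a small Python sketch: assume a depth-dependent blurring kernel as the forward model and recover the emitter concentration by entropy-regularized least squares. The Gaussian kernel, its depth-dependent width and the entropy weight are assumptions for this example; the paper's forward model comes from the wave-optics treatment and its solver from the Maximum Entropy Method proper.

```python
import numpy as np
from scipy.optimize import minimize

# Depth grid and a synthetic step-like emitter concentration profile.
z = np.linspace(0.0, 10.0, 60)
c_true = np.where(z < 5.0, 1.0, 0.3)

# Assumed forward model: the measured intensity is the concentration
# blurred by a Gaussian depth response whose width grows with depth,
# mimicking the loss of resolution deeper in the sample.
sigma = 0.3 + 0.15 * z
K = np.exp(-0.5 * ((z[:, None] - z[None, :]) / sigma[:, None]) ** 2)
K /= K.sum(axis=1, keepdims=True)
intensity = K @ c_true + 0.01 * np.random.default_rng(2).normal(size=z.size)

lam = 1e-3  # entropy weight, tuned by hand for this toy problem

def objective(c):
    misfit = np.sum((K @ c - intensity) ** 2)
    neg_entropy = np.sum(c * np.log(np.clip(c, 1e-12, None)))
    return misfit + lam * neg_entropy  # minimising -entropy = maximising entropy

c0 = np.full_like(z, intensity.mean())
res = minimize(objective, c0, bounds=[(1e-9, None)] * z.size)
c_recovered = res.x   # compare against c_true
```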

  11. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the Central Dubna basin, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of the work is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, archival materials of geological and hydrological surveys and peat-deposit prospecting, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of the restored paleolakes were determined from the thickness and territorial confinement of decay ooze. 5. Paleolandscape reconstructions for the main periods of the Holocene on the basis of the landscape-edaphic method; in reconstructing the original, indigenous flora we relied on palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the...

  12. Reconstruction of an engine combustion process with a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, P J; Gu, F; Ball, A D [School of Engineering, University of Manchester, Manchester (United Kingdom)

    1998-12-31

    The cylinder pressure waveform in an internal combustion engine is one of the most important parameters describing the engine combustion process. It is used for a range of diagnostic tasks such as identification of ignition faults or mechanical wear in the cylinders. However, it is very difficult to measure this parameter directly. Nevertheless, the cylinder pressure may be inferred from other, more readily obtainable parameters. In this presentation it is shown how a Radial Basis Function network, which may be regarded as a form of neural network, may be used to model the cylinder pressure as a function of the instantaneous crankshaft velocity, recorded with a simple magnetic sensor. The application of the model is demonstrated on a four-cylinder DI diesel engine with data from a wide range of speed and load settings. The prediction capabilities of the trained model are validated against measured data. (orig.) 4 refs.

  13. Reconstruction of an engine combustion process with a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, P.J.; Gu, F.; Ball, A.D. [School of Engineering, University of Manchester, Manchester (United Kingdom)

    1997-12-31

    The cylinder pressure waveform in an internal combustion engine is one of the most important parameters describing the engine combustion process. It is used for a range of diagnostic tasks such as identification of ignition faults or mechanical wear in the cylinders. However, it is very difficult to measure this parameter directly. Nevertheless, the cylinder pressure may be inferred from other, more readily obtainable parameters. In this presentation it is shown how a Radial Basis Function network, which may be regarded as a form of neural network, may be used to model the cylinder pressure as a function of the instantaneous crankshaft velocity, recorded with a simple magnetic sensor. The application of the model is demonstrated on a four-cylinder DI diesel engine with data from a wide range of speed and load settings. The prediction capabilities of the trained model are validated against measured data. (orig.) 4 refs.
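
    A minimal radial basis function fit in Python conveys the modelling idea of these two records (synthetic velocity and pressure traces stand in for real measurements; the centre count and width are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: crank angle over one cycle, an instantaneous
# crank-velocity trace that dips under combustion load, and the
# corresponding cylinder pressure trace the network must recover.
theta = np.linspace(0, 4 * np.pi, 400)
velocity = 100 - 30 * np.exp(-(theta - 2 * np.pi) ** 2 / 0.8)
velocity += 0.5 * rng.normal(size=theta.size)
pressure = 5 + 40 * np.exp(-(theta - 2 * np.pi) ** 2 / 0.5)

# RBF network: Gaussian hidden units over the velocity input, with a
# linear output layer solved in closed form by least squares.
centres = np.linspace(velocity.min(), velocity.max(), 15)
width = 1.5 * (centres[1] - centres[0])

def design(v):
    return np.exp(-((v[:, None] - centres[None, :]) / width) ** 2)

weights, *_ = np.linalg.lstsq(design(velocity), pressure, rcond=None)
pressure_hat = design(velocity) @ weights
print("RMS error:", np.sqrt(np.mean((pressure_hat - pressure) ** 2)))
```

    A Gaussian hidden layer with a least-squares output layer is what makes RBF training fast; a real system would use windows of crank samples and validate on cycles not seen during training, as the record describes.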

  14. Charged particle track reconstruction using artificial neural networks

    International Nuclear Information System (INIS)

    Glover, C.; Fu, P.; Gabriel, T.; Handler, T.

    1992-01-01

    This paper summarizes the current state of our research in developing and applying artificial neural network (ANN) algorithms to charged particle track reconstruction. The ANN algorithm described here is based on a crude model of the retina. It takes as input the coordinates of each charged particle's interaction point ("hit") in the tracking chamber. The algorithm's output is a set of vectors pointing to the other hits that are most likely to form a track.

  15. Genome-Scale Reconstruction of the Human Astrocyte Metabolic Network

    OpenAIRE

    Mart?n-Jim?nez, Cynthia A.; Salazar-Barreto, Diego; Barreto, George E.; Gonz?lez, Janneth

    2017-01-01

    Astrocytes are the most abundant cells of the central nervous system; they have a predominant role in maintaining brain metabolism. In this sense, abnormal metabolic states have been found in different neuropathological diseases. Determination of metabolic states of astrocytes is difficult to model using current experimental approaches given the high number of reactions and metabolites present. Thus, genome-scale metabolic networks derived from transcriptomic data can be used as a framework t...

  16. MO-DE-209-02: Tomosynthesis Reconstruction Methods

    International Nuclear Information System (INIS)

    Mainprize, J.

    2016-01-01

    Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography in which a limited set of projection images is acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography, that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support

  17. MO-DE-209-02: Tomosynthesis Reconstruction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Mainprize, J. [Sunnybrook Health Sciences Centre, Toronto, ON (Canada)

    2016-06-15

    Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography in which a limited set of projection images is acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography, that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support
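
    For intuition, the simplest tomosynthesis reconstruction, classical shift-and-add, can be sketched in a few lines of Python (toy 2D geometry; clinical DBT systems use filtered or iterative variants):

```python
import numpy as np

# Toy phantom: two planes at different heights, each with one bright spot.
nx, nz = 64, 16
phantom = np.zeros((nz, nx))
phantom[4, 20] = 1.0
phantom[12, 44] = 1.0

angles = np.deg2rad(np.linspace(-20, 20, 11))   # small angular range

# The projection of plane z along angle a lands shifted by z*tan(a) pixels.
def project(vol, a):
    proj = np.zeros(nx)
    for z in range(nz):
        proj += np.roll(vol[z], int(round(z * np.tan(a))))
    return proj

projections = [project(phantom, a) for a in angles]

# Shift-and-add: to focus on plane z0, undo each projection's shift for
# that plane and average; in-focus structures align, others blur out.
def reconstruct_plane(projs, z0):
    acc = np.zeros(nx)
    for p, a in zip(projs, angles):
        acc += np.roll(p, -int(round(z0 * np.tan(a))))
    return acc / len(projs)

slice4 = reconstruct_plane(projections, 4)    # spot at x=20 stands out
slice12 = reconstruct_plane(projections, 12)  # spot at x=44 stands out
```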

  18. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Directory of Open Access Journals (Sweden)

    Liu Lili

    2013-06-01

    Full Text Available Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of the different regulatory information levels of an organism is expected to provide a good way of mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice is blocking the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via reconstruction and integration of omics regulatory information. First, we reconstructed a genome-scale metabolic model containing 4,462 functional genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Finally, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice and for analysis of its molecular regulatory network.
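
    The integration step can be pictured with a toy multi-level graph in Python using networkx, where every edge carries a label for the information level it came from; the gene and TF names are invented:

```python
import networkx as nx

# Toy multi-level network: merge metabolic, PPI and regulatory edges
# into one graph, recording the evidence level on each edge.
G = nx.MultiDiGraph()
metabolic = [("G1", "R1"), ("G2", "R2")]        # gene -> reaction
ppi = [("G1", "G3"), ("G2", "G3")]              # protein-protein
regulatory = [("TF1", "G1"), ("TF1", "G2")]     # TF -> target gene

for layer, edges in [("metabolic", metabolic), ("ppi", ppi), ("grn", regulatory)]:
    G.add_edges_from(edges, layer=layer)

# Query across levels: everything connected to gene G1, with layer labels.
for u, v, data in G.edges(data=True):
    if "G1" in (u, v):
        print(u, "->", v, data["layer"])
```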

  19. Reconstructing Generalized Logical Networks of Transcriptional Regulation in Mouse Brain from Temporal Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Mingzhou (Joe) [New Mexico State University, Las Cruces; Lewis, Chris K. [New Mexico State University, Las Cruces; Lance, Eric [New Mexico State University, Las Cruces; Chesler, Elissa J [ORNL; Kirova, Roumyana [Bristol-Myers Squibb Pharmaceutical Research & Development, NJ; Langston, Michael A [University of Tennessee, Knoxville (UTK); Bergeson, Susan [Texas Tech University, Lubbock

    2009-01-01

    The problem of reconstructing generalized logical networks to account for temporal dependencies among genes and environmental stimuli from high-throughput transcriptomic data is addressed. A network reconstruction algorithm was developed that uses the statistical significance as a criterion for network selection to avoid false-positive interactions arising from pure chance. Using temporal gene expression data collected from the brains of alcohol-treated mice in an analysis of the molecular response to alcohol, this algorithm identified genes from a major neuronal pathway as putative components of the alcohol response mechanism. Three of these genes have known associations with alcohol in the literature. Several other potentially relevant genes, highlighted and agreeing with independent results from literature mining, may play a role in the response to alcohol. Additional, previously-unknown gene interactions were discovered that, subject to biological verification, may offer new clues in the search for the elusive molecular mechanisms of alcoholism.

  20. Cellular neural networks, the Navier-Stokes equation, and microarray image reconstruction.

    Science.gov (United States)

    Zineddin, Bachar; Wang, Zidong; Liu, Xiaohui

    2011-11-01

    Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all the main stages of this technology, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not address quantification of the gene spot in a realistic way, i.e. without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier-Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons are carried out, in terms of objective criteria, between our approach and some other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.
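
    As a simplified stand-in for the PDE-based background estimation (plain harmonic inpainting rather than the paper's Navier-Stokes CNN formulation), consider this sketch on a synthetic spot:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy microarray patch: smooth background plus a bright circular gene spot.
n = 41
yy, xx = np.mgrid[:n, :n]
background = 50 + 0.5 * xx + 0.3 * yy
spot = (xx - n // 2) ** 2 + (yy - n // 2) ** 2 < 8 ** 2
image = background + 200.0 * spot + rng.normal(0, 1, (n, n))

# Estimate the background *under* the spot by harmonic inpainting:
# iterate the discrete Laplace equation inside the spot mask, keeping
# the surrounding pixels fixed.
est = image.copy()
est[spot] = image[~spot].mean()
for _ in range(2000):
    avg = 0.25 * (np.roll(est, 1, 0) + np.roll(est, -1, 0)
                  + np.roll(est, 1, 1) + np.roll(est, -1, 1))
    est[spot] = avg[spot]

signal = (image - est)[spot].mean()   # background-corrected spot signal
print("estimated spot signal:", round(signal, 1))
```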

  1. A novel mechanochemical method for reconstructing the moisture-degraded HKUST-1.

    Science.gov (United States)

    Sun, Xuejiao; Li, Hao; Li, Yujie; Xu, Feng; Xiao, Jing; Xia, Qibin; Li, Yingwei; Li, Zhong

    2015-07-11

    A novel mechanochemical method was proposed to quickly reconstruct moisture-degraded HKUST-1. The degraded HKUST-1 can be restored within minutes. The reconstructed samples were characterized and confirmed to have 95% of the surface area and 92% of the benzene capacity of fresh HKUST-1. This is a simple and effective strategy for the reconstruction of degraded MOFs.

  2. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
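
    The core quantity can be approximated with a Monte-Carlo search over vertex subsets; note that the true cut distance also optimises over vertex correspondences (which the paper tackles with machine learning tools such as genetic algorithms), so the sketch below is only a same-labelling lower bound:

```python
import numpy as np

rng = np.random.default_rng(6)

def cut_distance(A, B, trials=2000):
    """Monte-Carlo lower bound on the cut distance between two graphs
    given as adjacency matrices on the same, identically labelled
    vertex set."""
    n = A.shape[0]
    best = 0.0
    for _ in range(trials):
        S = rng.random(n) < 0.5            # random vertex subset
        T = rng.random(n) < 0.5
        diff = abs(A[np.ix_(S, T)].sum() - B[np.ix_(S, T)].sum())
        best = max(best, diff / n**2)      # normalised cut discrepancy
    return best

# Two random graphs with different edge densities should be far apart.
n = 100
A = (rng.random((n, n)) < 0.1).astype(float)
B = (rng.random((n, n)) < 0.3).astype(float)
print(cut_distance(A, A), cut_distance(A, B))   # ~0 vs clearly > 0
```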

  3. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    Directory of Open Access Journals (Sweden)

    Kumari Sonal Choudhary

    2016-06-01

    Full Text Available Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), is however a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signalling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling-metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend.

  4. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    Science.gov (United States)

    Choudhary, Kumari Sonal; Rohatgi, Neha; Halldorsson, Skarphedinn; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-06-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), is however a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signalling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling-metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend.
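
    The constraint-based machinery behind these two records can be illustrated with a tiny flux-balance problem in Python; the stoichiometry and bounds below are invented for the example, and genome-scale models are solved the same way with dedicated toolboxes such as COBRA:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis: maximise flux through an objective reaction
# subject to the steady-state constraint S v = 0 and flux bounds.
# Reactions: v0 uptake -> A, v1 A -> B, v2 B -> objective, v3 A -> waste.
S = np.array([
    [1, -1,  0, -1],   # metabolite A balance
    [0,  1, -1,  0],   # metabolite B balance
])
c = np.array([0, 0, -1, 0])            # maximise v2 (linprog minimises)
bounds = [(0, 10)] * 4

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)        # all flux routed A -> B -> objective
```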

  5. Reconstructing Late Holocene North Atlantic atmospheric circulation changes using functional paleoclimate networks

    Science.gov (United States)

    Franke, Jasper G.; Werner, Johannes P.; Donner, Reik V.

    2017-11-01

    Obtaining reliable reconstructions of long-term atmospheric circulation changes in the North Atlantic region presents a persistent challenge to contemporary paleoclimate research, which has been addressed by a multitude of recent studies. In order to contribute a novel methodological aspect to this active field, we apply here evolving functional network analysis, a recently developed tool for studying temporal changes of the spatial co-variability structure of the Earth's climate system, to a set of Late Holocene paleoclimate proxy records covering the last two millennia. The emerging patterns obtained by our analysis are related to long-term changes in the dominant mode of atmospheric circulation in the region, the North Atlantic Oscillation (NAO). By comparing the time-dependent inter-regional linkage structures of the obtained functional paleoclimate network representations to a recent multi-centennial NAO reconstruction, we identify co-variability between southern Greenland, Svalbard, and Fennoscandia as being indicative of a positive NAO phase, while connections from Greenland and Fennoscandia to central Europe are more pronounced during negative NAO phases. By drawing upon this correspondence, we use some key parameters of the evolving network structure to obtain a qualitative reconstruction of the NAO long-term variability over the entire Common Era (last 2000 years) using a linear regression model trained upon the existing shorter reconstruction.

  6. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Takeshi Hase

    Full Text Available Elucidating gene regulatory networks (GRNs) from large-scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus-driven approaches combining different algorithms, have become a potentially promising strategy for inferring accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy of combining many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet, integrating only high-performance algorithms, provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful for determining potentially optimal algorithms for the reconstruction of unknown regulatory networks, i.e., if the expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, the optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study, provides a powerful strategy towards harnessing the wisdom of the crowds in the reconstruction of unknown regulatory networks.
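
    A simplified rank-averaging consensus conveys the idea (generic Borda-style aggregation, not the exact TopkNet scoring):

```python
import numpy as np

rng = np.random.default_rng(7)
n_genes = 10

# Edge-confidence matrices from three hypothetical inference algorithms
# run on the same expression data (higher score = more confident edge).
scores = [rng.random((n_genes, n_genes)) for _ in range(3)]

def to_ranks(m):
    flat = m.flatten()
    ranks = np.empty(flat.size)
    ranks[np.argsort(-flat)] = np.arange(flat.size)   # 0 = most confident
    return ranks.reshape(m.shape)

# Consensus: average the per-method ranks, then keep the best-ranked edges.
avg_rank = np.mean([to_ranks(s) for s in scores], axis=0)
order = np.argsort(avg_rank, axis=None)
top_edges = [divmod(int(i), n_genes) for i in order[:15]]
print(top_edges)   # (regulator, target) index pairs
```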

  7. A random network based, node attraction facilitated network evolution method

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2016-03-01

    Full Text Available In the present study, I present a method of network evolution that is based on a random network and facilitated by node attraction. In this method, I assume that the initial network is a random network or a given initial network. When a node is ready to connect, it tends to link to the node already owning the most connections, which coincides with the general rule of node connection (Barabasi and Albert, 1999). In addition, a node may randomly disconnect a connection, i.e., the addition of connections in the network is accompanied by the pruning of some connections. The dynamics of network evolution is determined by the attraction factor Lambda of nodes, the probability of node connection, the probability of node disconnection, and the expected initial connectance. The attraction factor of nodes, the probability of node connection, and the probability of node disconnection are time- and node-varying. Various dynamics can be achieved by adjusting these parameters. The effects of simplified parameters on network evolution are analyzed. Changes in the attraction factor Lambda can reflect various effects of the node degree on the connection mechanism. Even changes in Lambda alone will generate various networks, from the random to the complex. Therefore, the present algorithm can be treated as a general model for network evolution. Modeling results show that to generate a power-law type of network, the likelihood of a node attracting connections must depend on a power function of the node's degree with a higher-order power. Matlab codes for a simplified version of the method are provided.
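
    The paper provides Matlab code; the sketch below re-implements the simplified rule in Python, with `lam` playing the role of the attraction factor Lambda and all parameter values chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(8)

# Start from a sparse random network; at each step a node connects
# preferentially to high-degree nodes (attraction ~ degree**lam) and a
# random existing edge may be pruned.
n, steps, lam = 200, 2000, 1.0
p_connect, p_disconnect = 0.9, 0.3

adj = rng.random((n, n)) < 0.01
adj = np.triu(adj, 1)
adj = adj | adj.T                                   # undirected, no loops

for _ in range(steps):
    if rng.random() < p_connect:
        i = rng.integers(n)
        w = adj.sum(axis=1).astype(float) ** lam    # node attraction
        w[i] = 0.0
        if w.sum() > 0:
            j = rng.choice(n, p=w / w.sum())
            adj[i, j] = adj[j, i] = True
    if rng.random() < p_disconnect:
        edges = np.argwhere(np.triu(adj, 1))
        if len(edges) > 0:
            i, j = edges[rng.integers(len(edges))]
            adj[i, j] = adj[j, i] = False

degrees = adj.sum(axis=1)   # inspect the resulting degree distribution
```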

  8. An algebra-based method for inferring gene regulatory networks.

    Science.gov (United States)

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS) that meets all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the...
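
    To make the modelling framework concrete, here is a toy Boolean polynomial dynamical system over F_2 in Python (the update polynomials are invented; the record's algorithm infers such systems from time series rather than simulating a known one):

```python
import numpy as np

# A Boolean polynomial dynamical system (BPDS) over F_2: each gene's
# next state is a polynomial in the current states, where AND = * and
# XOR = + (all arithmetic taken modulo 2).
def step(x):
    x1, x2, x3 = x
    return np.array([
        (x2 + x3) % 2,         # f1 = x2 XOR x3
        (x1 * x3) % 2,         # f2 = x1 AND x3
        (x1 + x1 * x2) % 2,    # f3 = x1 XOR (x1 AND x2)
    ])

# Iterate from one initial condition to produce a trajectory, the kind
# of time series such an inference algorithm takes as input.
x = np.array([1, 0, 1])
trajectory = [x]
for _ in range(7):
    x = step(x)
    trajectory.append(x)
print(np.array(trajectory))
```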

  9. Genome scale metabolic network reconstruction of Spirochaeta cellobiosiphila

    Directory of Open Access Journals (Sweden)

    Bharat Manna

    2017-10-01

    Full Text Available The substantial rise in global energy demand is one of the biggest challenges of this century. Environmental pollution due to rapid depletion of fossil fuel resources, and its alarming impact on climate change and global warming, have motivated researchers to look for non-petroleum-based, sustainable, eco-friendly, renewable, low-cost energy alternatives such as biofuel. Lignocellulosic biomass is one of the most promising bio-resources, with huge potential to contribute to this worldwide energy demand. However, the complex organization of cellulose, hemicellulose and lignin in lignocellulosic biomass requires extensive pre-treatment and enzymatic hydrolysis followed by fermentation, raising the overall production cost of biofuel. This encourages researchers to design cost-effective approaches for the production of second-generation biofuels. The products of enzymatic hydrolysis of cellulose are mostly glucose monomers or cellobiose units, which are subjected to fermentation. The genus Spirochaeta is a well-known group of obligate or facultative anaerobes living primarily on carbohydrate metabolism. Spirochaeta cellobiosiphila is a facultative anaerobe in this genus which uses a variety of monosaccharides and disaccharides as energy sources. However, the most rapid growth occurs on cellobiose, and fermentation yields a significant amount of ethanol, acetate, CO2 and H2, and small amounts of formate. It is predicted to be a promising microbial machinery for industrial fermentation processes for biofuel production. The metabolic pathways that govern cellobiose metabolism in Spirochaeta cellobiosiphila are yet to be explored. Function annotation of the genome sequence of Spirochaeta cellobiosiphila is in progress. In this work we aim to map all the metabolic activities for reconstruction of a genome-scale metabolic model of Spirochaeta cellobiosiphila.

  10. Reconstruction of the El Nino attractor with neural networks

    International Nuclear Information System (INIS)

    Grieger, B.; Latif, M.

    1993-01-01

    Based on a combined data set of sea surface temperature, zonal surface wind stress and upper ocean heat content, the dynamics of the El Nino phenomenon is investigated. In a reduced phase space spanned by the first four EOFs, two different stochastic models are estimated from the data. A nonlinear model represented by a simulated neural network is compared with a linear model obtained with the Principal Oscillation Pattern (POP) analysis. While the linear model is limited to damped oscillations onto a fixed-point attractor, the nonlinear model recovers a limit cycle attractor. This indicates that the real system is located above the bifurcation point in parameter space, supporting self-sustained oscillations. The results are discussed with respect to consistency with current theory. (orig.)

  11. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this model is modest, in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  12. Reconstruction of t anti tH (H → bb) events using deep neural networks with the CMS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rieger, Marcel; Erdmann, Martin; Fischer, Benjamin; Fischer, Robert; Heidemann, Fabian; Quast, Thorben; Rath, Yannik [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The measurement of Higgs boson production in association with top-quark pairs (t anti tH) is an important goal of Run 2 of the LHC, as it allows for a direct measurement of the underlying Yukawa coupling. Due to the complex final state, however, the analysis of semi-leptonic t anti tH events with the Higgs boson decaying into a pair of bottom quarks is challenging. A promising method for tackling the jet-parton association is Deep Neural Networks (DNNs). While already a widespread machine learning algorithm in modern industry, DNNs are on the way to becoming established in high energy physics. We present a study on the reconstruction of the final state using DNNs, comparing to Boosted Decision Trees (BDTs) as a benchmark scenario. This is accomplished by generating permutations of simulated events and comparing them with truth information to extract reconstruction efficiencies.
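
    The permutation step can be sketched as follows; a mass-compatibility chi-square plays the role of the DNN/BDT score here, and the four-vectors, resolutions and parton labels are invented for the example:

```python
import itertools
import numpy as np

rng = np.random.default_rng(9)

# Toy stand-in for jet-parton association: four "jets" (px, py, pz, E)
# must be assigned to hypothetical partons (b_had, q1, q2, spare).
mom = rng.normal(0, 40, size=(4, 3))
E = np.sqrt((mom ** 2).sum(axis=1) + 5.0 ** 2)   # massive-ish jets
jets = np.column_stack([mom, E])

def invariant_mass(vectors):
    px, py, pz, e = vectors.sum(axis=0)
    return np.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

M_W, M_TOP = 80.4, 172.5   # GeV

# Chi-square on reconstructed W and top masses; assumed 10/15 GeV
# resolutions. A real analysis would instead score each permutation
# with a trained DNN or BDT.
def score(assignment):
    b_had, q1, q2, _ = assignment
    m_w = invariant_mass(jets[[q1, q2]])
    m_t = invariant_mass(jets[[b_had, q1, q2]])
    return ((m_w - M_W) / 10) ** 2 + ((m_t - M_TOP) / 15) ** 2

best = min(itertools.permutations(range(4)), key=score)
print("best assignment:", best)
```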

  13. Summer drought reconstruction in northeastern Spain inferred from a tree ring latewood network since 1734

    Science.gov (United States)

    Tejedor, E.; Saz, M. A.; Esper, J.; Cuadrat, J. M.; de Luis, M.

    2017-08-01

    Drought recurrence in the Mediterranean is regarded as a fundamental factor for socioeconomic development and the resilience of natural systems in the context of global change. However, knowledge of past droughts has been hampered by the absence of high-resolution proxies. We present a drought reconstruction for the northeast of the Iberian Peninsula based on a new dendrochronology network and the Standardized Precipitation Evapotranspiration Index (SPEI). A total of 774 latewood width series from 387 trees of P. sylvestris and P. uncinata was combined in an interregional chronology. The new chronology, calibrated against gridded climate data, reveals a robust relationship with the SPEI, representing drought conditions of July and August. We developed a summer drought reconstruction for the period 1734-2013 representative of the northeastern and central Iberian Peninsula. We identified 16 extremely dry and 17 extremely wet summers and four decadal-scale dry and wet periods, including 2003-2013 as the driest episode of the reconstruction.

  14. Benchmarking burnup reconstruction methods for dynamically operated research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-01

    The burnup of an HEU-fueled, dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating burnup of research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample's burnup. The individual isotopes, or sets of isotopes, include 148Nd, 137Cs+137Ba, 139La, and 145Nd+146Nd. The storage documentation for the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated by the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42 GWd/MTHM and 426.27 GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all the methods evaluated, the results were within 11.3% of either reference burnup. The results were mixed in closeness to the two reference burnups; however, consistent results were achieved from all three experimental samples.

  15. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
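
    A sketch of the strategy the study found most robust, uniform node sampling, applied to a synthetic temporal edge list:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy temporal network: (time, u, v) contact events among 50 nodes.
events = np.array([(t, *sorted(rng.choice(50, 2, replace=False)))
                   for t in range(1000)])

# Uniform node sampling: keep a random subset of nodes and retain only
# the events in which both endpoints survive.
kept = rng.choice(50, size=25, replace=False)
mask = np.isin(events[:, 1], kept) & np.isin(events[:, 2], kept)
subnet = events[mask]

# Compare a simple statistic before and after sampling: link activity.
print("events kept:", len(subnet), "of", len(events))
```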

  16. Method and system for mesh network embedded devices

    Science.gov (United States)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  17. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)
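
    The multiplicative EM update at the heart of such reconstructions is compact enough to sketch. The toy below is real-valued with a random nonnegative system matrix, whereas the paper's remodelled EM handles complex images and builds the system model from the radial trajectory and coil sensitivities:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy ML-EM reconstruction: recover a nonnegative image x from
# measurements y = A x via the classic multiplicative update
#   x <- x / (A^T 1) * A^T (y / (A x)).
n_pix, n_meas = 64, 256
A = rng.random((n_meas, n_pix))        # invented nonnegative system matrix
x_true = rng.random(n_pix)
y = A @ x_true

x = np.ones(n_pix)                     # positive initial estimate
sens = A.T @ np.ones(n_meas)           # sensitivity image A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens  # monotone EM iteration

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```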

  18. A Systematic, Automated Network Planning Method

    DEFF Research Database (Denmark)

    Holm, Jens Åge; Pedersen, Jens Myrup

    2006-01-01

    This paper describes a case study conducted to evaluate the viability of a systematic, automated network planning method. The motivation for developing the network planning method was that many data networks are planned in an ad-hoc manner with no assurance of quality of the solution with respect ... structures that are ready to implement in a real-world scenario are discussed at the end of the paper. These are in the area of ensuring line independence and the complexity of the design rules for the planning method...

  19. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic in complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network with respect to degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.

  20. Engine cylinder pressure reconstruction using crank kinematics and recurrently-trained neural networks

    Science.gov (United States)

    Bennett, C.; Dunne, J. F.; Trimby, S.; Richardson, D.

    2017-02-01

    A recurrent non-linear autoregressive with exogenous input (NARX) neural network is proposed, and a suitable fully-recurrent training methodology is adapted and tuned, for reconstructing cylinder pressure in multi-cylinder IC engines using measured crank kinematics. This type of indirect sensing is important for cost effective closed-loop combustion control and for On-Board Diagnostics. The challenge addressed is to accurately predict cylinder pressure traces within the cycle under generalisation conditions: i.e. using data not previously seen by the network during training. This involves direct construction and calibration of a suitable inverse crank dynamic model, which owing to singular behaviour at top-dead-centre (TDC), has proved difficult via physical model construction, calibration, and inversion. The NARX architecture is specialised and adapted to cylinder pressure reconstruction, using a fully-recurrent training methodology which is needed because the alternatives are too slow and unreliable for practical network training on production engines. The fully-recurrent Robust Adaptive Gradient Descent (RAGD) algorithm, is tuned initially using synthesised crank kinematics, and then tested on real engine data to assess the reconstruction capability. Real data is obtained from a 1.125 l, 3-cylinder, in-line, direct injection spark ignition (DISI) engine involving synchronised measurements of crank kinematics and cylinder pressure across a range of steady-state speed and load conditions. The paper shows that a RAGD-trained NARX network using both crank velocity and crank acceleration as input information, provides fast and robust training. By using the optimum epoch identified during RAGD training, acceptably accurate cylinder pressures, and especially accurate location-of-peak-pressure, can be reconstructed robustly under generalisation conditions, making it the most practical NARX configuration and recurrent training methodology for use on production engines.

  1. Constructing an Intelligent Patent Network Analysis Method

    Directory of Open Access Journals (Sweden)

    Chao-Chan Wu

    2012-11-01

    Full Text Available Patent network analysis, an advanced method of patent analysis, is a useful tool for technology management. This method visually displays all the relationships among the patents and enables analysts to intuitively comprehend the overview of a set of patents in the field of the technology being studied. Although patent network analysis possesses relative advantages over traditional methods of patent analysis, it is subject to several crucial limitations. To overcome the drawbacks of the current method, this study proposes a novel patent analysis method, called the intelligent patent network analysis method, to make a visual network with great precision. Based on artificial intelligence techniques, the proposed method provides an automated procedure for searching patent documents, extracting patent keywords, and determining the weight of each patent keyword in order to generate a sophisticated visualization of the patent network. This study proposes a detailed procedure for generating an intelligent patent network that is helpful for improving the efficiency and quality of patent analysis. Furthermore, patents in the field of Carbon Nanotube Backlight Units (CNT-BLU) were analyzed to verify the utility of the proposed method.

  2. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other the working camera. The tracking camera is used to track the positions of the MBVS and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the movability of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  3. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  4. Reconstruction of source location in a network of gravitational wave interferometric detectors

    International Nuclear Information System (INIS)

    Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Clapson, Andre-Claude; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Leroy, Nicolas; Varvella, Monica

    2006-01-01

    This paper deals with the reconstruction of the direction of a gravitational wave source using the detections made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied to the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ² minimization technique. For reasonably large signals (SNR > 4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1°. We also investigate the effect of the network geometry, assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions, and in the case of 6 detectors, a precision of less than 1° on the source direction can be reached for 99% of the sky.
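
    The timing-based χ² fit can be sketched in Python; the detector coordinates below are rough placeholders rather than precise geodetic values, and the 0.1 ms timing error is an assumption for the example:

```python
import numpy as np
from scipy.optimize import minimize

c = 3.0e8  # speed of light, m/s

# Approximate detector positions (Earth-centred, metres).
detectors = np.array([
    [-2.16e6, -3.83e6, 4.60e6],   # LIGO Hanford (approx.)
    [-7.43e4, -5.50e6, 3.21e6],   # LIGO Livingston (approx.)
    [ 4.55e6,  8.43e5, 4.38e6],   # Virgo (approx.)
])

def direction(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Simulate arrival times for a plane wave from a chosen sky direction,
# with 0.1 ms timing jitter standing in for the measurement error.
rng = np.random.default_rng(12)
n_true = direction(1.0, 2.0)
sigma = 1e-4
t_arr = -detectors @ n_true / c + rng.normal(0, sigma, 3)

# Chi-square on arrival-time differences, minimised over the sky angles.
def chi2(angles):
    t_pred = -detectors @ direction(*angles) / c
    dt = (t_arr - t_arr[0]) - (t_pred - t_pred[0])
    return np.sum((dt[1:] / sigma) ** 2)

res = minimize(chi2, x0=[0.5, 0.5], method="Nelder-Mead")
print("recovered angles:", res.x)
```

    With only three detectors the timing fit has a mirror ambiguity across the detector plane, consistent with the degraded accuracy near that plane reported above.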

  5. Binary Classification Method of Social Network Users

    Directory of Open Access Journals (Sweden)

    I. A. Poryadin

    2017-01-01

    Full Text Available The subject of this research is a binary classification method for social network users based on analysis of the data they have posted. The relevance of the task of gaining information about a person by examining the content of his/her pages in social networks is exemplified. The most common approach to its solution is visual browsing. An order issued by a regional authority in our country illustrates that its use in school education is needed. The article shows the limitations of visual browsing of pupils' pages in social networks as a tool for the teacher and the school psychologist, and argues that the analysis of social network users' data should be automated. Publications describing methods for acquiring, processing, and analyzing such data are reviewed, and their advantages and disadvantages considered. The article also gives arguments supporting a proposal to study a classification method for social network users. One such method is credit scoring, which is used in banks and credit institutions to assess the solvency of clients. Based on the high efficiency of the method, a significant expansion of its use to other areas of society is proposed. The use of logistic regression as the mathematical apparatus of the proposed binary classification method is justified. Such an approach makes it possible to take into account the different types of data extracted from social networks, among them: personal user data, information about hobbies, friends, graphic and text information, and behaviour characteristics. The article describes a number of existing methods of data transformation that can be applied to solve the problem. An experiment on binary gender-based classification of social network users is described. The logistic model obtained for this example includes multiple logical variables obtained by transforming the user surnames. This experiment confirms the feasibility of the proposed method. Further work is to define a system...
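
    A minimal version of such a classifier, with invented binary features standing in for attributes extracted from profile content:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(13)

# Hypothetical binary features extracted from profile content (e.g.
# "surname ends in -a", "follows sports groups", ...); the feature
# semantics, weights and labels are all invented for this sketch.
n = 500
X = rng.integers(0, 2, size=(n, 6)).astype(float)
true_w = np.array([2.0, -1.5, 0.5, 0.0, 1.0, -0.5])
p = 1 / (1 + np.exp(-(X @ true_w - 0.5)))
y = (rng.random(n) < p).astype(int)      # e.g. a gender label

clf = LogisticRegression().fit(X, y)
print("coefficients:", np.round(clf.coef_[0], 2))
print("training accuracy:", clf.score(X, y))
```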

  6. Reconstructing consensus Bayesian network structures with application to learning molecular interaction networks

    NARCIS (Netherlands)

    Fröhlich, H.; Klau, G.W.

    2013-01-01

    Bayesian Networks are an established computational approach for data driven network inference. However, experimental data is limited in its availability and corrupted by noise. This leads to an unavoidable uncertainty about the correct network structure. Thus sampling or bootstrap based strategies...

  7. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography, the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...

  8. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)

    2010-09-21

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
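
    The MLEM update used in this evaluation is the same whether the basis functions sit on a regular voxel grid or an irregular tetrahedral mesh, since the grid enters only through the system matrix. A minimal sketch with a random dense system matrix standing in for the real ray-traced one:

```python
import numpy as np

def mlem(A, b, n_iter=50, eps=1e-12):
    """Maximum-likelihood EM for Poisson data b ~ Poisson(A @ x).
    A is the (rays x basis-functions) system matrix."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = b / np.maximum(proj, eps)  # measured / estimated data
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Tiny demo: random nonnegative system with Poisson noise.
rng = np.random.default_rng(0)
A = rng.random((200, 30))
x_true = rng.random(30) * 10
b = rng.poisson(A @ x_true).astype(float)
x_hat = mlem(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```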

  11. Analysis of the methods and reconstruction of iodine doses from the Chernobyl release in Belarus

    International Nuclear Information System (INIS)

    Lutsko, A.; Krivoruchko, K.; Gribov, A.

    1997-01-01

    The paper considers a method of reconstructing the iodine-131 fallout from systematic exposure measurements. The measurements used were taken with standard DP-5 dosimeters at the monitoring sites of the State Hydrometeorological Service network; these data have been collected since the Chernobyl NPP accident. A short-lived exponential component has been deduced from the decay of the exposure dose. Maps of the iodine-131 release for the period May 1-31, 1986 have been constructed in an attempt to estimate the doses for the initial period of the accident. The paper also dwells on the intricacy of the problem and the refinements to be made to the dose commitments to allow for a continuing release and for meteorological changes that are comparable in duration to the half-life of iodine-131. A comparative analysis has been made of various methods of dose reconstruction. The results obtained are compared with maps of the increased incidence of thyroid gland cancer in adults and children in Belarus. (authors) 14 refs., 2 tabs., 4 figs

  12. Methods for Analyzing Pipe Networks

    DEFF Research Database (Denmark)

    Nielsen, Hans Bruun

    1989-01-01

    to formulate the flow equations in terms of pipe discharges than in terms of energy heads. The behavior of some iterative methods is compared in the initial phase with large errors. It is explained why the linear theory method oscillates when the iteration gets close to the solution, and it is further...... demonstrated that this method offers good starting values for a Newton-Raphson iteration....
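
    A small illustration of solving pipe network flow equations formulated in terms of energy heads with a Newton-type iteration (scipy's fsolve); the network layout, the simplified square-root head-loss law and all numbers are invented for the example.

```python
import numpy as np
from scipy.optimize import fsolve

# Node 0 is a fixed-head reservoir; nodes 1-3 have unknown heads and known
# demands. Pipes: (i, j, conductance); flow q = C * sign(dH) * sqrt(|dH|)
# is a simplified head-loss law used only to keep the sketch short.
pipes = [(0, 1, 2.0), (1, 2, 1.5), (1, 3, 1.0), (2, 3, 1.2)]
fixed_head = {0: 50.0}
demand = {1: 1.0, 2: 0.8, 3: 0.6}     # outflows at the junction nodes
unknown = [1, 2, 3]

def residuals(x):
    """Continuity residual (inflow minus demand) at each junction."""
    H = dict(fixed_head)
    H.update(dict(zip(unknown, x)))
    balance = {n: -demand[n] for n in unknown}
    for i, j, C in pipes:
        dH = H[i] - H[j]
        q = C * np.sign(dH) * np.sqrt(abs(dH))   # flow from i to j
        if j in balance: balance[j] += q
        if i in balance: balance[i] -= q
    return [balance[n] for n in unknown]

# fsolve performs a Newton-like iteration; a rough starting guess suffices.
H_sol = fsolve(residuals, x0=[45.0, 40.0, 35.0])
print("junction heads:", dict(zip(unknown, np.round(H_sol, 3))))
```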

  13. ENSURING SAFETY DURING RECONSTRUCTION WORK ON WATER SUPPLY NETWORKS UNDER CONSTRAINED CONDITIONS

    Directory of Open Access Journals (Sweden)

    DIDENKO L. M.

    2016-07-01

    Full Text Available Summary. Statement of the problem. In all regions of our country, water supply networks show considerable physical deterioration and obsolescence, because most of them were laid in the middle of the last century. It is known that more than 50% of distribution pipelines are made of steel, while the average service life of metal pipes in water supply networks is 30 years [1]. Statistical data testify that more than 34% of water supply and sewerage networks are in an emergency state. Thus, work on the reconstruction of this type of engineering network makes up a large share of the building industry of Ukraine. Since the complete replacement of all pipes would require heavy material expenditure, reconstruction and major repairs are for this reason mainly carried out on individual emergency sections. It is logical to assert that ensuring the safe execution of the type of work under consideration is complicated by the presence of harmful and dangerous production factors arising from the complex factor of constrained conditions. This factor arises because water supply networks are laid within established municipal development and on the territory of operating industrial enterprises. The high level of injuries during reconstruction work testifies to its danger. According to the law of Ukraine "On labour protection" (article 13), an employer is obliged to create workplace labour conditions that comply with normative legal acts and with the requirements of legislation on workers' rights in the area of labour protection [2]. Safety during reconstruction work on water supply networks can be ensured only through a comprehensive approach to the study of this problem, which includes: research into the influence of the factors of constrained conditions; identification of the features of the construction, assembly, dismantling, earthworks and other types of work performed on a site during reconstruction; improvement of existing

  14. Linogram and other direct Fourier methods for tomographic reconstruction

    International Nuclear Information System (INIS)

    Magnusson, M.

    1993-01-01

    Computed tomography (CT) is an outstanding breakthrough in technology as well as in medical diagnostics. The aim in CT is to produce an image with good image quality as fast as possible. The two most well-known methods for CT-reconstruction are the Direct Fourier Method (DFM) and the Filtered Backprojection Method (FBM). This thesis is divided in four parts. In part 1 we give an introduction to the principles of CT as well as a basic treatise of the DFM and the FBM. We also present a short CT history as well as brief descriptions of techniques related to X-ray CT such as SPECT, PET and MRI. Part 2 is devoted to the Linogram Method (LM). The method is presented both intuitively and rigorously and a complete algorithm is given for the discrete case. The implementation has been done using the SNARK subroutine package with various parameters and phantom images. For comparison, the FBM has been applied to the same input projection data. The experiments show that the LM gives almost the same image quality, pixel for pixel, as the FBM. In part 3 we show that the LM is a close relative of the common DFM. We give a new extended explanation of artifacts in DFMs. The source of the problem is twofold: interpolation errors and circular convolution. By identifying the second effect as distinct from the first one, we are able to suggest and verify remedies for the DFM which bring the image quality on par with the FBM. One of these remedies is the LM. A slight difficulty with both LM and ordinary DFM techniques is that they require a special projection geometry, whereas most commercial CT-scanners provide fan beam projection data. However, the wanted linogram projection data can be interpolated from fan beam projection data. In part 4, we show that it is possible to obtain good image quality with both LM and DFM techniques using fan beam projection input data. The thesis concludes that the computation cost can be essentially decreased by using the LM or other DFMs instead of the FBM.
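
    A compact sketch of the DFM pipeline discussed in the thesis: 1D Fourier transform of each parallel projection, placement on radial lines in 2D frequency space (the central slice theorem), interpolation to a Cartesian grid, and inverse 2D FFT. The interpolation step is exactly where the artifacts analyzed in part 3 originate; the crude linear gridding and glossed-over orientation conventions below are illustrative simplifications, not the remedies proposed in the thesis.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.interpolate import griddata

# Simple phantom and a minimal Radon transform (rotate and sum columns).
N = 128
Y, X = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
phantom = (X**2 + Y**2 < 0.25).astype(float) + ((X - 0.2)**2 + Y**2 < 0.01)
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = np.stack([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Central slice theorem: the 1D FT of each projection is a radial line
# through the 2D FT of the image.
proj_ft = np.fft.fftshift(
    np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
freqs = np.fft.fftshift(np.fft.fftfreq(N))
theta = np.deg2rad(angles)[:, None]
kx = freqs[None, :] * np.cos(theta)     # polar sample positions in k-space
ky = freqs[None, :] * np.sin(theta)

# Interpolate the polar samples onto a Cartesian k-space grid; this is the
# step where DFM interpolation artifacts arise.
KX, KY = np.meshgrid(freqs, freqs)
pts = (kx.ravel(), ky.ravel())
F = (griddata(pts, proj_ft.real.ravel(), (KX, KY), method="linear", fill_value=0.0)
     + 1j * griddata(pts, proj_ft.imag.ravel(), (KX, KY), method="linear", fill_value=0.0))
recon = np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F))))
print("reconstruction range:", recon.min(), recon.max())
```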

  15. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets

    NARCIS (Netherlands)

    Levering, J.; Fiedler, T.; Sieg, A.; van Grinsven, K.W.A.; Hering, S.; Veith, N.; Olivier, B.G.; Klett, L.; Hugenholtz, J.; Teusink, B.; Kreikemeyer, B.; Kummer, U.

    2016-01-01

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes
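
    Genome-scale models of this kind are typically analyzed with flux balance analysis, a linear program over the stoichiometric matrix. A toy example with three reactions and two metabolites (all numbers invented, not taken from the S. pyogenes model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize a "biomass" flux subject to the
# steady-state mass balance S v = 0 and an uptake bound.
#            v1: -> A    v2: A -> B    v3: B -> (biomass)
S = np.array([[ 1, -1,  0],    # metabolite A
              [ 0,  1, -1]])   # metabolite B
bounds = [(0, 10), (0, None), (0, None)]   # uptake limited to 10 units
c = np.array([0.0, 0.0, -1.0])             # linprog minimizes, so negate
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -res.fun, "| fluxes:", res.x)
```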

  16. A simulation of portable PET with a new geometric image reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456 8611 (Japan); Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474 8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644 0023 (Japan)

    2006-12-20

    A new method is proposed for three-dimensional positron emission tomography image reconstruction. The method uses the elementary geometric property of lines of response whereby two lines of response originating from radioactive isotopes at the same position pass within a few millimeters of each other. The method differs from the filtered back projection method and the iterative reconstruction method. The method is applied to a simulation of portable positron emission tomography.

  17. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    Science.gov (United States)

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ∥b - Ax∥_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not widely been used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
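
    A minimal sketch of the blockwise idea: wrap the row blocks of A in a scipy LinearOperator whose matvec/rmatvec loop over the blocks, and hand it to scipy's lsqr (Tikhonov regularization enters via the damp parameter). The blocks here live in memory for brevity; the point of the paper is that they need not.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import LinearOperator, lsqr

# Stand-in for a huge CBCT system matrix: a list of sparse row blocks.
rng = np.random.default_rng(0)
m_block, n, n_blocks = 500, 800, 6
blocks = [sprandom(m_block, n, density=0.01, random_state=i)
          for i in range(n_blocks)]

def matvec(x):
    """A @ x computed one row block at a time."""
    return np.concatenate([B @ x for B in blocks])

def rmatvec(y):
    """A.T @ y accumulated block by block."""
    out = np.zeros(n)
    for k, B in enumerate(blocks):
        out += B.T @ y[k * m_block:(k + 1) * m_block]
    return out

A_op = LinearOperator((m_block * n_blocks, n), matvec=matvec, rmatvec=rmatvec)

# Synthetic projection data, then damped LSQR, which solves
# min ||b - A x||^2 + damp^2 ||x||^2 (Tikhonov regularization).
x_true = rng.random(n)
b = matvec(x_true)
x_hat = lsqr(A_op, b, damp=1e-3, iter_lim=200)[0]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```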

  18. Diagnosis method utilizing neural networks

    International Nuclear Information System (INIS)

    Watanabe, K.; Tamayama, K.

    1990-01-01

    Studies have been made on the technique of neural networks, which will be used to identify a cause of a small anomalous state in the reactor coolant system of the ATR (Advanced Thermal Reactor). Three phases of analyses were carried out in this study. First, a 100-second simulation was made to determine how the plant parameters respond after the occurrence of a transient decrease in reactivity, flow rate and temperature of feed water, and an increase in the steam flow rate and steam pressure, each of which would produce a decrease of the water level in a steam drum of the ATR. Next, the simulation data were analysed using an autoregressive model. From this analysis, a total of 36 coherency functions up to 0.5 Hz in each transient were computed among nine important and detectable plant parameters: neutron flux, flow rate of coolant, steam or feed water, water level in the steam drum, pressure and opening area of the control valve in a steam pipe, feed water temperature and electrical power. Last, learning of neural networks composed of 96 input, 4-9 hidden and 5 output layer units was done by use of the generalized delta rule, namely a back-propagation algorithm. These convergent computations were continued until the difference between the desired outputs (1 for the direct cause, 0 for the four other ones) and the actual outputs reached less than 10%. (1) Coherency functions were not governed by the decreasing rate of reactivity in the range of 0.41×10⁻² dollar/s to 1.62×10⁻² dollar/s, by the decreasing depth of the feed water temperature in the range of 3 deg C to 10 deg C, or by a change of 10% or less in the three other causes. Changes in coherency functions depended only on the type of cause. (2) The direct cause could be discriminated from the other four with an output level of 0.94 ± 0.01. A maximum output height of 0.06 was found among the other four causes. (3) The calculation load, represented as the product of learning times and number of hidden units, did not depend on the
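
    A minimal numpy version of the training setup described above: a 96-input, 5-output feed-forward network with one hidden layer, trained by the generalized delta rule (plain back-propagation). The layer sizes follow the record; the data are random stand-ins for the coherency functions and everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 96, 6, 5
X = rng.random((40, n_in))                     # 40 training transients
T = np.eye(n_out)[rng.integers(0, n_out, 40)]  # one-hot cause labels

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)

eta = 0.5
for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)                   # hidden activations
    Y = sigmoid(H @ W2 + b2)                   # output activations
    dY = (Y - T) * Y * (1 - Y)                 # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)             # delta propagated back
    W2 -= eta * H.T @ dY / len(X); b2 -= eta * dY.mean(axis=0)
    W1 -= eta * X.T @ dH / len(X); b1 -= eta * dH.mean(axis=0)

# The paper stops training once outputs are within 10% of the targets.
print("max output error:", np.abs(Y - T).max())
```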

  19. Porous media microstructure reconstruction using pixel-based and object-based simulated annealing: comparison with other reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)

    2008-07-01

    The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Digital image analysis techniques are therefore a very fast and low-cost methodology for predicting physical properties, requiring only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the relaxation method simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model maintains the porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is in its early stages, only the 2D results are presented. (author)
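
    A minimal pixel-based simulated annealing sketch in the spirit of the method: start from a random image with the reference porosity, and swap pore/solid pixels to match a target statistic. A horizontal two-point probability is used here as a simpler stand-in for the correlation, chord-size and distance-transform statistics used by the authors, and a real implementation would update the statistic incrementally rather than recompute it after every swap.

```python
import numpy as np

def s2_x(img, rmax=10):
    """Horizontal two-point probability S2(r) of a binary image."""
    return np.array([(img * np.roll(img, -r, axis=1)).mean()
                     for r in range(1, rmax + 1)])

rng = np.random.default_rng(0)
N = 48
Y, X = np.mgrid[0:N, 0:N]
ref = np.zeros((N, N), dtype=int)          # reference: overlapping discs
for _ in range(12):
    cx, cy, rad = rng.integers(0, N, 2).tolist() + [int(rng.integers(3, 7))]
    ref[(X - cx) ** 2 + (Y - cy) ** 2 < rad ** 2] = 1
target = s2_x(ref)

# Random start with the same porosity; swaps keep phase fractions fixed.
img = rng.permutation(ref.ravel()).reshape(N, N)
energy = np.sum((s2_x(img) - target) ** 2)
T = 1e-4
for step in range(5000):
    p = tuple(rng.integers(0, N, 2)); s = tuple(rng.integers(0, N, 2))
    if img[p] == img[s]:
        continue
    img[p], img[s] = img[s], img[p]                  # trial swap
    e_new = np.sum((s2_x(img) - target) ** 2)
    if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
        energy = e_new                               # accept
    else:
        img[p], img[s] = img[s], img[p]              # reject: undo swap
    T *= 0.999                                       # cooling schedule
print("final S2 mismatch:", energy)
```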

  20. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
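
    A sketch of the Barzilai-Borwein ingredient of GPBB on a small smoothed-TV least-squares problem (a 1D signal for brevity). The safeguards that make GPBB robust, nonmonotone line search and a convergence-checking stopping rule, are omitted; the problem data are synthetic.

```python
import numpy as np

# Objective: f(x) = 0.5 ||A x - b||^2 + lam * sum_i sqrt((D x)_i^2 + eps^2)
rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(150, n)) / np.sqrt(150)
x_true = np.zeros(n); x_true[50:90] = 1.0; x_true[120:140] = -0.5
b = A @ x_true + 0.01 * rng.normal(size=150)

D = np.diff(np.eye(n), axis=0)            # finite-difference operator
lam, eps = 0.05, 1e-3

def grad(x):
    """Gradient of the smoothed-TV regularized least-squares objective."""
    Dx = D @ x
    return A.T @ (A @ x - b) + lam * (D.T @ (Dx / np.sqrt(Dx ** 2 + eps ** 2)))

x = np.zeros(n)
g = grad(x)
step = 1e-2
for k in range(500):
    x_new = x - step * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    step = (s @ s) / max(s @ y, 1e-12)    # BB1 step size
    x, g = x_new, g_new
print("recovered plateau values:", x[60].round(2), x[130].round(2))
```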

  1. A combined reconstruction-classification method for diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  2. New method to analyze internal disruptions with tomographic reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Tanzi, C.P. [EURATOM-FOM Association, FOM-Instituut voor Plasmafysica Rijnhuizen, P.O. BOX 1207, 3430 BE Nieuwegein (The Netherlands); de Blank, H.J. [Max-Planck-Institut fuer Plasmaphysik, EURATOM-IPP Association, 85740 Garching (Germany)

    1997-03-01

    Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Würzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. copyright 1997 American Institute of Physics.

  4. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  5. Reconstruction of on-axis lensless Fourier transform digital hologram with the screen division method

    Science.gov (United States)

    Jiang, Hongzhen; Liu, Xu; Liu, Yong; Li, Dong; Chen, Zhu; Zheng, Fanglan; Yu, Deqiang

    2017-10-01

    An effective approach for reconstructing an on-axis lensless Fourier transform digital hologram using the screen division method is proposed. Firstly, the on-axis Fourier transform digital hologram is divided into sub-holograms. Then the reconstruction result of every sub-hologram is obtained, according to the position of the corresponding sub-hologram in the hologram reconstruction plane, with a Fourier transform operation. Finally, the reconstruction image of the on-axis Fourier transform digital hologram is acquired by superposition of the reconstruction results of the sub-holograms. Compared with the traditional reconstruction method based on phase shifting technology, in which multiple digital holograms must be recorded to obtain the reconstruction image, this method obtains the reconstruction image with only one digital hologram and therefore greatly simplifies the recording and reconstruction process of on-axis lensless Fourier transform digital holography. The effectiveness of the proposed method is demonstrated by the experimental results, and it has potential applications in holographic measurement and display.
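
    Up to a quadratic phase factor, reconstructing a lensless Fourier transform hologram amounts to a single discrete Fourier transform, so by linearity and the FFT shift theorem each sub-hologram can be transformed separately and the results superposed. The sketch below verifies this numerically on a random stand-in hologram; it illustrates the decomposition principle, not the paper's full optical reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 256, 64                       # hologram size, tile (sub-hologram) size
holo = rng.normal(size=(N, N))       # stand-in for a recorded hologram

full = np.fft.fft2(holo)             # direct reconstruction (reference)

u = np.arange(N)
recon = np.zeros((N, N), dtype=complex)
for r0 in range(0, N, T):
    for c0 in range(0, N, T):
        tile = holo[r0:r0 + T, c0:c0 + T]
        # FFT of the zero-padded tile, evaluated on the full N x N grid,
        # times a phase ramp encoding the tile position (shift theorem).
        tile_ft = np.fft.fft2(tile, s=(N, N))
        phase = np.exp(-2j * np.pi * (u[:, None] * r0 + u[None, :] * c0) / N)
        recon += tile_ft * phase

print("max deviation from direct FFT:", np.abs(recon - full).max())
```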

  6. Reconstruction method for data protection in telemedicine systems

    Science.gov (United States)

    Buldakova, T. I.; Suyatinov, S. I.

    2015-03-01

    In the report, an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver is proposed. Since biosignals are unique to each person, suitable processing of them yields the information necessary for creating cryptographic keys. The processing is based on reconstructing a mathematical model that generates time series diagnostically equivalent to the initial biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained in the reconstruction process can be used not only for diagnostics, but also for protecting the data transmitted in telemedicine complexes.
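
    A heavily simplified sketch of the idea: both sides fit a model to the shared biosignal and derive the same symmetric key from the quantized model parameters. An autoregressive least-squares fit stands in for the nonlinear dynamical model reconstruction of the paper, and the signal is synthetic; quantization is what lets the two fits agree despite small numerical differences.

```python
import hashlib
import numpy as np

def ar_coefficients(signal, order=8):
    """Least-squares fit of an AR model x[t] = sum_k a_k x[t-k-1]."""
    X = np.column_stack([signal[order - k - 1:len(signal) - k - 1]
                         for k in range(order)])
    y = signal[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def key_from_model(coeffs, decimals=3):
    """Derive a 256-bit symmetric key from quantized model parameters."""
    payload = np.round(coeffs, decimals).tobytes()
    return hashlib.sha256(payload).hexdigest()

# Stand-in biosignal: noisy sum of "cardiac" harmonics.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
ecg_like = (np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
            + 0.01 * rng.normal(size=t.size))

a = ar_coefficients(ecg_like)
print("AR coefficients:", np.round(a, 3))
print("derived key:", key_from_model(a)[:16], "...")
```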

  7. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for the short-term prediction of financial operations using artificial neural networks are considered. The paper describes the methods used for the prediction of financial data, as well as the forecasting system developed with a neural network. The architecture of the network uses four different technical indicators, which are based on the raw data, together with the current day of the week. The network is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the back-propagation algorithm. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  8. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model.

    Science.gov (United States)

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.

  10. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.

  11. SCNS: a graphical tool for reconstructing executable regulatory networks from single-cell genomic data.

    Science.gov (United States)

    Woodhouse, Steven; Piterman, Nir; Wintersteiger, Christoph M; Göttgens, Berthold; Fisher, Jasmin

    2018-05-25

    Reconstruction of executable mechanistic models from single-cell gene expression data represents a powerful approach to understanding developmental and disease processes. New ambitious efforts like the Human Cell Atlas will soon lead to an explosion of data with potential for uncovering and understanding the regulatory networks which underlie the behaviour of all human cells. In order to take advantage of this data, however, there is a need for general-purpose, user-friendly and efficient computational tools that can be readily used by biologists who do not have specialist computer science knowledge. The Single Cell Network Synthesis toolkit (SCNS) is a general-purpose computational tool for the reconstruction and analysis of executable models from single-cell gene expression data. Through a graphical user interface, SCNS takes single-cell qPCR or RNA-sequencing data taken across a time course, and searches for logical rules that drive transitions from early cell states towards late cell states. Because the resulting reconstructed models are executable, they can be used to make predictions about the effect of specific gene perturbations on the generation of specific lineages. SCNS should be of broad interest to the growing number of researchers working in single-cell genomics and will help further facilitate the generation of valuable mechanistic insights into developmental, homeostatic and disease processes.

  12. Reduction Method for Active Distribution Networks

    DEFF Research Database (Denmark)

    Raboni, Pietro; Chen, Zhe

    2013-01-01

    On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for including Distribution...... Networks also would be too large. In this paper an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated...... by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described....

  13. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks, which is their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions on the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, artificial neural networks have been trained using the projections of a segmented plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fractions of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gas and particulate fission product release fractions are well determined
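
    A sketch of the inverse-mapping setup with scikit-learn's MLPRegressor: a synthetic random linear map plays the role of the segmented plume dispersion model, and the network learns to map detector dose rates plus weather back to release fractions. All dimensions and noise levels are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_detectors, n_outputs = 16, 3   # e.g. noble gas / iodine / particulate fractions
n_samples = 5000
source_terms = rng.random((n_samples, n_outputs))
weather = rng.random((n_samples, 2))     # stability index, wind speed (normalized)

# Stand-in forward model: (source term, weather) -> detector dose rates,
# with multiplicative measurement noise.
forward = rng.random((n_outputs + 2, n_detectors))
doses = np.hstack([source_terms, weather]) @ forward
doses *= rng.lognormal(0.0, 0.05, doses.shape)

# Train the network to invert the forward map.
X = np.hstack([doses, weather])
X_tr, X_te, y_tr, y_te = train_test_split(X, source_terms, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out events:", net.score(X_te, y_te))
```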

  14. A hybrid Planning Method for Transmission Networks in a Deregulated Environment

    DEFF Research Database (Denmark)

    Xu, Zhao; Dong, Zhaoyang; Poulsen, Kit

    2006-01-01

    The restructuring of power industries has brought fundamental changes to both power system operation and planning. This paper presents a new planning method using a multi-objective optimization (MOOP) technique, as well as human knowledge, to expand the transmission network in an open-access scheme...

  15. Simple method of modelling of digital holograms registering and their optical reconstruction

    International Nuclear Information System (INIS)

    Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G

    2016-01-01

    A technique for modeling the recording of digital holograms and the optical reconstruction of images from them is described. The method takes into account the characteristics of the object, the digital camera's photosensor, and the spatial light modulator used for displaying the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted. (paper)

  16. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    OpenAIRE

    Sturm , Peter; Maybank , Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  17. In Vitro Reconstruction of Neuronal Networks Derived from Human iPS Cells Using Microfabricated Devices.

    Directory of Open Access Journals (Sweden)

    Yuzo Takayama

    Full Text Available Morphology and function of the nervous system are maintained via well-coordinated processes both in central and peripheral nervous tissues, which govern the homeostasis of organs/tissues. Impairments of the nervous system induce neuronal disorders such as peripheral neuropathy or cardiac arrhythmia. Although further investigation is warranted to reveal the molecular mechanisms of progression in such diseases, appropriate model systems mimicking the patient-specific communication between neurons and organs have not yet been established. In this study, we reconstructed the neuronal network in vitro either between neurons of the human induced pluripotent stem (iPS) cell derived peripheral nervous system (PNS) and central nervous system (CNS), or between PNS neurons and cardiac cells, in a morphologically and functionally compartmentalized manner. Networks were constructed in photolithographically microfabricated devices with two culture compartments connected by 20 microtunnels. We confirmed that PNS and CNS neurons connected via synapses and formed a network. Additionally, calcium-imaging experiments showed that the bundles originating from the PNS neurons were functionally active and responded reproducibly to external stimuli. Next, we confirmed that CNS neurons showed an increase in calcium activity during electrical stimulation of networked bundles from PNS neurons, demonstrating the formation of functional cell-cell interactions. We also confirmed the formation of synapses between PNS neurons and mature cardiac cells. These results indicate that compartmentalized culture devices are promising tools for reconstructing network-wide connections between PNS neurons and various organs, and might help to understand patient-specific molecular and functional mechanisms under normal and pathological conditions.

  18. Phylogenetic comparative methods on phylogenetic networks with reticulations.

    Science.gov (United States)

    Bastide, Paul; Solís-Lemus, Claudia; Kriebel, Ricardo; Sparks, K William; Ané, Cécile

    2018-04-25

    The goal of Phylogenetic Comparative Methods (PCMs) is to study the distribution of quantitative traits among related species. The observed traits are often seen as the result of a Brownian Motion (BM) along the branches of a phylogenetic tree. Reticulation events such as hybridization, gene flow or horizontal gene transfer, can substantially affect a species' traits, but are not modeled by a tree. Phylogenetic networks have been designed to represent reticulate evolution. As they become available for downstream analyses, new models of trait evolution are needed, applicable to networks. One natural extension of the BM is to use a weighted average model for the trait of a hybrid, at a reticulation point. We develop here an efficient recursive algorithm to compute the phylogenetic variance matrix of a trait on a network, in only one preorder traversal of the network. We then extend the standard PCM tools to this new framework, including phylogenetic regression with covariates (or phylogenetic ANOVA), ancestral trait reconstruction, and Pagel's λ test of phylogenetic signal. The trait of a hybrid is sometimes outside of the range of its two parents, for instance because of hybrid vigor or hybrid depression. These two phenomena are rather commonly observed in present-day hybrids. Transgressive evolution can be modeled as a shift in the trait value following a reticulation point. We develop a general framework to handle such shifts, and take advantage of the phylogenetic regression view of the problem to design statistical tests for ancestral transgressive evolution in the evolutionary history of a group of species. We study the power of these tests in several scenarios, and show that recent events have indeed the strongest impact on the trait distribution of present-day taxa. We apply those methods to a dataset of Xiphophorus fishes, to confirm and complete previous analysis in this group. All the methods developed here are available in the Julia package PhyloNetworks.
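
    The one-pass preorder recursion for the BM variance matrix can be written down compactly: a tree node inherits its parent's covariances and adds branch-length variance, while a hybrid node takes the γ-weighted average over its parents. The sketch below follows the weighted-average BM model described above (it is not the PhyloNetworks implementation itself) on a made-up network with one reticulation.

```python
import numpy as np

# Nodes listed in preorder (parents before children): node -> list of
# (parent, branch_length, gamma). The root has no parents; tree nodes have
# one parent with gamma = 1; a hybrid has two parents with gammas summing to 1.
network = {
    "root": [],
    "A":    [("root", 1.0, 1.0)],
    "B":    [("root", 1.0, 1.0)],
    "H":    [("A", 0.5, 0.6), ("B", 0.5, 0.4)],   # hybrid node
    "tipA": [("A", 0.7, 1.0)],
    "tipB": [("B", 0.7, 1.0)],
    "tipH": [("H", 0.2, 1.0)],
}

nodes = list(network)                    # already in preorder
idx = {n: i for i, n in enumerate(nodes)}
V = np.zeros((len(nodes), len(nodes)))   # BM variance-covariance matrix

for pos, n in enumerate(nodes):
    parents = network[n]
    if not parents:
        continue                         # root: variance 0
    i = idx[n]
    # Covariance with every earlier node: gamma-weighted average of the
    # parents' covariances.
    for m in nodes[:pos]:
        j = idx[m]
        V[i, j] = V[j, i] = sum(g * V[idx[p], j] for p, t, g in parents)
    # Variance: weighted parent (co)variances plus branch-length variance.
    V[i, i] = (sum(g1 * g2 * V[idx[p1], idx[p2]]
                   for p1, t1, g1 in parents for p2, t2, g2 in parents)
               + sum(g * g * t for p, t, g in parents))

tips = ["tipA", "tipB", "tipH"]
print(V[np.ix_([idx[t] for t in tips], [idx[t] for t in tips])])
```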

  19. Geometrical methods for power network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bellucci, Stefano; Tiwari, Bhupendra Nath [Istituto Nazioneale di Fisica Nucleare, Frascati, Rome (Italy). Lab. Nazionali di Frascati; Gupta, Neeraj [Indian Institute of Technology, Kanpur (India). Dept. of Electrical Engineering

    2013-02-01

    Uses advanced geometrical methods to analyse power networks. Provides a self-contained and tutorial introduction. Includes a fully worked-out example for the IEEE 5 bus system. This book is a short introduction to power system planning and operation using advanced geometrical methods. The approach is based on well-known insights and techniques developed in theoretical physics in the context of Riemannian manifolds. The proof of principle and robustness of this approach is examined in the context of the IEEE 5 bus system. This work addresses applied mathematicians, theoretical physicists and power engineers interested in novel mathematical approaches to power network theory.

  20. Skin sparing mastectomy: Technique and suggested methods of reconstruction

    Directory of Open Access Journals (Sweden)

    Ahmed M. Farahat

    2014-09-01

    Conclusions: Skin sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and optimum cosmetic outcome through preservation of the skin envelope of the breast whenever indicated. Our patients can benefit from safe surgery and have a good cosmetic outcome by applying different reconstructive techniques.

  1. Genome-scale reconstruction of the sigma factor network in Escherichia coli: topology and functional states

    DEFF Research Database (Denmark)

    Cho, Byung-Kwan; Kim, Donghyuk; Knight, Eric M.

    2014-01-01

    Background: At the beginning of the transcription process, the RNA polymerase (RNAP) core enzyme requires a sigma-factor to recognize the genomic location at which the process initiates. Although the crucial role of sigma-factors has long been appreciated and characterized for many individual...... to transcription units (TUs), representing an increase of more than 300% over what has been previously reported. The reconstructed network was used to investigate competition between alternative sigma-factors (the sigma(70) and sigma(38) regulons), confirming the competition model of sigma substitution...

  2. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    Science.gov (United States)

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.

  3. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    Science.gov (United States)

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
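
    A sketch of learning a linear reconstruction matrix from training data in the GREIT spirit: simulate measurements of training images through a stand-in sensitivity matrix, choose desired reconstructions, and solve the ridge-regularized least-squares problem for R in closed form. The paper's specific ingredients, eigenimages of segmented lung shapes as training data and a tunable desired point spread function, are replaced here by random images and identity targets.

```python
import numpy as np

# R = argmin_R sum_k ||d_k - R y_k||^2 + lam ||R||_F^2
#   = D Y^T (Y Y^T + lam I)^(-1)
rng = np.random.default_rng(0)
n_pix, n_meas, n_train = 256, 104, 1500
S = rng.normal(size=(n_meas, n_pix))        # stand-in sensitivity matrix

X = rng.normal(size=(n_pix, n_train))       # training conductivity images
Y = S @ X + 0.01 * rng.normal(size=(n_meas, n_train))   # noisy measurements
D = X                                       # desired reconstructions

lam = 1e-1
R = D @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(n_meas))

# Reconstructing a new frame is then a single matrix-vector product.
x_new = rng.normal(size=n_pix)
x_rec = R @ (S @ x_new)
print("correlation with truth:", np.corrcoef(x_rec, x_new)[0, 1])
```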

  5. Stochastic model and method of zoning water networks

    OpenAIRE

    Тевяшев, Андрей Дмитриевич; Матвиенко, Ольга Ивановна

    2014-01-01

    Water consumption at different times of the day is uneven. The model of steady flow distribution in water-supply networks is calculated for maximum consumption and effectively used in network design and reconstruction. Quasi-stationary modes, in which the parameters are random variables that vary about their mean values, are more suitable for operational management and planning of rational network operation modes. Leaks, which sometimes exceed 50 % of the volume of water supplied, are o...

  6. Differential reconstructed gene interaction networks for deriving toxicity threshold in chemical risk assessment.

    Science.gov (United States)

    Yang, Yi; Maxwell, Andrew; Zhang, Xiaowei; Wang, Nan; Perkins, Edward J; Zhang, Chaoyang; Gong, Ping

    2013-01-01

    Pathway alterations reflected as changes in gene expression regulation and gene interaction can result from cellular exposure to toxicants. Such information is often used to elucidate toxicological modes of action. From a risk assessment perspective, alterations in biological pathways are a rich resource for setting toxicant thresholds, which may be more sensitive and mechanism-informed than traditional toxicity endpoints. Here we developed a novel differential networks (DNs) approach to connect pathway perturbation with toxicity threshold setting. Our DNs approach consists of 6 steps: time-series gene expression data collection, identification of altered genes, gene interaction network reconstruction, differential edge inference, mapping of genes with differential edges to pathways, and establishment of causal relationships between chemical concentration and perturbed pathways. A one-sample Gaussian process model and a linear regression model were used to identify genes that exhibited significant profile changes across an entire time course and between treatments, respectively. Interaction networks of differentially expressed (DE) genes were reconstructed for different treatments using a state space model and then compared to infer differential edges/interactions. DE genes possessing differential edges were mapped to biological pathways in databases such as KEGG pathways. Using the DNs approach, we analyzed a time-series Escherichia coli live cell gene expression dataset consisting of 4 treatments (control, 10, 100, 1000 mg/L naphthenic acids, NAs) and 18 time points. Through comparison of reconstructed networks and construction of differential networks, 80 genes were identified as DE genes with a significant number of differential edges, and 22 KEGG pathways were altered in a concentration-dependent manner. Some of these pathways were perturbed to a degree as high as 70% even at the lowest exposure concentration, implying a high sensitivity of our DNs approach
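
    The differential-edge idea in this record can be illustrated with a much simpler stand-in: build a thresholded correlation network per condition and take the symmetric difference of the edge sets. The sketch below uses synthetic data and correlation networks in place of the paper's state space models; sizes and thresholds are arbitrary.

```python
import numpy as np

def correlation_edges(expr, thresh=0.8):
    """Binary interaction network from time-series expression (genes x time)."""
    c = np.corrcoef(expr)
    np.fill_diagonal(c, 0)
    return (np.abs(c) > thresh).astype(int)

def differential_edges(expr_control, expr_treated, thresh=0.8):
    """Edges present in one condition's network but not in the other."""
    return correlation_edges(expr_control, thresh) ^ correlation_edges(expr_treated, thresh)

rng = np.random.default_rng(6)
control = rng.normal(size=(30, 18))                  # 30 genes, 18 time points
treated = control.copy()
treated[5] = treated[4] + 0.1 * rng.normal(size=18)  # induce a new interaction
diff = differential_edges(control, treated)
print("genes with differential edges:", np.flatnonzero(diff.sum(axis=1)))
```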

  7. Snapshot of iron response in Shewanella oneidensis by gene network reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yunfeng; Harris, Daniel P.; Luo, Feng; Xiong, Wenlu; Joachimiak, Marcin; Wu, Liyou; Dehal, Paramvir; Jacobsen, Janet; Yang, Zamin; Palumbo, Anthony V.; Arkin, Adam P.; Zhou, Jizhong

    2008-10-09

    Background: Iron homeostasis of Shewanella oneidensis, a gamma-proteobacterium possessing high iron content, is regulated by a global transcription factor Fur. However, knowledge is incomplete about other biological pathways that respond to changes in iron concentration, as well as details of the responses. In this work, we integrate physiological, transcriptomics and genetic approaches to delineate the iron response of S. oneidensis. Results: We show that the iron response in S. oneidensis is a rapid process. Temporal gene expression profiles were examined for iron depletion and repletion, and a gene co-expression network was reconstructed. Modules of iron acquisition systems, anaerobic energy metabolism and protein degradation were the most noteworthy in the gene network. Bioinformatics analyses suggested that genes in each of the modules might be regulated by DNA-binding proteins Fur, CRP and RpoH, respectively. Closer inspection of these modules revealed a transcriptional regulator (SO2426) involved in iron acquisition and ten transcriptional factors involved in anaerobic energy metabolism. Selected genes in the network were analyzed by genetic studies. Disruption of genes encoding a putative alcaligin biosynthesis protein (SO3032) and a gene previously implicated in protein degradation (SO2017) led to severe growth deficiency under iron depletion conditions. Disruption of a novel transcriptional factor (SO1415) caused deficiency in both anaerobic iron reduction and growth with thiosulfate or TMAO as an electron acceptor, suggesting that SO1415 is required for specific branches of anaerobic energy metabolism pathways. Conclusions: Using a reconstructed gene network, we identified major biological pathways that were differentially expressed during iron depletion and repletion. Genetic studies not only demonstrated the importance of iron acquisition and protein degradation for iron depletion, but also characterized a novel transcriptional factor (SO1415) with a

  8. Method Accelerates Training Of Some Neural Networks

    Science.gov (United States)

    Shelton, Robert O.

    1992-01-01

    Three-layer networks can be trained faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. The method is based on a modified version of the back-propagation method.

  9. Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction

    Science.gov (United States)

    Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan

    2009-01-01

    Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernel calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as the scaling factor within the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than the Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
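
    For readers unfamiliar with the problem NUFFT methods approximate, the sketch below implements the slow, exact nonuniform DFT for a radial trajectory together with a density-compensated adjoint ("conjugate phase") reconstruction; these are the baselines that gridding and LS-NUFFT methods accelerate. All sizes and the simple |r| density compensation are illustrative choices, not the paper's.

```python
import numpy as np

def ndft2_forward(img, kx, ky):
    """Direct (slow, exact) 2D nonuniform DFT: uniform image -> arbitrary k-space points."""
    n = img.shape[0]
    grid = np.arange(n) - n // 2
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    data = np.empty(len(kx), dtype=complex)
    for i, (u, v) in enumerate(zip(kx, ky)):
        data[i] = np.sum(img * np.exp(-2j * np.pi * (u * X + v * Y) / n))
    return data

# radial trajectory: spokes through the k-space center
n, n_spokes, n_read = 32, 48, 64
theta = np.linspace(0, np.pi, n_spokes, endpoint=False)
r = np.linspace(-n / 2, n / 2, n_read, endpoint=False)
kx = np.concatenate([r * np.cos(t) for t in theta])
ky = np.concatenate([r * np.sin(t) for t in theta])

img = np.zeros((n, n)); img[12:20, 12:20] = 1.0
kspace = ndft2_forward(img, kx, ky)

# adjoint reconstruction with |r| density compensation, the slow counterpart
# of Kaiser-Bessel gridding
dcf = np.tile(np.abs(r) + 1e-3, n_spokes)
grid_pts = np.arange(n) - n // 2
X, Y = np.meshgrid(grid_pts, grid_pts, indexing="ij")
recon = np.zeros((n, n), dtype=complex)
for s, u, v, w in zip(kspace, kx, ky, dcf):
    recon += w * s * np.exp(2j * np.pi * (u * X + v * Y) / n)
recon = recon.real / len(kspace)
```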

  10. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    Directory of Open Access Journals (Sweden)

    Songjun Zeng

    2010-01-01

    A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedral symmetrical macromolecules, the heat shock protein DegP24 and red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise (signal-to-noise ratio S/N = 0.1, 0.5, and 0.8) were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even at high noise levels. These results indicate that the OSAF method is a feasible and efficient approach for reconstructing macromolecular structures and is able to suppress the influence of noise.

  11. Hybrid recommendation methods in complex networks.

    Science.gov (United States)

    Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N

    2015-07-01

    We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement in performance of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
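
    The convex-combination idea is easy to demonstrate. The following toy uses cosine similarity as a stand-in for the paper's normalized measures and an arbitrary mixing weight lam to combine user-based and item-based scores after z-score normalization; it is a sketch of the general idea, not the published algorithms.

```python
import numpy as np

def hybrid_scores(R, lam=0.5):
    """Convex combination of user-user and item-item similarity recommendations.

    R: binary user-item matrix (1 = user collected item). lam in [0, 1] mixes
    the two normalized score matrices.
    """
    def cosine(M):
        norm = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
        return (M / norm) @ (M / norm).T

    s_user = cosine(R) @ R        # scores propagated from similar users
    s_item = R @ cosine(R.T)      # scores propagated from similar items
    # rescale each score matrix to zero mean / unit std so they are comparable
    z = lambda S: (S - S.mean()) / (S.std() + 1e-12)
    scores = lam * z(s_user) + (1 - lam) * z(s_item)
    return np.where(R > 0, -np.inf, scores)  # mask items already collected

R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(hybrid_scores(R, lam=0.6))
```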

  12. The feasibility of images reconstructed with the method of sieves

    International Nuclear Information System (INIS)

    Veklerov, E.; Llacer, J.

    1990-01-01

    The concept of sieves has been applied with the maximum likelihood estimator (MLE) to image reconstruction. While it makes it possible to recover smooth images consistent with the data, the degree of smoothness provided by it is arbitrary. It is shown that the concept of feasibility is able to resolve this arbitrariness. By varying the values of parameters determining the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered by using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered

  13. IdentiCS – Identification of coding sequence and in silico reconstruction of the metabolic network directly from unannotated low-coverage bacterial genome sequence

    Directory of Open Access Journals (Sweden)

    Zeng An-Ping

    2004-08-01

    Background: A necessary step for a genome-level analysis of cellular metabolism is the in silico reconstruction of the metabolic network from genome sequences. The available methods are mainly based on the annotation of genome sequences, comprising two successive steps: the prediction of coding sequences (CDS) and their function assignment. The annotation process takes time, and the available methods often encounter difficulties when dealing with unfinished, error-containing genomic sequence. Results: In this work a fast method is proposed that uses unannotated genome sequence for predicting CDSs and for an in silico reconstruction of metabolic networks. Instead of using predicted genes or CDSs to query public databases, entries from public DNA or protein databases are used as queries to search a local database of the unannotated genome sequence to predict CDSs. Functions are assigned to the predicted CDSs simultaneously. The well-annotated genome of Salmonella typhimurium LT2 is used as an example to demonstrate the applicability of the method: 97.7% of the CDSs in the original annotation are correctly identified. The use of the SWISS-PROT/TrEMBL databases resulted in the identification of 98.9% of the CDSs that have EC numbers in the published annotation. Furthermore, two versions of sequences of the bacterium Klebsiella pneumoniae with different genome coverage (3.9- and 7.9-fold, respectively) are examined. The results suggest that a 3.9-fold coverage of the bacterial genome could be sufficient for the in silico reconstruction of the metabolic network. Compared to other gene-finding methods such as CRITICA, our method is more suitable for exploiting sequences of low genome coverage. Based on the new method, a program called IdentiCS (Identification of Coding Sequences from Unfinished Genome Sequences) is delivered that combines the identification of CDSs with the reconstruction, comparison and visualization of metabolic networks (free to download).

  14. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    International Nuclear Information System (INIS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-01-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
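
    The linearized step described here, relating a measurement perturbation to a conductivity perturbation through a Jacobian, can be sketched generically. Below, a random matrix stands in for the physics-derived power-density Jacobian, and both a one-shot linear back-projection and a Tikhonov-regularized solve are shown; neither is the paper's exact algorithm.

```python
import numpy as np

# Toy linearized reconstruction: delta_p = J @ delta_sigma + noise.
rng = np.random.default_rng(1)
n_meas, n_pix = 120, 64
J = rng.normal(size=(n_meas, n_pix))   # stand-in Jacobian (power density w.r.t. conductivity)
sigma_true = np.zeros(n_pix); sigma_true[20:28] = 0.5
delta_p = J @ sigma_true + 0.01 * rng.normal(size=n_meas)

# linear back-projection (LBP): a single transpose application, cheap but blurry
sigma_lbp = J.T @ delta_p
sigma_lbp /= np.abs(sigma_lbp).max()

# Tikhonov-regularized least squares: usually sharper than plain LBP
lam = 1e-1
sigma_tik = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ delta_p)
```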

  15. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Science.gov (United States)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
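
    The kernel method's core trick, parameterizing the image as x = K·alpha with K built from composite-frame features and running MLEM on alpha, can be shown in one dimension. The sketch below uses a Gaussian k-nearest-neighbor kernel and a random system matrix; it does not reproduce the paper's specific HYPR kernel.

```python
import numpy as np

def kernel_matrix(features, sigma=1.0, k_neighbors=5):
    """Row-normalized Gaussian kernel built from composite-frame features (one row per voxel)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    thresh = np.sort(K, axis=1)[:, -k_neighbors][:, None]
    K = np.where(K >= thresh, K, 0.0)          # keep k nearest neighbors per voxel
    return K / K.sum(axis=1, keepdims=True)

def kernel_mlem(A, y, K, n_iters=100):
    """MLEM with the image parameterized as x = K @ alpha."""
    alpha = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(len(y)))       # sensitivity in alpha space
    for _ in range(n_iters):
        proj = A @ (K @ alpha) + 1e-12
        alpha *= (K.T @ (A.T @ (y / proj))) / (sens + 1e-12)
    return K @ alpha

rng = np.random.default_rng(0)
n_pix = 100
x_true = np.zeros(n_pix); x_true[40:60] = 4.0
A = rng.random((200, n_pix))                   # toy system matrix
y = rng.poisson(A @ x_true).astype(float)      # Poisson data
features = (x_true + rng.normal(0, 0.3, n_pix))[:, None]  # stand-in composite prior
x_hat = kernel_mlem(A, y, kernel_matrix(features))
```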

  16. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable under multiple-fault conditions. The algorithm uses a calibration model based on an AANN. The AANN can reconstruct a faulty sensor from the non-faulty sensors owing to the correlation between the process variables, and the mean of the difference between the reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with low computational time, which makes the algorithm an appropriate candidate for online applications.
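
    The detection-by-reconstruction residual logic is straightforward to sketch: train an autoencoder on healthy data, then flag the sensor whose reconstruction error jumps. The toy below uses scikit-learn's MLPRegressor as the auto-associative network on correlated synthetic sensors; the architecture, fault and threshold are illustrative and not those of S-AANN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
latent = rng.normal(size=(2000, 2))
mix = rng.normal(size=(2, 6))
healthy = latent @ mix + 0.05 * rng.normal(size=(2000, 6))  # 6 correlated sensors

# auto-associative network: the bottleneck forces it to learn sensor correlations
aann = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=2000, random_state=0)
aann.fit(healthy, healthy)

faulty = healthy[:200].copy()
faulty[:, 3] += 2.0                                         # bias fault on sensor 3

residual = np.abs(aann.predict(faulty) - faulty).mean(axis=0)
print("per-sensor residual:", residual.round(3))
print("suspected faulty sensor:", int(np.argmax(residual)))
```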

  17. Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics

    Science.gov (United States)

    Chen, Yu-Zhong; Lai, Ying-Cheng

    2018-03-01

    Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real-world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulate its dynamical behavior with high precision.
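
    A minimal flavor of structural estimation from binary time series: regress each node's next state on the current states of all nodes with an L1 penalty, keeping only large coefficients as edges. The lasso-based stand-in below is not the SDBM of the paper, but shows the compressive-sensing-style sparse recovery it builds on; all sizes, dynamics and thresholds are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_nodes, T = 20, 400
W_true = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)
np.fill_diagonal(W_true, 0)

# simulate simple probabilistic binary dynamics on the hidden network
s = np.zeros((T, n_nodes)); s[0] = rng.integers(0, 2, n_nodes)
for t in range(1, T):
    field = s[t - 1] @ W_true.T
    p = 1 / (1 + np.exp(-(2 * field - 1)))
    s[t] = (rng.random(n_nodes) < p).astype(float)

# sparse regression of each node's activity on all nodes at the previous step
W_hat = np.zeros_like(W_true)
for i in range(n_nodes):
    W_hat[i] = Lasso(alpha=0.01).fit(s[:-1], s[1:, i]).coef_
est_edges = (np.abs(W_hat) > 0.05).astype(float)
print("fraction of correctly classified entries:", (est_edges == W_true).mean())
```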

  18. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.

    Science.gov (United States)

    Kang, Eunhee; Min, Junhong; Ye, Jong Chul

    2017-10-01

    Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from

  19. Reconstruction of neutron spectra using neural networks starting from the Bonner spheres spectrometric system

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Arteaga A, T.; Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2005-01-01

    Artificial neural networks (ANNs) have been used successfully to solve a wide variety of problems. However, determining an appropriate set of structural and learning parameters remains a difficult task. In contrast to previous works, here a set of neural networks is designed to reconstruct neutron spectra from the count rates of the detectors of the Bonner spheres system, using a systematic experimental strategy for the robust design of multilayer feed-forward backpropagation neural networks. The robust design is formulated as a Taguchi parameter design problem. A set of 53 neutron spectra compiled by the International Atomic Energy Agency was selected, the count rates that would be produced in a Bonner spheres system were calculated, and the set was arranged according to the wave form of the spectra. With these data, and applying the Taguchi methodology to determine the best parameters of the network topology, the network was trained and then tested with the spectra. (Author)

  20. Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin

    International Nuclear Information System (INIS)

    Downey, Austin; Laflamme, Simon; Ubertini, Filippo

    2016-01-01

    The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, such signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm’s accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) to improve on the boundary condition assumptions. The system’s boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensors’ layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces. (paper)

  1. Continental-Scale Temperature Reconstructions from the PAGES 2k Network

    Science.gov (United States)

    Kaufman, D. S.

    2012-12-01

    We present a major new synthesis of seven regional temperature reconstructions to elucidate the global pattern of variations and their association with climate-forcing mechanisms over the past two millennia. To coordinate the integration of new and existing data of all proxy types, the Past Global Changes (PAGES) project developed the 2k Network. It comprises nine working groups representing eight continental-scale regions and the oceans. The PAGES 2k Consortium, authoring this paper, presently includes 79 representatives from 25 countries. For this synthesis, each of the PAGES 2k working groups identified the proxy climate records for reconstructing past temperature and associated uncertainty using the data and methodologies that they deemed most appropriate for their region. The datasets are from 973 sites where tree rings, pollen, corals, lake and marine sediment, glacier ice, speleothems, and historical documents record changes in biologically and physically mediated processes that are sensitive to temperature change, among other climatic factors. The proxy records used for this synthesis are available through the NOAA World Data Center for Paleoclimatology. On long time scales, the temperature reconstructions display similarities among regions, and a large part of this common behavior can be explained by known climate forcings. Reconstructed temperatures in all regions show an overall long-term cooling trend until around 1900 C.E., followed by strong warming during the 20th century. On the multi-decadal time scale, we assessed the variability among the temperature reconstructions using principal component (PC) analysis of the standardized decadal mean temperatures over the period of overlap among the reconstructions (1200 to 1980 C.E.). PC1 explains 35% of the total variability and is strongly correlated with temperature reconstructions from the four Northern Hemisphere regions, and with the sum of external forcings including solar, volcanic, and greenhouse

  2. Direct reconstruction of pharmacokinetic parameters in dynamic fluorescence molecular tomography by the augmented Lagrangian method

    Science.gov (United States)

    Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing

    2016-03-01

    Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noises together and are statistically more efficient. Direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to compartment modeling has imposed a huge computational burden on the direct reconstruction of kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: dynamic FMT image reconstruction and node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be a first step toward combined dynamic PET and FMT imaging in the future.

  3. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral

  4. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    Science.gov (United States)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is thereby produced, while a plane wave is used as the reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired so that the reconstructed images can be averaged. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
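
    Wiener estimation, the spectral-recovery step mentioned here, has a compact closed form: with a known system matrix A, a prior spectral correlation C_r and noise covariance C_n, the estimator is W = C_r A^T (A C_r A^T + C_n)^(-1). The sketch below applies it to a synthetic three-channel system; all responses, priors and spectra are invented for illustration.

```python
import numpy as np

# Wiener estimation of a transmittance spectrum from a few sensor channels.
rng = np.random.default_rng(3)
n_bands, n_channels = 31, 3
wavelengths = np.linspace(400, 700, n_bands)          # nm
A = np.stack([np.exp(-0.5 * ((wavelengths - c) / 40) ** 2)
              for c in (450, 550, 620)])              # Gaussian channel sensitivities

# prior correlation of smooth spectra: first-order Markov model
rho = 0.97
C_r = rho ** np.abs(np.subtract.outer(np.arange(n_bands), np.arange(n_bands)))
C_n = 1e-4 * np.eye(n_channels)

W = C_r @ A.T @ np.linalg.inv(A @ C_r @ A.T + C_n)    # Wiener matrix

r_true = 0.5 + 0.4 * np.sin(wavelengths / 60.0)       # toy spectrum
v = A @ r_true + rng.normal(0, 1e-2, n_channels)      # measured channel values
r_hat = W @ v                                         # estimated spectrum
```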

  5. Image Reconstruction Based on Homotopy Perturbation Inversion Method for Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Image reconstruction for electrical impedance tomography (EIT) is mathematically a typical nonlinear ill-posed inverse problem. In this paper, a novel iterative regularization scheme based on the homotopy perturbation technique, namely the homotopy perturbation inversion method, is applied to investigate the EIT image reconstruction problem. To verify its feasibility and effectiveness, simulations of image reconstruction have been performed considering different locations, sizes, and numbers of inclusions, as well as robustness to data noise. Numerical results indicate that this method can overcome numerical instability and is robust to data noise in EIT image reconstruction. Moreover, compared with the classical Landweber iteration method, our approach improves the convergence rate. The results are promising.

  6. Gap-filling analysis of the iJO1366 Escherichia coli metabolic network reconstruction for discovery of metabolic functions

    Directory of Open Access Journals (Sweden)

    Orth Jeffrey D

    2012-05-01

    Background: The iJO1366 reconstruction of the metabolic network of Escherichia coli is one of the most complete and accurate metabolic reconstructions available for any organism. Still, because our knowledge of even well-studied model organisms such as this one is incomplete, this network reconstruction contains gaps and possible errors. There are a total of 208 blocked metabolites in iJO1366, representing gaps in the network. Results: A new model improvement workflow was developed to compare model-based phenotypic predictions to experimental data to fill gaps and correct errors. A Keio Collection based dataset of E. coli gene essentiality was obtained from literature data and compared to model predictions. The SMILEY algorithm was then used to predict the most likely missing reactions in the reconstructed network, adding reactions from a KEGG based universal set of metabolic reactions. The feasibility of these putative reactions was determined by comparing updated versions of the model to the experimental dataset, and genes were predicted for the most feasible reactions. Conclusions: Numerous improvements to the iJO1366 metabolic reconstruction were suggested by these analyses. Experiments were performed to verify several computational predictions, including a new mechanism for growth on myo-inositol. The other predictions made in this study should be experimentally verifiable by similar means. Validating all of the predictions made here represents a substantial but important undertaking.
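
    Gap-filling of the kind described here (SMILEY-style addition of reactions from a universal database until the model carries flux) is available in COBRApy. The toy below, assuming a COBRApy installation with its default MILP-capable solver, deliberately omits one reaction and lets gapfill propose it back; the three-reaction model is entirely made up and COBRApy's gapfill is a related method, not SMILEY itself.

```python
from cobra import Model, Reaction, Metabolite
from cobra.flux_analysis import gapfill

# Tiny toy network: uptake of A, conversion A -> B, demand for B.
a, b = Metabolite("A"), Metabolite("B")

ex_a = Reaction("EX_A"); ex_a.add_metabolites({a: 1.0}); ex_a.bounds = (0, 10)
conv = Reaction("A_to_B"); conv.add_metabolites({a: -1.0, b: 1.0})
dm_b = Reaction("DM_B"); dm_b.add_metabolites({b: -1.0}); dm_b.bounds = (0, 10)

model = Model("gappy")
model.add_reactions([ex_a, dm_b])     # A_to_B deliberately left out: the gap
model.objective = "DM_B"

universal = Model("universal")
universal.add_reactions([conv])       # candidate reactions to draw from

# gapfill proposes a minimal set of universal reactions restoring flux
solutions = gapfill(model, universal, demand_reactions=False)
print([r.id for r in solutions[0]])   # expected: ['A_to_B']
```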

  7. An External Wire Frame Fixation Method of Skin Grafting for Burn Reconstruction.

    Science.gov (United States)

    Yoshino, Yukiko; Ueda, Hyakuzoh; Ono, Simpei; Ogawa, Rei

    2017-06-28

    The skin graft is a prevalent reconstructive method for burn injuries. We have been applying external wire frame fixation methods in combination with skin grafts since 1986 and have experienced better outcomes in the percentage of successful graft take. The overall purpose of this method is to further secure skin graft adherence to wound beds in hard-to-stabilize areas. There are also location-specific benefits to this technique, such as eliminating the need for tarsorrhaphy in the periorbital area, allowing immediate food intake after surgery in the perioral area, and permitting less invasive fixation in the digits. The purpose of this study was to clarify its benefits and applicable locations. We reviewed 22 postburn patients with skin graft reconstructions using the external wire frame method at our institution from December 2012 through September 2016. Details of the surgical technique and individual reports are also discussed. Of the 22 cases, 15 (68%) were split-thickness skin grafts and 7 (32%) were full-thickness skin grafts. Five cases (23%) involved periorbital reconstruction, 5 (23%) involved perioral reconstruction, 2 (9%) involved lower limb reconstruction, and 10 (45%) involved digital reconstruction. Complete (100%) survival of the skin graft was attained in all cases. No signs of complication were observed. Drawing on 30 years of combined experience, we summarize recommendations for successful graft survival, with an emphasis on the locations of application.

  8. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    Science.gov (United States)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, existing 3-D sparse imaging methods require large computing times and storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
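
    The SL0 family is compact enough to sketch in full: anneal a smoothing width sigma, descend a smoothed l0 surrogate, and project back onto Ax = y. The version below uses a tanh-based surrogate sum(tanh(x^2/(2 sigma^2))) in the spirit of the modification described, though with plain gradient steps rather than the paper's Newton direction; step sizes and schedules are illustrative.

```python
import numpy as np

def sl0_tanh(A, y, sigma_decrease=0.7, sigma_min=1e-3, inner_iters=3, mu=1.0):
    """SL0-style sparse recovery with a tanh-smoothed l0 surrogate."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                              # minimum-l2-norm feasible start
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(inner_iters):
            u = x ** 2 / (2 * sigma ** 2)
            x = x - mu * x / np.cosh(u) ** 2    # descend sum(tanh(x^2 / 2 sigma^2))
            x = x - A_pinv @ (A @ x - y)        # project back onto Ax = y
        sigma *= sigma_decrease                 # anneal the smoothing width
    return x

rng = np.random.default_rng(4)
n, m, k = 100, 40, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true
x_hat = sl0_tanh(A, y)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```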

  9. Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.

    Science.gov (United States)

    Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F

    2015-05-01

    Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method

  10. On the kinematic reconstruction of deep inelastic scattering at HERA: the Σ method

    International Nuclear Information System (INIS)

    Bassler, U.; Bernardi, G.

    1994-12-01

    We review and compare the reconstruction methods for the inclusive deep inelastic scattering variables used at HERA. We introduce a new prescription, the Sigma (Σ) method, which allows one to measure the structure function of the proton F_2(x, Q^2) in a large kinematic domain, and in particular in the low-x, low-Q^2 region, with small systematic errors and small radiative corrections. A detailed comparison between the Σ method and the other methods is shown. Extensions of the Σ method are presented. The effect of QED radiation on the kinematic reconstruction and on the structure function measurement is discussed. (orig.)
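
    As usually quoted in the HERA literature, the Σ method reconstructs the kinematics from the scattered electron together with the hadronic quantity Σ = sum over hadrons of (E_h − p_z,h): y_Σ = Σ / (Σ + E'_e(1 − cos θ_e)), Q²_Σ = E'_e² sin²θ_e / (1 − y_Σ), x_Σ = Q²_Σ / (s y_Σ). The helper below encodes these recalled formulas; the numbers in the example are invented, and the angle convention (θ measured from the proton beam direction) should be checked against the original paper.

```python
import numpy as np

def sigma_method(E_e_prime, theta_e, hadrons, s):
    """DIS kinematics via the Sigma method.

    E_e_prime, theta_e: scattered-electron energy and polar angle;
    hadrons: iterable of (E_h, pz_h) for the hadronic final state;
    s: ep centre-of-mass energy squared.
    """
    Sigma = sum(E - pz for E, pz in hadrons)   # sum of (E - p_z) over hadrons
    y = Sigma / (Sigma + E_e_prime * (1 - np.cos(theta_e)))
    Q2 = (E_e_prime * np.sin(theta_e)) ** 2 / (1 - y)
    x = Q2 / (s * y)
    return x, y, Q2

# example: 27.5 GeV electrons on 820 GeV protons (HERA-like beam energies)
s = 4 * 27.5 * 820.0
print(sigma_method(E_e_prime=25.0, theta_e=2.8,
                   hadrons=[(30.0, 28.0), (12.0, 10.0)], s=s))
```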

  11. A volume of fluid method based on multidimensional advection and spline interface reconstruction

    International Nuclear Information System (INIS)

    Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.

    2004-01-01

    A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps

  12. Accelerated gradient methods for total-variation-based CT image reconstruction

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian

    2011-01-01

    Total-variation (TV)-regularized CT reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits slow convergence. The accelerated gradient methods studied here address this: one incorporates several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step-size selection and nonmonotone line search, while the other uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion.
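
    Of the heuristics named, Barzilai-Borwein step selection is the easiest to demonstrate: reuse the last step s and gradient change y to set the step length (s·s)/(s·y). The sketch below applies plain BB steps (without the safeguarding line search the record mentions) to a least-squares-plus-smoothed-TV objective on a 1D signal; all problem data are synthetic.

```python
import numpy as np

def bb_gradient(grad, x0, n_iters=100):
    """Gradient descent with Barzilai-Borwein (BB1) step sizes."""
    x_prev = x0
    g_prev = grad(x_prev)
    x = x_prev - 1e-4 * g_prev              # small first step to initialize BB
    for _ in range(n_iters):
        g = grad(x)
        s, yv = x - x_prev, g - g_prev
        step = (s @ s) / (s @ yv + 1e-16)   # BB1 step length
        x_prev, g_prev = x, g
        x = x - step * g
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(80, 60))
x_true = np.repeat([0.0, 1.0, 0.3], 20)
b = A @ x_true + 0.01 * rng.normal(size=80)
lam, eps = 0.1, 1e-6

def grad(x):
    """Gradient of ||Ax - b||^2/2 + lam * sum sqrt(diff(x)^2 + eps)."""
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)
    tv_term = np.zeros_like(x)
    tv_term[:-1] -= w
    tv_term[1:] += w
    return A.T @ (A @ x - b) + lam * tv_term

x_hat = bb_gradient(grad, np.zeros(60))
```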

  13. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents, evaluates and contrasts three alternative methods, all of which use variations of complementary cones to triangulate and reconstruct the tested 3D surfaces.

  14. Design of an artificial neural network, with the topology oriented to the reconstruction of neutron spectra

    International Nuclear Information System (INIS)

    Arteaga A, T.; Ortiz R, J.M.; Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado S, G.A.

    2006-01-01

    People who live at high altitude or at latitudes far from the equator, or who travel by plane, are exposed to elevated radiation levels generated by cosmic rays. Other radiation environments include medical equipment, particle accelerators and nuclear reactors. The evaluation of the biological risk from neutron radiation requires appropriate and reliable dosimetry. A commonly used system is the Bonner sphere spectrometer (BSS), with which the spectrum is reconstructed; this is important because the neutron dose equivalent depends strongly on energy. The count rates obtained with each sphere are processed, in most cases, by iterative, Monte Carlo or maximum-entropy methods. Each of these has difficulties that motivate the development of complementary procedures. Recently, artificial neural networks (ANNs) have been used, and no conclusive results have yet been obtained. In this work an ANN was designed to obtain the neutron energy spectrum from the count rates of a BSS. The ANN was trained with 129 reference spectra obtained from the IAEA (1990, 2001); 24 were built at defined energies, including reference and operational isotopic neutron sources, accelerator and reactor spectra, mathematical functions, and defined-energy spectra with several peaks. The spectra were transformed from lethargy to energy units and rebinned into 31 energy groups using the Monte Carlo code 4C. The rebinned spectra and the UTA4 response matrix were used to calculate the expected count rates in the BSS. These rates were used as input, and the respective spectra as output, during network training. The network is a 5-layer backpropagation design with 7, 140, 140, 140 and 31 neurons and transfer functions logsig, tansig, logsig, logsig and logsig, respectively; the training algorithm is traingdx. After training, the network was tested with a group of training spectra and others that

  15. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    Science.gov (United States)

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is known to be efficient for large-scale nonlinear optimization problems. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient method while compensating for their disadvantages. A quadratic penalty method is adopted to impose a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has better performance than conventional conjugate-gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.
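
    One way to realize a quadratic nonnegativity penalty in a linearized setting is to solve the resulting normal equations with CG, re-linearizing the penalty's active set in an outer loop. The sketch below does this with SciPy's cg on a synthetic problem; it is a generic penalized-CG illustration, not the paper's combined linear/nonlinear scheme.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# minimize ||A x - b||^2 + beta * ||min(x, 0)||^2  (penalizes negative values)
rng = np.random.default_rng(7)
A = rng.normal(size=(150, 100))
x_true = np.abs(rng.normal(size=100)) * (rng.random(100) < 0.1)
b = A @ x_true + 0.01 * rng.normal(size=150)
beta = 10.0

x = np.zeros(100)
for _ in range(5):                      # outer loop: refresh the penalty's active set
    mask = (x < 0).astype(float)
    def matvec(v, mask=mask):           # (A^T A + beta * diag(mask)) v
        return A.T @ (A @ v) + beta * mask * v
    op = LinearOperator((100, 100), matvec=matvec, dtype=float)
    x, _ = cg(op, A.T @ b, x0=x, maxiter=50)   # inner loop: linear CG
print("reconstruction error:", np.linalg.norm(x - x_true))
```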

  16. Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states

    Directory of Open Access Journals (Sweden)

    Grünewald Stefan

    2011-01-01

    Background: As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Because of the extensive usage of this method, studying its reconstruction accuracy has become a scientific endeavor in recent years. However, most studies are restricted to 2-state evolutionary models, and a study for higher-state models is needed, since DNA sequences are 4-state series and protein sequences even have 20 states. Results: In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies with respect to balance, we focus on the reconstruction accuracies on these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that under 2-state models more taxa do not necessarily increase the reconstruction accuracies; this result is also tested under N-state models. Conclusions: In a large tree with many leaves, the reconstruction accuracies obtained using all taxa are sometimes less than those obtained using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of the conservation probability in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases with an increasing number of states, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
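
    For reference, the Fitch procedure itself is a short bottom-up set recursion: take the intersection of the children's state sets when it is nonempty, otherwise the union at a cost of one substitution. A minimal implementation for rooted binary trees, with invented toy data:

```python
def fitch(tree, leaf_states):
    """Fitch parsimony: ancestral state sets and the minimal number of changes.

    tree: nested 2-tuples, e.g. (("a", "b"), ("c", ("d", "e")));
    leaf_states: dict mapping leaf names to a character state.
    Returns (state set at the root, parsimony score).
    """
    def post_order(node):
        if isinstance(node, str):                    # leaf
            return {leaf_states[node]}, 0
        (left, lc), (right, rc) = map(post_order, node)
        if left & right:                             # intersection step
            return left & right, lc + rc
        return left | right, lc + rc + 1             # union costs one change

    return post_order(tree)

tree = (("t1", "t2"), ("t3", ("t4", "t5")))
states = {"t1": "A", "t2": "A", "t3": "G", "t4": "G", "t5": "T"}
print(fitch(tree, states))   # ({'A', 'G'}, 2)
```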

  17. A two-step Hilbert transform method for 2D image reconstruction

    International Nuclear Information System (INIS)

    Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D

    2004-01-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained

  18. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

    Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network with inputs obtained from both the principal components and class profile-based features (CPBF). Each news web page is represented by the term-weighting scheme. As the number of unique words in the collection set is large, principal component analysis (PCA) has been used to select the most relevant features for the classification. The final output of the PCA is then combined with the feature vectors from the class profile, which contains the most regular words in each class, before feeding them to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. A fixed number of regular words from each class is used as a feature vector together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
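
    The term-weighting, dimensionality-reduction and neural-network pipeline is easy to reproduce with scikit-learn. The sketch below substitutes TruncatedSVD (an SVD-based analogue of PCA suited to sparse TF-IDF matrices) for PCA, uses newsgroup posts as a stand-in corpus, and omits the paper's class-profile features; it illustrates the pipeline shape, not the WPCM itself.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# sports-flavored categories as a stand-in for the paper's news collection
data = fetch_20newsgroups(subset="all",
                          categories=["rec.sport.baseball", "rec.sport.hockey", "sci.space"],
                          remove=("headers", "footers", "quotes"))
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

clf = make_pipeline(
    TfidfVectorizer(max_features=5000),   # term-weighting scheme
    TruncatedSVD(n_components=100),       # PCA-like reduction for sparse text
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=300, random_state=0),
)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```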

  19. The transformation of trust in China's alternative food networks: disruption, reconstruction, and development

    Directory of Open Access Journals (Sweden)

    Raymond Yu. Wang

    2015-06-01

    Full Text Available Food safety issues in China have received much scholarly attention, yet few studies systematically examined this matter through the lens of trust. More importantly, little is known about the transformation of different types of trust in the dynamic process of food production, provision, and consumption. We consider trust as an evolving interdependent relationship between different actors. We used the Beijing County Fair, a prominent ecological farmers' market in China, as an example to examine the transformation of trust in China's alternative food networks. We argue that although there has been a disruption of institutional trust among the general public since 2008 when the melamine-tainted milk scandal broke out, reconstruction of individual trust and development of organizational trust have been observed, along with the emergence and increasing popularity of alternative food networks. Based on more than six months of fieldwork on the emerging ecological agriculture sector in 13 provinces across China as well as monitoring of online discussions and posts, we analyze how various social factors - including but not limited to direct and indirect reciprocity, information, endogenous institutions, and altruism - have simultaneously contributed to the transformation of trust in China's alternative food networks. The findings not only complement current social theories of trust, but also highlight an important yet understudied phenomenon whereby informal social mechanisms have been partially substituting for formal institutions and gradually have been building trust against the backdrop of the food safety crisis in China.

  20. Spectral Analysis Methods of Social Networks

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2017-01-01

    Online social networks (such as Facebook, Twitter, VKontakte, etc.), being an important channel for disseminating information, are often used to exert an influence on social consciousness for various purposes, from advertising products or services to full-scale information war, which makes them a very relevant object of research. The paper reviews methods for the analysis of (primarily online) social networks based on the spectral theory of graphs. Such methods use the spectrum of the social graph, i.e. the set of eigenvalues of its adjacency matrix, as well as the eigenvectors of the adjacency matrix. Measures of centrality are described (in particular, eigenvector centrality and PageRank), which reflect the degree of influence of a given user of the social network. The very popular PageRank measure uses, as the centrality of a graph vertex, the final probabilities of a Markov chain whose matrix of transition probabilities is calculated from the adjacency matrix of the social graph; the vector of final probabilities is an eigenvector of the matrix of transition probabilities. A method of dividing the graph vertices into two groups is presented, based on maximizing the network modularity by computing the eigenvector of the modularity matrix. A method for detecting bots is considered, based on a non-randomness measure of a graph computed using the spectral coordinates of vertices, i.e. the sets of eigenvector components of the adjacency matrix of a social graph. In general, there are a number of algorithms for analyzing social networks based on the spectral theory of graphs. These algorithms show very good results, but their disadvantage is the relatively high (albeit polynomial) computational complexity for large graphs. At the same time, it is obvious that the practical potential of spectral graph theory methods is still underestimated, and they may be used as a basis to develop new methods. The work
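
    The PageRank construction described (final probabilities of a Markov chain built from the adjacency matrix, i.e. the dominant eigenvector of the transition matrix) fits in a few lines of power iteration. A small sketch with an invented four-node directed graph:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """PageRank by power iteration on the Google matrix of a directed graph."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # row-stochastic transition matrix; dangling nodes jump uniformly
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = damping * (P.T @ r) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))  # final probabilities of the random-surfer Markov chain
```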

  1. Evaluation of image reconstruction methods for 123I-MIBG-SPECT. A rank-order study

    International Nuclear Information System (INIS)

    Soederberg, Marcus; Mattsson, Soeren; Oddstig, Jenny; Uusijaervi-Lizana, Helena; Leide-Svegborn, Sigrid; Valind, Sven; Thorsson, Ola; Garpered, Sabine; Prautzsch, Tilmann; Tischenko, Oleg

    2012-01-01

    Background: There is an opportunity to improve image quality and lesion detectability in single photon emission computed tomography (SPECT) by choosing an appropriate reconstruction method and optimal parameters for the reconstruction. Purpose: To optimize the use of the Flash 3D reconstruction algorithm in terms of the equivalent iteration (EI) number (the number of subsets times the number of iterations) and to compare it with two recently developed reconstruction algorithms, ReSPECT and orthogonal polynomial expansion on disc (OPED), for application to 123I-metaiodobenzylguanidine (MIBG) SPECT. Material and Methods: Eleven adult patients underwent SPECT 4 h and 14 patients 24 h after injection of approximately 200 MBq 123I-MIBG using a Siemens Symbia T6 SPECT/CT. Images were reconstructed from raw data using the Flash 3D algorithm at eight different EI numbers. The images were ranked by three experienced nuclear medicine physicians according to their overall impression of the image quality. The obtained optimal images were then compared in one further visual comparison with images reconstructed using the ReSPECT and OPED algorithms. Results: The optimal EI number for Flash 3D was determined to be 32 for acquisition 4 h and 24 h after injection. The average rank order (best first) for the different reconstructions for acquisition after 4 h was: Flash 3D 32 > ReSPECT > Flash 3D 64 > OPED, and after 24 h: Flash 3D 16 > ReSPECT > Flash 3D 32 > OPED. A fair level of inter-observer agreement concerning the optimal EI number and reconstruction algorithm was obtained, which may be explained by different individual preferences as to what constitutes appropriate image quality. Conclusion: Using the Siemens Symbia T6 SPECT/CT and the specified acquisition parameters, Flash 3D 32 (4 h) and Flash 3D 16 (24 h), followed by ReSPECT, were assessed to be the preferable reconstruction algorithms in visual assessment of 123I-MIBG images

  2. A simple method to take urethral sutures for neobladder reconstruction and radical prostatectomy

    Directory of Open Access Journals (Sweden)

    B Satheesan

    2007-01-01

    Full Text Available For the reconstruction of the urethrovesical anastomosis after radical prostatectomy and for neobladder reconstruction, taking adequate sutures that include the urethral mucosa is vital. Due to retraction of the urethra and an unfriendly pelvis, the process of taking satisfactory urethral sutures may be laborious. Here, we describe a simple method by which we could overcome such technical problems during surgery, using a Foley catheter as the guide for the suture.

  3. A Modified Method for Reconstruction of Chronic Rupture of the Quadriceps Tendon after Total Knee Replacement

    Directory of Open Access Journals (Sweden)

    S Singh

    2008-11-01

    Full Text Available We describe herein a modified technique for reconstruction of chronic rupture of the quadriceps tendon in a patient with bilateral total knee replacement and distal realignment of the patella. The surgery involved the application of a Dacron graft and the ‘double eights’ technique. The patient achieved satisfactory results after surgery and we believe that this technique of reconstruction offers advantages over other methods.

  4. Comparing and improving reconstruction methods for proxies based on compositional data

    Science.gov (United States)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.
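
    As a hedged illustration of the transfer-function idea only, the sketch below uses a plain scikit-learn Gaussian-process regression as a stand-in for the hierarchical multivariate model the authors describe; all data, taxa counts and the depth relationship are synthetic:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Toy training set: relative abundances of 5 taxa (rows sum to 1)
# paired with a known environmental variable (water table depth, cm).
X = rng.dirichlet(np.ones(5), size=60)
wtd = 40 * X[:, 0] - 25 * X[:, 3] + rng.normal(0, 2, 60)

# GP regression from composition to target; WhiteKernel absorbs noise.
gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1.0),
                              normalize_y=True).fit(X, wtd)

# "Reconstruction" with uncertainty for a new (fossil) assemblage.
mu, sd = gp.predict(rng.dirichlet(np.ones(5), size=1), return_std=True)
print(f"reconstructed depth: {mu[0]:.1f} +/- {2*sd[0]:.1f} cm")
```

    The point of the GP approach, as in the record above, is that the predictive standard deviation comes out of the model rather than being bolted on afterwards.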

  5. Driver Injury Risk Variability in Finite Element Reconstructions of Crash Injury Research and Engineering Network (CIREN) Frontal Motor Vehicle Crashes.

    Science.gov (United States)

    Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D

    2015-01-01

    A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2
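
    The Latin hypercube step in Phase II is straightforward to reproduce. A minimal sketch with scipy's quasi-Monte Carlo module follows; the five variable ranges are invented placeholders, not the study's actual bounds:

```python
import numpy as np
from scipy.stats import qmc

# Five occupant-positioning variables with assumed, illustrative ranges.
names = ["seat_track_mm", "seat_back_deg", "d_ring_mm",
         "column_angle_deg", "column_telescope_mm"]
lower = [0.0, 20.0, -40.0, 18.0, 0.0]
upper = [240.0, 35.0, 40.0, 32.0, 50.0]

sampler = qmc.LatinHypercube(d=5, seed=1)
unit = sampler.random(n=120)            # 120 configurations in [0,1]^5
doe = qmc.scale(unit, lower, upper)     # map to physical ranges

for row in doe[:3]:
    print(dict(zip(names, np.round(row, 1))))
```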

  6. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    Science.gov (United States)

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
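
    The core low-rank idea can be sketched in a few lines: a truncated SVD of a (synthetic, toy) fingerprint dictionary gives a temporal basis, so each voxel's long signal evolution is represented by a handful of coefficients and only that many "singular images" need to be reconstructed instead of one image per time frame:

```python
import numpy as np

# Toy fingerprint dictionary: signal evolutions of length T for D parameter sets.
T, D = 600, 2000
rng = np.random.default_rng(0)
t = np.arange(T)[:, None]
dictionary = (np.exp(-t / rng.uniform(50, 500, D))
              * np.cos(t * rng.uniform(0.01, 0.05, D)))

# Truncated SVD: keep K left singular vectors spanning the signal evolutions.
U, s, Vt = np.linalg.svd(dictionary, full_matrices=False)
K = 5
Uk = U[:, :K]                          # temporal subspace basis

# A time series x (T samples per voxel) compresses to K coefficients.
x = dictionary[:, 123]
coeffs = Uk.T @ x
rel_err = np.linalg.norm(Uk @ coeffs - x) / np.linalg.norm(x)
print(f"rank-{K} approximation error: {rel_err:.2e}")
```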

  7. Analysis of fracture surface of CFRP material by three-dimensional reconstruction methods

    International Nuclear Information System (INIS)

    Lobo, Raquel M.; Andrade, Arnaldo H.P.

    2009-01-01

    Fracture surfaces of CFRP (carbon fiber reinforced polymer) materials used in the nuclear fuel cycle present elevated roughness, mainly due to the fracture mode known as pull-out, which leaves pieces of carbon fiber exposed after debonding between fiber and matrix. Fractographic analysis based on two-dimensional images is deficient because it does not consider vertical resolution, which is as important as horizontal resolution. Knowledge of the height distribution produced during fracture allows calculation of the energies involved in the process, which in turn leads to a better understanding of the fracture mechanisms of the composite material. An effective solution for characterizing materials whose surfaces exhibit high roughness due to height variation is to reconstruct these fracture surfaces three-dimensionally. In this work, 3D reconstruction was carried out by two different methods: variable-focus reconstruction, using a stack of images obtained by optical microscopy (OM), and parallax reconstruction, carried out with images acquired by scanning electron microscopy (SEM). Both methods produce an elevation map of the reconstructed image that determines the height of the surface pixel by pixel. The results obtained for the CFRP surfaces were compared with those for other materials, such as aluminum and copper, which present ductile fracture surfaces with lower roughness. (author)
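
    A minimal depth-from-focus sketch in the spirit of the variable-focus method (synthetic data; the variance-of-Laplacian focus measure and the window size are common choices, not necessarily the authors'):

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack, z_positions):
    """Elevation map from a focal stack: for each pixel, pick the z whose
    slice has the strongest local contrast (variance-of-Laplacian measure)."""
    sharpness = []
    for img in stack:
        lap = ndimage.laplace(img.astype(float))
        # local energy of the Laplacian as a per-pixel focus measure
        sharpness.append(ndimage.uniform_filter(lap**2, size=9))
    sharpness = np.stack(sharpness)            # (n_slices, H, W)
    best = np.argmax(sharpness, axis=0)        # index of sharpest slice
    return np.asarray(z_positions)[best]       # elevation in z units

# Synthetic 3-slice stack: random texture with increasing defocus blur.
rng = np.random.default_rng(0)
stack = [ndimage.gaussian_filter(rng.random((64, 64)), s) for s in (0.5, 2, 4)]
print(depth_from_focus(stack, [0.0, 5.0, 10.0]).mean())
```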

  8. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.

  9. Novel Plasmodium falciparum metabolic network reconstruction identifies shifts associated with clinical antimalarial resistance.

    Science.gov (United States)

    Carey, Maureen A; Papin, Jason A; Guler, Jennifer L

    2017-07-19

    Malaria remains a major public health burden and resistance has emerged to every antimalarial on the market, including the frontline drug, artemisinin. Our limited understanding of Plasmodium biology hinders the elucidation of resistance mechanisms. In this regard, systems biology approaches can facilitate the integration of existing experimental knowledge and further understanding of these mechanisms. Here, we developed a novel genome-scale metabolic network reconstruction, iPfal17, of the asexual blood-stage P. falciparum parasite to expand our understanding of metabolic changes that support resistance. We identified 11 metabolic tasks to evaluate iPfal17 performance. Flux balance analysis and simulation of gene knockouts and enzyme inhibition predict candidate drug targets unique to resistant parasites. Moreover, integration of clinical parasite transcriptomes into the iPfal17 reconstruction reveals patterns associated with antimalarial resistance. These results predict that artemisinin sensitive and resistant parasites differentially utilize scavenging and biosynthetic pathways for multiple essential metabolites, including folate and polyamines. Our findings are consistent with experimental literature, while generating novel hypotheses about artemisinin resistance and parasite biology. We detect evidence that resistant parasites maintain greater metabolic flexibility, perhaps representing an incomplete transition to the metabolic state most appropriate for nutrient-rich blood. Using this systems biology approach, we identify metabolic shifts that arise with or in support of the resistant phenotype. This perspective allows us to more productively analyze and interpret clinical expression data for the identification of candidate drug targets for the treatment of resistant parasites.
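
    Flux balance analysis itself reduces to a linear program: maximize a biomass flux subject to the steady-state mass balance S v = 0 and flux bounds, with gene knockouts simulated by clamping the associated reaction flux to zero. A toy three-reaction sketch (invented network, not iPfal17) using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions):
# r1 uptake of A, r2: A -> B, r3: B -> biomass. Steady state: S v = 0.
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])
c = np.array([0, 0, -1])                # maximize v3 (linprog minimizes)
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("biomass flux:", res.x[2])

# Knockout of the gene catalyzing r2: force its flux to zero.
bounds_ko = [(0, 10), (0, 0), (0, 1000)]
res_ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds_ko)
print("after knockout:", res_ko.x[2])
```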

  10. AIR Tools II: algebraic iterative reconstruction methods, improved implementation

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jørgensen, Jakob Sauer

    2017-01-01

    with algebraic iterative methods and their convergence properties. The present software is a much expanded and improved version of the package AIR Tools from 2012, based on a new modular design. In addition to improved performance and memory use, we provide more flexible iterative methods, a column-action method...
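
    For context, the classical row-action method at the heart of such packages is Kaczmarz's ART. A minimal numpy sketch of it (an illustration, not the AIR Tools II implementation):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Row-action ART: cyclically project onto the hyperplane of each row."""
    x = np.zeros(A.shape[1])
    row_norms = (A**2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.random((80, 40)); x_true = rng.random(40); b = A @ x_true
print(np.linalg.norm(kaczmarz(A, b) - x_true))   # small for consistent data
```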

  11. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography

    International Nuclear Information System (INIS)

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-01-01

    Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, that method was demonstrated solely using numerical simulations, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave induced thermoacoustic tomography. Methods: Most existing reconstruction methods need to incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. Unlike existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment were performed to validate the iterative reconstruction method. Results: By using the estimated velocity distribution, a target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing system complexity.

  12. On-line reconstruction of in-core power distribution by harmonics expansion method

    International Nuclear Information System (INIS)

    Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping

    2011-01-01

    Highlights: → A harmonics expansion method for on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → The method offers high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable for real-time monitoring of in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded in the harmonics of a reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To save computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. When the in-core power distribution is reconstructed on-line, the two closest reference cases are retrieved from the harmonics data library and the expanded harmonics are produced by interpolation. The Unit 1 reactor of the DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
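
    The on-line step reduces to a small least-squares problem: fit the expansion coefficients to the detector signals, then evaluate the library harmonics everywhere. A toy numpy sketch (random stand-ins for the harmonics library, detector layout and noise level, not the COMS code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_harmonics, n_detectors = 500, 8, 40

# Pre-computed harmonics of a reference case, tabulated on all core nodes
# (toy stand-in for the off-line harmonics data library).
H_full = rng.random((n_nodes, n_harmonics))
det_idx = rng.choice(n_nodes, n_detectors, replace=False)

# "True" power is some combination of harmonics plus detector noise.
c_true = rng.normal(size=n_harmonics)
signals = H_full[det_idx] @ c_true + rng.normal(0, 0.01, n_detectors)

# On-line step: least-squares expansion coefficients from detector signals,
# then the full-core power distribution follows from the library.
c_hat, *_ = np.linalg.lstsq(H_full[det_idx], signals, rcond=None)
power = H_full @ c_hat
ref = H_full @ c_true
print(f"max relative error: {np.abs(power - ref).max() / np.abs(ref).max():.3%}")
```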

  13. Reconstruction and in silico analysis of metabolic network for an oleaginous yeast, Yarrowia lipolytica.

    Directory of Open Access Journals (Sweden)

    Pengcheng Pan

    Full Text Available With the emergence of energy scarcity, the use of renewable energy sources such as biodiesel is becoming increasingly necessary. Recently, many researchers have focused on Yarrowia lipolytica, a model oleaginous yeast, which can be employed to accumulate large amounts of lipids that can be further converted to biodiesel. In order to understand the metabolic characteristics of Y. lipolytica at a systems level and to examine the potential for enhanced lipid production, a genome-scale compartmentalized metabolic network was reconstructed based on a combination of genome annotation and detailed biochemical knowledge from multiple databases such as KEGG, ENZYME and BIGG. The information about protein and reaction associations for all the organisms in the KEGG and Expasy-ENZYME databases was arranged into an EXCEL file that can be regarded as a new database useful for generating other reconstructions. The resulting model, iYL619_PCP, accounts for 619 genes, 843 metabolites and 1,142 reactions, including 236 transport reactions, 125 exchange reactions and 13 spontaneous reactions. The in silico model successfully predicted the minimal media and the growth capabilities on different substrates. With flux balance analysis, single gene knockouts were also simulated to predict the essential and partially essential genes. In addition, flux variability analysis was applied to design new mutant strains that redirect fluxes through the network and may enhance lipid production. This genome-scale metabolic model of Y. lipolytica can facilitate system-level metabolic analysis as well as strain development for improving the production of biodiesel and other valuable products by Y. lipolytica and other closely related oleaginous yeasts.

  14. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT), aimed at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires image reconstruction markedly faster than in present CT systems. Although various reconstruction methods have been proposed so far, the method practically employed at present is only the filtered backprojection (FBP) method, which gives high quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was poor, despite being a promising candidate for high speed reconstruction owing to its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high quality images by pursuing the relationship between image quality and interpolation method. In this algorithm, the radial sampling points in Fourier space are increased by a factor of 2 to the power β, and linear or spline interpolation is used. Comparison of this method with the present FBP method led to the conclusion that the image quality is almost the same for practical image matrices, while the computing time of the TFT method is about 1/10 of that of the FBP method and the memory requirement is also reduced by about 20%. (Wakatsuki, Y.)
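
    A bare-bones sketch of the direct Fourier method follows, using nearest-neighbour gridding only, i.e. precisely the crude interpolation whose shortcomings the paper addresses with radial oversampling and linear/spline interpolation (all conventions here are illustrative assumptions):

```python
import numpy as np

def dfm_reconstruct(sinogram, thetas, N):
    """Direct Fourier reconstruction: the 1D FFT of each parallel projection
    is a radial line of the 2D Fourier plane (Fourier slice theorem); grid it
    by nearest-neighbour interpolation, then invert with a 2D FFT."""
    n_det = sinogram.shape[1]
    proj_fft = np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1)
    freqs = np.fft.fftfreq(n_det)                # cycles per sample
    F = np.zeros((N, N), complex)
    hits = np.zeros((N, N))
    for p, th in zip(proj_fft, thetas):
        ix = np.round(freqs * np.cos(th) * N).astype(int) % N
        iy = np.round(freqs * np.sin(th) * N).astype(int) % N
        np.add.at(F, (iy, ix), p)                # accumulate duplicate hits
        np.add.at(hits, (iy, ix), 1)
    F[hits > 0] /= hits[hits > 0]                # average multiple hits
    return np.fft.fftshift(np.fft.ifft2(F)).real
```

    Replacing the nearest-neighbour gridding with linear or spline interpolation on a radially oversampled grid is exactly the remedy the record describes.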

  15. Methods and Simulations of Muon Tomography and Reconstruction

    Science.gov (United States)

    Schreiner, Henry Fredrick, III

    This dissertation investigates imaging with cosmic ray muons using scintillator-based portable particle detectors, and covers a variety of the elements required for the detectors to operate and take data, from the detector internal communications and software algorithms to a measurement to allow accurate predictions of the attenuation of physical targets. A discussion of the tracking process for the three layer helical design developed at UT Austin is presented, with details of the data acquisition system, and the highly efficient data format. Upgrades to this system provide a stable system for taking images in harsh or inaccessible environments, such as in a remote jungle in Belize. A Geant4 Monte Carlo simulation was used to develop our understanding of the efficiency of the system, as well as to make predictions for a variety of different targets. The projection process is discussed, with a high-speed algorithm for sweeping a plane through data in near real time, to be used in applications requiring a search through space for target discovery. Several other projections and a foundation of high fidelity 3D reconstructions are covered. A variable binning scheme for rapidly varying statistics over portions of an image plane is also presented and used. A discrepancy in our predictions and the observed attenuation through smaller targets is shown, and it is resolved with a new measurement of low energy spectrum, using a specially designed enclosure to make a series of measurements underwater. This provides a better basis for understanding the images of small amounts of materials, such as for thin cover materials.

  16. Multiscale Pore Throat Network Reconstruction of Tight Porous Media Constrained by Mercury Intrusion Capillary Pressure and Nuclear Magnetic Resonance Measurements

    Science.gov (United States)

    Xu, R.; Prodanovic, M.

    2017-12-01

    Due to the low porosity and permeability of tight porous media, hydrocarbon productivity strongly depends on the pore structure. Effective characterization of pore/throat sizes and reconstruction of their connectivity in tight porous media remains challenging. Having a representative pore throat network, however, is valuable for calculation of other petrophysical properties such as permeability, which is time-consuming and costly to obtain by experimental measurements. Due to a wide range of length scales encountered, a combination of experimental methods is usually required to obtain a comprehensive picture of the pore-body and pore-throat size distributions. In this work, we combine mercury intrusion capillary pressure (MICP) and nuclear magnetic resonance (NMR) measurements by percolation theory to derive pore-body size distribution, following the work by Daigle et al. (2015). However, in their work, the actual pore-throat sizes and the distribution of coordination numbers are not well-defined. To compensate for that, we build a 3D unstructured two-scale pore throat network model initialized by the measured porosity and the calculated pore-body size distributions, with a tunable pore-throat size and coordination number distribution, which we further determine by matching the capillary pressure vs. saturation curve from MICP measurement, based on the fact that the mercury intrusion process is controlled by both the pore/throat size distributions and the connectivity of the pore system. We validate our model by characterizing several core samples from tight Middle East carbonate, and use the network model to predict the apparent permeability of the samples under single phase fluid flow condition. Results show that the permeability we get is in reasonable agreement with the Coreval experimental measurements. The pore throat network we get can be used to further calculate relative permeability curves and simulate multiphase flow behavior, which will provide valuable

  17. A resolution-enhancing image reconstruction method for few-view differential phase-contrast tomography

    Science.gov (United States)

    Guan, Huifeng; Anastasio, Mark A.

    2017-03-01

    It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.

  18. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    DEFF Research Database (Denmark)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...

  19. Fast multiview three-dimensional reconstruction method using cost volume filtering

    Science.gov (United States)

    Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.

    2014-03-01

    As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.

  20. A reliability index for assessment of crack profile reconstructed from ECT signals using a neural-network approach

    International Nuclear Information System (INIS)

    Yusa, Noritaka; Chen, Zhenmao; Miya, Kenzo; Cheng, Weiying

    2002-01-01

    This paper proposes a reliability parameter to enhance an inversion scheme developed by the authors. The scheme is based upon an artificial neural network that simulates the mapping between eddy current signals and crack profiles. One of the biggest advantages of the scheme is that it can deal with conductive cracks, which is necessary for reconstructing natural cracks. However, it had one significant disadvantage: the reliability of the reconstructed profiles was unknown. The proposed parameter provides an index for assessing the crack profile and overcomes this disadvantage. After the parameter is validated by the reconstruction of simulated cracks, it is applied to the reconstruction of natural cracks that occurred in steam generator tubes of a pressurized water reactor. It is revealed that the parameter is applicable not only to simulated cracks but also to natural ones. (author)

  1. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    Energy Technology Data Exchange (ETDEWEB)

    Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)

    2012-06-15

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the

  2. A low error reconstruction method for confocal holography to determine 3-dimensional properties

    International Nuclear Information System (INIS)

    Jacquemin, P.B.; Herring, R.A.

    2012-01-01

    A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary

  3. Lower Lip Reconstruction after Tumor Resection; a Single Author's Experience with Various Methods

    International Nuclear Information System (INIS)

    Rifaat, M.A.

    2006-01-01

    Background: Squamous cell carcinoma is the most frequently seen malignant tumor of the lower lip. The more tissue is lost from the lip after tumor resection, the more challenging the reconstruction. Many methods have been described, but each has its own advantages and disadvantages. The author presents, through his own clinical experience with lower lip reconstruction at the NCI, an evaluation of the commonly practiced techniques. Patients and Methods: Over a 3 year period from May 2002 till May 2005, 17 cases presented at the National Cancer Institute, Cairo University, with lower lip squamous cell carcinoma. The lesions involved various regions of the lower lip, excluding the commissures. Following resection, the resulting defects ranged from 1/3 of the lip to total lip loss. The age of the patients ranged from 28 to 67 years; 13 were males and 4 females. With regard to the reconstructive procedures used, the Karapandzic technique (orbicularis oris myocutaneous flaps) was used in 7 patients, 3 of whom underwent secondary lower lip augmentation with upper lip switch flaps. Primary Abbe (lip switch) flap reconstruction was used in two patients, while 2 other patients were reconstructed with bilateral fan flaps, with vermilion reconstruction by mucosal advancement in one case and a tongue flap in the other. The radial forearm free flap was used in only 2 cases, and direct wound closure was achieved in three cases. All patients were evaluated for early postoperative results, with emphasis on flap viability and wound problems, and for late results, with emphasis on oral continence, microstomia, and aesthetic outcome, in addition to the usual oncological follow-up. Results: All flaps used in this study survived completely, including the 2 free flaps. In the early postoperative period, minor wound breakdown occurred in all three cases reconstructed utilizing adjacent cheek skin flaps, but all wounds healed spontaneously. The latter three cases involved defects greater than 2

  4. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi'an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that it cannot be solved directly. In this study, an L1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was formulated as an L1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was applied to solve it by transforming it into the solution of a series of L1 regularization problems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.

  5. Reconstruction and Analysis of Human Kidney-Specific Metabolic Network Based on Omics Data

    Directory of Open Access Journals (Sweden)

    Ai-Di Zhang

    2013-01-01

    Full Text Available With the advent of high-throughput data production, recent studies of tissue-specific metabolic networks have largely advanced our understanding of the metabolic basis of various physiological and pathological processes. However, for the kidney, which plays an essential role in the body, the available kidney-specific model remains incomplete. This paper reports the reconstruction and characterization of the human kidney metabolic network based on transcriptome and proteome data. In silico simulations revealed that house-keeping genes were more essential than kidney-specific genes in maintaining kidney metabolism. Importantly, a total of 267 potential metabolic biomarkers for kidney-related diseases were successfully explored using this model. Furthermore, we found that the discrepancies in metabolic processes of different tissues correspond directly to the tissues' functions. Finally, the phenotypes of the differentially expressed genes in diabetic kidney disease were characterized, suggesting that these genes may affect disease development through altering kidney metabolism. Thus, the human kidney-specific model constructed in this study may provide valuable information for the metabolism of the kidney and offer excellent insights into complex kidney diseases.

  6. Comprehensive Reconstruction and Visualization of Non-Coding Regulatory Networks in Human

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has been devoted to understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps to understand their regulatory mechanism in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA data interactions from a large number of well established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape. PMID:25540777

  7. Deep Convolutional Networks for Event Reconstruction and Particle Tagging on NOvA and DUNE

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep Convolutional Neural Networks (CNNs) have been widely applied in computer vision to solve complex problems in image recognition and analysis. In recent years many efforts have emerged to extend the use of this technology to HEP applications, including the Convolutional Visual Network (CVN), our implementation for the identification of neutrino events. In this presentation I will describe the core concepts of CNNs, the details of our particular implementation in the Caffe framework and its application to identifying NOvA events. NOvA is a long baseline neutrino experiment whose main goal is the measurement of neutrino oscillations. This relies on the accurate identification and reconstruction of the neutrino flavor in the interactions we observe. In 2016 the NOvA experiment released results for the observation of oscillations in the νμ → νe channel, the first HEP result employing CNNs. I will also discuss our approach to event identification on NOvA as well as recent developments in the application of CNN...

  8. Comprehensive reconstruction and visualization of non-coding regulatory networks in human.

    Science.gov (United States)

    Bonnici, Vincenzo; Russo, Francesco; Bombieri, Nicola; Pulvirenti, Alfredo; Giugno, Rosalba

    2014-01-01

    Research attention has been devoted to understanding the functional roles of non-coding RNAs (ncRNAs). Many studies have demonstrated their deregulation in cancer and other human disorders. ncRNAs are also present in extracellular human body fluids such as serum and plasma, giving them great potential as non-invasive biomarkers. However, non-coding RNAs have been discovered relatively recently and a comprehensive database including all of them is still missing. Reconstructing and visualizing the network of ncRNA interactions are important steps to understand their regulatory mechanism in complex systems. This work presents ncRNA-DB, a NoSQL database that integrates ncRNA data interactions from a large number of well established on-line repositories. The interactions involve RNA, DNA, proteins, and diseases. ncRNA-DB is available at http://ncrnadb.scienze.univr.it/ncrnadb/. It is equipped with three interfaces: web based, command-line, and a Cytoscape app called ncINetView. By accessing only one resource, users can search for ncRNAs and their interactions, build a network annotated with all known ncRNAs and associated diseases, and use all visual and mining features available in Cytoscape.

  9. Finite difference applied to the reconstruction method of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2016-01-01

    Highlights: • A method for the reconstruction of the power density distribution is presented. • The method uses discretization of the 2D neutron diffusion equation by finite differences. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum reconstruction errors occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-averaged fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are the three average fluxes on consecutive surfaces of three nodes and the two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
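
    The surface-flux step can be illustrated directly: five linear conditions (three node-averaged fluxes plus two corner values) determine the five coefficients of the quartic expansion. A sketch assuming unit-width nodes spanning [-1.5, 1.5] (the normalization is an illustrative choice, not the paper's):

```python
import numpy as np

def quartic_flux_fit(avg_left, avg_mid, avg_right, corner_lm, corner_mr):
    """Solve for the 5 coefficients of phi(x) = sum_k a_k x^k on a row of
    three unit-width nodes: three node-averaged fluxes plus the two fluxes
    at the shared corners give 5 linear conditions."""
    edges = [(-1.5, -0.5), (-0.5, 0.5), (0.5, 1.5)]
    M = np.zeros((5, 5))
    for r, (x0, x1) in enumerate(edges):        # node-average conditions
        for k in range(5):
            M[r, k] = (x1**(k + 1) - x0**(k + 1)) / (k + 1)
    for r, xc in enumerate((-0.5, 0.5), 3):     # corner point-value conditions
        M[r] = [xc**k for k in range(5)]
    rhs = [avg_left, avg_mid, avg_right, corner_lm, corner_mr]
    return np.linalg.solve(M, rhs)

a = quartic_flux_fit(1.0, 1.2, 0.9, 1.15, 1.1)
phi = np.polynomial.Polynomial(a)
print(phi(0.0), phi(0.25))   # flux anywhere inside the middle node
```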

  10. msiDBN: A Method of Identifying Critical Proteins in Dynamic PPI Networks

    Directory of Open Access Journals (Sweden)

    Yuan Zhang

    2014-01-01

    Full Text Available The dynamics of protein-protein interactions (PPIs) reveals the underlying principles of biological processes inside a cell. As shown in a wealth of studies, just a small group of proteins, rather than the majority, play the more essential roles at crucial points of biological processes. The present work focuses on identifying these critical proteins, which exhibit dramatic structural changes in dynamic PPI networks. First, a comprehensive way of modeling the dynamic PPIs is presented, which simultaneously analyzes the activity of proteins and assembles the dynamic co-regulation correlation between proteins at each time point. Second, a novel method is proposed, named msiDBN, which models a common representation of multiple PPI networks using a deep belief network framework and analyzes the reconstruction errors and the variabilities across the time courses in the biological process. Experiments were performed on yeast cell cycle data. We evaluated our network construction method by comparing the functional representations of the derived networks with those of two other traditional construction methods. The ranking results for critical proteins from msiDBN were compared with the results from the baseline methods. The comparison showed that msiDBN has a better reconstruction rate and identifies more proteins of critical value to the yeast cell cycle process.

  11. Direct fourier method reconstruction based on unequally spaced fast fourier transform

    International Nuclear Information System (INIS)

    Wu Xiaofeng; Zhao Ming; Liu Li

    2003-01-01

    First, we give an Unequally Spaced Fast Fourier Transform (USFFT) method, which is more exact and theoretically more comprehensible than its former counterpart. Then, with an interesting interpolation scheme, we discuss how to apply USFFT to Direct Fourier Method (DFM) reconstruction of parallel projection data. Finally, a simulation experiment result is given. (authors)

  12. Reconstruction of prehistoric plant production and cooking practices by a new isotopic method

    Energy Technology Data Exchange (ETDEWEB)

    Hastorf, C A [California Univ., Los Angeles (USA). Dept. of Anthropology; DeNiro, M J [California Univ., Los Angeles (USA). Dept. of Earth and Space Sciences

    1985-06-06

    A new method is presented based on isotopic analysis of burnt organic matter, allowing the characterization of previously unidentifiable plant remains extracted from archaeological contexts. The method is used to reconstruct prehistoric production, preparation and consumption of plant foods, as well as the use of ceramic vessels, in the Upper Mantaro Valley region of the central Peruvian Andes.

  13. A Comparison of Affect Ratings Obtained with Ecological Momentary Assessment and the Day Reconstruction Method

    Science.gov (United States)

    Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew

    2010-01-01

    Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues (Science,…

  14. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Kaisserli, Zineb

    2015-01-01

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter

  15. Robust method for stator current reconstruction from DC link in a ...

    African Journals Online (AJOL)

    Using the switching signals and dc link current, this paper presents a new algorithm for the reconstruction of stator currents of an inverter-fed, three-phase induction motor drive. Unlike the classical and improved methods available in literature, the proposed method is neither based on pulse width modulation pattern ...

  16. An assessment of particle filtering methods and nudging for climate state reconstructions

    NARCIS (Netherlands)

    S. Dubinkina (Svetlana); H. Goosse

    2013-01-01

    Using the climate model of intermediate complexity LOVECLIM in an idealized framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are a nudging, a particle filter with sequential importance resampling, and a nudging proposal particle

  17. Phase microscopy using light-field reconstruction method for cell observation.

    Science.gov (United States)

    Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu

    2015-08-01

    The refractive index (RI) distribution can serve as a natural label for unstained cell imaging. However, the majority of images obtained through quantitative phase microscopy are integrated along the illumination angle and cannot reflect additional information about the refractive map on a certain plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focal plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for use in cytobiology studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    Science.gov (United States)

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  19. The e/h method of energy reconstruction for combined calorimeter

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.

    1999-01-01

    A new simple method of energy reconstruction for a combined calorimeter, which we call the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants, and does not require the determination of any parameters by a minimization technique. The method has been tested on the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated correct reconstruction of the mean values of the energies. The obtained fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. This algorithm can be used for fast energy reconstruction in the first level trigger

  20. Convergence analysis for column-action methods in image reconstruction

    DEFF Research Database (Denmark)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2016-01-01

    Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. In this paper we consider the case of noise-free data....... We present a convergence analysis of the column algorithms, we discuss two techniques (loping and flagging) for reducing the work, and we establish some convergence results for methods that utilize these techniques. The performance of the algorithms is illustrated with numerical examples from...
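
    A minimal sketch of a column-action iteration for least squares, including the "loping" idea of skipping negligible updates (an illustration of the class of methods discussed, not the paper's exact algorithms):

```python
import numpy as np

def column_action(A, b, n_sweeps=200, skip_tol=0.0):
    """Column-action iteration for min ||Ax - b||: cyclically update one
    component of x from the current residual; small updates can be skipped
    ("loping") to save work."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                         # residual b - A x
    col_norms = (A**2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(n):
            d = A[:, j] @ r / col_norms[j]
            if abs(d) > skip_tol:        # loping: skip negligible updates
                x[j] += d
                r -= d * A[:, j]
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 30)); x_true = rng.random(30)
print(np.linalg.norm(column_action(A, A @ x_true) - x_true))
```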

  1. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    Science.gov (United States)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits in image quality and dose reduction provided by the ASIR method with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a higher benefit for low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.

  2. METHOD OF DETERMINING ECONOMICAL EFFICIENCY OF HOUSING STOCK RECONSTRUCTION IN A CITY

    Directory of Open Access Journals (Sweden)

    Petreneva Ol’ga Vladimirovna

    2016-03-01

    Full Text Available The demand for comfortable housing has always been very high. Building density differs between regions, and sometimes there is no land left for new housing construction, especially in central city districts. Moreover, many cities preserve cultural and historical centers that create the historical appearance of the city, so new construction is impossible in those districts. At the same time, taking into account physical depreciation and obsolescence, the service life of many buildings is coming to an end and they fall into disrepair. In such cases the question arises of reconstructing the existing residential, public and industrial buildings. The aim of reconstruction is to bring the existing worn-out building stock into correspondence with technical, social and sanitary requirements and with living standards and conditions. The authors consider the relevance of, and reasons for, the reconstruction of residential buildings. They attempt to answer the question of what is more economically efficient: new construction or the reconstruction of residential buildings. The article offers a method for calculating the efficiency of residential building reconstruction.

  3. Irrigation network design and reconstruction and its analysis by simulation model

    Directory of Open Access Journals (Sweden)

    Čistý Milan

    2014-06-01

    Full Text Available There are many problems related to pipe network rehabilitation, the main one being how to increase the hydraulic capacity of a system. Because of the complexity of this task, conventional optimization techniques are poorly suited to solving it. In recent years some successful attempts to apply modern heuristic methods to this problem have been published. The main part of the paper deals with applying one such technique, namely the harmony search methodology, to network rehabilitation optimization, considering both the technical and economic aspects of the problem. A case study of a sprinkler irrigation system is presented in detail. Two alternatives of the rehabilitation design are compared. A modified linear programming method is used first, with new diameters proposed in the existing network so that it can satisfy the increased demand conditions with unchanged topology. This solution is contrasted with the looped one obtained using a harmony search algorithm.
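
    As a rough illustration of the harmony search heuristic named above, here is a self-contained toy sketch that selects discrete pipe diameters to minimize cost under a head-loss constraint. All data (diameters, costs, lengths, flows, loss formula, parameters) are hypothetical placeholders, not the case study's values.

    ```python
    import random

    DIAMS  = [0.10, 0.15, 0.20, 0.25, 0.30]            # candidate diameters (m)
    COSTS  = {0.10: 20, 0.15: 35, 0.20: 55, 0.25: 80, 0.30: 110}  # EUR/m
    LENGTH = [150, 200, 120, 180]                      # pipe lengths (m)
    FLOW   = [0.02, 0.015, 0.01, 0.008]                # design flows (m^3/s)
    H_MAX  = 25.0                                      # allowed head loss (m)

    def head_loss(d, q, L):                            # simplified Darcy-type loss
        return 0.0826 * 0.02 * L * q ** 2 / d ** 5

    def cost(sol):
        c = sum(COSTS[d] * L for d, L in zip(sol, LENGTH))
        h = sum(head_loss(d, q, L) for d, q, L in zip(sol, FLOW, LENGTH))
        return c + 1e4 * max(0.0, h - H_MAX)           # penalize infeasibility

    def harmony_search(iters=2000, hms=10, hmcr=0.9, par=0.3):
        memory = [[random.choice(DIAMS) for _ in LENGTH] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for i in range(len(LENGTH)):
                if random.random() < hmcr:             # memory consideration
                    v = random.choice(memory)[i]
                    if random.random() < par:          # pitch adjustment
                        k = DIAMS.index(v) + random.choice([-1, 1])
                        v = DIAMS[min(max(k, 0), len(DIAMS) - 1)]
                else:                                  # random consideration
                    v = random.choice(DIAMS)
                new.append(v)
            worst = max(range(hms), key=lambda k: cost(memory[k]))
            if cost(new) < cost(memory[worst]):        # replace worst harmony
                memory[worst] = new
        return min(memory, key=cost)

    best = harmony_search()
    print(best, cost(best))
    ```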

  4. Gaining insight into food webs reconstructed by the inverse method

    NARCIS (Netherlands)

    Kones, J.; Soetaert, K.E.R.; Van Oevelen, D.; Owino, J.; Mavuti, K.

    2006-01-01

    The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. Through this approach, an infinite number of food web flows describing the food web and satisfying biological constraints are generated, from which one
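
    The record is cut off, but the inverse-method setting it describes (many flow solutions satisfying linear mass balances and biological constraints) can be sketched as a bounded linear least-squares problem. The matrix, bounds and flows below are invented toy values, not the authors' food web.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # Toy linear inverse food-web problem: mass balance A f = b for unknown
    # flows f, with non-negativity as the biological constraint. The system
    # is underdetermined, so this picks one least-squares solution from the
    # infinite solution set the abstract refers to.
    A = np.array([[1.0, -1.0, -1.0,  0.0],    # producer balance
                  [0.0,  1.0,  0.0, -1.0]])   # consumer balance
    b = np.array([0.5, 0.0])                  # net external inputs
    sol = lsq_linear(A, b, bounds=(0, np.inf))
    print(sol.x)                              # one feasible flow pattern
    ```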

  5. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    Science.gov (United States)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
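
    A minimal sketch of the kernel idea as described: anatomical features define a kernel matrix K, the image is represented as x = Kα, and the guidance enters the projection model through AK. The Gaussian k-NN kernel, the dense matrices and the regularized solve are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def anatomical_kernel(feats, k=8, sigma=1.0):
        """Kernel matrix from anatomical feature vectors: K[i, j] is a
        Gaussian similarity for the k nearest neighbors of voxel i in
        feature space, 0 elsewhere (dense here for clarity)."""
        n = feats.shape[0]
        K = np.zeros((n, n))
        d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
        for i in range(n):
            nbrs = np.argsort(d2[i])[:k]
            K[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
        return K

    def kernel_reconstruction(A, b, K, lam=1e-3):
        """Solve b = (A K) a for the coefficients a, then return x = K a,
        so the anatomy guides the projection model rather than acting as
        a Laplacian-type regularizer."""
        AK = A @ K
        a = np.linalg.solve(AK.T @ AK + lam * np.eye(K.shape[0]), AK.T @ b)
        return K @ a
    ```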

  6. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
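
    A generic augmented Kalman filter for input (force) estimation, sketched under the usual assumption that the unknown force is appended to the state as a random walk and estimated from strain measurements. The matrices are user-supplied stand-ins; nothing here is specific to the TBM mock-up.

    ```python
    import numpy as np

    def augmented_kf(y, A, B, C, Q, R, q_f):
        """Augmented Kalman filter: the unknown force f is appended to the
        state z = [x; f] and modeled as a random walk with variance q_f, so
        the filter estimates f from the strain measurements y."""
        nx, nf = B.shape
        Aa = np.block([[A, B], [np.zeros((nf, nx)), np.eye(nf)]])
        Ca = np.hstack([C, np.zeros((C.shape[0], nf))])
        Qa = np.block([[Q, np.zeros((nx, nf))],
                       [np.zeros((nf, nx)), q_f * np.eye(nf)]])
        z = np.zeros(nx + nf)
        P = np.eye(nx + nf)
        forces = []
        for yk in y:
            z = Aa @ z                          # predict state and force
            P = Aa @ P @ Aa.T + Qa
            S = Ca @ P @ Ca.T + R               # innovation covariance
            G = P @ Ca.T @ np.linalg.inv(S)     # Kalman gain
            z = z + G @ (yk - Ca @ z)           # measurement update
            P = (np.eye(len(z)) - G @ Ca) @ P
            forces.append(z[nx:].copy())        # estimated force component
        return np.array(forces)
    ```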

  7. Environment-based pin-power reconstruction method for homogeneous core calculations

    International Nuclear Information System (INIS)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-01-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)

  8. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    Science.gov (United States)

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies mature, ubiquitous healthcare services can prevent accidents instantly and effectively, and can provide relevant information to reduce the associated processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced, enhancing the overall performance of the fall event detection and 3-D motion reconstruction services.

  9. Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.

    Science.gov (United States)

    Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth

    2017-02-01

    Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
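
    The uncorrected statistic discussed above, the squared Euclidean distance between k-mer frequency vectors, is straightforward to compute; the paper's model-based distance corrections are not reproduced here. A minimal sketch for DNA sequences:

    ```python
    from itertools import product
    import numpy as np

    def kmer_vector(seq, k=3):
        """Frequency vector over all 4**k DNA k-mers in seq."""
        kmers = ["".join(p) for p in product("ACGT", repeat=k)]
        index = {km: i for i, km in enumerate(kmers)}
        v = np.zeros(len(kmers))
        for i in range(len(seq) - k + 1):
            v[index[seq[i:i + k]]] += 1
        return v / max(1, len(seq) - k + 1)

    def kmer_distance(s1, s2, k=3):
        """Squared Euclidean distance between k-mer frequency vectors --
        the statistic the paper shows can be statistically inconsistent
        without model-based correction."""
        d = kmer_vector(s1, k) - kmer_vector(s2, k)
        return float(d @ d)

    print(kmer_distance("ACGTACGTAC", "ACGTTCGTAC"))
    ```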

  10. Reconstruction of the limit cycles by the delays method

    International Nuclear Information System (INIS)

    Castillo D, R.; Ortiz V, J.; Calleros M, G.

    2003-01-01

    Boiling water reactors (BWRs) are usually designed to operate in a stable, linear regime. In a limit cycle, the behavior of the system is nonlinear and not stable. In a BWR, instabilities of a nuclear-thermohydraulic nature can take the reactor to a limit cycle. Limit cycles should be avoided, since the power oscillations can cause thermal fatigue of the fuel and/or shroud. In this work the use of the method of delays for detecting limit cycles in a nuclear power plant is analyzed. The foundations of the method and its application to power signals under different operating conditions are presented. The analyzed signals correspond to: steady state, nuclear-thermohydraulic instability, a nonlinear transient and, finally, failure of a plant controller. Among the main results, it was found that the method of delays can be applied to detect limit cycles in the power monitors of BWR reactors. It was also found that, for the analyzed cases, the first zero of the autocorrelation function is an appropriate criterion for selecting the delay when detecting limit cycles. (Author)
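
    A minimal sketch of the method of delays, with the delay chosen at the first zero of the autocorrelation function as the abstract recommends; the test signal here is synthetic, not a reactor power signal.

    ```python
    import numpy as np

    def first_zero_autocorr(x):
        """Index of the first zero crossing of the autocorrelation
        function, used as the criterion for choosing the delay."""
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        for tau in range(1, len(ac)):
            if ac[tau] <= 0:
                return tau
        return 1

    def delay_embed(x, dim=3, tau=None):
        """Reconstruct a phase-space trajectory by the method of delays:
        vectors (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
        tau = tau or first_zero_autocorr(x)
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    # A limit-cycle-like signal traces a closed loop in the embedding
    t = np.linspace(0, 40 * np.pi, 4000)
    traj = delay_embed(np.sin(t) + 0.3 * np.sin(3 * t), dim=3)
    print(traj.shape)
    ```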

  11. One step linear reconstruction method for continuous wave diffuse optical tomography

    Science.gov (United States)

    Ukhrowiyah, N.; Yasin, M.

    2017-09-01

    A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated on polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states, corresponding to data acquired without and with a change in optical properties. The method recovers optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and the breast phantom samples provide the experimental data. Results from experiment and simulation are compared to validate the proposed method. The images produced by the one-step linear reconstruction method are almost the same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography in the early diagnosis of breast cancer.
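
    The one-step linear reconstruction described above amounts to a single regularized linear solve on difference data. A sketch, assuming a precomputed Jacobian J and Tikhonov regularization standing in for the abstract's regularization coefficient:

    ```python
    import numpy as np

    def one_step_linear(J, y_ref, y_pert, lam=1e-2):
        """One-step linear difference reconstruction (a sketch).

        J       : Jacobian of boundary data w.r.t. optical properties
        y_ref   : boundary measurements without the inclusion
        y_pert  : boundary measurements with the inclusion
        lam     : regularization coefficient

        Returns the change in optical properties from one linear solve of
        the Tikhonov-regularized normal equations."""
        dy = y_pert - y_ref
        return np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ dy)
    ```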

  12. Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method

    CERN Document Server

    Kulchitskii, Yu A

    2000-01-01

    Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants, and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values, and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.

  13. Comparison of Generated Parallel Capillary Arrays to Three-Dimensional Reconstructed Capillary Networks in Modeling Oxygen Transport in Discrete Microvascular Volumes

    Science.gov (United States)

    Fraser, Graham M.; Goldman, Daniel; Ellis, Christopher G.

    2013-01-01

    Objective We compare Reconstructed Microvascular Networks (RMN) to Parallel Capillary Arrays (PCA) under several simulated physiological conditions to determine how the use of different vascular geometry affects oxygen transport solutions. Methods Three discrete networks were reconstructed from intravital video microscopy of rat skeletal muscle (84×168×342 μm, 70×157×268 μm and 65×240×571 μm) and hemodynamic measurements were made in individual capillaries. PCAs were created based on statistical measurements from RMNs. Blood flow and O2 transport models were applied and the resulting solutions for RMN and PCA models were compared under 4 conditions (rest, exercise, ischemia and hypoxia). Results Predicted tissue PO2 was consistently lower in all RMN simulations compared to the paired PCA. PO2 for 3D reconstructions at rest was 28.2±4.8, 28.1±3.5, and 33.0±4.5 mmHg for networks I, II, and III compared to the PCA mean values of 31.2±4.5, 30.6±3.4, and 33.8±4.6 mmHg. Simulated exercise yielded mean tissue PO2 in the RMN of 10.1±5.4, 12.6±5.7, and 19.7±5.7 mmHg compared to 15.3±7.3, 18.8±5.3, and 21.7±6.0 in PCA. Conclusions These findings suggest that volume-matched PCA yield different results compared to reconstructed microvascular geometries when applied to O2 transport modeling, the predominant characteristic of this difference being an overestimate of mean tissue PO2. Despite this limitation, PCA models remain important for theoretical studies as they produce PO2 distributions with similar shape and parameter dependence as RMN. PMID:23841679

  14. Analysis of reproducibility of the single photon tomography reconstruction by the method of singular value decomposition

    International Nuclear Information System (INIS)

    Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.

    1997-01-01

    We have previously shown that the method of singular value decomposition (SVD) allows image reconstruction in single-photon tomography with higher precision than the classical method of filtered back-projection. Establishing an elementary response matrix that incorporates the photon attenuation phenomenon, scattering, the translation non-invariance principle and the detector response makes it possible to take into account all the physical parameters of the acquisition. By a non-consecutive optimized truncation of the singular values we obtained a significant improvement in the regularization of the bad conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of the acquisition conditions. Two series of parameters were tested: first, those modifying the geometry of acquisition (the influence of the rotation center, the asymmetric disposition of the elementary-volume sources with respect to the detector, and the precision of the rotation angle) and, second, those affecting the correspondence between the matrix and the space to be reconstructed (the partial volume effect and noise propagation in the experimental model). For the parameters that introduce a spatial distortion, the degradation of the reconstruction was, as expected, comparable to that observed with the classical reconstruction and proportional to the amplitude of the shift from the normal case. In contrast, for the effects of partial volume and of noise, the study of the truncation signature revealed a variation in the optimal choice of the conserved singular values, but with no effect on the global precision of the reconstruction
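
    A sketch of reconstruction by truncated SVD with a non-consecutive selection of retained singular values, as described above; R and b are generic stand-ins for the elementary response matrix and the measured projections.

    ```python
    import numpy as np

    def tsvd_reconstruct(R, b, keep):
        """Truncated-SVD reconstruction: `keep` lists the (possibly
        non-consecutive) indices of singular values retained to
        regularize the ill-conditioned inversion."""
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        x = np.zeros(Vt.shape[1])
        for i in keep:
            x += (U[:, i] @ b / s[i]) * Vt[i]
        return x

    # Toy usage: keep a non-consecutive subset of components
    rng = np.random.default_rng(1)
    R = rng.standard_normal((40, 25))
    x_true = rng.standard_normal(25)
    x_rec = tsvd_reconstruct(R, R @ x_true, keep=[0, 1, 2, 4, 7, 9])
    ```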

  15. Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.

    Science.gov (United States)

    Hamon, NoÉmie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques

    2012-06-01

    The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study aims to investigate the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion, with a smooth and regular surface, and the apical penetrative portion, which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE) and the ratio of the penetrative portion over total root length (PPI) are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in the diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies. Copyright © 2012 Wiley Periodicals, Inc.

  16. Online sequential condition prediction method of natural circulation systems based on EOS-ELM and phase space reconstruction

    International Nuclear Information System (INIS)

    Chen, Hanying; Gao, Puzhen; Tan, Sichao; Tang, Jiguo; Yuan, Hongsheng

    2017-01-01

    Highlights: •An online condition prediction method for natural circulation systems in NPPs is proposed based on EOS-ELM. •The proposed online prediction method was validated using experimental data. •The training speed of the proposed method is significantly fast. •The proposed method achieves good accuracy over a wide parameter range. -- Abstract: Natural circulation designs are widely used in the passive safety systems of advanced nuclear power reactors. Irregular and chaotic flow oscillations are often observed in boiling natural circulation systems, so it is difficult for operators to monitor and predict the condition of these systems. An online condition forecasting method for natural circulation systems is proposed in this study as an assisting technique for plant operators. The proposed prediction approach is based on the Ensemble of Online Sequential Extreme Learning Machine (EOS-ELM) and phase space reconstruction. Online Sequential Extreme Learning Machine (OS-ELM) is an online sequential learning neural network algorithm, and EOS-ELM is its ensemble version. The proposed condition prediction method can be initiated with a small chunk of monitoring data and can be updated by newly arrived data at very high speed during online prediction. Simulation experiments were conducted on data from two natural circulation loops to validate the performance of the proposed method. The simulation results show that the proposed prediction model can successfully recognize different types of flow oscillations and accurately forecast the trend of monitored plant variables. The influence of the number of hidden nodes and neural network inputs on prediction performance was studied, and the proposed model achieves good accuracy over a wide parameter range. Moreover, the comparison results show that the proposed condition prediction method has much faster online learning speed and better prediction accuracy than a conventional neural network model.
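
    A minimal numpy sketch of the OS-ELM building block (one member of an EOS-ELM ensemble would be one such model), using the standard recursive least-squares update; the tanh feature map, ridge term and sizes are illustrative assumptions.

    ```python
    import numpy as np

    class OSELM:
        """Minimal Online Sequential Extreme Learning Machine."""

        def __init__(self, n_in, n_hidden, rng=np.random.default_rng(0)):
            self.W = rng.standard_normal((n_in, n_hidden))  # fixed random weights
            self.b = rng.standard_normal(n_hidden)

        def _h(self, X):                     # random-feature hidden layer
            return np.tanh(X @ self.W + self.b)

        def fit_initial(self, X, T):
            """Train on a small initial chunk; T is (n_samples, n_out)."""
            H = self._h(X)
            self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ T

        def update(self, X, T):
            """Fast sequential update on a newly arrived chunk."""
            H = self._h(X)
            S = np.eye(H.shape[0]) + H @ self.P @ H.T
            self.P -= self.P @ H.T @ np.linalg.solve(S, H @ self.P)
            self.beta += self.P @ H.T @ (T - H @ self.beta)

        def predict(self, X):
            return self._h(X) @ self.beta
    ```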

  17. Measurement methods on the complexity of network

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; DING Gang; CHEN Guo-song

    2010-01-01

    Based on the size of a network and the number of paths in it, we propose a model of topology complexity to measure the topological complexity of a network. Based on analyses of the effects of the number of pieces of equipment, the types of equipment and the processing time of each node on the complexity of an equipment-constrained network, a complexity model of equipment-constrained networks was constructed to measure the integrated complexity of such networks. Algorithms for the two models were also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.

  18. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

  19. A Method to Reconstruct the Solar-Induced Canopy Fluorescence Spectrum from Hyperspectral Measurements

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2014-10-01

    Full Text Available A method for canopy Fluorescence Spectrum Reconstruction (FSR) is proposed in this study, which can be used to retrieve the solar-induced canopy fluorescence spectrum over the whole chlorophyll fluorescence emission region from 640 to 850 nm. Firstly, the radiance of the solar-induced chlorophyll fluorescence (Fs) at five absorption lines of the solar spectrum was retrieved by a Spectral Fitting Method (SFM). The Singular Value Decomposition (SVD) technique was then used to extract three basis spectra from a training dataset simulated by the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes). Finally, these basis spectra were linearly combined to reconstruct the Fs spectrum, with their coefficients determined by Weighted Linear Least Squares (WLLS) fitting to the five retrieved Fs values. Results for simulated datasets indicate that the FSR method can accurately reconstruct the Fs spectra from hyperspectral measurements acquired by instruments of high spectral resolution (SR) and signal-to-noise ratio (SNR). The FSR method was also applied to an experimental dataset acquired in a diurnal experiment. The diurnal change of the reconstructed Fs spectra shows that the Fs radiance around noon was higher than that in the morning and afternoon, which is consistent with former studies. Finally, the potential and limitations of this method are discussed.
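
    A sketch of the final reconstruction step as described: a weighted linear least-squares fit of a few basis spectra to the retrieved Fs values, then evaluation of the full spectrum. Array names and shapes are assumptions for illustration.

    ```python
    import numpy as np

    def reconstruct_fs(basis, wl, fs_wl, fs_val, fs_var):
        """Reconstruct the full Fs spectrum as a linear combination of
        basis spectra, with coefficients from weighted linear least
        squares at the few wavelengths where Fs was retrieved.

        basis  : (n_wavelengths, n_basis) basis spectra on grid `wl`
        fs_wl  : wavelengths of the retrieved Fs values (absorption lines)
        fs_val : retrieved Fs radiances;  fs_var : their variances"""
        rows = [np.argmin(np.abs(wl - w)) for w in fs_wl]  # nearest samples
        B = basis[rows]                    # basis evaluated at the lines
        W = np.diag(1.0 / np.asarray(fs_var))
        coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ fs_val)
        return basis @ coef                # full 640-850 nm spectrum
    ```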

  20. Methods of reconstruction of multi-particle events in the new coordinate-tracking setup

    Science.gov (United States)

    Vorobyev, V. S.; Shutenko, V. V.; Zadeba, E. A.

    2018-01-01

    At the Unique Scientific Facility NEVOD (MEPhI), a large coordinate-tracking detector based on drift chambers is being developed for investigations of muon bundles generated by ultrahigh-energy primary cosmic rays. One of the main characteristics of a bundle is the muon multiplicity. Three methods of reconstructing multiple events were investigated: the sequential search method, the straight-line finding method and the histogram method. The last method determines the number of tracks with the same zenith angle in an event. It is the most suitable for determining muon multiplicity: because of the large distance to the point where the muons are generated, their trajectories are quasi-parallel. The paper presents the results of applying the three reconstruction methods to experimental data, as well as the first results of the detector operation.
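
    A toy numpy illustration of the histogram method for multiplicity: the zenith angles of quasi-parallel bundle tracks pile up in one bin. The angles, bin width and event composition are invented, not NEVOD data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    bundle = 35.0 + 0.5 * rng.standard_normal(12)   # 12 quasi-parallel muons
    noise = rng.uniform(0, 90, 3)                   # unrelated tracks
    angles = np.concatenate([bundle, noise])        # one simulated event

    # Histogram the zenith angles; the dominant bin estimates multiplicity
    hist, edges = np.histogram(angles, bins=np.arange(0, 91, 2))
    print(hist.max())                               # ~12
    ```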

  1. Phase derivative method for reconstruction of slightly off-axis digital holograms.

    Science.gov (United States)

    Guo, Cheng-Shan; Wang, Ben-Yi; Sha, Bei; Lu, Yu-Jie; Xu, Ming-Yuan

    2014-12-15

    A phase derivative (PD) method is proposed for the reconstruction of off-axis holograms. In this method, a phase distribution of the tested object wave, constrained within 0 to π radians, is first worked out by a simple analytical formula; it is then corrected to its proper range from -π to π according to the sign of its first-order derivative. A theoretical analysis indicates that this PD method is particularly suitable for the reconstruction of slightly off-axis holograms, because in principle it only requires the spatial frequency of the reference beam to be larger than that of the tested object wave. In addition, because the PD method is a purely local method with no need for any integral operation or phase-shifting algorithm in the phase retrieval process, it can reduce the computational load and memory requirements of the image processing system. Experimental results are given to demonstrate the feasibility of the method.

  2. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter

  3. Network reconstruction of the mouse secretory pathway applied on CHO cell transcriptome data

    DEFF Research Database (Denmark)

    Lund, Anne Mathilde; Kaas, Christian Schrøder; Brandl, Julian

    2017-01-01

    , counting 801 different components in mouse. By employing our mouse RECON on the CHO-K1 genome in a comparative genomic approach, we could reconstruct the protein secretory pathway of CHO cells, counting 764 CHO components. This RECON furthermore facilitated the development of three alternative methods ... to study protein secretion through graphical visualizations of omics data. We have demonstrated the use of these methods to identify potential new and known targets for engineering improved growth and IgG production, as well as the general observation that CHO cells seem to have less strict transcriptional ... regulation of protein secretion than healthy mouse cells. Conclusions: The RECON of the secretory pathway represents a strong tool for the interpretation of data related to protein secretion, as illustrated with transcriptomic data of Chinese Hamster Ovary (CHO) cells, the main platform for mammalian protein

  4. The SF3M approach to 3-D photo-reconstruction for non-expert users: application to a gully network

    Science.gov (United States)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-04-01

    3-D photo-reconstruction (PR) techniques have been successfully used to produce high-resolution elevation models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present in challenging scenarios. Here, we evaluate SF3M, a new graphical user interface implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design, for the reconstruction of a several-hundred-meter-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~49 h (of which only 17% required operator assistance) to obtain a final gully network model of >17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two lightweight automatic cameras (1 s time-lapse mode) and a 6 m long pole is an efficient method for 3-D monitoring of gullies, at low cost (about EUR 1000 for the field equipment) and with modest time requirements (~90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations arising from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  5. Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.

    2000-03-01

    We present a method for comparing and reconstructing two similar amplitude-only structures composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm involving the autocorrelation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
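
    A rough FFT-based sketch of the correlation ingredients: the difference between the autocorrelation of one structure and its cross-correlation with the other carries the signature of the displaced aperture. This illustrates the building blocks only, not the authors' exact subtraction algorithm.

    ```python
    import numpy as np

    def correlation_compare(s1, s2):
        """Compare two quasi-identical binary aperture structures via
        FFT-based auto- and cross-correlations; the peak of the
        difference map points at the mismatch between the structures."""
        F1, F2 = np.fft.fft2(s1), np.fft.fft2(s2)
        auto1 = np.fft.ifft2(F1 * np.conj(F1)).real  # autocorrelation of s1
        cross = np.fft.ifft2(F1 * np.conj(F2)).real  # cross-correlation
        diff = auto1 - cross                         # signature of the change
        return np.unravel_index(np.argmax(np.abs(diff)), diff.shape)
    ```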

  6. Reconstruction of cellular signal transduction networks using perturbation assays and linear programming.

    Science.gov (United States)

    Knapp, Bettina; Kaderali, Lars

    2013-01-01

    Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance over state-of-the-art methods, in particular on larger networks. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
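
    A toy version of casting network inference as a linear program: with a one-step propagation model, an L1 data misfit plus an L1 sparsity penalty reduces to a standard LP via variable splitting. The model and penalty weight are illustrative assumptions, not the authors' formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def infer_network_lp(E, lam=0.1):
        """Infer a weighted gene network from perturbation data by LP.

        E[i, j] : observed effect of perturbing gene j on gene i.
        We minimize sum |w - E| + lam * sum |w| with the standard split
        w = wp - wn (wp, wn >= 0) and slacks s >= |wp - wn - E|."""
        n = E.shape[0]
        m = n * n
        c = np.concatenate([lam * np.ones(2 * m), np.ones(m)])
        I = np.eye(m)
        A_ub = np.block([[I, -I, -I], [-I, I, -I]])
        b_ub = np.concatenate([E.ravel(), -E.ravel()])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        wp, wn = res.x[:m], res.x[m:2 * m]
        return (wp - wn).reshape(n, n)

    E = np.array([[0.0, 1.2, 0.0], [0.0, 0.0, -0.8], [0.5, 0.0, 0.0]])
    print(np.round(infer_network_lp(E), 2))
    ```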

  7. An Evaluation of Phylogenetic Methods for Reconstructing Transmitted HIV Variants using Longitudinal Clonal HIV Sequence Data

    Science.gov (United States)

    McCloskey, Rosemary M.; Liang, Richard H.; Harrigan, P. Richard; Brumme, Zabrina L.

    2014-01-01

    ABSTRACT A population of human immunodeficiency virus (HIV) within a host often descends from a single transmitted/founder virus. The high mutation rate of HIV, coupled with long delays between infection and diagnosis, makes isolating and characterizing this strain a challenge. In theory, ancestral reconstruction could be used to recover this strain from sequences sampled in chronic infection; however, the accuracy of phylogenetic techniques in this context is unknown. To evaluate the accuracy of these methods, we applied ancestral reconstruction to a large panel of published longitudinal clonal and/or single-genome-amplification HIV sequence data sets with at least one intrapatient sequence set sampled within 6 months of infection or seroconversion (n = 19,486 sequences, median [interquartile range] = 49 [20 to 86] sequences/set). The consensus of the earliest sequences was used as the best possible estimate of the transmitted/founder. These sequences were compared to ancestral reconstructions from sequences sampled at later time points using both phylogenetic and phylogeny-naive methods. Overall, phylogenetic methods conferred a 16% improvement in reproducing the consensus of early sequences, compared to phylogeny-naive methods. This relative advantage increased with intrapatient sequence diversity (P < […]) […] reconstructing ancestral indel variation, especially within indel-rich regions of the HIV genome. Although further improvements are needed, our results indicate that phylogenetic methods for ancestral reconstruction significantly outperform phylogeny-naive alternatives, and we identify experimental conditions and study designs that can enhance the accuracy of transmitted/founder virus reconstruction. IMPORTANCE When HIV is transmitted into a new host, most of the viruses fail to infect host cells. Consequently, an HIV infection tends to be descended from a single “founder” virus. A priority target for vaccine research, these transmitted/founder viruses are
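
    The consensus estimator mentioned above (the best available proxy for the transmitted/founder virus from the earliest samples) is simple to state; a minimal sketch for pre-aligned sequences:

    ```python
    from collections import Counter

    def consensus(seqs):
        """Column-wise consensus of aligned sequences: the most common
        character at each position (ties broken arbitrarily)."""
        return "".join(Counter(col).most_common(1)[0][0]
                       for col in zip(*seqs))

    print(consensus(["ACGTA", "ACGTT", "ACCTA"]))  # -> ACGTA
    ```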

  8. Anatomic and histological characteristics of vagina reconstructed by McIndoe method

    Directory of Open Access Journals (Sweden)

    Kozarski Jefta

    2009-01-01

    Full Text Available Background/Aim. Congenital absence of the vagina has been known since ancient Greek times. According to the literature, its incidence is 1/4,000 to 1/20,000. Treatment of this anomaly includes non-operative and operative procedures. The McIndoe procedure uses a split-thickness skin graft by Thiersch. The aim of this study was to establish the anatomic and histological characteristics of vaginas reconstructed by the McIndoe method in Mayer-Küster-Rokitansky-Hauser (MKRH) syndrome and to compare them with normal vaginas. Methods. The study included 21 patients aged 18 years and over with the congenital anomaly known as aplasia vaginae within the MKRH syndrome. The patients were operated on by a plastic surgeon using the McIndoe method. The study was a retrospective review of data from the disease histories, objective and gynecological examinations, and cytological analysis of native preparations of vaginal smears (Papanicolaou). Comparatively, 21 females aged 18 years and over with normal vaginas were also studied. All the subjects were divided into groups R (reconstructed) and C (control) and into subgroups according to age: up to 30 years (1R, 1C), from 30 to 50 (2R, 2C), and over 50 (3R, 3C). Statistical processing was performed using Student's t-test and the Mann-Whitney U-test. A value of p < 0.05 was considered statistically significant. Results. The results show that there are differences in the depth and width of the reconstructed vagina, but the obtained values are still in the normal range. Cytological differences between reconstructed and normal vaginas were found. Conclusion. A reconstructed vagina is smaller than a normal one in depth and width, but within the range of normal values. The split skin graft used in the reconstruction keeps its own cytological, i.e. histological and, hence, biological characteristics.

  9. A method for climate and vegetation reconstruction through the inversion of a dynamic vegetation model

    Energy Technology Data Exchange (ETDEWEB)

    Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l' Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d' Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)

    2010-08-15

    Climate reconstructions from data sensitive to past climates provide estimates of what these climates were like. Comparing these reconstructions with simulations from climate models makes it possible to validate the models used for future climate prediction. It has been shown that, for fossil pollen data, obtaining estimates by inverting a vegetation model allows the inclusion of past changes in carbon dioxide levels. As a new generation of dynamic vegetation models is now available, we have developed an inversion method for one of them, LPJ-GUESS. When used with high-resolution sediment records, this novel method allows us to bypass the classic assumptions of (1) climate and pollen independence between samples and (2) equilibrium between the vegetation, represented as pollen, and climate. Our dynamic inversion method is based on a statistical model describing the links among climate, simulated vegetation and pollen samples. The inversion is realized by means of a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that the reconstructed temperatures are well constrained. The reconstructed precipitation is less well constrained, due to the dimension considered (one precipitation value per season) and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
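
    A minimal bootstrap particle filter, the algorithm family used for the inversion above, with a trivial random-walk state and an identity observation operator standing in for LPJ-GUESS and the pollen model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def particle_filter(obs, n_part=500, sig_proc=0.5, sig_obs=1.0):
        """Bootstrap particle filter sketch: propagate, weight, resample."""
        x = rng.standard_normal(n_part)                  # initial particles
        means = []
        for y in obs:
            x = x + sig_proc * rng.standard_normal(n_part)   # propagate
            w = np.exp(-0.5 * ((y - x) / sig_obs) ** 2)      # likelihood weights
            w /= w.sum()
            x = x[rng.choice(n_part, size=n_part, p=w)]      # resample
            means.append(x.mean())                           # posterior mean
        return np.array(means)

    truth = np.cumsum(0.3 * rng.standard_normal(50))
    print(particle_filter(truth + rng.standard_normal(50))[:5])
    ```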

  10. Improvements in γ-ray reconstruction with position-sensitive Ge detectors using the backtracking method

    International Nuclear Information System (INIS)

    Milechina, L.; Cederwall, B.

    2003-01-01

    Gamma-ray tracking, a new detection technique for nuclear spectroscopy, requires efficient algorithms for reconstructing the interaction paths of multiple γ rays in a detector volume. In the present work, we discuss the effect of the atomic electron momentum distribution in Ge, as well as the employment of different types of figure-of-merit, within the context of the so-called backtracking method

  11. Method and tool for network vulnerability analysis

    Science.gov (United States)

    Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM

    2006-03-14

    A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."
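
    A toy sketch of the attack-graph idea using networkx: states as nodes, attacker effort as edge weights, and a cheapest path as the natural starting point when searching for the near-optimal ("epsilon optimal") high-risk paths to defend. All node names and weights are hypothetical.

    ```python
    import networkx as nx

    # Hypothetical attack graph: nodes are attack states, edge weights are
    # attacker effort (they could equally be likelihood or time to succeed)
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("start", "scan_net", 1), ("scan_net", "exploit_web", 4),
        ("scan_net", "phish_user", 2), ("exploit_web", "root_db", 5),
        ("phish_user", "root_db", 3),
    ])

    path = nx.shortest_path(G, "start", "root_db", weight="weight")
    effort = nx.shortest_path_length(G, "start", "root_db", weight="weight")
    print(path, effort)   # lowest-effort attack path and its cost
    ```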

  12. Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR

    Directory of Open Access Journals (Sweden)

    Hou Liying

    2016-10-01

    Full Text Available Circular Synthetic Aperture Radar (CSAR can acquire targets’ scattering information in all directions by a 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strong directive scatter. In this study, we examine the typical target of the three-dimensional circular SAR interferometry theoryand validate the theory in a darkroom experiment. We present a 3D reconstruction of the actual tank metal model of interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential applications of combining 3D reconstruction with omnidirectional observation.

  13. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    Science.gov (United States)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
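
    A minimal sketch of the problem class: an explicit 1-D forward conduction model that is linear in the unknown surface flux, so a least-squares estimate from interior "sensor" data reduces to a one-line projection. This illustrates inverse heat conduction generically and is unrelated to the CHAR implementation; all parameter values are invented.

    ```python
    import numpy as np

    def forward_1d(q, n=50, steps=500, dt=0.02, alpha=1e-5, dx=1e-3,
                   rhoc=4e6, sensor=5):
        """Explicit 1-D conduction in a rod: unknown surface heat flux q
        (W/m^2) at x=0, insulated far end; returns the temperature history
        at an interior node. r = 0.2 keeps the explicit scheme stable."""
        T = np.zeros(n)
        r = alpha * dt / dx ** 2
        rec = []
        for _ in range(steps):
            Tn = T.copy()
            Tn[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
            Tn[0] += r * (T[1] - T[0]) + dt * q / (rhoc * dx)  # flux boundary
            Tn[-1] += r * (T[-2] - T[-1])                      # insulated end
            T = Tn
            rec.append(T[sensor])
        return np.array(rec)

    # The model is linear in q, so the least-squares flux estimate is a
    # projection of the data onto the unit-flux response.
    rng = np.random.default_rng(2)
    y_meas = forward_1d(q=5e4) + 0.01 * rng.standard_normal(500)
    g = forward_1d(q=1.0)                  # sensitivity to a unit flux
    q_hat = (g @ y_meas) / (g @ g)
    print(round(q_hat))                    # close to the true 50000 W/m^2
    ```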

  14. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    Science.gov (United States)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
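
    A sketch of the wavelet-encoded height-field idea using the third-party PyWavelets package (pip install PyWavelets): encode a raster height field with a 2-D discrete wavelet transform and reconstruct a coarser level of detail by dropping the fine detail bands. The data and level choices are illustrative, not the patent's scheme.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Hypothetical raster height field (e.g., terrain elevations)
    rng = np.random.default_rng(3)
    height = rng.random((128, 128)).cumsum(0).cumsum(1)

    # Encode: multi-level 2-D discrete wavelet transform
    coeffs = pywt.wavedec2(height, "db2", level=3)

    # Reconstruct a coarse level of detail by zeroing the two finest
    # detail bands -- the progressive idea of decoding only as much
    # detail as the current viewpoint requires
    coarse = [coeffs[0]] + [
        tuple(np.zeros_like(d) for d in band) if i >= len(coeffs) - 2 else band
        for i, band in enumerate(coeffs[1:], start=1)
    ]
    lod = pywt.waverec2(coarse, "db2")
    print(np.abs(lod - height).max())   # error at the coarse level of detail
    ```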

  15. Tomography reconstruction methods for damage diagnosis of wood structure in construction field

    Science.gov (United States)

    Qiu, Qiwen; Lau, Denvid

    2018-03-01

    The structural integrity of wood building elements plays a critical role in public safety, which requires effective methods for diagnosing internal damage inside the wood body. Conventionally, non-destructive testing (NDT) methods such as X-ray computed tomography, thermography, radar imaging reconstruction, ultrasonic tomography, nuclear magnetic imaging techniques and sonic tomography have been used to obtain information about the internal structure of wood. In this paper, the applications, advantages and disadvantages of these traditional tomography methods are reviewed. Additionally, the present article gives an overview of a recently developed tomography approach that relies on the use of mechanical and electromagnetic waves for assessing the structural integrity of wood buildings. This tomography reconstruction method is believed to provide a more accurate, reliable and comprehensive assessment of wood structural integrity.

  16. Optical properties reconstruction using the adjoint method based on the radiative transfer equation

    Science.gov (United States)

    Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir

    2018-01-01

    An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media like biological tissues. The light transport through such media is accurately described by the radiative transfer equation in the frequency domain. The adjoint method is used to efficiently compute the gradient of the objective function with respect to the optical parameters. Numerical tests show that the algorithm is accurate and robust in retrieving simultaneously the absorption μa and scattering μs coefficients for both weakly and strongly absorbing media. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with reasonable accuracy. The main novelty in this work is the reconstruction of g, which might open the possibility of imaging this parameter in tissues as an additional contrast agent in optical tomography.

  17. Rehanging Reynolds at the British Institution: Methods for Reconstructing Ephemeral Displays

    Directory of Open Access Journals (Sweden)

    Catherine Roach

    2016-11-01

    Full Text Available Reconstructions of historic exhibitions made with current technologies can present beguiling illusions, but they also put us in danger of recreating the past in our own image. This article and the accompanying reconstruction explore methods for representing lost displays, with an emphasis on visualizing uncertainty, illuminating process, and understanding the mediated nature of period images. These issues are highlighted in a partial recreation of a loan show held at the British Institution, London, in 1823, which featured the works of Sir Joshua Reynolds alongside continental old masters. This recreation demonstrates how speculative reconstructions can nonetheless shed light on ephemeral displays, revealing powerful visual and conceptual dialogues that took place on the crowded walls of nineteenth-century exhibitions.

  18. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    Science.gov (United States)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which is highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance and so on. In order to realize fast reconstruction processing of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the Chinese 'HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can fully exploit the multi-core CPU hardware resources and significantly enhance the efficiency of spectrum reconstruction processing. If the technology is applied to workstations with more cores for parallel computing, it will be possible to complete real-time data processing of a Fourier transform imaging spectrometer with a single computer.

  19. Control and estimation methods over communication networks

    CERN Document Server

    Mahmoud, Magdi S

    2014-01-01

    This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features:
    · an overall assessment of recent and current fault-tolerant control algorithms;
    · treatment of several issues arising at the junction of control and communications;
    · key concepts followed by their proofs and efficient computational methods for their implementation; and
    · simulation examples (including TrueTime simulations) to...

  20. Compartmentalized metabolic network reconstruction of microbial communities to determine the effect of agricultural intervention on soils

    Science.gov (United States)

    Álvarez-Yela, Astrid Catalina; Gómez-Cano, Fabio; Zambrano, María Mercedes; Husserl, Johana; Danies, Giovanna; Restrepo, Silvia; González-Barrios, Andrés Fernando

    2017-01-01

    Soil microbial communities are responsible for a wide range of ecological processes and have an important economic impact in agriculture. Determining the metabolic processes performed by microbial communities is crucial for understanding and managing ecosystem properties. Metagenomic approaches allow the elucidation of the main metabolic processes that determine the performance of microbial communities under different environmental conditions and perturbations. Here we present the first compartmentalized metabolic reconstruction at a metagenomics scale of a microbial ecosystem. This systematic approach conceives a meta-organism without boundaries between individual organisms and allows the in silico evaluation of the effect of agricultural intervention on soils at a metagenomics level. To characterize the microbial ecosystems, topological properties, taxonomic and metabolic profiles, as well as a Flux Balance Analysis (FBA) were considered. Furthermore, topological and optimization algorithms were implemented to carry out the curation of the models, to ensure the continuity of the fluxes between the metabolic pathways, and to confirm the metabolite exchange between subcellular compartments. The proposed models provide specific information about ecosystems that are generally overlooked in non-compartmentalized or non-curated networks, like the influence of transport reactions in the metabolic processes, especially the important effect on mitochondrial processes, as well as provide more accurate results of the fluxes used to optimize the metabolic processes within the microbial community. PMID:28767679
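
    A minimal Flux Balance Analysis (FBA) example: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds, solved as a linear program. The two-metabolite network below is a toy, orders of magnitude smaller than the community-scale models described in the record.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Stoichiometric matrix: rows are metabolites, columns are fluxes
    #          v_upt  v_a   v_b   v_bio
    S = np.array([
        [ 1.0, -1.0, -1.0,  0.0],   # substrate node
        [ 0.0,  1.0,  1.0, -1.0],   # precursor node
    ])
    bounds = [(0, 10), (0, 8), (0, 8), (0, None)]  # flux capacity limits
    c = [0, 0, 0, -1.0]             # linprog minimizes, so negate biomass

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x, -res.fun)          # optimal fluxes and biomass rate (10)
    ```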

  1. Compartmentalized metabolic network reconstruction of microbial communities to determine the effect of agricultural intervention on soils.

    Directory of Open Access Journals (Sweden)

    María Camila Alvarez-Silva

    Full Text Available Soil microbial communities are responsible for a wide range of ecological processes and have an important economic impact in agriculture. Determining the metabolic processes performed by microbial communities is crucial for understanding and managing ecosystem properties. Metagenomic approaches allow the elucidation of the main metabolic processes that determine the performance of microbial communities under different environmental conditions and perturbations. Here we present the first compartmentalized metabolic reconstruction at a metagenomics scale of a microbial ecosystem. This systematic approach conceives a meta-organism without boundaries between individual organisms and allows the in silico evaluation of the effect of agricultural intervention on soils at a metagenomics level. To characterize the microbial ecosystems, topological properties, taxonomic and metabolic profiles, as well as a Flux Balance Analysis (FBA) were considered. Furthermore, topological and optimization algorithms were implemented to carry out the curation of the models, to ensure the continuity of the fluxes between the metabolic pathways, and to confirm the metabolite exchange between subcellular compartments. The proposed models provide specific information about ecosystems that are generally overlooked in non-compartmentalized or non-curated networks, like the influence of transport reactions in the metabolic processes, especially the important effect on mitochondrial processes, as well as provide more accurate results of the fluxes used to optimize the metabolic processes within the microbial community.

  2. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU

    International Nuclear Information System (INIS)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A.; Vega C, H. R.; Alonso M, O. E.

    2016-10-01

    The computing power of personal computers has been steadily increasing; computers now have several processors in the CPU and, in addition, multiple CUDA cores in the graphics processing unit (GPU). Both systems can be used individually or combined to perform scientific computation without resorting to multiprocessor or supercomputing arrangements. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and for measuring the associated spectrum. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). Thus, the counting rates obtained with each sphere and the neutron spectrum are related through the discrete version of the Fredholm equation. Reconstructing the spectrum requires solving a poorly conditioned system of equations with an infinite number of solutions; to find the appropriate solution, the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)
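
    A sketch of neural-network spectrum unfolding: train a small regressor on synthetic (spectrum, count-rate) pairs generated by a stand-in response matrix, then invert measured count rates. The response matrix, training spectra and network size are all invented for illustration and have no relation to the authors' setup.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    n_spheres, n_bins = 7, 31
    R = rng.random((n_spheres, n_bins))        # stand-in response matrix

    # Training pairs: random smooth spectra -> count rates C = R @ phi
    phi_train = np.abs(np.cumsum(rng.standard_normal((5000, n_bins)), axis=1))
    phi_train /= phi_train.sum(axis=1, keepdims=True)
    C_train = phi_train @ R.T

    # The network learns the (regularized) inverse mapping C -> phi
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0)
    net.fit(C_train, phi_train)

    phi_true = phi_train[0]
    phi_rec = net.predict((phi_true @ R.T).reshape(1, -1))[0]
    print(np.abs(phi_rec - phi_true).mean())   # small unfolding error
    ```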

  3. Novel edge treatment method for improving the transmission reconstruction quality in Tomographic Gamma Scanning.

    Science.gov (United States)

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2018-05-01

    Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional method for edge-voxel treatment can introduce significant error, and that the real irregular edge-voxel treatment method can improve the performance of TGS by producing better transmission reconstruction images. With the real irregular edge-voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
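
    A minimal sketch of one of the two iterative algorithms mentioned, ART with a non-negativity constraint (Kaczmarz sweeps with clipping), on a toy linear system; the TGS transmission geometry itself is not modeled here.

```python
# ART (Kaczmarz iterations) with a non-negativity constraint on a toy
# system A x = y, standing in for a transmission reconstruction problem.
import numpy as np

def art_nonneg(A, y, n_sweeps=50):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):           # one relaxation per equation/ray
            a = A[i]
            x += (y[i] - a @ x) / (a @ a) * a
            np.clip(x, 0.0, None, out=x)      # enforce non-negativity
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([0.2, 0.5, 0.3])
print(art_nonneg(A, A @ x_true))              # approaches x_true
```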

  4. A new hierarchical method to find community structure in networks

    Science.gov (United States)

    Saoud, Bilal; Moussaoui, Abdelouahab

    2018-04-01

    Community structure is very important for understanding a network, which represents a real-world context. Many community detection methods have been proposed, including hierarchical methods. In our study, we propose a new hierarchical method for community detection in networks based on a genetic algorithm. In this method we use a genetic algorithm to split a network into two networks in a way that maximizes the modularity; each new network represents a cluster (community). We then repeat the splitting process until each cluster contains a single node. We use the modularity function to measure the strength of the community structure found by our method, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our method is highly effective at discovering community structure in both computer-generated and real-world network data.
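
    A sketch of the splitting step under stated assumptions: a small genetic algorithm (illustrative population size and rates, not the paper's settings) searches for a two-way split of a graph that maximizes modularity.

```python
# Toy GA bisection: find a two-way split of a graph maximizing modularity.
# Population size, generations and mutation rate are illustrative only.
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

def ga_bisect(G, pop=40, gens=60, pmut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes)
    n = len(nodes)

    def fitness(bits):
        a = {nodes[i] for i in range(n) if bits[i]}
        b = set(nodes) - a
        if not a or not b:
            return -1.0                        # reject empty-side splits
        return modularity(G, [a, b])

    P = rng.integers(0, 2, size=(pop, n))      # bit-string individuals
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        P = P[np.argsort(f)[::-1]][: pop // 2] # truncation selection
        kids = []
        while len(kids) < pop // 2:
            i, j = rng.integers(0, len(P), 2)
            cut = rng.integers(1, n)
            kid = np.concatenate([P[i][:cut], P[j][cut:]])  # one-point crossover
            kid ^= (rng.random(n) < pmut)                   # bit-flip mutation
            kids.append(kid)
        P = np.vstack([P, kids])
    best = max(P, key=fitness)
    return {nodes[i] for i in range(n) if best[i]}

G = nx.karate_club_graph()
side = ga_bisect(G)
print(len(side), "nodes on one side of the best split found")
```

    Recursing on each resulting side until only singletons remain, while tracking the modularity of the intermediate partitions, would complete the hierarchical procedure described above.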

  5. Social network analysis: Presenting an underused method for nursing research.

    Science.gov (United States)

    Parnell, James Michael; Robinson, Jennifer C

    2018-06-01

    This paper introduces social network analysis as a versatile method with many applications in nursing research. Social networks have been studied for years in many social science fields; the methods continue to advance but remain unknown to most nursing scholars. Discussion paper. English-language and interpreted literature from 1965-2017 was searched in Ovid Healthstar, CINAHL, PubMed Central, Scopus and hard-copy texts. Social network analysis first emerged in the nursing literature in 1995 and appears only minimally through to the present day. To convey the versatility and applicability of social network analysis in nursing, hypothetical scenarios are presented. The scenarios are illustrative of three approaches to social network analysis and include key elements of social network research design. The methods of social network analysis are underused in nursing research, primarily because they are unknown to most scholars. However, there is methodological flexibility and epistemological versatility capable of supporting quantitative and qualitative research. The analytic techniques of social network analysis can add new insight into many areas of nursing inquiry, especially those influenced by cultural norms. Furthermore, visualization techniques associated with social network analysis can be used to generate new hypotheses. Social network analysis can potentially uncover findings not accessible through methods commonly used in nursing research. Social networks can be analysed based on individual-level attributes, whole networks and subgroups within networks; a toy illustration of these three levels follows below. Computations derived from social network analysis may stand alone to answer a research question or be incorporated as variables into robust statistical models. © 2018 John Wiley & Sons Ltd.
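
    A small illustration of the three analysis levels named above, on a hypothetical ward graph built with networkx:

```python
# Three levels of social network analysis on an invented toy graph:
# individual attributes, whole-network measures, and subgroups.
import networkx as nx

G = nx.Graph([("nurse1", "nurse2"), ("nurse2", "nurse3"),
              ("nurse3", "nurse1"), ("nurse3", "charge"),
              ("charge", "nurse4")])

print(nx.degree_centrality(G))     # individual-level attribute
print(nx.density(G))               # whole-network measure
print(list(nx.find_cliques(G)))    # subgroups (maximal cliques)
```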

  6. Direct fourier methods in 3D-reconstruction from cone-beam data

    International Nuclear Information System (INIS)

    Axelsson, C.

    1994-01-01

    The problem of 3D-reconstruction is encountered in both medical and industrial applications of X-ray tomography. A method able to utilize a complete set of projections complying with Tuy's condition was proposed by Grangeat. His method is mathematically exact and consists of two distinct phases. In phase 1, cone-beam projection data are used to produce the derivative of the Radon transform. In phase 2, after interpolation, the Radon transform data are used to reconstruct the three-dimensional object function. To a large extent our method is an extension of the Grangeat method. Our aim is to reduce the computational complexity, i.e. to produce a faster method. The most taxing procedure during phase 1 is the computation of line integrals in the detector plane. By applying the direct Fourier method in reverse for this computation, we reduce the complexity of phase 1 from O(N⁴) to O(N³ log N). Phase 2 can be performed either as a straight 3D-reconstruction or as a sequence of two 2D-reconstructions in vertical and horizontal planes, respectively. Direct Fourier methods can be applied for the 2D- and for the 3D-reconstruction, which reduces the complexity of phase 2 from O(N⁴) to O(N³ log N) as well. In both cases, linogram techniques are applied. For 3D-reconstruction the inversion formula contains the second-derivative filter instead of the well-known ramp filter employed in the 2D case. The derivative filter is more well-behaved than the 2D ramp filter, which implies that less zero-padding is necessary and brings about a further reduction of the computational effort. The method has been verified by experiments on simulated data. The image quality is satisfactory and independent of cone-beam angles. For a 512³ volume we estimate that our method is ten times faster than Grangeat's method
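
    The direct Fourier methods above rest on the projection-slice theorem; the tiny 2D check below (not the cone-beam algorithm itself) verifies numerically that the 1D FFT of a parallel projection equals a central slice of the image's 2D FFT.

```python
# Numerical check of the projection-slice theorem in 2D: the FFT of a
# parallel projection equals the k_y = 0 line of the image's 2D FFT.
import numpy as np

img = np.random.default_rng(1).random((64, 64))
proj = img.sum(axis=0)                       # parallel projection along rows

slice_from_2d = np.fft.fft2(img)[0, :]       # central slice of the 2D FFT
fft_of_proj = np.fft.fft(proj)
print(np.allclose(slice_from_2d, fft_of_proj))   # True
```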

  7. DEBRIS FLOW ACTIVITY RECONSTRUCTION USING DENDROGEOMORPHOLOGICAL METHODS. STUDY CASE (PIULE IORGOVANU MOUNTAINS)

    Directory of Open Access Journals (Sweden)

    ROXANA VĂIDEAN

    2015-10-01

    Full Text Available Debris Flow Activity Reconstruction Using Dendrogeomorphological Methods. Study Case (Piule Iorgovanu Mountains). Debris flows are among the most destructive mass movements that occur in mountainous regions around the world. As they usually occur on the steep slopes of mountain streams where human settlements are scarce, they are hardly monitored, but when they do interact with built-up areas or transportation corridors they cause enormous damage and even casualties. The rise of human pressure in hazardous regions has led to an increase in the severity of the negative consequences related to debris flows. Consequently, a complete database for hazard assessment of the areas which show evidence of debris flow activity is needed. Because of the lack of archival records, knowledge about their frequency remains poor. Among the most precise methods used in the reconstruction of past debris flow activity are dendrogeomorphological methods: using growth anomalies of the affected trees, a valuable event chronology can be obtained. Therefore, the purpose of this study is to reconstruct debris flow activity in a small catchment located on the northern slope of the Piule Iorgovanu Mountains. The trees growing near the transport channel and on the debris fan exhibit different types of disturbances. A total of 98 increment cores, 19 cross-sections and 1 semi-transversal cross-section were used. Based on the growth anomalies identified in the samples, 19 events spanning a period of almost a century were reconstructed.

  8. Influence of image reconstruction methods on statistical parametric mapping of brain PET images

    International Nuclear Information System (INIS)

    Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan

    2007-01-01

    Objective: Statistical parametric mapping (SPM) is widely recognized as a useful tool in brain function studies. The aim of this study was to investigate whether the image reconstruction algorithm used for PET images could influence the SPM results. Methods: PET imaging of the whole brain was performed in six normal volunteers. Each volunteer had two scans, with true and false acupuncture. The PET scans were reconstructed using ordered subsets expectation maximization (OSEM) and filtered back projection (FBP), each with 3 varied parameters. The images were realigned, normalized and smoothed using the SPM program. The difference between true and false acupuncture scans was tested using a matched-pair t test at every voxel. Results: under uncorrected multiple comparison (P uncorrected <0.001), the SPMs derived from images with different reconstruction methods were different. The largest difference, in the number and position of the activated voxels, was noticed between the FBP and OSEM reconstruction algorithms. Conclusions: the method of PET image reconstruction could influence the results of SPM uncorrected multiple comparison. Attention should be paid when conclusions are drawn using SPM uncorrected multiple comparison. (authors)
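
    A sketch of the voxel-wise matched-pair t test at the core of the comparison, on synthetic data (6 subjects, two conditions, 1000 "voxels"); SPM additionally performs realignment, normalization, smoothing and multiple-comparison correction, none of which is modeled here.

```python
# Voxel-wise paired t test on synthetic scans: 6 subjects x 1000 voxels,
# with an artificial activation injected into the first 10 voxels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_scan = rng.normal(100, 5, size=(6, 1000))      # subjects x voxels
false_scan = true_scan + rng.normal(0, 1, size=(6, 1000))
false_scan[:, :10] += 3                             # "activated" voxels

t, p = stats.ttest_rel(true_scan, false_scan, axis=0)
print("voxels with p < 0.001 (uncorrected):", int((p < 0.001).sum()))
```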

  9. A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Bo Guo

    2016-03-01

    Full Text Available Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetric and remote sensing communities. Monitoring power engineering devices by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons, widely used in high-voltage power-line systems, from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented as polyhedrons based on stochastic geometry. Firstly, laser points belonging to pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term has the ability to favor or penalize certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) building a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments producing convincing results validated the proposed method using a dataset with complex structures.

  10. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality images at high speed. However, the development of high-resolution CT and of ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method was already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the filtered back projection method (DFBP) has been employed to process fan-beam projection data directly as the reconstruction method. Although the two-dimensional Fourier transform (TFT) method, which is significantly faster than the FBP method, was proposed, it had not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. With this method, although high speed is expected, the reconstructed images might be degraded by the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically and, finally, it is shown by numerical and visual evaluation based on simulated and actual data that the use of spline interpolation allows the acquisition of high-quality images with fewer errors. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)

  11. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.

  12. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

    The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides the visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with

  13. SCM: A method to improve network service layout efficiency with network evolution

    Science.gov (United States)

    Zhao, Qi; Zhang, Chuanhao

    2017-01-01

    Network services are an important component of the Internet, which are used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, with the evolution of the network, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the framework of “software defined network + network function virtualization” (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently. PMID:29267299

  14. SCM: A method to improve network service layout efficiency with network evolution.

    Science.gov (United States)

    Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng

    2017-01-01

    Network services are an important component of the Internet, which are used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, with the evolution of the network, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the framework of "software defined network + network function virtualization" (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently.
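
    A generic particle swarm optimization sketch of the kind used to solve the SCM problem; here it minimizes a stand-in quadratic cost over a continuous relaxation, not the paper's actual integer linear program.

```python
# Generic PSO: swarm of particles tracking personal and global bests.
# The quadratic cost below is a placeholder for the SCM objective.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

print(pso(lambda p: ((p - 1.0) ** 2).sum(), dim=4))   # converges near [1,1,1,1]
```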

  15. A new algorithm for $H\\rightarrow\\tau\\bar{\\tau}$ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.
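
    A heavily simplified stand-in for the regression idea: a small scikit-learn network learns to recover a "mass" from smeared toy observables. The features below are invented and bear no relation to the project's detector-level inputs or deep network architecture.

```python
# Toy mass regression: the target mass is only statistically recoverable
# from the smeared features, mimicking the resolution-limited setting.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
m_true = rng.uniform(60, 200, 20000)                  # toy mass range (GeV)
X = np.column_stack([
    m_true * rng.uniform(0.3, 0.7, m_true.size),      # "visible" fraction
    m_true * rng.uniform(0.1, 0.4, m_true.size),      # "missing" fraction
    rng.normal(0, 5, m_true.size),                    # pure-noise feature
])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X[:15000], m_true[:15000])
rmse = np.sqrt(np.mean((net.predict(X[15000:]) - m_true[15000:]) ** 2))
print("test RMSE (GeV):", rmse)
```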

  16. Restoration of the analytically reconstructed OpenPET images by the method of convex projections

    Energy Technology Data Exchange (ETDEWEB)

    Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering

    2011-07-01

    We have proposed the OpenPET geometry, which has gaps between detector rings and a physically opened field-of-view. Image reconstruction in OpenPET is classified as an incomplete problem because it does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that applying iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstructs images in the gap area. However, the imaging properties of the iterative methods in OpenPET imaging are not clear. Therefore, the aim of this study is to analyze OpenPET imaging analytically and estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections for the restoration of images reconstructed analytically, in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved both in ML-EM and in the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components of OpenPET imaging. (orig.)

  17. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    Science.gov (United States)

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    Science.gov (United States)

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values in the solution, and the negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained with the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
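
    The value of the non-negativity constraint can be seen on a toy ill-posed linear problem: plain Tikhonov regularization may return negative values, while a non-negatively constrained fit cannot. This is a stand-in comparison, not the paper's EM/GPRN solver or an EIT forward model.

```python
# Nearly rank-deficient toy problem: Tikhonov may go negative, NNLS cannot.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
A = rng.random((20, 2)) @ rng.random((2, 10)) + 0.01 * rng.random((20, 10))
x_true = np.abs(rng.normal(1, 0.5, 10))
b = A @ x_true + rng.normal(0, 0.05, 20)

x_tik = np.linalg.solve(A.T @ A + 1e-3 * np.eye(10), A.T @ b)
x_nn, _ = nnls(A, b)
print("Tikhonov min value:", x_tik.min())    # may dip below zero
print("NNLS min value:    ", x_nn.min())     # >= 0 by construction
```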

  19. A Wrapping Method for Inserting Titanium Micro-Mesh Implants in the Reconstruction of Blowout Fractures

    Directory of Open Access Journals (Sweden)

    Tae Joon Choi

    2016-01-01

    Full Text Available Titanium micro-mesh implants are widely used in orbital wall reconstructions because they have several advantageous characteristics. However, the rough and irregular marginal spurs of the cut edges of the titanium mesh sheet impede the efficacious and minimally traumatic insertion of the implant, because these spurs may catch or hook the orbital soft tissue, skin, or conjunctiva during the insertion procedure. In order to prevent this problem, we developed an easy method of inserting a titanium micro-mesh, in which it is wrapped with the aseptic transparent plastic film that is used to pack surgical instruments or is attached to one side of the inner suture package. Fifty-four patients underwent orbital wall reconstruction using a transconjunctival or transcutaneous approach. The wrapped implant was easily inserted without catching or injuring the orbital soft tissue, skin, or conjunctiva. In most cases, the implant was inserted in one attempt. Postoperative computed tomographic scans showed excellent placement of the titanium micro-mesh and adequate anatomic reconstruction of the orbital walls. This wrapping insertion method may be useful for making the insertion of titanium micro-mesh implants in the reconstruction of orbital wall fractures easier and less traumatic.

  20. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
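
    A minimal log-barrier interior-point loop on a toy non-negative least-squares objective, illustrating the "sequence of subproblems with an increasing positive parameter" idea; the paper's actual solver adds the TV prior, the Poisson likelihood and preconditioned conjugate gradients, none of which appears here.

```python
# Log-barrier interior point on min ||Ax - b||^2 s.t. x >= 0:
# inner gradient steps on f(x) - (1/t) * sum(log x), outer loop raises t.
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((30, 8))
x_true = np.abs(rng.normal(1, 0.3, 8))
b = A @ x_true

x = np.ones(8)                        # strictly feasible start (x > 0)
t = 1.0                               # barrier parameter
for _ in range(8):                    # outer loop: tighten the barrier
    for _ in range(200):              # inner loop: plain gradient steps
        grad = 2 * A.T @ (A @ x - b) - (1.0 / t) / x
        x = np.maximum(x - 1e-3 * grad, 1e-9)   # stay in the interior
    t *= 10                           # larger t => weaker barrier
print("max abs error:", np.abs(x - x_true).max())
```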

  1. Landscapes of human evolution: models and methods of tectonic geomorphology and the reconstruction of hominin landscapes.

    Science.gov (United States)

    Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P

    2011-03-01

    This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Comparison of methods for suppressing edge and aliasing artefacts in iterative x-ray CT reconstruction

    International Nuclear Information System (INIS)

    Zbijewski, Wojciech; Beekman, Freek J

    2006-01-01

    X-ray CT images obtained with iterative reconstruction (IR) can be hampered by the so-called edge and aliasing artefacts, which appear as interference patterns and severe overshoots in the areas of sharp intensity transitions. Previously, we have demonstrated that these artefacts are caused by discretization errors during the projection simulation step in IR. Although these errors are inherent to IR, they can be adequately suppressed by reconstruction on an image grid that is finer than that typically used for analytical methods such as filtered back-projection. Two other methods that may prevent edge artefacts are: (i) smoothing the projections prior to reconstruction or (ii) using an image representation different from voxels; spherically symmetric Kaiser-Bessel functions are a frequently employed example of such a representation. In this paper, we compare reconstruction on a fine grid with the two above-mentioned alternative strategies for edge artefact reduction. We show that the use of a fine grid results in a more adequate suppression of artefacts than the smoothing of projections or using the Kaiser-Bessel image representation

  3. A new wind power prediction method based on chaotic theory and Bernstein Neural Network

    International Nuclear Information System (INIS)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Fan, Xiaochao

    2016-01-01

    The accuracy of wind power prediction is important for assessing the security and economy of system operation when wind power is connected to the grid. However, multiple factors cause long delays and large errors in wind power prediction, so efficient wind power forecasting approaches are still required for practical applications. In this paper, a new wind power forecasting method based on chaos theory and a Bernstein Neural Network (BNN) is proposed. Firstly, the largest Lyapunov exponent is computed to judge the chaotic behavior of the wind power system. Secondly, Phase Space Reconstruction (PSR) is used to reconstruct the phase space of the wind power series. Thirdly, the prediction model is constructed using the Bernstein polynomial and a neural network. Finally, the weights and thresholds of the model are optimized by the Primal Dual State Transition Algorithm (PDSTA). Practical hourly data of wind power generation in Xinjiang are used to test this forecaster, and the proposed forecaster is compared with several current prominent research findings. Analytical results indicate that the forecasting error of PDSTA + BNN is 3.893% for 24 look-ahead hours, lower than the errors obtained with the other forecast methods discussed in this paper. The results of all case studies confirm the validity of the new forecast method. - Highlights: • The Lyapunov exponent is used to verify the chaotic behavior of the wind power series. • Phase Space Reconstruction is used to reconstruct the chaotic wind power series. • A new Bernstein Neural Network to predict wind power series is proposed. • The primal dual state transition algorithm is chosen as the training strategy of BNN.
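
    A sketch of the Phase Space Reconstruction (PSR) step: time-delay embedding of a scalar series. The delay and embedding dimension below are illustrative; in practice they are chosen, e.g., via mutual information and false-nearest-neighbour criteria.

```python
# Time-delay embedding: turn a scalar series into state vectors
# [x(t), x(t - tau), ..., x(t - (m-1) tau)] for phase space reconstruction.
import numpy as np

def embed(series, m=4, tau=3):
    """Return the delay-embedded matrix, one reconstructed state per row."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

x = np.sin(0.3 * np.arange(500)) + 0.1 * np.random.default_rng(6).normal(size=500)
print(embed(x).shape)        # (491, 4)
```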

  4. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    International Nuclear Information System (INIS)

    Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-01-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with

  5. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-08-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0

  6. Diffusion Capillary Phantom vs. Human Data: Outcomes for Reconstruction Methods Depend on Evaluation Medium

    Directory of Open Access Journals (Sweden)

    Sarah D. Lichenstein

    2016-09-01

    Full Text Available Purpose: Diffusion MRI provides a non-invasive way of estimating structural connectivity in the brain. Many studies have used diffusion phantoms as benchmarks to assess the performance of different tractography reconstruction algorithms and assumed that the results can be applied to in vivo studies. Here we examined whether quality metrics derived from a common, publicly available diffusion phantom can reliably predict tractography performance in human white matter tissue. Material and Methods: We compared estimates of fiber length and fiber crossing among a simple tensor model (diffusion tensor imaging), a more complicated model (ball-and-sticks) and model-free (diffusion spectrum imaging, generalized q-sampling imaging) reconstruction methods using a capillary phantom and in vivo human data (N=14). Results: Our analysis showed that evaluation outcomes differ depending on whether they were obtained from phantom or human data. Specifically, the diffusion phantom favored a more complicated model over a simple tensor model or model-free methods for resolving crossing fibers. On the other hand, the human studies showed the opposite pattern of results, with the model-free methods being more advantageous than model-based methods or simple tensor models. This performance difference was consistent across several metrics, including estimating fiber length and resolving fiber crossings in established white matter pathways. Conclusions: These findings indicate that the construction of current capillary diffusion phantoms tends to favor complicated reconstruction models over a simple tensor model or model-free methods, whereas the in vivo data tend to produce the opposite results. This brings into question previous phantom-based evaluation approaches and suggests that a more realistic phantom or simulation is necessary to accurately predict the relative performance of different tractography reconstruction methods. Acronyms: BSM: ball-and-sticks model; d

  7. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    Science.gov (United States)

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver internal medical findings of the body; these 3D data are fused into a whole-body model of the deceased. Beyond the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage, and it was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. A flood-based information flow analysis and network minimization method for gene regulatory networks.

    Science.gov (United States)

    Pavlogiannis, Andreas; Mozhayskiy, Vadim; Tagkopoulos, Ilias

    2013-04-24

    Biological networks tend to have high interconnectivity, complex topologies and multiple types of interactions. This renders difficult the identification of sub-networks that are involved in condition-specific responses. In addition, we generally lack scalable methods that can reveal the information flow in gene regulatory and biochemical pathways. Doing so will help us to identify key participants and paths under specific environmental and cellular contexts. This paper introduces the theory of network flooding, which aims to address the problem of network minimization and regulatory information flow in gene regulatory networks. Given a regulatory biological network, a set of source (input) nodes and optionally a set of sink (output) nodes, our task is to find (a) the minimal sub-network that encodes the regulatory program involving all input and output nodes and (b) the information flow from the source to the sink nodes of the network. Here, we describe a novel, scalable, network traversal algorithm and we assess its potential to achieve significant network size reduction in both synthetic and E. coli networks. Scalability and sensitivity analysis show that the proposed method scales well with the size of the network, and is robust to noise and missing data. The method of network flooding proves to be a useful, practical approach towards information flow analysis in gene regulatory networks. Further extension of the proposed theory has the potential to lead to a unifying framework for the simultaneous network minimization and information flow analysis across various "omics" levels.
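
    A simple reachability stand-in for task (a), the minimal sub-network: keep exactly the nodes that lie on some directed path from a source to a sink, i.e. the intersection of forward and backward reachability. The toy graph is invented, and the paper's flooding algorithm is more general than this sketch.

```python
# Minimal source-to-sink sub-network via forward/backward reachability.
import networkx as nx

G = nx.DiGraph([("s1", "a"), ("a", "b"), ("b", "t1"),
                ("s1", "c"), ("c", "d"),          # dead end: d reaches no sink
                ("s2", "b"), ("e", "t1")])        # e is unreachable from sources
sources, sinks = {"s1", "s2"}, {"t1"}

fwd = set().union(*(nx.descendants(G, s) | {s} for s in sources))
bwd = set().union(*(nx.ancestors(G, t) | {t} for t in sinks))
core = fwd & bwd                                  # nodes on source->sink paths
print(sorted(core))                               # ['a', 'b', 's1', 's2', 't1']
print(G.subgraph(core).edges())
```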

  9. A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.

    Science.gov (United States)

    Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo

    2010-01-01

    In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.

  10. Assessment of Reynolds stresses tensor reconstruction methods for synthetic turbulent inflow conditions. Application to hybrid RANS/LES methods

    International Nuclear Information System (INIS)

    Laraufie, Romain; Deck, Sébastien

    2013-01-01

    Highlights: • Present various Reynolds stresses reconstruction methods from a RANS-SA flow field. • Quantify the accuracy of the reconstruction methods for a wide range of Reynolds numbers. • Evaluate the capabilities of the overall process (Reconstruction + SEM). • Provide practical guidelines to realize a streamwise RANS/LES (or WMLES) transition. -- Abstract: Hybrid or zonal RANS/LES approaches are recognized as the most promising way to accurately simulate complex unsteady flows under current computational limitations. One still open issue concerns the transition from a RANS to a LES or WMLES resolution in the stream-wise direction, when near-wall turbulence is involved. Turbulence content then has to be prescribed at the transition to prevent turbulence decay leading to possible flow relaminarization. The present paper aims to propose an efficient way to generate this switch, within the flow, based on a synthetic turbulence inflow condition, named the Synthetic Eddy Method (SEM). As knowledge of the whole Reynolds stress tensor is often missing, the scope of this paper is focused on generating the quantities required at the SEM inlet from a RANS calculation, namely the first- and second-order statistics of the aerodynamic field. Three different methods based on two different approaches are presented and their capability to accurately generate the needed aerodynamic values is investigated. Then, the ability of the combination SEM + reconstruction method to manufacture well-behaved turbulence is demonstrated through spatially developing flat-plate turbulent boundary layers. At the same time, important intrinsic features of the Synthetic Eddy Method are pointed out. The necessity of introducing, within the SEM, accurate data with regard to the outer part of the boundary layer is illustrated. Finally, user guidelines are given depending on the Reynolds number based on the momentum thickness, since one method is suitable for low Reynolds number while the

  11. Social network extraction based on Web: 1. Related superficial methods

    Science.gov (United States)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    Often the nature of an object affects the methods used to resolve issues related to it. The same holds for methods that extract social networks from the Web, which involve differently structured data types. This paper presents several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. We derive complexity inequalities between the methods, and hence between their computations. In this case, we find that the same tools yield different results, ranging from the more complex to the simpler: extraction of a social network involving co-occurrence is more complex than extraction using occurrences alone.

  12. Methods for the reconstruction of large scale anisotropies of the cosmic ray flux

    Energy Technology Data Exchange (ETDEWEB)

    Over, Sven

    2010-01-15

    In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of great importance for models that try to explain cosmic radiation. In this thesis, methods for the reconstruction of the parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilized, and the calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs similarly well as, or only marginally worse than, established methods under ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimation of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. The invulnerability to certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)
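
    For comparison, the classic first-harmonic Rayleigh analysis in right ascension, the baseline such methods extend, can be sketched in a few lines: the amplitude and phase of a dipole modulation are estimated from a list of arrival directions, here synthetic with a 5% dipole injected.

```python
# First-harmonic Rayleigh analysis: a = (2/N) sum(cos), b = (2/N) sum(sin),
# amplitude r = sqrt(a^2 + b^2), phase = atan2(b, a).
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
alpha = rng.uniform(0, 2 * np.pi, n)
keep = rng.random(n) < 0.5 * (1 + 0.05 * np.cos(alpha))   # inject 5% dipole
alpha = alpha[keep]

a = 2 * np.cos(alpha).sum() / alpha.size
b = 2 * np.sin(alpha).sum() / alpha.size
r = np.hypot(a, b)                        # first-harmonic amplitude
phase = np.arctan2(b, a)                  # phase of maximum
print(f"amplitude ~ {r:.3f} (injected 0.05), phase ~ {phase:.2f} rad")
```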

  13. A singular-value method for reconstruction of nonradial and lossy objects.

    Science.gov (United States)

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
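
    The core tool, a reduced-rank representation via truncated singular-value decomposition, in a minimal sketch; the random matrix below stands in for the actual scattering operator.

```python
# Truncated SVD: keep the k largest singular triplets of a matrix K.
import numpy as np

rng = np.random.default_rng(8)
K = rng.random((50, 50))                 # stand-in for the scattering operator
U, s, Vt = np.linalg.svd(K)

k = 10                                   # retained rank
K_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(K - K_k) / np.linalg.norm(K)
print(f"rank-{k} approximation, relative error {rel_err:.3f}")
```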

  14. A simple measurement method of molecular relaxation in a gas by reconstructing acoustic velocity dispersion

    Science.gov (United States)

    Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun

    2018-01-01

    Recently, a decomposition method for acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of a gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by measuring only the acoustic velocity, without the necessity of obtaining the acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of the interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using measurements of acoustic velocity at 2N + 1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.
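
    A sketch of the reconstruction idea under a stated assumption: fit a sum of N single-relaxation dispersion terms to 2N+1 phase-velocity measurements. The Debye-like form c²(ω) = c0² + Σᵢ dᵢ(ωτᵢ)²/(1+(ωτᵢ)²) is a common idealization used here for illustration, not necessarily the paper's exact model.

```python
# Fit a two-process (N = 2) relaxation dispersion to 2N+1 = 5 velocity
# measurements. All numbers below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def c2(w, c0, d1, tau1, d2, tau2):
    """Squared phase velocity: assumed sum of Debye-like relaxation terms."""
    return (c0**2 + d1 * (w * tau1)**2 / (1 + (w * tau1)**2)
                  + d2 * (w * tau2)**2 / (1 + (w * tau2)**2))

w = np.logspace(3, 8, 5)                    # the 2N+1 measurement frequencies
truth = (340.0, 800.0, 1e-6, 300.0, 1e-4)
v_meas = np.sqrt(c2(w, *truth))             # "measured" velocities

popt, _ = curve_fit(c2, w, v_meas**2,
                    p0=(330, 500, 5e-7, 500, 5e-5), maxfev=20000)
print("fitted c0, strengths, relaxation times:", popt)
```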

  15. Evaluation of two methods for using MR information in PET reconstruction

    International Nuclear Information System (INIS)

    Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.

    2013-01-01

    Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated and the Bowsher prior was considered the best; its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is the use of boundaries obtained by segmentation, which has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with Bowsher than with boundaries, and CV values are 10% lower with Bowsher than with boundaries. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms were again shown to be effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image has, however, still to be assessed.

  16. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults fast and precisely. But these methods suffer from the limitation of traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method, whose key idea is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is detected by using a small set of probes, while it is also ensured that the entire network can be covered after multiple detection stages. This method can guarantee that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method

  17. In Silico Genome-Scale Reconstruction and Validation of the Corynebacterium glutamicum Metabolic Network

    DEFF Research Database (Denmark)

    Kjeldsen, Kjeld Raunkjær; Nielsen, J.

    2009-01-01

    A genome-scale metabolic model of the Gram-positive bacterium Corynebacterium glutamicum ATCC 13032 was constructed, comprising 446 reactions and 411 metabolites, based on the annotated genome and available biochemical information. The network was analyzed using constraint-based methods. The model was extensively validated against published flux data, and flux distribution values were found to correlate well between simulations and experiments. The split pathway of the lysine synthesis pathway of C. glutamicum was investigated, and it was found that the direct dehydrogenase variant gave a higher lysine yield than the alternative succinyl pathway at high lysine production rates. The NADPH demand of the network was not found to be critical for lysine production until lysine yields exceeded 55% (mmol lysine (mmol glucose)⁻¹). The model was validated during growth on the organic acids acetate

  18. Reconstruction of the esophagojejunostomy by double stapling method using EEA™ OrVil™ in laparoscopic total gastrectomy and proximal gastrectomy

    OpenAIRE

    Hirahara, Noriyuki; Monma, Hiroyuki; Shimojo, Yoshihide; Matsubara, Takeshi; Hyakudomi, Ryoji; Yano, Seiji; Tanaka, Tsuneo

    2011-01-01

    Here we report a method of anastomosis based on the double stapling technique (hereinafter, DST) using a trans-oral anvil delivery system (EEA™ OrVil™) for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double-tract reconstruction, and end-to-side anastomosis was used for the cut-off...

  19. Does thorax EIT image analysis depend on the image reconstruction method?

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2013-04-01

    Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those from filtered back-projection and GREITC.
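
    One of the analyzed parameters, the global inhomogeneity (GI) index, is the sum of absolute deviations from the median tidal impedance change over lung pixels, normalized by the total; a toy computation on a synthetic "tidal image" and mask follows.

```python
# GI index: sum(|DI - median(DI_lung)|) / sum(DI) over lung pixels.
# The tidal image and lung mask below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(9)
tidal = np.abs(rng.normal(1.0, 0.3, size=(32, 32)))   # toy tidal EIT image
lung_mask = np.zeros((32, 32), dtype=bool)
lung_mask[8:24, 4:28] = True                          # toy lung region

lung = tidal[lung_mask]
gi = np.abs(lung - np.median(lung)).sum() / lung.sum()
print(f"GI index: {gi:.3f}")
```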

  20. A comparison of reconstruction methods for undersampled atomic force microscopy images

    International Nuclear Information System (INIS)

    Luo, Yufan; Andersson, Sean B

    2015-01-01

    Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip–sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image, and the proper choice depends on the sample under study. In this work we compare interpolation, through the use of inpainting algorithms, with reconstruction based on optimization, through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low-frequency content, while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test is demonstrated on test AFM images. (paper)
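
    A minimal harness for the interpolation route looks as follows, assuming a random sampling mask and gridded cubic interpolation as a stand-in for the paper's inpainting and basis pursuit solvers:

        # Subsample a synthetic "AFM image" on a random mask and reconstruct it
        # by scattered-data interpolation; compare against the original.
        import numpy as np
        from scipy.interpolate import griddata

        x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
        image = np.sin(6 * x) * np.cos(4 * y)        # stand-in for an AFM image

        rng = np.random.default_rng(1)
        mask = rng.random(image.shape) < 0.25        # keep ~25% of the pixels
        pts, vals = np.argwhere(mask), image[mask]

        grid = np.argwhere(np.ones_like(image, dtype=bool))
        recon = griddata(pts, vals, grid, method="cubic").reshape(image.shape)
        print("MSE:", np.nanmean((recon - image) ** 2))  # NaN outside convex hull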

  1. A feasible method for clinical delivery verification and dose reconstruction in tomotherapy

    International Nuclear Information System (INIS)

    Kapatoes, J.M.; Olivera, G.H.; Ruchala, K.J.; Smilowitz, J.B.; Reckwerdt, P.J.; Mackie, T.R.

    2001-01-01

    Delivery verification is the process in which the energy fluence delivered during a treatment is verified. This verified energy fluence can be used in conjunction with an image in the treatment position to reconstruct the full three-dimensional dose deposited. A method for delivery verification that utilizes a measured database of detector signals is described in this work. This database is a function of two parameters, radiological path-length and detector-to-phantom distance, both of which are computed from a CT image taken at the time of delivery. Such a database was generated and used to perform delivery verification and dose reconstruction. Two experiments were conducted: a simulated prostate delivery on an inhomogeneous abdominal phantom, and a nasopharyngeal delivery on a dog cadaver. For both cases, it was found that the verified fluence and dose results using the database approach agreed very well with those using previously developed and proven techniques. Delivery verification with a measured database and a CT image at the time of treatment is an accurate procedure for tomotherapy. The database eliminates the need for any patient-specific, pre- or post-treatment measurements. Moreover, such an approach creates an opportunity for accurate, real-time delivery verification and dose reconstruction, given fast image reconstruction and dose computation tools.
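
    The two-parameter database can be pictured as a tabulated signal surface that is interpolated at the (path-length, distance) pairs computed from the CT image of the day. A sketch with synthetic table values; the toy attenuation-times-inverse-square model below is an assumption standing in for the measured database:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        path_len = np.linspace(0, 40, 41)      # radiological path-length (cm)
        det_dist = np.linspace(20, 80, 61)     # detector-to-phantom distance (cm)
        P, D = np.meshgrid(path_len, det_dist, indexing="ij")
        signal = np.exp(-0.05 * P) / D**2      # synthetic detector signal table

        lookup = RegularGridInterpolator((path_len, det_dist), signal)
        # Query at pairs computed from the delivery-day CT image:
        print(lookup(np.array([[12.3, 45.0], [27.8, 52.5]])))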

  2. A Method for Upper Bounding on Network Access Speed

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip; Patel, A.; Pedersen, Jens Myrup

    2004-01-01

    This paper presents a method for calculating an upper bound on network access speed growth and gives guidelines for further research, experiments and simulations. The method is aimed at providing a basis for simulation of long-term network development and resource management.

  3. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    Science.gov (United States)

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  4. RECONSTRUCTING THE INITIAL DENSITY FIELD OF THE LOCAL UNIVERSE: METHODS AND TESTS WITH MOCK CATALOGS

    International Nuclear Information System (INIS)

    Wang Huiyuan; Mo, H. J.; Yang Xiaohu; Van den Bosch, Frank C.

    2013-01-01

    Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum, and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range above ∼0.3 times the mean, down to scales much smaller than the translinear scale, which corresponds to a wavenumber of ∼0.15 h Mpc⁻¹.
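
    A minimal Hamiltonian Monte Carlo step on a toy Gaussian target makes the sampling machinery concrete; the posterior above has the same Gaussian-prior-plus-likelihood structure but vastly more dimensions. Everything below is a generic textbook sketch, not the authors' code:

        import numpy as np

        rng = np.random.default_rng(2)

        def grad_neg_log_post(x):   # unit Gaussian target: -d log p / dx = x
            return x

        def hmc_step(x, eps=0.1, n_leap=20):
            p = rng.normal(size=x.shape)            # draw momenta
            xn, pn = x.copy(), p.copy()
            for _ in range(n_leap):                 # leapfrog integration
                pn -= 0.5 * eps * grad_neg_log_post(xn)
                xn += eps * pn
                pn -= 0.5 * eps * grad_neg_log_post(xn)
            h_old = 0.5 * (x @ x + p @ p)           # total "energy" before
            h_new = 0.5 * (xn @ xn + pn @ pn)       # ... and after leapfrog
            return xn if rng.random() < np.exp(h_old - h_new) else x

        x, samples = np.zeros(10), []
        for _ in range(2000):
            x = hmc_step(x)
            samples.append(x.copy())
        print("sample variance (should be ~1):", np.var(samples))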

  5. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    International Nuclear Information System (INIS)

    Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y

    2016-01-01

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image has a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient dataset. The number of iterations is reduced by about 40% overall. Moreover, the proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
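
    The initialization idea can be sketched in a few lines: segment the first-pass image into a few tissue classes, replace each class by its mean value, and hand the resulting piecewise-constant template to the iterative solver as its starting image. The intensity-only k-means below is our stand-in for whichever segmentation technique is actually used:

        import numpy as np

        def piecewise_constant_template(first_pass, n_classes=3, n_iter=10):
            """Piecewise-constant initializer via 1-D k-means on intensities."""
            vals = first_pass.ravel()
            centers = np.quantile(vals, np.linspace(0.1, 0.9, n_classes))
            for _ in range(n_iter):
                labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
                for k in range(n_classes):
                    if np.any(labels == k):
                        centers[k] = vals[labels == k].mean()
            return centers[labels].reshape(first_pass.shape)

        rng = np.random.default_rng(3)
        fdk = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, None)  # fake first pass
        fdk[16:48, 16:48] += 0.8                                 # fake "tissue"
        x0 = piecewise_constant_template(fdk)   # initial value for the IR loop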

  6. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Wu, P; Mao, T; Gong, S; Wang, J; Niu, T [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Sheng, K [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA (United States); Xie, Y [Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong (China)

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image has a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient dataset. The number of iterations is reduced by about 40% overall. Moreover, the proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R

  7. A divisive spectral method for network community detection

    International Nuclear Information System (INIS)

    Cheng, Jianjun; Li, Longjie; Yao, Yukai; Chen, Xiaoyun; Leng, Mingwei; Lu, Weiguo

    2016-01-01

    Community detection is a fundamental problem in the domain of complex network analysis. It has received great attention, and many community detection methods have been proposed in the last decade. In this paper, we propose a divisive spectral method for identifying community structures in networks: a sparsification operation first pre-processes the network, and a repeated spectral bisection algorithm then partitions it into communities. The sparsification operation makes the community boundaries clearer and sharper, so that the repeated spectral bisection algorithm extracts high-quality community structures accurately from the sparsified networks. Experiments show that the combination of network sparsification and a spectral bisection algorithm is highly successful; the proposed method detects community structures in networks more effectively than the alternatives. (paper: interdisciplinary statistical mechanics)
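
    Stripped of the sparsification and the repetition, the core spectral step is a bisection by the sign of the Fiedler vector of the graph Laplacian. A bare-bones sketch on a toy network of two loosely connected triangles:

        import numpy as np

        A = np.zeros((6, 6))                 # adjacency: two triangles + a bridge
        for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
            A[i, j] = A[j, i] = 1

        L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
        w, v = np.linalg.eigh(L)             # eigenvalues in ascending order
        fiedler = v[:, 1]                    # second-smallest eigenvector
        print("partition:", (fiedler > 0).astype(int))  # [0 0 0 1 1 1] or flipped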

  8. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    Science.gov (United States)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits spatial perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  9. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving the patch's phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, both the Fourier transform magnitudes and the phases can be estimated to reconstruct the missing areas.
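
    The ER iteration itself alternates between a magnitude constraint in the Fourier domain and a known-pixel constraint in the spatial domain. The sketch below takes the magnitude as known exactly; estimating it from similar known patches, which is the novel part of the method above, is not reproduced:

        import numpy as np

        rng = np.random.default_rng(4)
        patch = rng.random((16, 16))              # "true" texture patch
        known = np.ones_like(patch, dtype=bool)
        known[4:12, 4:12] = False                 # central missing area

        mag = np.abs(np.fft.fft2(patch))          # magnitude, assumed known here
        est = np.where(known, patch, patch[known].mean())
        for _ in range(200):
            F = np.fft.fft2(est)
            F = mag * np.exp(1j * np.angle(F))    # enforce Fourier magnitude
            est = np.real(np.fft.ifft2(F))
            est[known] = patch[known]             # enforce known pixels
        print("missing-area MSE:", np.mean((est[~known] - patch[~known]) ** 2))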

  10. SPET reconstruction with a non-uniform attenuation coefficient using an analytical regularizing iterative method

    International Nuclear Information System (INIS)

    Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.

    1982-09-01

    The aim of this study is to evaluate the potential of the RIM technique when used in brain studies. The analytical Regularizing Iterative Method (RIM) is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio compared with those obtained with the FBP (Filtered Back Projection) technique. Preliminary results obtained in brain studies using AMPI-123 (isopropyl-amphetamine I-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure remains to be demonstrated in our institution, by comparing quantitative data in heart or liver studies where control values can be obtained.

  11. SPET reconstruction with a non-uniform attenuation coefficient using an analytical regularizing iterative method

    International Nuclear Information System (INIS)

    Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.

    1982-01-01

    The potential of the Regularizing Iterative Method (RIM), when used in brain studies, is evaluated. RIM is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio compared with those obtained with the Filtered Back Projection (FBP) technique. Preliminary results obtained in brain studies using isopropyl-amphetamine I-123 (AMPI-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure remains to be demonstrated, by comparing quantitative data in heart or liver studies where control values can be obtained.

  12. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    International Nuclear Information System (INIS)

    Gao, H

    2016-01-01

    Purpose: This work develops a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method for enhanced CT image quality. Methods: FIR is formulated as a combination of a filtered data fidelity term and a sparsity regularization term, and is solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization in a two-step iterative scheme: an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a chosen AR into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT, with FDK as the AR and total-variation sparsity regularization, and improves image quality over both AR and IR. For example, FIR improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
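
    The generic PFBS skeleton alternates a gradient step on the data fidelity with a proximal (denoising) step on the regularizer; FIR's contribution is to filter the data-fidelity update with an AR-type operator. The sketch below shows only the generic skeleton on a toy linear problem, with a soft-threshold (l1) prox standing in for the TV denoiser:

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.normal(size=(60, 100))            # toy "projection" operator
        x_true = np.zeros(100)
        x_true[rng.choice(100, 8, replace=False)] = rng.normal(size=8)
        b = A @ x_true

        lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(100)
        for _ in range(300):
            z = x - step * (A.T @ (A @ x - b))                      # forward step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # prox step
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))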

  13. Aboveground Biomass Estimation Using Reconstructed Feature of Airborne Discrete-Return LIDAR by Auto-Encoder Neural Network

    Science.gov (United States)

    Li, T.; Wang, Z.; Peng, J.

    2018-04-01

    Aboveground biomass (AGB) estimation is critical for quantifying carbon stocks and essential for evaluating the carbon cycle. In recent years, airborne LiDAR has shown great ability for high-precision AGB estimation. Most studies estimate AGB from feature metrics extracted from the canopy height distribution of the point cloud, which is calculated based on a precise digital terrain model (DTM). However, if the forest canopy density is high, the probability of the LiDAR signal penetrating the canopy is low, so there may not be enough ground points to establish the DTM. The distribution of forest canopy height is then imprecise, and critical feature metrics that correlate strongly with biomass, such as percentiles, maxima, means and standard deviations of the canopy point cloud, can hardly be extracted correctly. To address this issue, we propose a strategy of first reconstructing the LiDAR feature metrics with an auto-encoder neural network and then using the reconstructed feature metrics to estimate AGB. To assess the predictive ability of the reconstructed feature metrics, both the original and the reconstructed feature metrics were regressed against field-observed AGB using multiple stepwise regression (MS) and partial least squares regression (PLS), respectively. The estimation models using reconstructed feature metrics improved R² by 5.44% (MS) and 18.09% (PLS), decreased RMSE by 10.06% and 22.13%, and reduced RMSEcv by 10.00% and 21.70%, respectively. Reconstructing LiDAR feature metrics therefore has potential for addressing the AGB estimation challenge in dense canopy areas.
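
    A tiny fully connected auto-encoder of the kind that could map degraded feature metrics back to clean ones; the layer sizes, the noise model and the random stand-in data below are illustrative assumptions only:

        import torch
        import torch.nn as nn

        n_metrics = 12                      # percentiles, means, maxima, ...
        model = nn.Sequential(              # encoder -> bottleneck -> decoder
            nn.Linear(n_metrics, 6), nn.ReLU(),
            nn.Linear(6, 3), nn.ReLU(),
            nn.Linear(3, 6), nn.ReLU(),
            nn.Linear(6, n_metrics),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        loss_fn = nn.MSELoss()

        clean = torch.rand(256, n_metrics)             # stand-in feature metrics
        noisy = clean + 0.1 * torch.randn_like(clean)  # "dense canopy" degradation
        for _ in range(200):
            opt.zero_grad()
            loss = loss_fn(model(noisy), clean)        # learn to reconstruct
            loss.backward()
            opt.step()
        print("final reconstruction loss:", float(loss))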

  14. A novel community detection method in bipartite networks

    Science.gov (United States)

    Zhou, Cangqi; Feng, Liang; Zhao, Qianchuan

    2018-02-01

    Community structure is a common and important feature in many complex networks, including bipartite networks, which are used as a standard model for many empirical networks comprised of two types of nodes. In this paper, we propose a two-stage method for detecting community structure in bipartite networks. Firstly, we extend the widely used Louvain algorithm to bipartite networks. The effectiveness and efficiency of the Louvain algorithm have been proved by many applications; however, a Louvain-like algorithm specially adapted to bipartite networks has been lacking. Based on bipartite modularity, a measure that extends unipartite modularity and quantifies the strength of partitions in bipartite networks, we fill this gap by developing the Bi-Louvain algorithm, which iteratively groups the nodes in each part in turn. In bipartite networks this algorithm often produces a balanced network structure with equal numbers of the two types of nodes. Secondly, for the balanced network yielded by the first algorithm, we use an agglomerative clustering method to cluster the network further. We demonstrate that the gain in modularity of each aggregation, and the operation of joining two communities, can be computed compactly by matrix operations for all pairs of communities simultaneously. Finally, a complete hierarchical community structure is unfolded. We apply our method to two benchmark data sets and a large-scale data set from an e-commerce company, showing that it effectively identifies community structure in bipartite networks.
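
    The quantity the first stage maximizes, bipartite (Barber) modularity, is short to write down for a given partition; the toy biadjacency matrix and community labels below are ours:

        import numpy as np

        B = np.array([[1, 1, 0, 0],     # rows: one node type (e.g. customers)
                      [1, 1, 0, 0],     # columns: the other (e.g. products)
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]])
        row_c = np.array([0, 0, 1, 1])  # community labels of the row nodes
        col_c = np.array([0, 0, 1, 1])  # community labels of the column nodes

        m = B.sum()
        k, d = B.sum(axis=1), B.sum(axis=0)            # row / column degrees
        same = row_c[:, None] == col_c[None, :]
        Q = ((B - np.outer(k, d) / m) * same).sum() / m
        print(f"bipartite modularity Q = {Q:.3f}")     # 0.5 for this clean split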

  15. Paediatric cardiac CT examinations: impact of the iterative reconstruction method ASIR on image quality - preliminary findings

    International Nuclear Information System (INIS)

    Mieville, Frederic A.; Gudinchet, Francois; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Bochud, Francois O.; Verdun, Francis R.

    2011-01-01

    Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDIvol 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)

  16. Continuous sea-level reconstructions beyond the Pleistocene: improving the Mediterranean sea-level method

    Science.gov (United States)

    Grant, K.; Rohling, E. J.; Amies, J.

    2017-12-01

    Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying 'natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum (∼20,000 years ago, 20 ky) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle (∼150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ18O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the 'marginal basin' approach) was developed for the Red Sea over 500 ky, and recently attempted for the Mediterranean over 5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ18O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off - strongly depleted in δ18O - reached the Mediterranean. Here, we present improvements to the 'marginal basin' sea-level reconstruction method. These include a new 'Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ18O changes over a monsoon interval allows us to quantify the monsoon versus SL δ18O imprint on Mediterranean foraminiferal carbonate δ18O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.

  17. Paediatric cardiac CT examinations: impact of the iterative reconstruction method ASIR on image quality - preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Mieville, Frederic A. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland); University Hospital Center and University of Lausanne, Institute of Radiation Physics - Medical Radiology, Lausanne (Switzerland); Gudinchet, Francois; Rizzo, Elena [University Hospital Center and University of Lausanne, Department of Radiology, Lausanne (Switzerland); Ou, Phalla; Brunelle, Francis [Necker Children's Hospital, Department of Radiology, Paris (France); Bochud, Francois O.; Verdun, Francis R. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland)

    2011-09-15

    Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDIvol 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)

  18. Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method

    Energy Technology Data Exchange (ETDEWEB)

    Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B. [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30318 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States)

    2012-09-15

    Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory-phase-resolved volumetric imaging in image-guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low-quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a lot of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms utilizing a temporal nonlocal means (TNLM) method: an iterative reconstruction algorithm and an enhancement algorithm. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images in which any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation on
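
    The flavor of a TNLM update can be shown schematically: each pixel in phase t is re-estimated from patch-similarity-weighted values at the same location in the neighboring phases. This collapses the search window to one candidate per neighboring phase and omits the surrounding reconstruction loop entirely, so it is a schematic only:

        import numpy as np

        def tnlm_filter(phases, t, patch=3, h=0.5):
            """Filter phases[t] using same-location patches of phases t-1, t+1."""
            img = phases[t]
            ref = np.stack([phases[t - 1], phases[(t + 1) % len(phases)]])
            pad = patch // 2
            out = img.copy()
            for i in range(pad, img.shape[0] - pad):
                for j in range(pad, img.shape[1] - pad):
                    p = img[i - pad:i + pad + 1, j - pad:j + pad + 1]
                    cands = ref[:, i - pad:i + pad + 1, j - pad:j + pad + 1]
                    w = np.exp(-((cands - p) ** 2).sum(axis=(1, 2)) / h**2)
                    out[i, j] = (w * ref[:, i, j]).sum() / (w.sum() + 1e-12)
            return out

        phases = [np.roll(np.eye(16), s, axis=1) for s in range(4)]  # fake phases
        print(tnlm_filter(phases, t=1).shape)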

  19. Vulnerability analysis methods for road networks

    Science.gov (United States)

    Bíl, Michal; Vodák, Rostislav; Kubeček, Jan; Rebok, Tomáš; Svoboda, Tomáš

    2014-05-01

    Road networks rank among the most important lifelines of modern society. They can be damaged by either random or intentional events. Roads are also often affected by natural hazards, the impacts of which are both direct and indirect. Whereas direct impacts (e.g. roads damaged by a landslide or due to flooding) are localized in close proximity to the natural hazard occurrence, the indirect impacts can entail widespread service disabilities and considerable travel delays. The change in flows in the network may affect the population living far from the places originally impacted by the natural disaster. These effects are primarily possible due to the intrinsic nature of this system. The consequences and extent of the indirect costs also depend on the set of road links which were damaged, because road links differ in terms of their importance. The more robust (interconnected) a road network is, the less time is usually needed to secure the serviceability of an area hit by a disaster. Such networks also demonstrate a higher degree of resilience. Evaluating road network structures is therefore essential in any type of vulnerability and resilience analysis. A range of approaches is used for evaluating the vulnerability of a network and for identifying the weakest road links. Only a few of them, however, are capable of simulating the impacts of the simultaneous closure of numerous links, which often occurs during a disaster. The primary problem is that in the case of a disaster, which usually has a large regional extent, the road network may remain disconnected. The majority of the commonly used indices rely on direct computation of the shortest paths or travel times between OD (origin - destination) pairs and therefore cannot be applied when the network breaks up into two or more components. Since extensive break-ups often occur in cases of major disasters, it is important to study network vulnerability in these cases as well, so that appropriate
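
    The failure mode described above (indices that presuppose connectivity) is easy to demonstrate: remove several links at once and count origin-destination pairs with no remaining path. A toy grid-network sketch using networkx:

        import itertools
        import networkx as nx

        G = nx.grid_2d_graph(4, 4)                       # 4x4 "road grid"
        flood = [((r, 1), (r, 2)) for r in range(4)]     # one column of links cut
        G.remove_edges_from(flood)

        unreachable = sum(1 for u, v in itertools.combinations(G.nodes, 2)
                          if not nx.has_path(G, u, v))
        print("disconnected OD pairs:", unreachable)     # grid splits into halves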

  20. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Kravtsenyuk Olga V

    2007-01-01

    The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for the least-squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.

  1. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Vladimir V. Lyubimov

    2007-01-01

    The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for the least-squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
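
    The deblurring step amounts to a regularized least-squares solve with a conjugate-gradient-family iterative solver. For brevity the sketch below uses a 1-D, spatially invariant blur and SciPy's LSQR; the spatially variant case described above swaps in location-dependent point spread functions:

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.sparse.linalg import lsqr

        n = 100
        psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
        psf /= psf.sum()
        col = np.zeros(n); col[:4] = psf[3:]
        A = toeplitz(col)                         # symmetric banded blur matrix

        x_true = np.zeros(n); x_true[30] = 1.0; x_true[60:70] = 0.5
        b = A @ x_true + 1e-3 * np.random.default_rng(6).normal(size=n)

        x_hat = lsqr(A, b, damp=1e-2, iter_lim=200)[0]  # damped LSQR solve
        print("restoration error:", np.linalg.norm(x_hat - x_true))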

  2. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    Science.gov (United States)

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  3. A new method for three-dimensional laparoscopic ultrasound model reconstruction

    DEFF Research Database (Denmark)

    Fristrup, C W; Pless, T; Durup, J

    2004-01-01

    BACKGROUND: Laparoscopic ultrasound is an important modality in the staging of gastrointestinal tumors. Correct staging depends on good spatial understanding of the regional tumor infiltration. Three-dimensional (3D) models may facilitate the evaluation of tumor infiltration. The aim of the study was to evaluate a new method for three-dimensional laparoscopic ultrasound model reconstruction. The accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetically tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed...

  4. Effect of Medial Patellofemoral Ligament Reconstruction Method on Patellofemoral Contact Pressures and Kinematics.

    Science.gov (United States)

    Stephen, Joanna M; Kittl, Christoph; Williams, Andy; Zaffagnini, Stefano; Marcheggiani Muccioli, Giulio Maria; Fink, Christian; Amis, Andrew A

    2016-05-01

    There remains a lack of evidence regarding the optimal method for reconstructing the medial patellofemoral ligament (MPFL) and whether some graft constructs are more forgiving of surgical errors, such as overtensioning or tunnel malpositioning, than others. The null hypothesis was that there would not be a significant difference between reconstruction methods (eg, graft type and fixation) in the adverse biomechanical effects (eg, patellar maltracking or elevated articular contact pressure) resulting from surgical errors such as tunnel malpositioning or graft overtensioning. Controlled laboratory study. Nine fresh-frozen cadaveric knees were placed on a customized testing rig, where the femur was fixed but the tibia could be moved freely from 0° to 90° of flexion. Individual quadriceps heads and the iliotibial tract were separated and loaded to 205 N of tension using a weighted pulley system. Patellofemoral contact pressures and patellar tracking were measured at 0°, 10°, 20°, 30°, 60°, and 90° of flexion using pressure-sensitive film inserted between the patella and trochlea, in conjunction with an optical tracking system. The MPFL was transected and then reconstructed in a randomized order using a (1) double-strand gracilis tendon, (2) quadriceps tendon, and (3) tensor fasciae latae allograft. Pressure maps and tracking measurements were recorded for each reconstruction method at 2 N and 10 N of tension and with the graft positioned in the anatomic, proximal, and distal femoral tunnel positions. Statistical analysis was undertaken using repeated-measures analyses of variance, Bonferroni post hoc analyses, and paired t tests. Anatomically placed grafts during MPFL reconstruction tensioned to 2 N resulted in the restoration of intact medial joint contact pressures and patellar tracking for all 3 graft types investigated (P > .050). However, femoral tunnels positioned proximal or distal to the anatomic origin resulted in significant increases in the mean

  5. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    International Nuclear Information System (INIS)

    Gao, Z.; Xu, Y.; Downar, T.

    2013-01-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from each other, and their corresponding coefficients can no longer be determined uniquely. The mode flux distribution of a critical node can be linear, while the conventional analytic functions can then only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values when the conventional method is used. In this paper, we propose a new method that avoids the numerical problem with critical nodes by using modified trigonometric or hyperbolic sine functions, namely the ratio of the trigonometric or hyperbolic sine and its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
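
    The fix is easy to verify numerically: the modified basis function sin(Bx)/B stays well-defined and tends to x as the buckling B → 0, exactly where sin(Bx) itself degenerates. A small check, with the B = 0 case guarded explicitly:

        import numpy as np

        def modified_sin(B, x, eps=1e-12):
            """sin(B*x)/B, with the critical-node limit B -> 0 handled as x."""
            if abs(B) < eps:
                return x.copy()
            return np.sin(B * x) / B

        x = np.linspace(0.0, 1.0, 5)
        for B in (1.0, 1e-3, 0.0):          # approach the critical node
            print(B, modified_sin(B, x))    # smoothly approaches x itself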

  6. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Z. [Rice University, MS 318, 6100 Main Street, Houston, TX 77005 (United States); Xu, Y. [Argonne National Laboratory, 9700 South Case Ave., Argonne, IL 60439 (United States); Downar, T. [Department of Nuclear Engineering, University of Michigan, 2355 Bonisteel blvd., Ann Arbor, MI 48109 (United States)

    2013-07-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from each other, and their corresponding coefficients can no longer be determined uniquely. The mode flux distribution of a critical node can be linear, while the conventional analytic functions can then only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values when the conventional method is used. In this paper, we propose a new method that avoids the numerical problem with critical nodes by using modified trigonometric or hyperbolic sine functions, namely the ratio of the trigonometric or hyperbolic sine and its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)

  7. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    Science.gov (United States)

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions, and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and on existing, curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated, with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and to define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis, in order to find strategies to reduce the growth of the pathogen and to propose drug targets.

  8. Geometric morphometric methods for three-dimensional virtual reconstruction of a fragmented cranium: the case of Angelo Poliziano.

    Science.gov (United States)

    Benazzi, S; Stansfield, E; Milani, C; Gruppioni, G

    2009-07-01

    The process of forensic identification of missing individuals frequently relies on the superimposition of cranial remains onto a picture of the individual and/or a facial reconstruction. In the latter, the integrity of the skull or cranium is an important factor in successful identification. Here, we recommend the use of computerized virtual reconstruction and geometric morphometrics for individual reconstruction and identification in forensics. We apply these methods to reconstruct a complete cranium from facial remains that allegedly belong to the famous Italian humanist of the fifteenth century, Angelo Poliziano (1454-1494). Raw data were obtained by computed tomography scans of the Poliziano face and of a complete reference skull of a 37-year-old Italian male. Given that the amount of distortion of the facial remains is unknown, two reconstructions are proposed: the first calculates the average shape between the original and its reflection, and the second discards the less well preserved left side of the cranium under the assumption that there is no deformation on the right. Both reconstructions perform well in the superimposition with the original preserved facial surface in a virtual environment. The reconstruction by means of averaging between the original and its reflection yielded better results in the superimposition with portraits of Poliziano. We argue that the combination of computerized virtual reconstruction and geometric morphometric methods offers a number of advantages over traditional plastic reconstruction, among them speed, reproducibility, ease of manipulation when superimposing with pictures in a virtual environment, and control over assumptions.

  9. Developing a framework for evaluating tallgrass prairie reconstruction methods and management

    Science.gov (United States)

    Larson, Diane L.; Ahlering, Marissa; Drobney, Pauline; Esser, Rebecca; Larson, Jennifer L.; Viste-Sparkman, Karen

    2018-01-01

    The thousands of hectares of prairie reconstructed each year in the tallgrass prairie biome can provide a valuable resource for evaluating seed mixes, planting methods, and post-planting management, provided the methods used and the resulting characteristics of the prairies are recorded and compiled in a publicly accessible database. The objective of this study was to evaluate the use of such data to understand the outcomes of reconstructions over a 10-year period at two U.S. Fish and Wildlife Service refuges. Variables included the number of species planted, seed source (combine-harvest or combine-harvest plus hand-collected), fire history, and planting method and season. In 2015 we surveyed vegetation on 81 reconstructions and calculated the proportion of planted species observed; introduced species richness; native species richness, evenness and diversity; and the mean coefficient of conservatism. We conducted exploratory analyses to learn how the implied communities based on the seed mixes compared with the observed vegetation; which seeding or management variables were influential in the outcome of the reconstructions; and the consistency of responses between the two refuges. Insights from this analysis include: 1) the proportion of planted species observed in 2015 declined as planted rich