WorldWideScience

Sample records for space derived algorithms

  1. Optimal Fungal Space Searching Algorithms.

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
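
    As context for the baselines named above, the following minimal sketch (illustrative, not from this record) counts node expansions for an uninformed depth-first search versus an informed A* search on a toy grid maze; the maze layout and all names are assumptions.

```python
import heapq

# 0 = open cell, 1 = wall; a small illustrative maze (not from the record)
MAZE = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield nr, nc

def dfs_expansions(start, goal):
    """Uninformed search: cells expanded by an iterative depth-first search."""
    stack, seen, count = [start], {start}, 0
    while stack:
        cell = stack.pop()
        count += 1
        if cell == goal:
            return count
        for nxt in neighbors(cell):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return count

def astar_expansions(start, goal):
    """Informed search: A* with a Manhattan-distance heuristic."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier, best, count = [(h(start), 0, start)], {start: 0}, 0
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        count += 1
        if cell == goal:
            return count
        for nxt in neighbors(cell):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return count

print(dfs_expansions((0, 0), (3, 3)), astar_expansions((0, 0), (3, 3)))
```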

  2. Some nonlinear space decomposition algorithms

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. Space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
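
    A minimal sketch of the damped additive Schwarz iteration that such methods reduce to for a linear elliptic problem (illustrative; the 1-D Poisson setup, the overlap, and the damping factor are assumptions, not the authors' test cases):

```python
import numpy as np

# 1-D Poisson: A u = f with A = tridiag(-1, 2, -1), homogeneous Dirichlet BCs
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

# Two overlapping index sets (the subspaces of the decomposition)
subdomains = [np.arange(0, 12), np.arange(8, 20)]

u = np.zeros(n)
for _ in range(200):
    correction = np.zeros(n)
    r = f - A @ u
    for idx in subdomains:
        # Solve the restricted problem on each subdomain (parallel in principle)
        correction[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    u += 0.5 * correction  # damping for the additive (Jacobi-like) update

print(np.linalg.norm(f - A @ u))  # residual should be near zero
```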

  3. Estimating absorption coefficients of colored dissolved organic matter (CDOM) using a semi-analytical algorithm for Southern Beaufort Sea (Canadian Arctic) waters: application to deriving concentrations of dissolved organic carbon from space

    Matsuoka, A.; Hooker, S. B.; Bricaud, A.; Gentili, B.; Babin, M.

    2012-10-01

    A series of papers have suggested that freshwater discharge, including a large amount of dissolved organic matter (DOM), has increased since the middle of the 20th century. In this study, a semi-analytical algorithm for estimating light absorption coefficients of the colored fraction of DOM (CDOM) was developed for Southern Beaufort Sea waters using remote sensing reflectance at six wavelengths in the visible spectral domain corresponding to the MODIS ocean color sensor. This algorithm allows the separation of colored detrital matter (CDM) into CDOM and non-algal particles (NAP) by determining NAP absorption using an empirical relationship between NAP absorption and particle backscattering coefficients. Evaluation using independent datasets, which were not used for developing the algorithm, showed that CDOM absorption can be estimated accurately to within an uncertainty of 35% and 50% for oceanic and turbid waters, respectively. In situ measurements showed that dissolved organic carbon (DOC) concentrations were tightly correlated with CDOM absorption (r2 = 0.97). By combining the CDOM absorption algorithm with the DOC versus CDOM relationship, it is now possible to estimate DOC concentrations in the near-surface layer of the Southern Beaufort Sea using satellite ocean color data. DOC concentrations in the surface waters were estimated using MODIS ocean color data, and the estimates showed reasonable values compared to in situ measurements. We propose a routine and near real-time method for deriving DOC concentrations from space, which may open the way to an estimate of DOC budgets for Arctic coastal waters.
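
    The structure of the derivation chain (reflectance inversion, CDM partitioned into NAP and CDOM, then DOC from CDOM) can be summarized as below; every coefficient in this sketch is a placeholder for illustration, not a published value from this record.

```python
# Illustrative pipeline only: all numeric coefficients are hypothetical,
# not the fitted values of Matsuoka et al.
def nap_absorption(bbp_443):
    """Empirical NAP absorption from particle backscattering (hypothetical fit)."""
    return 0.8 * bbp_443

def cdom_absorption(acdm_443, bbp_443):
    """Partition CDM absorption into CDOM by removing the NAP contribution."""
    return acdm_443 - nap_absorption(bbp_443)

def doc_from_cdom(acdom_443):
    """Linear DOC-vs-CDOM relationship (r2 = 0.97 reported in situ);
    slope and intercept here are illustrative."""
    return 50.0 + 120.0 * acdom_443  # micromoles C per liter

acdm, bbp = 0.12, 0.01  # example inversion outputs at 443 nm (1/m)
print(doc_from_cdom(cdom_absorption(acdm, bbp)))
```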

  4. Space mapping optimization algorithms for engineering design

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first... to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of a coupled-line band-pass filter...

  5. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, understanding the mathematical principle underlying the algorithm remains a critical question. Traditionally, the normalized min-sum algorithm has been viewed as a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understanding the normalized min-sum algorithm. The algorithm is derived...
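
    For context, the check-node update that defines the normalized min-sum algorithm is sketched below (illustrative; the normalization factor alpha = 0.8 is a typical choice, not a value taken from this paper).

```python
import numpy as np

def check_node_update(msgs, alpha=0.8):
    """Normalized min-sum check-node update for LDPC decoding.

    msgs: incoming variable-to-check LLRs for one check node.
    Returns outgoing check-to-variable LLRs; alpha < 1 is the
    normalization factor that corrects min-sum's overestimation.
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out

print(check_node_update([1.5, -0.7, 2.2]))
```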

  6. Estimating absorption coefficients of colored dissolved organic matter (CDOM) using a semi-analytical algorithm for southern Beaufort Sea waters: application to deriving concentrations of dissolved organic carbon from space

    Matsuoka, A.; Hooker, S. B.; Bricaud, A.; Gentili, B.; Babin, M.

    2013-02-01

    A series of papers have suggested that freshwater discharge, including a large amount of dissolved organic matter (DOM), has increased since the middle of the 20th century. In this study, a semi-analytical algorithm for estimating light absorption coefficients of the colored fraction of DOM (CDOM) was developed for southern Beaufort Sea waters using remote sensing reflectance at six wavelengths in the visible spectral domain corresponding to the MODIS ocean color sensor. This algorithm allows the separation of colored detrital matter (CDM) into CDOM and non-algal particles (NAP) through the determination of NAP absorption using an empirical relationship between NAP absorption and particle backscattering coefficients. Evaluation using independent datasets, which were not used for developing the algorithm, showed that CDOM absorption can be estimated accurately to within an uncertainty of 35% and 50% for oceanic and coastal waters, respectively. A previous paper (Matsuoka et al., 2012) showed that dissolved organic carbon (DOC) concentrations were tightly correlated with CDOM absorption in our study area (r2 = 0.97). By combining the CDOM absorption algorithm with the DOC versus CDOM relationship, it is now possible to estimate DOC concentrations in the near-surface layer of the southern Beaufort Sea using satellite ocean color data. DOC concentrations in the surface waters were estimated using MODIS ocean color data, and the estimates showed reasonable values compared to in situ measurements. We propose a routine and near real-time method for deriving DOC concentrations from space, which may open the way to an estimate of DOC budgets for Arctic coastal waters.

  7. Adaptive Numerical Algorithms in Space Weather Modeling

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  8. Satellite-Derived Bathymetry: Accuracy Assessment on Depths Derivation Algorithm for Shallow Water Area

    Said, N. M.; Mahmud, M. R.; Hasan, R. C.

    2017-10-01

    Over the years, the acquisition technique for bathymetric data has evolved from shipborne platforms to airborne and, presently, space-borne acquisition. The extensive development of remote sensing technology has brought a new revolution to hydrographic surveying. Satellite-Derived Bathymetry (SDB), a space-borne acquisition technique which derives bathymetric data from high-resolution multispectral satellite imagery for various purposes, has recently been considered a promising new technology in the hydrographic surveying industry. Inspired by these latest developments, a comprehensive study was initiated by the National Hydrographic Centre (NHC) and Universiti Teknologi Malaysia (UTM) to analyse SDB as a means of shallow water acquisition. By adopting an additional adjustment in the calibration stage, a marginal improvement was discovered in the outcomes from both the Stumpf and Lyzenga algorithms, where the RMSE values for the derived (predicted) depths were 1.432 meters and 1.728 meters respectively. This paper deliberates in detail on the findings from the study, especially the accuracy level and practicality of SDB in the tropical environmental setting of Malaysia.
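
    The Stumpf algorithm evaluated here is a band-ratio model calibrated against known depths; a minimal sketch follows, with all reflectances and charted depths invented for illustration.

```python
import numpy as np

def stumpf_depth(blue, green, m1, m0, n=1000.0):
    """Stumpf-style band-ratio depth: z = m1 * ln(n*Rw_blue)/ln(n*Rw_green) - m0."""
    return m1 * (np.log(n * blue) / np.log(n * green)) - m0

# Calibration against a handful of charted depths (all numbers illustrative)
blue = np.array([0.012, 0.010, 0.008, 0.006])
green = np.array([0.015, 0.011, 0.008, 0.005])
z_true = np.array([2.0, 5.0, 9.0, 14.0])

ratio = np.log(1000.0 * blue) / np.log(1000.0 * green)
m1, b = np.polyfit(ratio, z_true, 1)   # linear fit: z = m1*ratio + b, so m0 = -b
print(stumpf_depth(blue, green, m1, -b))
```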

  9. Simple derivation of magnetic space groups

    Bertaut, E.F.; CEA Centre d'Etudes Nucleaires de Grenoble, 38

    1975-01-01

    The magnetic translation lattices can be described by invariant wave vectors k. Advantages of the wave vector notation over the notations used by Belov et al. and Opechowski et al. are pointed out. In a one-dimensional real representation a space group element (α|τ) has either the character +1 (symmetry element) or -1 (antisymmetry element). Thus the square of any space group operation must have the character +1 in a one-dimensional real representation. This simple "square criterion" is used to limit the admissible k-vectors and to derive the family of magnetic space groups, i.e. the set of all possible magnetic space groups belonging to the same crystallographic space group. In the discussion some useful side results are obtained. Not only are the real one-dimensional representations of point groups connected to real one-dimensional representations of space groups, but a direct connection is shown to exist between one-dimensional complex representations of the point groups 3, 4 and 6 and one-dimensional real representations belonging to P[00 1/2] = P_{2c}(P_c) lattices with screw axes 3_1, 3_2, 4_2, 6_2 and 6_4. Rules are derived for finding the Belov symbol when the Opechowski-Guccione symbol of the magnetic space group is known, and this opportunity is used to correct errors in the Opechowski-Guccione tables.

  10. Saving time in a space-efficient simulation algorithm

    Markovski, J.

    2011-01-01

    We present an efficient algorithm for computing the simulation preorder and equivalence for labeled transition systems. The algorithm improves an existing space-efficient algorithm and improves its time complexity by employing a variant of the stability condition and exploiting properties of the

  11. Geometry on the parameter space of the belief propagation algorithm on Bayesian networks

    Watanabe, Yodai [National Institute of Informatics, Research Organization of Information and Systems, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 (Japan); Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198 (Japan)

    2006-01-30

    This Letter considers a geometrical structure on the parameter space of the belief propagation algorithm on Bayesian networks. The statistical manifold of posterior distributions is introduced, and the expression for the information metric on the manifold is derived. The expression is used to construct a cost function which can be regarded as a measure of the distance in the parameter space.
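
    The information metric in question is the Fisher metric; for a parametrized family of distributions p(x|θ) it takes the standard form below (a standard rendering, not necessarily the Letter's exact expression):

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_{p(x\mid\theta)}\!\left[
  \frac{\partial \log p(x\mid\theta)}{\partial \theta^{i}}\,
  \frac{\partial \log p(x\mid\theta)}{\partial \theta^{j}}
\right]
```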

  12. Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

    Gray, Alexander G.; Fischer, Bernd; Schumann, Johann; Buntine, Wray

    2003-01-01

    Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system, which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer algebra...

  13. A theoretical derivation of the condensed history algorithm

    Larsen, E.W.

    1992-01-01

    Although the Condensed History Algorithm is a successful and widely used Monte Carlo method for solving electron transport problems, it has been derived only by an ad hoc process based on physical reasoning. In this paper we show that the Condensed History Algorithm can be justified as a Monte Carlo simulation of an operator-split procedure in which the streaming, angular scattering, and slowing-down operators are separated within each time step. Different versions of the operator-split procedure lead to O(Δs) and O(Δs²) versions of the method, where Δs is the path-length step. Our derivation also indicates that higher-order versions of the Condensed History Algorithm may be developed. (Author)
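
    The O(Δs) and O(Δs²) variants correspond to the usual first-order (Lie) and second-order (Strang) operator splittings. Schematically, for two of the split operators, say streaming S and scattering C (our schematic, assuming exact sub-steps; per-step errors shown, one order above the global errors quoted above):

```latex
e^{\Delta s\,(S+C)} \;=\; e^{\Delta s\,S}\, e^{\Delta s\,C} + O(\Delta s^{2}),
\qquad
e^{\Delta s\,(S+C)} \;=\; e^{\frac{\Delta s}{2}\,S}\, e^{\Delta s\,C}\, e^{\frac{\Delta s}{2}\,S} + O(\Delta s^{3})
```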

  14. Design Optimization of Space Launch Vehicles Using a Genetic Algorithm

    Bayley, Douglas J

    2007-01-01

    .... A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle. A cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost...

  15. A state space algorithm for the spectral factorization

    Kraffer, F.; Kraffer, F.; Kwakernaak, H.

    1997-01-01

    This paper presents an algorithm for the spectral factorization of a para-Hermitian polynomial matrix. The algorithm is based on polynomial matrix to state space and vice versa conversions, and avoids elementary polynomial operations in computations; it relies on well-proven methods of numerical linear algebra...

  16. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods, at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  17. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat derived cone-beam inversion formulas that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, the two have a similar overall structure. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described using a single formula (the Smith-Grangeat inversion formula) that is in the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in its direct implementation. As for the exactness of the new algorithm, the following fact can be stated: the algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered an approximate inverse, except in the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies.

  18. Convergence of Hybrid Space Mapping Algorithms

    Madsen, Kaj; Søndergaard, Jacob

    2004-01-01

    may be poor, or the method may even fail to converge to a stationary point. We consider a convex combination of the space mapping technique with a classical optimization technique. The function to be optimized has the form $H \circ f$ where $H: \mathbb{R}^m \mapsto \mathbb{R}$ is convex and $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ is smooth. Experience indicates that the combined method maintains the initial efficiency of the space mapping technique. We prove that the global convergence property of the classical technique is also...

  19. Modeling Trees with a Space Colonization Algorithm

    Morell Higueras, Marc

    2014-01-01

    This TFG consists of the implementation of a procedural generation algorithm that builds a structure reminiscent of that of a temperate-climate tree, and of the conversion of that structure into a three-dimensional model, accompanied by a tool to visualize the result and export it.
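
    A minimal sketch of one space colonization step (illustrative; all radii, step sizes, and point counts are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
attractors = rng.uniform(0.0, 10.0, size=(200, 3))   # crown "attraction points"
nodes = [np.array([5.0, 5.0, 0.0])]                  # skeleton starts at trunk base
STEP, INFLUENCE, KILL = 0.5, 3.0, 0.7                # illustrative radii and step

for _ in range(100):
    if len(attractors) == 0:
        break
    node_arr = np.array(nodes)
    # each attractor influences its nearest skeleton node within INFLUENCE
    d = np.linalg.norm(attractors[:, None, :] - node_arr[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    grow = {}
    for a_idx, n_idx in enumerate(nearest):
        if d[a_idx, n_idx] < INFLUENCE:
            grow.setdefault(n_idx, []).append(attractors[a_idx] - node_arr[n_idx])
    # grow each influenced node one step toward the mean attractor direction
    for n_idx, dirs in grow.items():
        v = np.mean([u / np.linalg.norm(u) for u in dirs], axis=0)
        if np.linalg.norm(v) > 1e-9:
            nodes.append(node_arr[n_idx] + STEP * v / np.linalg.norm(v))
    # "kill" attractors that a skeleton node has reached
    d_new = np.linalg.norm(attractors[:, None, :] - np.array(nodes)[None, :, :], axis=2)
    attractors = attractors[d_new.min(axis=1) > KILL]

print(len(nodes), "skeleton nodes grown")
```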

  1. Quantum algorithms for phase-space tomography

    Paz, Juan Pablo; Roncaglia, Augusto Jose; Saraceno, Marcos

    2004-01-01

    We present efficient circuits that can be used for the phase-space tomography of quantum states. The circuits evaluate individual values or selected averages of the Wigner, Kirkwood, and Husimi distributions. These quantum gate arrays can be programmed by initializing appropriate computational states. The Husimi circuit relies on a subroutine that is also interesting in its own right: the efficient preparation of a coherent state, which is the ground state of the Harper Hamiltonian

  2. A space-efficient algorithm for local similarities.

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
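
    The core space-saving idea can be shown in a few lines: the optimal local-alignment score is computable from a single dynamic-programming row, so space grows with one sequence length rather than their product (recovering the alignment itself requires the paper's further divide-and-conquer machinery). Scoring values below are illustrative.

```python
def local_similarity_score(a, b, match=1.0, mismatch=-1.0, gap=-1.5):
    """Smith-Waterman local-alignment score in O(min(|a|,|b|)) space (one DP row)."""
    if len(b) > len(a):
        a, b = b, a                      # keep the shorter sequence as the row
    prev = [0.0] * (len(b) + 1)
    best = 0.0
    for ch_a in a:
        curr = [0.0]
        for j, ch_b in enumerate(b, start=1):
            s = match if ch_a == ch_b else mismatch
            curr.append(max(0.0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap))
            best = max(best, curr[j])
        prev = curr
    return best

print(local_similarity_score("ACACACTA", "AGCACACA"))
```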

  3. Algorithms for Planar Graphs and Graphs in Metric Spaces

    Wulff-Nilsen, Christian

    structural properties that can be exploited. For instance, a road network or a wire layout on a microchip is typically (near-)planar and distances in the network are often defined w.r.t. the Euclidean or the rectilinear metric. Specialized algorithms that take advantage of such properties are often orders... of magnitude faster than the corresponding algorithms for general graphs. The first and main part of this thesis focuses on the development of efficient planar graph algorithms. The most important contributions include a faster single-source shortest path algorithm, a distance oracle with subquadratic... for geometric graphs and graphs embedded in metric spaces. Roughly speaking, the stretch factor is a real value expressing how well a (geo-)metric graph approximates the underlying complete graph w.r.t. distances. We give improved algorithms for computing the stretch factor of a given graph and for augmenting...

  4. Theoretical algorithms for satellite-derived sea surface temperatures

    Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.

    1989-03-01

    Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
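
    Such linear SST algorithms have the generic split-window form sketched below; the coefficients shown are placeholders, not the fitted values from this study.

```python
def split_window_sst(t11, t12, a0=1.0, a1=1.0, a2=2.5):
    """Generic split-window SST (kelvin) from brightness temperatures at
    11 and 12 um: SST = a0 + a1*T11 + a2*(T11 - T12).
    Coefficients here are illustrative, not fitted values."""
    return a0 + a1 * t11 + a2 * (t11 - t12)

print(split_window_sst(288.0, 286.5))  # -> 292.75 K for this toy input
```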

  5. A Trust Region Aggressive Space Mapping Algorithm for EM

    Bakr., M.; Bandler, J. W.; Biernacki, R.

    1998-01-01

    A robust new algorithm for electromagnetic (EM) optimization of microwave circuits is presented. The algorithm (TRASM) integrates a trust region methodology with the aggressive space mapping (ASM). The trust region ensures that each iteration results in improved alignment between the coarse... This suggested step exploits all the available EM simulations for improving the uniqueness of parameter extraction. The new algorithm was successfully used to design a number of microwave circuits. Examples include the EM optimization of a double-folded stub filter and of a high-temperature superconducting (HTS...

  6. Greedy Algorithms for Reduced Bases in Banach Spaces

    DeVore, Ronald; Petrova, Guergana; Wojtaszczyk, Przemyslaw

    2013-01-01

    ...family of PDEs. The performance of this greedy algorithm was initially analyzed in Buffa et al. (Modél. Math. Anal. Numér. 46:595-603, 2012) in the case X = H is a Hilbert space. The results of Buffa et al. (Modél. Math. Anal. Numér. 46:595-603, 2012) were...

  7. The CCSDS Lossless Data Compression Algorithm for Space Applications

    Yeh, Pen-Shu; Day, John H. (Technical Monitor)

    2001-01-01

    In the late 1980s, when the author started working at the Goddard Space Flight Center (GSFC) for the National Aeronautics and Space Administration (NASA), several scientists there were in the process of formulating the next generation of Earth viewing science instruments, the Moderate Resolution Imaging Spectroradiometer (MODIS). The instrument would have over thirty spectral bands and would transmit enormous amounts of data through the communications channel. This was when the author was assigned the task of investigating lossless compression algorithms for space implementation to compress science data in order to reduce the requirements on bandwidth and storage.

  8. Greedy Algorithms for Reduced Bases in Banach Spaces

    DeVore, Ronald

    2013-02-26

    Given a Banach space X and one of its compact sets F, we consider the problem of finding a good n-dimensional space X_n ⊂ X which can be used to approximate the elements of F. The best possible error we can achieve for such an approximation is given by the Kolmogorov width d_n(F)_X. However, finding the space which gives this performance is typically numerically intractable. Recently, a new greedy strategy for obtaining good spaces was given in the context of the reduced basis method for solving a parametric family of PDEs. The performance of this greedy algorithm was initially analyzed in Buffa et al. (Modél. Math. Anal. Numér. 46:595-603, 2012) in the case X = H is a Hilbert space. The results of Buffa et al. (Modél. Math. Anal. Numér. 46:595-603, 2012) were significantly improved upon in Binev et al. (SIAM J. Math. Anal. 43:1457-1472, 2011). The purpose of the present paper is to give a new analysis of the performance of such greedy algorithms. Our analysis not only gives improved results for the Hilbert space case but can also be applied to the same greedy procedure in general Banach spaces.
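
    Concretely, the greedy strategy adds at each step the element of F that the current space approximates worst. A minimal Euclidean (Hilbert-space) sketch with synthetic data follows; the paper's contribution is the analysis of this procedure, extended to general Banach norms.

```python
import numpy as np

def greedy_basis(F, n):
    """Greedy reduced-basis selection in a Hilbert space (Euclidean here):
    repeatedly add the element of F worst approximated by the current span."""
    basis = []
    for _ in range(n):
        if basis:
            # orthonormalize the current basis, then measure projection errors
            Q = np.linalg.qr(np.array(basis).T)[0]
            residuals = [np.linalg.norm(f - Q @ (Q.T @ f)) for f in F]
        else:
            residuals = [np.linalg.norm(f) for f in F]
        basis.append(F[int(np.argmax(residuals))])
    return basis

F = list(np.random.default_rng(1).normal(size=(50, 8)))  # sample of a compact set
print(len(greedy_basis(F, 3)), "basis elements selected")
```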

  9. Space-Derived Transparency: Players, Policies, Implications, and Synergies

    Kinnan, C

    2001-01-01

    .... Democratization and globalization, the proliferation of information technologies, the availability of commercial space high-resolution imagery, and the growing influence of NGOs invite this question: What is (space-derived...

  10. Optimization of Antennas using a Hybrid Genetic-Algorithm Space-Mapping Algorithm

    Pantoja, M.F.; Bretones, A.R.; Meincke, Peter

    2006-01-01

    A hybrid global-local optimization technique for the design of antennas is presented. It consists of the subsequent application of a Genetic Algorithm (GA) that employs coarse models in the simulations and a space mapping (SM) that refines the solution found in the previous stage. The technique...

  11. Space-Derived Transparency: Players, Policies, Implications, and Synergies

    Kinnan, C

    2001-01-01

    Space-derived transparency will become a common means of monitoring, preventing, and mitigating crises, verifying compliance with treaties and law, and enabling confidence and security building measures...

  12. Chromatic Derivatives, Chromatic Expansions and Associated Spaces

    Ignjatovic, Aleksandar

    2009-01-01

    This paper presents the basic properties of chromatic derivatives and chromatic expansions and provides an appropriate motivation for introducing these notions. Chromatic derivatives are special, numerically robust linear differential operators which correspond to certain families of orthogonal polynomials. Chromatic expansions are series of the corresponding special functions, which possess the best features of both the Taylor and the Shannon expansions. This makes chromatic derivatives and ...

  13. Fractional Dynamics of Genetic Algorithms Using Hexagonal Space Tessellation

    J. A. Tenreiro Machado

    2013-01-01

    The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space to promote an efficient method of neighbour modelling. The genetic algorithm produces special patterns with resemblances to those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, a modelling of the time evolution is performed by adopting a distance measure and modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, and easy to interpret, set of model parameters for distinct operating conditions.

  14. Feature extraction algorithm for space targets based on fractal theory

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including conducting repairs, upgrades and refueling of spacecraft on-orbit, will be performed much more frequently. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to research on the relative pose of spacecraft, and feature extraction algorithms are the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm that can be used in determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system is presented. The method obtains the distribution of fractal dimension over the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to restrain the noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the problems of relative pose for spacecraft.
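
    The Differential Box-Counting estimate used in this record can be sketched as follows (illustrative; box sizes and the test image are assumptions):

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of the fractal dimension of a
    gray-level image (img: 2-D uint8 array); box sizes are illustrative."""
    H, W = img.shape
    counts = []
    for s in sizes:
        h = 256.0 * s / min(H, W)        # box height in gray levels
        n = 0
        for i in range(0, H - H % s, s):
            for j in range(0, W - W % s, s):
                block = img[i:i + s, j:j + s]
                # number of gray-level boxes spanned by this s x s block
                n += int(block.max() // h) - int(block.min() // h) + 1
        counts.append(n)
    # dimension = slope of log(count) versus log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(dbc_fractal_dimension(img))  # noise-like surfaces approach 3, smooth ones 2
```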

  15. Space education: Deriving benefits from industrial consortia

    Stone, Barbara A.; Page, John R.

    1993-01-01

    As the number of spacefaring nations of the world increases, so does the difficulty of competing in a global economy. The development of high technology products and services for space programs, and the economic exploitation of these technologies for national economic growth, requires professionals versed in both technical and commercial aspects of space. Meeting this requirement academically presents two challenges. On the technical side, enrollment in science and engineering is decreasing in some of the spacefaring nations. From the commerce perspective, very few colleges and universities offer specific courses in space business.

  16. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    Fischer, Bernd; Hajian, Arsen; Knuth, Kevin; Schumann, Johann

    2004-04-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in the form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable building blocks for the algorithm construction; they can encapsulate advanced algorithms and data structures. A symbolic-algebraic system is used to find closed-form solutions for problems and emerging subproblems. In this paper, we describe the application of AUTOBAYES to the analysis of planetary nebulae images taken by the Hubble Space Telescope. We explain the system architecture, and present in detail the automatic derivation of the scientists' original analysis as well as a refined analysis using clustering models. This study demonstrates that AUTOBAYES is now mature enough that it can be applied to realistic scientific data analysis tasks.

  17. Constant-work-space algorithms for geometric problems

    Tetsuo Asano

    2011-07-01

    Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.

  18. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Julie Digne

    2015-11-01

    Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  19. Design optimization of space launch vehicles using a genetic algorithm

    Bayley, Douglas James

    The United States Air Force (USAF) continues to have a need for assured access to space. In addition to flexible and responsive spacelift, a reduction in the cost per launch of space launch vehicles is also desirable. For this purpose, an investigation of the design optimization of space launch vehicles has been conducted. Using a suite of custom codes, the performance aspects of an entire space launch vehicle were analyzed. A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle. A cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost. The other goals of the design optimization included obtaining the proper altitude and velocity to achieve a low-Earth orbit. Specific mission parameters that are particular to USAF space endeavors were specified at the start of the design optimization process. Solid propellant motors, liquid fueled rockets, and air-launched systems in various configurations provided the propulsion systems for two, three and four-stage launch vehicles. Mass properties models, an aerodynamics model, and a six-degree-of-freedom (6DOF) flight dynamics simulator were all used to model the system. The results show the feasibility of this method in designing launch vehicles that meet mission requirements. Comparisons to existing real world systems provide the validation for the physical system models. However, the ability to obtain a truly minimized cost was elusive. The cost model uses an industry standard approach, however, validation of this portion of the model was challenging due to the proprietary nature of cost figures and due to the dependence of many existing systems on surplus hardware.

  20. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool, applied to an unsorted list of numbers, and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
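
    In the same spirit, the empirical derivation is easy to reproduce: count comparisons for growing list sizes and observe the ratio to n² approach 1/2. A minimal sketch (ours, not the article's tool):

```python
import random

def bubble_sort_comparisons(data):
    """Bubble sort; returns the number of comparisons performed."""
    a, comps = list(data), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comps

for n in (100, 200, 400):
    c = bubble_sort_comparisons(random.sample(range(10 * n), n))
    print(n, c, c / n**2)   # ratio approaches 1/2: T(n) ~ n(n-1)/2
```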

  1. Nodal algorithm derived from a new variational principle

    Watson, Fernando V.

    1995-01-01

    As a by-product of the research being carried out by the author on methods of recovering the pin power distribution of PWR cores, a nodal algorithm based on a modified variational principle for the two-group diffusion equations has been obtained. The main feature of the new algorithm is the low dimensionality achieved by the reduction of the original diffusion equations to a system of algebraic eigenvalue equations involving the average sources only, instead of the sources and interface group currents used in conventional nodal methods. The advantages of this procedure are discussed and results generated by the new algorithm and by a finite difference code are compared. (author). 2 refs, 7 tabs

  2. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for a Thin Solenoid with Uniform Current Density

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-08-07

    A numerical algorithm for computing the field components B_r and B_z and their r and z derivatives with open boundaries in cylindrical coordinates for radially thin solenoids with uniform current density is described in this note. An algorithm for computing the vector potential A_θ is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. The (apparently) new feature of the algorithms described in this note applies to cases where the field point is outside of the bore of the solenoid and the field-point radius approaches the solenoid radius. Since the elliptic integrals of the third kind normally used in computing B_z and A_θ become infinite in this region of parameter space, fields for points with the axial coordinate z outside of the ends of the solenoid and near the solenoid radius are treated by use of elliptic integrals of the third kind of modified argument, derived by use of an addition theorem. The algorithms also avoid the numerical difficulties the textbook solutions have for points near the axis arising from explicit factors of 1/r or 1/r² in some of the expressions.
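
    As a sanity check for such computations, the on-axis field of a radially thin solenoid has a closed form requiring no elliptic integrals; a minimal sketch (ours, not the note's algorithm):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def bz_on_axis(z, radius, length, n_turns_per_m, current):
    """Axial field B_z (tesla) on the axis of a thin solenoid centered at z = 0."""
    zp, zm = z + length / 2.0, z - length / 2.0
    return 0.5 * MU0 * n_turns_per_m * current * (
        zp / math.hypot(radius, zp) - zm / math.hypot(radius, zm))

# Long-solenoid limit: the center field approaches mu0 * n * I
print(bz_on_axis(0.0, 0.05, 10.0, 1000.0, 1.0), MU0 * 1000.0)
```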

  3. Alert-derivative bimodal space power and propulsion systems

    Houts, M.G.; Ranken, W.A.; Buksa, J.J.

    1994-01-01

    Safe, reliable, low-mass bimodal space power and propulsion systems could have numerous civilian and military applications. This paper discusses potential bimodal systems that could be derived from the ALERT space fission power supply concept. These bimodal concepts have the potential for providing 5 to 10 kW of electrical power and a total impulse of 100 MN-s at an average specific impulse of 770 s. System mass is on the order of 1000 kg

  4. Exact Finite Differences. The Derivative on Non Uniformly Spaced Partitions

    Armando Martínez-Pérez

    2017-10-01

    We define a finite-differences derivative operation, on a non-uniformly spaced partition, which has the exponential function as an exact eigenvector. We discuss some properties of this operator and propose a definition for the components of a finite-differences momentum operator. This allows us to perform exact discrete calculations.
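
    One standard way to realize such an operator, consistent with (but not necessarily identical to) the paper's definitions, is to weight the divided difference so that e^{λx} is an exact eigenvector with eigenvalue λ, on any partition spacing:

```latex
(D f)(x_n) \;=\; \lambda\,\frac{f(x_{n+1}) - f(x_n)}{e^{\lambda (x_{n+1}-x_n)} - 1},
\qquad\text{so that}\qquad
(D\, e^{\lambda x})(x_n) \;=\; \lambda\, e^{\lambda x_n}.
```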

  5. State-Space Modelling of Loudspeakers using Fractional Derivatives

    King, Alexander Weider; Agerkvist, Finn T.

    2015-01-01

    This work investigates the use of fractional order derivatives in modeling moving-coil loudspeakers. A fractional order state-space solution is developed, leading the way towards incorporating nonlinearities into a fractional order system. The method is used to calculate the response of a fractional harmonic oscillator, representing the mechanical part of a loudspeaker, showing the effect of the fractional derivative and its relationship to viscoelasticity. Finally, a loudspeaker model with a fractional order viscoelastic suspension and fractional order voice coil is fit to measurement data...

  6. Derivation of a regional active-optical reflectance sensor corn algorithm

    Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...

  7. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    Dhou, S; Williams, C [Brigham and Women’s Hospital / Harvard Medical School, Boston, MA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States); Lewis, J [University of California at Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors; 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space; and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported...
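
    A minimal sketch of the model construction and the EMN comparison described above (ours; toy data, with flattened DVFs as matrix rows):

```python
import numpy as np

def motion_model(dvfs):
    """PCA motion model: rows of `dvfs` are flattened displacement vector fields."""
    X = dvfs - dvfs.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt  # rows are eigenvectors (principal components) of the DVF set

def emn(v, ref_eigvecs):
    """Euclidean Model Norm: dot products of eigenvector v with the first
    three reference eigenvectors, summed in quadrature."""
    return np.sqrt(sum(np.dot(v, r) ** 2 for r in ref_eigvecs[:3]))

rng = np.random.default_rng(0)
dvfs = rng.normal(size=(10, 300))          # 10 phases, toy 300-component DVFs
model_a = motion_model(dvfs)
model_b = motion_model(dvfs + 0.01 * rng.normal(size=(10, 300)))
print(emn(model_b[0], model_a))            # close to 1 when the models agree
```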

  8. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
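
    The linear-algebra formulation is what exposes the parallelism: each forward step is a matrix-vector product that can be delegated to parallel primitives. A minimal sketch with toy parameters:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """HMM forward algorithm written as matrix-vector products.

    pi: initial state distribution (k,), A: transition matrix (k, k),
    B: emission matrix (k, n_symbols), obs: sequence of observation indices.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        # one step = (row vector) x (transition matrix), scaled by emissions
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward_likelihood(pi, A, B, [0, 1, 1]))
```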

  9. Discrete Wigner Function Derivation of the Aaronson–Gottesman Tableau Algorithm

    Lucas Kocia

    2017-07-01

    The Gottesman–Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits of odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-d qudits that has the same time and space complexity as the Aaronson–Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd d and the Aaronson–Gottesman algorithm for qubits are likely due only to the fact that the Weyl–Heisenberg group is not in SU(d) for d = 2 and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits.

  10. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    Weeks, Jeffrey R.

    2001-01-01

    Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of errors...

  11. Fast algorithms for computing defects and their derivatives in the Regge calculus

    Brewin, Leo

    2011-01-01

    Any practical attempt to solve the Regge equations, these being a large system of non-linear algebraic equations, will almost certainly employ a Newton-Raphson-like scheme. In such cases, it is essential that efficient algorithms be used when computing the defect angles and their derivatives with respect to the leg lengths. The purpose of this paper is to present details of such an algorithm.

  12. On The Effective Construction of Asymmetric Chudnovsky Multiplication Algorithms in Finite Fields Without Derivated Evaluation

    Ballet, Stéphane; Baudru, Nicolas; Bonnecaze, Alexis; Tukumuli, Mila

    2016-01-01

    The Chudnovsky and Chudnovsky algorithm for multiplication in extensions of finite fields provides a bilinear complexity which is uniformly linear with respect to the degree of the extension. Recently, Randriambololona generalized the method, allowing asymmetry in the interpolation procedure and leading to new upper bounds on the bilinear complexity. We describe the effective algorithm of this asymmetric method, without derivated evaluation. Finally, we give examples with the finite ...

  13. Generalized fractional Schroedinger equation with space-time fractional derivatives

    Wang Shaowei; Xu Mingyu

    2007-01-01

    In this paper the generalized fractional Schroedinger equation with space and time fractional derivatives is constructed. The equation is solved for a free particle and for a square potential well by the method of integral transforms (Fourier transform and Laplace transform), and the solution can be expressed in terms of the Mittag-Leffler function. The Green function for a free particle is also presented in this paper. Finally, we discuss the relationship between the cases of the generalized fractional Schroedinger equation and the ones in standard quantum mechanics.

  14. Research on Geometric Positioning Algorithm of License Plate in Multidimensional Parameter Space

    Yinhua Huan

    2014-05-01

    Considering the features of commonly used vehicle license plate location methods, and in order to search a consistent location for reference images with license plate features in a multidimensional parameter space, a new geometric location algorithm is proposed. The geometric location algorithm mainly comprises model training and real-time search. It not only adapts to linear and non-linear gray-scale changes, but also supports changes of scale and angle. Numerical results show that, under the same test conditions and compared with mainstream locating software, the position deviation of the geometric positioning algorithm is less than 0.5 pixel without taking the multidimensional parameter space into account; taking the multidimensional parameter space into account, the position deviation is less than 1.0 pixel and the angle deviation is less than 1.0 degree. The algorithm is robust, simple and practical, and performs better than traditional methods.

  15. New Search Space Reduction Algorithm for Vertical Reference Trajectory Optimization

    Alejandro MURRIETA-MENDOZA

    2016-06-01

    Burning the fuel required to sustain a given flight releases pollutants such as carbon dioxide and nitrogen oxides, and the amount of fuel consumed is also a significant expense for airlines. It is desirable to reduce fuel consumption to reduce both pollution and flight costs. To increase fuel savings in a given flight, one option is to compute the most economical vertical reference trajectory (or flight plan). A deterministic algorithm was developed using a numerical aircraft performance model to determine the most economical vertical flight profile considering take-off weight, flight distance, step climbs and weather conditions. This algorithm is based on linear interpolations of the performance model using the Lagrange interpolation method. The algorithm downloads the latest available forecast from Environment Canada according to the departure date and flight coordinates, and calculates the optimal trajectory taking into account the effects of wind and temperature. Techniques to avoid unnecessary calculations are implemented to reduce the computation time. The costs of the reference trajectories proposed by the algorithm are compared with the costs of the reference trajectories proposed by a commercial flight management system, using the fuel consumption estimated by the FlightSim® simulator made by Presagis®.

  17. Clustering algorithms for Stokes space modulation format recognition

    Boada, Ricard; Borkowski, Robert; Tafur Monroy, Idelfonso

    2015-01-01

    influences the performance of the detection process, particularly at low signal-to-noise ratios. This paper reports on an extensive study of six different clustering algorithms: k-means, expectation maximization, density-based DBSCAN and OPTICS, spectral clustering and maximum likelihood clustering, used...

  18. Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces

    Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel

    The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...

  19. Genetic algorithms for adaptive real-time control in space systems

    Vanderzijp, J.; Choudry, A.

    1988-01-01

    Genetic algorithms used for learning, as one way to control the combinatorial explosion associated with the generation of new rules, are discussed. The genetic algorithm approach tends to work best when it can be applied to a domain-independent knowledge representation. Applications to real-time control in space systems are discussed.

  20. Space-bandwidth ratio as a means of choosing between Fresnel and other linear canonical transform algorithms.

    Healy, John J; Sheridan, John T

    2011-05-01

    The product of the spatial and spatial frequency extents of a wave field has proven useful in the analysis of the sampling requirements of numerical simulations. We propose that the ratio of these quantities is also illuminating. We have shown that the distance at which the so-called "direct method" becomes more efficient than the so-called "spectral method" for simulations of Fresnel transforms may be written in terms of this space-bandwidth ratio. We have proposed generalizations of these algorithms for numerical simulations of general ABCD systems and derived expressions for the "transition space-bandwidth ratio," above which the generalization of the spectral method is the more efficient algorithm and below which the generalization of the direct method is preferable.

  1. Spatiality of Derivations of Operator Algebras in Banach Spaces

    Quanyuan Chen

    2011-01-01

    Suppose that A is a transitive subalgebra of B(X) and its norm closure A¯ contains a nonzero minimal left ideal I. It is shown that if δ is a bounded reflexive transitive derivation from A into B(X), then δ is spatial and implemented uniquely; that is, there exists T ∈ B(X) such that δ(A) = TA − AT for each A ∈ A, and the implementation T of δ is unique up to an additive constant. This extends, in an algebraic direction, a result of E. Kissin that "if A¯ contains the ideal C(H) of all compact operators in B(H), then a bounded reflexive transitive derivation from A into B(H) is spatial and implemented uniquely," and provides an alternative proof of it. It is also shown that a bounded reflexive transitive derivation from A into B(X) is spatial and implemented uniquely if X is a reflexive Banach space and A¯ contains a nonzero minimal right ideal I.

  2. 3 x 3 free-space optical router based on crossbar network and its control algorithm

    Hou, Peipei; Sun, Jianfeng; Yu, Zhou; Lu, Wei; Wang, Lijuan; Liu, Liren

    2015-08-01

    A 3 × 3 free-space optical router, based on a crossbar network and comprising optical switches and polarizing beam splitters (PBS), is proposed in this paper. A control algorithm for the 3 × 3 free-space optical router is also developed to achieve rapid control without rearrangement. Experiments were designed to test the performance of the network based on the 3 × 3 free-space optical router and of the algorithm developed for it. The experimental results show that the interconnection network based on the 3 × 3 free-space optical router has low crosstalk and fast connection speed. Under the control of the developed algorithm, a non-blocking and truly free interconnection network is obtained with the proposed 3 × 3 free-space optical router.

  3. Identification of Fuzzy Inference Systems by Means of a Multiobjective Opposition-Based Space Search Algorithm

    Wei Huang

    2013-01-01

    We introduce a new category of fuzzy inference systems with the aid of a multiobjective opposition-based space search algorithm (MOSSA). The proposed MOSSA is essentially a multiobjective space search algorithm improved by opposition-based learning, which employs a so-called opposite-numbers mechanism to speed up the convergence of the optimization algorithm. In the identification of the fuzzy inference system, the MOSSA is exploited to carry out the parametric identification of the fuzzy model as well as to realize its structural identification. Experimental results demonstrate the effectiveness of the proposed fuzzy models.
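
    The opposition-based ingredient is easy to isolate. The sketch below shows the generic opposite-numbers step on a toy objective; the bounds, population size, and objective are placeholders, and the paper's MOSSA additionally handles multiple objectives and the fuzzy-model encoding.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):        # stand-in objective; the paper optimizes fuzzy-model criteria
        return np.sum(x**2, axis=-1)

    lo, hi, n, dim = -5.0, 5.0, 20, 4
    pop = rng.uniform(lo, hi, size=(n, dim))

    # Opposition-based learning: for each candidate x, also evaluate its
    # "opposite" lo + hi - x, then keep the better half of the union.
    opp = lo + hi - pop
    union = np.vstack([pop, opp])
    fitness = sphere(union)
    pop = union[np.argsort(fitness)[:n]]   # survivors seed the next search step
    ```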

  4. Sublinear Space Algorithms for the Longest Common Substring Problem

    Kociumaka, Tomasz; Starikovskaya, Tatiana; Vildhøj, Hjalte Wedel

    2014-01-01

    Given m documents of total length n, we consider the problem of finding a longest string common to at least d ≥ 2 of the documents. This problem is known as the longest common substring (LCS) problem and has a classic O(n) space and O(n) time solution (Weiner [FOCS'73], Hui [CPM'92]). However...

  5. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component of the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image, based on the assumption that the data follow a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long-exposure images of small and/or dim space objects from ground-based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image contains only background noise. The detection algorithm tests each pixel point of the Fourier-transformed images to determine whether an object is present based on the threshold criterion found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared with that of the algorithm currently used in space situational awareness applications to evaluate its value.
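
    A minimal sketch of a Fourier-domain test under a Gaussian-noise assumption is given below; it reduces to a normalized matched filter thresholded per pixel, which is a simplification of the paper's joint-distribution development, not a reproduction of it.

    ```python
    import numpy as np

    def fourier_lrt_detect(image, psf, sigma, threshold):
        """Per-pixel binary hypothesis test via matched filtering in the
        Fourier domain: H1 = PSF-blurred point object + Gaussian noise,
        H0 = noise only. psf must be image-shaped and centered; returns a
        boolean detection map."""
        Y = np.fft.fft2(image)
        S = np.fft.fft2(np.fft.ifftshift(psf))
        # Matched-filter statistic (correlation with the PSF), normalized so
        # that under H0 each pixel statistic is ~ N(0, 1).
        corr = np.real(np.fft.ifft2(np.conj(S) * Y))
        norm = sigma * np.sqrt(np.sum(psf**2))
        return corr / norm > threshold
    ```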

  6. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

    Pavlov, V. M.

    2017-07-01

    The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. The differentiation of Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of the Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the roots of order 2.

  7. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC algorithm for a space remote-sensing camera based on an FPGA was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC filtering and noise suppression module implements the filtering algorithm and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. The gray gradient, dot sharpness, edge contrast, and mid-to-high frequencies of the image were enhanced, while the SNR of the restored image decreased by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.

  8. A brief comparison between grid based real space algorithms and spectrum algorithms for electronic structure calculations

    Wang, Lin-Wang

    2006-01-01

    Quantum mechanical ab initio calculation constitutes the biggest portion of computer time in materials science and chemical science simulations. For a computer center like NERSC, to better serve these communities, it is very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide what future computer architecture will be most useful for these communities, and what should be emphasized in future supercomputer procurement. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirement, compared with the spectrum method where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N^3) scaling methods. These O(N) methods are usually based on localized orbitals in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectrum methods, for their technical pros and cons, and the possible future trends. For the real space method, the author focuses on the regular grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the ...
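
    The contrast between the two families can be made concrete on a single operator. The sketch below applies the Laplacian to a 1-D periodic wavefunction with a local finite-difference stencil and with the global-FFT spectral route; the grid and the test function are illustrative only.

    ```python
    import numpy as np

    # Apply the kinetic-energy operator (Laplacian) on a periodic 1-D grid in
    # two ways: real-space finite differences (local, communication-light) vs.
    # the plane-wave spectral route (needs a global FFT).
    n, L = 256, 10.0
    x = np.linspace(0, L, n, endpoint=False)
    h = L / n
    psi = np.exp(-(x - L/2)**2)

    lap_fd = (np.roll(psi, -1) - 2*psi + np.roll(psi, 1)) / h**2   # local stencil

    k = 2*np.pi*np.fft.fftfreq(n, d=h)
    lap_pw = np.real(np.fft.ifft(-(k**2) * np.fft.fft(psi)))       # global FFT

    print(np.max(np.abs(lap_fd - lap_pw)))   # FD error shrinks as O(h^2)
    ```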

  9. Algorithms

    polynomial) division have been found in Vedic Mathematics which date from much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...

  10. Algorithmic Mapping and Characterization of the Drug-Induced Phenotypic-Response Space of Parasites Causing Schistosomiasis.

    Singh, Rahul; Beasley, Rachel; Long, Thavy; Caffrey, Conor R

    2018-01-01

    Neglected tropical diseases, especially those caused by helminths, constitute some of the most common infections of the world's poorest people. Amongst these, schistosomiasis (bilharzia or 'snail fever'), caused by blood flukes of the genus Schistosoma, ranks second only to malaria in terms of human impact: two hundred million people are infected and close to 800 million are at risk of infection. Drug screening against helminths poses unique challenges: the parasite cannot be cloned and is difficult to target using gene knockouts or RNAi. Consequently, both lead identification and validation involve phenotypic screening, where parasites are exposed to compounds whose effects are determined through the analysis of the ensuing phenotypic responses. The efficacy of leads thus identified derives from one or more or even unknown molecular mechanisms of action. The two most immediate and significant challenges that confront the state-of-the-art in this area are: the development of automated and quantitative phenotypic screening techniques and the mapping and quantitative characterization of the totality of phenotypic responses of the parasite. In this paper, we investigate and propose solutions for the latter problem in terms of the following: (1) mathematical formulation and algorithms that allow rigorous representation of the phenotypic response space of the parasite, (2) application of graph-theoretic and network analysis techniques for quantitative modeling and characterization of the phenotypic space, and (3) application of the aforementioned methodology to analyze the phenotypic space of S. mansoni - one of the etiological agents of schistosomiasis, induced by compounds that target its polo-like kinase 1 (PLK 1) gene - a recently validated drug target. In our approach, first, bio-image analysis algorithms are used to quantify the phenotypic responses of different drugs. Next, these responses are linearly mapped into a low-dimensional space using Principle ...

  11. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    Fischer, Bernd; Knuth, Kevin; Hajian, Arsen; Schumann, Johann

    2004-01-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in the form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable buildin...

  12. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. With the proposed algorithm, the SFC method finds an initial and feasible solution very fast, and the GA is used to improve the initial solution. Thereafter, experimental software was developed and a large number of experimental computations on Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.
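
    A minimal sketch of the SFC seeding step follows, using a Z-order (Morton) curve as a stand-in for the paper's fractal SFC: customers are sorted along the curve and cut into routes at the capacity limit, producing the fast initial solution that the GA then improves. The coordinates, demands, and capacity below are synthetic.

    ```python
    import numpy as np

    def morton_key(x, y, bits=16):
        """Interleave the bits of quantized (x, y) coordinates: a Z-order
        (Morton) space-filling curve key; nearby keys are nearby in space."""
        xi, yi = int(x * (2**bits - 1)), int(y * (2**bits - 1))
        key = 0
        for b in range(bits):
            key |= ((xi >> b) & 1) << (2*b) | ((yi >> b) & 1) << (2*b + 1)
        return key

    rng = np.random.default_rng(1)
    customers = rng.uniform(size=(30, 2))     # coordinates scaled to [0, 1)
    demand = rng.integers(1, 5, size=30)
    order = sorted(range(30), key=lambda i: morton_key(*customers[i]))

    # Sweep along the curve, cutting a new route whenever capacity is hit.
    routes, route, load = [], [], 0
    for i in order:
        if load + demand[i] > 15:             # vehicle capacity (illustrative)
            routes.append(route); route, load = [], 0
        route.append(i); load += demand[i]
    routes.append(route)
    ```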

  13. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Hui-Ping Chen

    2016-11-01

    The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
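
    The core accessibility test behind STP construction is compact: a link is inside the prism iff the best origin-to-link time, plus the link time, plus the best link-to-destination time fits the time budget. The sketch below uses plain Dijkstra for the two searches; NTP-A*'s A* and branch-and-bound pruning is not reproduced here.

    ```python
    import heapq
    from collections import defaultdict

    def dijkstra(adj, src):
        dist = defaultdict(lambda: float('inf')); dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    def stp_links(edges, origin, dest, budget):
        """Return the links (u, v) inside the network-time prism: the best
        origin->u time + link time + best v->dest time must fit the budget."""
        fwd, rev = defaultdict(list), defaultdict(list)
        for u, v, w in edges:
            fwd[u].append((v, w)); rev[v].append((u, w))
        t_o, t_d = dijkstra(fwd, origin), dijkstra(rev, dest)
        return [(u, v) for u, v, w in edges if t_o[u] + w + t_d[v] <= budget]
    ```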

  14. Derivation of Delaware Bay tidal parameters from space shuttle photography

    Zheng, Quanan; Yan, Xiaohai; Klemas, V.

    1993-01-01

    The tide-related parameters of the Delaware Bay are derived from space shuttle time-series photographs. The water areas in the bay are measured from interpretation maps of the photographs with a CALCOMP 9100 digitizer and the ERDAS Image Processing System. The corresponding tidal levels are calculated using the exposure time annotated on the photographs. From these data, an approximate function relating the water area to the tidal level at a reference point is determined. Based on the function, the water areas of the Delaware Bay at mean high water (MHW) and mean low water (MLW), below 0 m, and for the tidal zone are inferred. With the MHW and MLW areas and the mean tidal range, the authors calculate the tidal influx of the Delaware Bay, which is 2.76 × 10⁹ m³. Furthermore, the velocity of flood tide at the bay mouth is determined using the tidal flux and an integral of the velocity distribution function at the cross section between Cape Henlopen and Cape May. The result is 132 cm/s, which compares well with the data on tidal current charts
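
    The influx computation admits a compact worked form; the trapezoidal approximation below is our reading of the procedure, since the abstract states only the inputs and the result.

    ```latex
    % Tidal influx = water volume between the MLW and MHW surfaces,
    % approximated from the two areas and the mean tidal range R:
    V \;=\; \int_{h_{\mathrm{MLW}}}^{h_{\mathrm{MHW}}} A(h)\,\mathrm{d}h
      \;\approx\; \frac{A_{\mathrm{MHW}} + A_{\mathrm{MLW}}}{2}\, R
      \;\approx\; 2.76 \times 10^{9}\ \mathrm{m}^{3}.
    ```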

  15. Structural alphabets derived from attractors in conformational space

    Kleinjung Jens

    2010-02-01

    Background: The hierarchical and partially redundant nature of protein structures justifies the definition of frequently occurring conformations of short fragments as 'states'. Collections of selected representatives for these states define Structural Alphabets, describing the most typical local conformations within protein structures. These alphabets form a bridge between the string-oriented methods of sequence analysis and the coordinate-oriented methods of protein structure analysis. Results: A Structural Alphabet has been derived by clustering all four-residue fragments of a high-resolution subset of the protein data bank and extracting the high-density states as representative conformational states. Each fragment is uniquely defined by a set of three independent angles corresponding to its degrees of freedom, capturing in simple and intuitive terms the properties of the conformational space. The fragments of the Structural Alphabet are equivalent to the conformational attractors and therefore yield a most informative encoding of proteins. Proteins can be reconstructed within the experimental uncertainty in structure determination, and ensembles of structures can be encoded with accuracy and robustness. Conclusions: The density-based Structural Alphabet provides a novel tool to describe local conformations, and it is specifically suitable for application in studies of protein dynamics.

  16. Earth rotation excitation mechanisms derived from geodetic space observations

    Göttl, F.; Schmidt, M.

    2009-04-01

    Earth rotation variations are caused by mass displacements and motions in the subsystems of the Earth. Via the Gravity Recovery and Climate Experiment (GRACE) satellite mission, gravity field variations can be identified which are caused by mass redistribution in the Earth system. Therefore, time variable gravity field models (GFZ RL04, CSR RL04, JPL RL04, ITG-Grace03, GRGS, ...) can be used to derive different impacts on Earth rotation. Furthermore, satellite altimetry provides accurate information on sea level anomalies (AVISO, DGFI), which are caused by mass and volume changes of seawater. Since Earth rotation is solely affected by mass variations and motions, the volume (steric) effect has to be removed from the altimetric observations in order to infer oceanic contributions to Earth rotation variations. The steric effect is therefore estimated from physical ocean parameters such as temperature and salinity changes in the oceans (WOA05, Ishii). In this study, specific individual geophysical contributions to Earth rotation variations are identified by means of a multitude of accurate geodetic space observations in combination with a realistic error propagation. It will be shown that the results for polar motion excitations can be improved by adjustment of altimetric and/or gravimetric solutions.

  17. Metropolis-Hastings Algorithms in Function Space for Bayesian Inverse Problems

    Ernst, Oliver

    2015-01-07

    We consider Markov Chain Monte Carlo methods adapted to a Hilbert space setting. Such algorithms occur in Bayesian inverse problems where the solution is a probability measure on a function space according to which one would like to integrate or sample. We focus on Metropolis-Hastings algorithms and, in particular, we introduce and analyze a generalization of the existing pCN proposal. This new proposal makes it possible to exploit the geometry or anisotropy of the target measure, which in turn might improve the statistical efficiency of the corresponding MCMC method. Numerical experiments for a real-world problem confirm the improvement.
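
    The standard pCN proposal that the paper generalizes is short enough to sketch. In the code below, the toy prior and negative log-likelihood are placeholders; the paper's contribution, a geometry-aware generalization of this proposal, is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pcn_mcmc(phi, sample_prior, n_steps, beta=0.2):
        """Preconditioned Crank-Nicolson Metropolis-Hastings. The prior is
        N(0, C) and sample_prior() draws xi ~ N(0, C); phi is the negative
        log-likelihood. The pCN proposal preserves the prior, so the accept
        ratio involves only phi -- robust to discretization dimension."""
        x = sample_prior()
        for _ in range(n_steps):
            xi = sample_prior()
            y = np.sqrt(1.0 - beta**2) * x + beta * xi    # pCN proposal
            if np.log(rng.uniform()) < phi(x) - phi(y):   # accept w.p. min(1, e^{phi(x)-phi(y)})
                x = y
            yield x

    # Toy use: white Gaussian prior on a 100-point grid, one noisy observation.
    phi = lambda u: 0.5 * ((u.mean() - 1.0) / 0.1)**2
    samples = list(pcn_mcmc(phi, lambda: rng.standard_normal(100), 2000))
    ```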

  18. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and to maintain compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  19. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of these systems, such as their wide dynamic range of numeric values, make fixed-point algorithms inadequate. Yet because the generic chips available for the treatment of floating point data are, in general, not qualified to operate in space environments, and the use of an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in this type of device, it is necessary to find a viable alternative. For these reasons, a VHDL Floating Point Module is presented in this paper. This proposal allows the design and execution of floating point algorithms with acceptable occupancy, suitable for implementation in FPGAs/ASICs qualified for space environments.

  1. PTM Along Track Algorithm to Maintain Spacing During Same Direction Pair-Wise Trajectory Management Operations

    Carreno, Victor A.

    2015-01-01

    Pair-wise Trajectory Management (PTM) is a cockpit-based, delegated-responsibility separation standard. When an air traffic service provider gives a PTM clearance to an aircraft and the flight crew accepts the clearance, the flight crew will maintain spacing and separation from a designated aircraft. A PTM along-track algorithm receives state information from the designated aircraft and from the ownship to produce speed guidance for the flight crew to maintain spacing and separation.

  2. Expert-guided evolutionary algorithm for layout design of complex space stations

    Qian, Zhiqin; Bi, Zhuming; Cao, Qun; Ju, Weiguo; Teng, Hongfei; Zheng, Yang; Zheng, Siyu

    2017-08-01

    The layout of a space station should be designed in such a way that different equipment and instruments are placed for the station as a whole to achieve the best overall performance. The station layout design is a typical nondeterministic polynomial problem. In particular, how to manage the design complexity to achieve an acceptable solution within a reasonable timeframe poses a great challenge. In this article, a new evolutionary algorithm is proposed to meet this challenge, called the expert-guided evolutionary algorithm with tree-like structure decomposition (EGEA-TSD). Two innovations in EGEA-TSD are: (i) to deal with the design complexity, the entire design space is divided into subspaces with a tree-like structure, which reduces the computation and facilitates experts' involvement in the solving process; (ii) a human-intervention interface is developed to allow experts' involvement in avoiding local optima and accelerating convergence. To validate the proposed algorithm, the layout design of one space station is formulated as a multi-disciplinary design problem, the developed algorithm is programmed and executed, and the result is compared with those from two other algorithms, illustrating the superior performance of the proposed EGEA-TSD.

  3. A Hybrid Genetic-Algorithm Space-Mapping Tool for the Optimization of Antennas

    Pantoja, Mario Fernández; Meincke, Peter; Bretones, Amelia Rubio

    2007-01-01

    A hybrid global-local optimization technique for the design of antennas is presented. It consists of the subsequent application of a genetic algorithm (GA) that employs coarse models in the simulations and a space mapping (SM) that refines the solution found in the previous stage. The technique...

  4. A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance

    Na Dong

    2016-01-01

    An evolutionary approach to parking space guidance based upon a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics are combined into the position updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid being trapped in local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO) problem of parking lots, which is an important issue in the management systems of large-scale parking lots. It is used to find the optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicability of this novel optimization algorithm for parking space guidance have been verified through the application results.
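
    One common way of folding a chaotic map into PSO is sketched below, with the logistic map supplying the stochastic coefficients in the velocity update; the paper's exact chaotic coupling and the encoding of the parking route network may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def cpso(cost, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
        """PSO in which a logistic chaotic map replaces the usual uniform
        random coefficients (one common CPSO variant)."""
        x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
        z = rng.uniform(0.1, 0.9, (n, dim))       # chaotic state in (0, 1)
        pbest = x.copy()
        pval = np.apply_along_axis(cost, 1, x)
        g = pbest[np.argmin(pval)]
        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)               # logistic map, r = 4
            v = w*v + c1*z*(pbest - x) + c2*(1 - z)*(g - x)
            x = x + v
            f = np.apply_along_axis(cost, 1, x)
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[np.argmin(pval)]
        return g

    print(cpso(lambda p: np.sum(p**2), dim=3))    # toy objective
    ```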

  5. New algorithms derived from the synthesis method. Application to diffusion problems

    Rouzaud, Philippe.

    1976-05-01

    Two algorithms to compute the neutron distribution in a nuclear reactor are presented. These algorithms, the iterative synthesis method (MSI) and the synthesis method by deflation (MSD), are derived from the classical synthesis method (MSC). They retain the most important advantages of MSC (computing time and memory storage reduced with regard to finite difference methods) and avoid its drawbacks: choice of trial functions; choice of weighting functions; choice of the number of terms (for MSD only). Extensive numerical checks of the three methods (MSC, MSI, MSD) were carried out on two fast reactor configurations described in plane geometry (X,Y). Monoenergetic and multigroup theories were successively used. The use of MSI and MSD allows a significant reduction of the discrepancies between the finite difference method and the synthesis method for the reactivity values and the flux distribution

  6. Algorithms

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  7. Certain integrable system on a space associated with a quantum search algorithm

    Uwano, Y.; Hino, H.; Ishiwatari, Y.

    2007-01-01

    On thinking up a Grover-type quantum search algorithm for an ordered tuple of multiqubit states, a gradient system associated with the negative von Neumann entropy is studied on the space of regular relative configurations of multiqubit states (SR²CMQ). The SR²CMQ emerges, through a geometric procedure, from the space of ordered tuples of multiqubit states for the quantum search. The aim of this paper is to give a brief report on the integrability of the gradient dynamical system together with the quantum information geometry of the underlying space, SR²CMQ, of that system

  8. Collaboration space division in collaborative product development based on a genetic algorithm

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    The advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of Collaborative Product Development, and it solves the problem with a certain foreseeability. The development group activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools in optimizing the collaboration space. The MDL principle is used in setting up an objective function, and the GA is used as a methodology. The algorithm encodes spatial information as a chromosome in binary. After repeated crossover, mutation, selection and multiplication, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can calculate the members in sub-spaces and individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.

  9. Use of GLOBE Observations to Derive a Landsat 8 Split Window Algorithm for Urban Heat Island

    Fagerstrom, L.; Czajkowski, K. P.

    2017-12-01

    Surface temperature has been studied to investigate the warming of urban climates, also known as urban heat islands, which can impact urban planning, public health, pollution levels, and energy consumption. However, the full potential of remotely sensed images is limited when analyzing land surface temperature due to the daunting task of correcting for atmospheric effects. Landsat 8 carries two thermal infrared bands. With two bands in the thermal infrared region, a split window algorithm (SWA) can be applied to correct for atmospheric effects. This project used in situ surface temperature measurements from NASA's ground observation program, the Global Learning and Observations to Benefit the Environment (GLOBE), to derive the correction coefficients for use in the SWA. The GLOBE database provided land surface temperature data that coincided with Landsat 8 overpasses. The land surface temperature derived from the Landsat 8 SWA can be used to analyze the urban heat island effect.
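
    The regression step is straightforward to sketch: a generic split-window form is fitted to in situ matchups by least squares. The brightness temperatures and ground truths below are synthetic placeholders rather than GLOBE data, and the functional form is a common SWA template, not necessarily the authors' exact one.

    ```python
    import numpy as np

    # Generic split-window form: LST = c0 + c1*T10 + c2*(T10 - T11), with the
    # coefficients regressed against in situ surface temperatures.
    T10 = np.array([295.1, 301.4, 288.7, 310.2])          # band 10 BT (K)
    T11 = np.array([293.8, 299.6, 287.9, 307.9])          # band 11 BT (K)
    lst_insitu = np.array([296.0, 302.8, 289.1, 312.5])   # ground truth (K)

    A = np.column_stack([np.ones_like(T10), T10, T10 - T11])
    coef, *_ = np.linalg.lstsq(A, lst_insitu, rcond=None)
    lst_retrieved = A @ coef   # apply the fitted SWA to new scenes the same way
    ```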

  10. An optimal estimation algorithm to derive Ice and Ocean parameters from AMSR Microwave radiometer observations

    Pedersen, Leif Toudal; Tonboe, Rasmus T.; Høyer, Jacob

    Global multispectral microwave radiometer measurements have been available for several decades. However, most current sea ice concentration algorithms still take advantage of only a very limited subset of the available channels. Here we present a method that allows utilization of all available channels as well as the combination of data from multiple sources such as microwave radiometry, scatterometry and numerical weather prediction. Optimal estimation is data assimilation without a numerical model, for retrieving physical parameters from remote sensing using a multitude of available information. The methodology is observation driven, and model innovation is limited to the translation between observation space and physical parameter space. Over open water we use a semi-empirical radiative transfer model developed by Meissner & Wentz that estimates the multispectral AMSR brightness temperatures, i...

  11. Combinatorial Algorithms for Computing Column Space Bases ThatHave Sparse Inverses

    Pinar, Ali; Chow, Edmond; Pothen, Alex

    2005-03-18

    This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full rank matrix, A. Such a null-space is suitable for solving many saddle point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide and conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.

  12. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  13. New algorithms for optical observations of space debris with the TAROT telescopes

    Laas-Bourez, Myrtille; Boer, Michel; Blanchet, Gwendoline; Ducrotte, Etienne; Klotz, Alain

    To preserve the space environment for the future, and to make space expeditions safe, we have to improve our knowledge of the debris population in the vicinity of the geostationary orbit. Since 2004, CNES has observed satellites in the geostationary orbit with a network of robotic, ground-based, fully automated telescopes, one located in France and the second at ESO La Silla, Chile. This system performs real-time processing, and its wide field of view is useful for detection, systematic survey and tracking of both catalogued and uncatalogued objects. We are implementing new, more efficient image processing algorithms. A new source extraction algorithm based on mathematical morphology and a "matching-pursuit" algorithm allow the measurements of the same object on successive images to be correlated and give an almost nil false-detection rate. These new methods allow us to detect objects on the geostationary belt and on other orbits such as MEO or GTO. We also improved the timing precision of individual images (a few milliseconds) and the precision of the position restitution with respect to the celestial frame. Our "delay card" provides an extremely precise date for the objects in a picture, and our new algorithm accurately extracts stars from the background for calibration. Thanks to all these improvements, the overall efficiency and quality of the survey of the geostationary orbit have drastically improved, and we can now detect satellites and debris in different orbits such as GTO. In this paper we present our new methods and the work we have done on the detection of space debris: image dating with the delay card, the accuracy of the astronomical calibration, and the robustness of space debris extraction with different algorithms. The results obtained on the sky are also shown.

  14. State-Space Equations and the First-Phase Algorithm for Signal Control of Single Intersections

    LI Jinyuan; PAN Xin; WANG Xiqin

    2007-01-01

    State-space equations were applied in this paper to formulate the queuing and delay of traffic at a single intersection. The signal control of a single intersection was then modeled as a discrete-time optimal control problem, with consideration of the constraints of stream conflicts, saturation flow rate, minimum green time, and maximum green time. The problem cannot be solved directly due to the nonlinear constraints. However, the results of qualitative analysis were used to develop a first-phase signal control algorithm. Simulation results show that the algorithm substantially reduces the total delay compared to fixed-time control.

  15. Strong Convergence of Hybrid Algorithm for Asymptotically Nonexpansive Mappings in Hilbert Spaces

    Juguo Su

    2012-01-01

    The hybrid algorithms for constructing fixed points of nonlinear mappings have been studied extensively in recent years. The advantage of these methods is that one can prove strong convergence theorems, while the traditional iteration methods yield only weak convergence. In this paper, we propose two types of hybrid algorithm to find a common fixed point of a finite family of asymptotically nonexpansive mappings in Hilbert spaces. One is a cyclic Mann iteration scheme, and the other is a cyclic Halpern iteration scheme. We prove strong convergence theorems for both iteration schemes.

  16. Novel Rock Detection Intelligence for Space Exploration Based on Non-Symbolic Algorithms and Concepts

    Yildirim, Sule; Beachell, Ronald L.; Veflingstad, Henning

    2007-01-01

    Future space exploration can utilize artificial intelligence as an integral part of next generation space rover technology to make the rovers more autonomous in performing mission objectives. The main advantage of the increased autonomy through a higher degree of intelligence is that it allows for greater utilization of rover resources by reducing the frequency of time-consuming communications between rover and Earth. In this paper, we propose a space exploration application of our research on a non-symbolic algorithm and concepts model. This model is based on one of the most recent approaches of cognitive science and artificial intelligence research, the parallel distributed processing approach. We use the Mars rovers, Spirit and Opportunity, as a starting point for proposing what rovers in the future could do if the presented model of non-symbolic algorithms and concepts were embedded in a future space rover. The chosen space exploration application for this paper, novel rock detection, is only one of many potential space exploration applications which can be optimized through the use of artificial intelligence technology compared to existing approaches, by reducing the frequency of rover-Earth communications and by collecting and transmitting only data that is distinctive or novel.

  17. Geostationary Communications Satellites as Sensors for the Space Weather Environment: Telemetry Event Identification Algorithms

    Carlton, A.; Cahoy, K.

    2015-12-01

    Reliability of geostationary communication satellites (GEO ComSats) is critical to many industries worldwide. The space radiation environment poses a significant threat, and manufacturers and operators expend considerable effort to maintain reliability for users. Knowledge of the space radiation environment at the orbital location of a satellite is of critical importance for diagnosing and resolving issues resulting from space weather, for optimizing cost and reliability, and for space situational awareness. For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet these data are rarely mined for scientific purposes. The goal of this work is to acquire and analyze archived data from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms, collectively called SEER (System Event Evaluation Routine), to statistically analyze power amplifier current and temperature telemetry by identifying deviations from nominal operations or other events and trends of interest. This paper focuses on our work in progress, which currently includes methods for the detection of jumps ("spikes", outliers) and step changes (changes in the local mean) in the telemetry. We then examine available space weather data from the NOAA GOES satellites, the NOAA-computed Kp index, and sunspot numbers to see what role, if any, space weather might have played. By combining the results of the algorithm for many components, the spacecraft can be used as a "sensor" for the space radiation environment: similar events occurring at one time across many component telemetry streams may be indicative of a space radiation event or a system-wide health and safety concern. Using SEER on representative datasets of telemetry from Inmarsat and Intelsat, we find events that occur across all or many of ...
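
    The two detectors currently in SEER can be approximated with textbook statistics, as in the sketch below: a robust MAD threshold for jumps and a two-window mean comparison for step changes. The window sizes and thresholds are illustrative assumptions; SEER's actual tests may differ.

    ```python
    import numpy as np

    def find_spikes(x, k=5.0):
        """Flag jumps: points more than k robust sigmas from the median
        (MAD scaled to be consistent with the Gaussian sigma)."""
        med = np.median(x)
        mad = 1.4826 * np.median(np.abs(x - med)) + 1e-12
        return np.where(np.abs(x - med) > k * mad)[0]

    def find_steps(x, w=50, thresh=3.0):
        """Flag step changes: compare means of adjacent windows against the
        pooled standard error (a simple two-sample z statistic)."""
        idx = []
        for i in range(w, len(x) - w):
            a, b = x[i - w:i], x[i:i + w]
            se = np.sqrt(a.var() / w + b.var() / w) + 1e-12
            if abs(b.mean() - a.mean()) / se > thresh:
                idx.append(i)
        return idx
    ```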

  18. Space-time spectral collocation algorithm for solving time-fractional Tricomi-type equations

    Abdelkawy M.A.

    2016-01-01

    We introduce a new numerical algorithm for solving one-dimensional time-fractional Tricomi-type equations (T-FTTEs). We use the shifted Jacobi polynomials as basis functions, and the fractional derivatives are evaluated using the Caputo definition. The shifted Jacobi Gauss-Lobatto algorithm is used for the spatial discretization, while the shifted Jacobi Gauss-Radau algorithm is applied for the temporal approximation. Substituting these approximations into the problem leads to a system of algebraic equations that greatly simplifies it. The proposed algorithm is successfully extended to solve the two-dimensional T-FTTEs. Extensive numerical tests illustrate the capability and high accuracy of the proposed methodologies.

  19. Distributed consensus for metamorphic systems using a gossip algorithm for CAT(0) metric spaces

    Bellachehab, Anass; Jakubowicz, Jérémie

    2015-01-01

    We present an application of distributed consensus algorithms to metamorphic systems. A metamorphic system is a set of identical units that can self-assemble to form a rigid structure. For instance, one can think of a robotic arm composed of multiple links connected by joints. The system can change its shape in order to adapt to different environments via reconfiguration of its constituting units. We assume in this work that several metamorphic systems form a network: two systems are connected whenever they are able to communicate with each other. The aim of this paper is to propose a distributed algorithm that synchronizes all the systems in the network. Synchronizing means that all the systems should end up having the same configuration. This aim is achieved in two steps: (i) we cast the problem as a consensus problem on a metric space and (ii) we use a recent distributed consensus algorithm that only makes use of metric notions.
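
    The gossip step itself is tiny, as the sketch below shows for a four-system ring: a randomly chosen connected pair repeatedly moves to the midpoint of their two configurations. Euclidean coordinates stand in here for the CAT(0) geodesic midpoint, which is the mathematically nontrivial part of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Randomized gossip: at each tick a random connected pair moves to the
    # midpoint of their configurations (the geodesic midpoint in CAT(0)).
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # communication graph (ring)
    config = rng.uniform(size=(4, 2))              # each system's configuration

    for _ in range(500):
        i, j = edges[rng.integers(len(edges))]
        mid = 0.5 * (config[i] + config[j])        # midpoint step
        config[i] = mid
        config[j] = mid

    print(config)   # all rows converge to one common configuration
    ```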

  20. Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression

    Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.

    2013-10-01

    Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and a patient during induction of controlled hypothermia. Main result. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
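
    A minimal filter in the spirit of the BSP model can be sketched with a Laplace approximation at each step, as below; the paper instead fits the model with an approximate expectation maximization algorithm, and the noise variance and bin count here are placeholders.

    ```python
    import numpy as np

    def bsp_filter(counts, n_bins, sigma_eps=0.05):
        """Approximate forward filter for the burst suppression probability:
        the state x_k is a Gaussian random walk, the observation c_k is a
        binomial count of suppressed samples out of n_bins, with
        p_k = logistic(x_k). Each update is a Laplace (mode + curvature)
        approximation of the posterior."""
        x, s2 = 0.0, 1.0
        out = []
        for c in counts:
            x_pred, s2_pred = x, s2 + sigma_eps**2       # random-walk predict
            x = x_pred
            for _ in range(20):                          # Newton for posterior mode
                p = 1.0 / (1.0 + np.exp(-x))
                grad = -(x - x_pred) / s2_pred + c - n_bins * p
                hess = -1.0 / s2_pred - n_bins * p * (1.0 - p)
                x -= grad / hess
            s2 = -1.0 / hess                             # posterior variance
            out.append(1.0 / (1.0 + np.exp(-x)))         # instantaneous BSP
        return np.array(out)
    ```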

  1. Higher-derivative superparticle in AdS3 space

    Kozyrev, Nikolay; Krivonos, Sergey; Lechtenfeld, Olaf

    2016-03-01

    Employing the coset approach we construct component actions for a superparticle moving in AdS3 with N=(2,0), D=3 supersymmetry partially broken to N=2, d=1. These actions may contain higher time-derivative terms, which are chosen to possess the same (super)symmetries as the free superparticle. In terms of the nonlinear-realization superfields, the component actions always take a simpler form when written in terms of covariant Cartan forms. We also consider in detail the reduction to the nonrelativistic case and construct the corresponding action of a Newton-Hooke superparticle and its higher-derivative generalizations. The structure of these higher time-derivative generalizations is completely fixed by invariance under the supersymmetric Newton-Hooke algebra extended by two central charges.

  2. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)]; Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)]

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions such as the quadratic deviation of dose to the planning target volume (PTV), the integral dose to organs-at-risk (OARs), and dose-threshold and dose-volume constraints for OARs were studied. Combinations of the cost functions were also considered. Simple cost function terms such as the quadratic PTV dose term and the integral OAR dose term are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint cost function terms are able to produce local minima in the example case studied. (author)
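
    The technique itself is easy to reproduce on a toy cost, as sketched below: run a downhill simplex search from many random starts and histogram the final cost values; more than one populated cluster in the histogram signals local minima. The two-basin cost function is illustrative only, not a clinical beam-weight model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    def cost(w):
        # Toy stand-in with two basins (near w = 1 and w = -1 vectors).
        return np.sum((w - 1.0)**2) * np.sum((w + 1.0)**2) / 10.0

    # Configuration space analysis: repeated downhill-simplex runs from
    # random starting beam weights, then a frequency distribution of the
    # final cost values.
    finals = [minimize(cost, rng.uniform(-2, 2, 3), method="Nelder-Mead").fun
              for _ in range(200)]
    hist, edges = np.histogram(finals, bins=30)
    print(hist)   # more than one populated cluster => local minima exist
    ```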

  3. New approach for measuring 3D space by using Advanced SURF Algorithm

    Youm, Minkyo; Min, Byungil; Suh, Kyungsuk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]; Lee, Backgeun [Sungkyunkwan Univ., Suwon (Korea, Republic of)]

    2013-05-15

    Nuclear disasters, compared to natural disasters, create more extreme conditions for analysis and evaluation. In this paper, measuring and modeling 3D space from simple pictures was studied for the case of a small sand dune. The suggested method can be used for the acquisition of spatial information by a robot at a disaster area. The resulting data are helpful for identifying the damaged parts, the degree of damage, and the determination of recovery sequences. In this study we improve a computer vision algorithm for 3D geospatial information measurement and confirm it by test. First, we obtain a noticeable improvement in the 3D geospatial information result using the SURF algorithm and photogrammetric surveying. Second, we confirm not only a decrease in algorithm running time, but also an increase in matching points, through epipolar line filtering. In this study, we extract a 3D model with an open-source algorithm and delete mismatched points by the filtering method. However, owing to a characteristic of the SURF algorithm, it cannot find matching points if the structure does not have strong features. Further study is therefore needed on finding feature points in structures without strong features.

  4. Lost-in-Space Star Identification Using Planar Triangle Principal Component Analysis Algorithm

    Fuqiang Zhou

    2015-01-01

    It is a challenging task for a star sensor to implement star identification and determine the attitude of a spacecraft in the lost-in-space mode. Several algorithms based on the triangle method have been proposed for star identification in this mode. However, these methods suffer from long run times and large guide star catalog memory sizes, and their star identification performance requires improvement. To address these problems, a star identification algorithm using planar triangle principal component analysis is presented here. A star pattern is generated based on the planar triangle created by stars within the field of view of a star sensor and the projection of the triangle. Since a projection can determine an index for a unique triangle in the catalog, the adoption of the k-vector range search technique makes this algorithm very fast. In addition, a sharing-star validation method is constructed to verify the identification results. Simulation results show that the proposed algorithm is more robust than the planar triangle and P-vector algorithms under the same conditions.

  5. Derivatives, forms and vector fields on the κ-deformed Euclidean space

    Dimitrijevic, Marija; Moeller, Lutz; Tsouchnika, Efrossini

    2004-01-01

    The model of κ-deformed space is an interesting example of a noncommutative space, since it allows a deformed symmetry. In this paper, we present new results concerning different sets of derivatives on the coordinate algebra of κ-deformed Euclidean space. We introduce a differential calculus with two interesting sets of one-forms and higher-order forms. The transformation law of vector fields is constructed in accordance with the transformation behaviour of derivatives. The crucial property of the different derivatives, forms and vector fields is that in an n-dimensional spacetime there are always n of them. This is the key difference with respect to conventional approaches, in which the differential calculus is (n + 1)-dimensional. This work shows that derivative-valued quantities such as derivative-valued vector fields appear in a generic way on noncommutative spaces

  6. Algorithms

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  7. Simple experimentally derived algorithm for computer calculated dose rates associated with ¹³⁷Cs gynecologic insertions

    Wrede, D E; Dawalibi, H [King Faisal Specialist Hospital and Research Centre, Department of Medical Physics. Riyadh (Saudi Arabia)

    1980-01-01

    A simple mathematical algorithm is derived from experimental data for dose rates from ¹³⁷Cs sources in a finite tissue equivalent medium corresponding to the female pelvis. An analytical expression for a point source of ¹³⁷Cs along with a simple numerical integration routine allows for rapid as well as accurate dose rate calculations at points of interest for gynecologic insertions. When compared with theoretical models assuming an infinite unit density medium, the measured dose rates are found to be systematically lower at distances away from a single source; 5 per cent at 2 cm and 10 per cent at 7 cm along the transverse axis. Allowance in the program for printout of dose rates from individual sources to a given point and the feature of source strength modification allows for optimization in terms of increasing the difference in dose rate between reference treatment points and sensitive structures such as the bladder, rectum and colon.

  9. Assessing the accuracy of algorithm-derived cardiorespiratory fitness in surgical patients: a prospective cohort study.

    Hammal, Fadi; Quaife, Tanis; Purich, Kieran; Haennel, Robert; Gragasin, Ferrante S; Martin-Feeney, Daniella San; Finegan, Barry A

    2017-04-01

    To determine if a non-exercise algorithm-derived assessment of cardiorespiratory fitness (CRFA) accurately predicted estimated values obtained using a six-minute walk test (CRF6MWD) and the Duke Activity Status Index (CRFDASI). Following research ethics board approval, an observational cohort study was conducted in selected, consenting patients undergoing elective surgery. Participants completed questionnaires assessing their self-reported exercise capacity. Their height, weight, waist circumference, and vital signs were measured. A six-minute walk test was performed twice with a 45-min rest interval between tests. The correlation between CRFA and both CRF6MWD and CRFDASI was determined. Two hundred forty-two participants were included. Mean age was 62 (range 45-88 yr); 150 (62%) were male, 87 (36%) self-reported walking or jogging > 16 km per week, and 49 (20%) were current smokers. The CRFA and CRF6MWD were highly correlated (Pearson r = 0.878; P < 0.001). In participants who walked more than 427 m in the six-minute walk test, CRFA, CRF6MWD, and CRFDASI were equivalent. A non-exercise algorithm can estimate cardiorespiratory fitness in patients presenting for elective surgery. The variables required to compute CRFA can be obtained in a clinic setting without the need to engage in formal exercise testing. Further evaluation of CRFA as a predictor of long-term outcome in patients is warranted.

  10. A potential theory approach to an algorithm of conceptual space partitioning

    Roman Urban

    2017-12-01

    This paper proposes a new classification algorithm for the partitioning of a conceptual space. All the algorithms used until now have mostly been based on the theory of Voronoi diagrams. This paper proposes an approach based on potential theory, with the criteria for measuring similarities between objects in the conceptual space based on the Newtonian potential function. The notion of a fuzzy prototype, which generalizes the previous definition of a prototype, is introduced. Furthermore, the necessary conditions that a natural concept must meet are discussed. Instead of convexity, as proposed by Gärdenfors, the notion of geodesically convex sets is used: if a concept corresponds to a set which is geodesically convex, it is a natural concept. This definition applies, for example, if the conceptual space is a Euclidean space. As a by-product of the construction of the algorithm, an extension of the conceptual space to d-dimensional Riemannian manifolds is obtained.

  11. Proportional–Integral–Derivative (PID) Controller Tuning using Particle Swarm Optimization Algorithm

    J. S. Bassi

    2012-08-01

    The proportional-integral-derivative (PID) controller is the most popular controller used in industry because of its remarkable effectiveness, simplicity of implementation and broad applicability. However, manual tuning of these controllers is time-consuming and tedious, and generally leads to poor performance. This tuning, which is application-specific, also deteriorates with time as a result of plant parameter changes. This paper presents an artificial intelligence (AI) method based on the particle swarm optimization (PSO) algorithm for tuning optimal PID controller parameters for industrial processes. This approach has superior features, including easy implementation, stable convergence characteristics and good computational efficiency compared with conventional methods. The Ziegler-Nichols tuning method was applied to the PID tuning and the results were compared with the PSO-based PID for optimum control. Simulation results are presented to show that the PSO-based optimized PID controller is capable of providing improved closed-loop performance over the Ziegler-Nichols-tuned PID controller parameters. Compared to the heuristic Ziegler-Nichols PID tuning method, the proposed method was more efficient in improving step response characteristics, such as reducing the steady-state error, rise time, settling time and maximum overshoot in speed control of a DC motor.
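
    The tuning loop described above is easy to sketch. Below, a standard PSO minimizes the ITAE (integral of time-weighted absolute error) of a PID-controlled unit-step response; the first-order plant, cost function and swarm parameters are illustrative assumptions, not the paper's DC-motor model.

        import numpy as np

        def step_response_cost(gains, tau=1.0, dt=0.01, t_end=5.0):
            # Simulate dy/dt = (u - y)/tau under PID control of a unit step
            # setpoint and return the ITAE cost (integral of t*|e|).
            kp, ki, kd = gains
            y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
            for k in range(int(t_end / dt)):
                e = 1.0 - y
                integ += e * dt
                deriv = (e - prev_e) / dt
                prev_e = e
                u = kp * e + ki * integ + kd * deriv
                y += (u - y) / tau * dt
                cost += (k * dt) * abs(e) * dt
            return cost

        def pso(cost_fn, bounds, n_particles=20, iters=60,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            # Textbook global-best PSO: inertia plus cognitive/social pulls.
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pcost = x.copy(), np.array([cost_fn(p) for p in x])
            g = pbest[pcost.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                c = np.array([cost_fn(p) for p in x])
                better = c < pcost
                pbest[better], pcost[better] = x[better], c[better]
                g = pbest[pcost.argmin()].copy()
            return g, pcost.min()

        gains, cost = pso(step_response_cost, bounds=[(0, 20), (0, 10), (0, 2)])
        print("PID gains (Kp, Ki, Kd):", gains, "ITAE:", cost)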

  12. Freeing Space for NASA: Incorporating a Lossless Compression Algorithm into NASA's FOSS System

    Fiechtner, Kaitlyn; Parker, Allen

    2011-01-01

    NASA's Fiber Optic Strain Sensing (FOSS) system can gather and store up to 1,536,000 bytes (1.46 megabytes) per second. Since the FOSS system typically acquires hours - or even days - of data, the system can gather hundreds of gigabytes of data for a given test event. To store such large quantities of data more effectively, NASA is modifying a Lempel-Ziv-Oberhumer (LZO) lossless data compression program to compress data as it is being acquired in real time. After proving that the algorithm is capable of compressing the data from the FOSS system, the LZO program will be modified and incorporated into the FOSS system. Implementing an LZO compression algorithm will instantly free up memory space without compromising any data obtained. With the availability of memory space, the FOSS system can be used more efficiently on test specimens, such as Unmanned Aerial Vehicles (UAVs) that can be in flight for days. By integrating the compression algorithm, the FOSS system can continue gathering data, even on longer flights.
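
    The acquire-and-compress loop can be sketched in a few lines. Since LZO bindings are a third-party package in Python, the standard library's zlib (DEFLATE) codec stands in for LZO here purely for illustration; the chunked streaming pattern is the point, not the codec.

        import zlib

        def compress_stream(chunks, level=1):
            # Compress acquired data chunk-by-chunk so memory is freed as
            # data arrives, mimicking real-time compression in an
            # acquisition loop. Level 1 favours speed, as LZO does.
            co = zlib.compressobj(level)
            for chunk in chunks:
                out = co.compress(chunk)
                if out:
                    yield out
            yield co.flush()

        # One simulated second of FOSS-like data (1,536,000 bytes) in ten chunks:
        raw = [bytes(1536000 // 10) for _ in range(10)]
        compressed = b"".join(compress_stream(raw))
        print(len(compressed), "bytes after compression")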

  13. The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.

    Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang

    2017-10-13

    Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns latent vector representations of the nodes, which provide theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress in this research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high scores in learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space for different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.

  14. Computational derivation of quantum relativist electromagnetic systems with forward-backward space-time shifts

    Dubois, Daniel M.

    2000-01-01

    This paper is a continuation of our preceding paper dealing with the computational derivation of the Klein-Gordon quantum relativist equation and the Schroedinger quantum equation with forward and backward space-time shifts. The first part introduces forward and backward derivatives for discrete and continuous systems. Generalized complex discrete and continuous derivatives are deduced. The second part deduces the Klein-Gordon equation from the space-time complex continuous derivatives. These derivatives take into account forward-backward space-time shifts related to an internal phase velocity u. The internal group velocity v is related to the speed of light by u·v = c² and to the external group and phase velocities by u·v = v_g·v_p. Without time shift, the Schroedinger equation is deduced, with a supplementary term which could represent a reference potential. The third part deduces the quantum relativist Klein-Gordon equation for a particle in an electromagnetic field.

  15. SLS-Derived Lab: Precursor to Deep Space Human Exploration

    Griffin, Brand; Lewis, Ruthan; Eppler, Dean; Smitherman, David

    2014-01-01

    Plans to send humans to Mars are in the works and the launch system is being built. Are we ready? Robotic missions have successfully demonstrated transportation, entry, landing and surface operations, but for human missions there are significant, potentially show-stopping issues. These issues, called Strategic Knowledge Gaps (SKGs), are the unanswered questions concerning long-duration exploration beyond low Earth orbit. The gaps represent a risk of loss of life or mission, and because they require extended exposure to the weightless environment outside Earth's protective geomagnetic field, they cannot be resolved on Earth or on the International Space Station (ISS). Placing a laboratory at the relatively close and stable lunar Distant Retrograde Orbit (DRO) provides an accessible location with the requisite environmental conditions for conducting SKG research and testing mitigation solutions. Configurations comprised of multiple 3 meter and 4.3 meter diameter modules have been studied, but the most attractive solution uses elements of the human Mars launch vehicle or Space Launch System (SLS) for a Mars proving ground laboratory. A shortened version of an SLS hydrogen propellant tank creates a Skylab-like pressure vessel that flies fully outfitted on a single launch. This not only offers significant savings by incorporating SLS pressure vessel development costs but avoids the expensive ISS approach of using many launches with substantial on-orbit assembly before becoming operational. One of the most challenging SKGs is crew radiation protection; this is why SKG laboratory research is combined with Mars transit habitat systems development. Fundamentally, the two cannot be divorced, because using the habitat systems for protection requires actual hardware geometry and material properties intended to contribute to shielding effectiveness. The SKGs are difficult problems; solutions are not obvious and require integrated, iterative, and multi-disciplinary development.

  16. Evaluation of sensor placement algorithms for on-orbit identification of space platforms

    Glassburn, Robin S.; Smith, Suzanne Weaver

    1994-01-01

    Anticipating the construction of the International Space Station, on-orbit modal identification of space platforms through optimally placed accelerometers is an area of recent activity. Unwanted vibrations in the platform could affect the results of planned experiments. Therefore, it is important that sensors (accelerometers) be strategically placed to identify the amount and extent of these unwanted vibrations, and to validate the mathematical models used to predict the loads and dynamic response. Due to cost, installation, and data management issues, only a limited number of sensors will be available for placement. This work evaluates and compares four representative sensor placement algorithms for modal identification. Most of the sensor placement work to date has employed only numerical simulations for comparison. This work uses experimental data from a fully instrumented truss structure, one of a series of structures designed for research in dynamic scale model ground testing of large space structures at NASA Langley Research Center. Results from this comparison show that for this cantilevered structure, the algorithm based on Guyan reduction is rated slightly better than that based on Effective Independence.

  17. Algorithms

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  18. Algorithms

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  19. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures into real-world operations.

  20. SLS-Derived Lab: Precursor to Deep Space Human Exploration

    Griffin, Brand M.; Lewis, Ruthan; Eppler, Dean; Smitherman, David

    2015-01-01

    Plans to send humans to Mars are in the works and the launch system is being built. Are we ready? Transportation, entry, landing, and surface operations have been successfully demonstrated for robotic missions. However, for human missions, there are significant, potentially show-stopping issues. These issues, called Strategic Knowledge Gaps (SKGs), are the unanswered questions concerning long-duration exploration beyond low Earth orbit (BEO). The gaps represent a risk of loss of life or mission, and because they require extended exposure to the weightless environment outside of Earth's protective geomagnetic field, they cannot be resolved on Earth or on the International Space Station (ISS). Placing a laboratory at a relatively close and stable lunar Distant Retrograde Orbit (DRO) provides an accessible location with the requisite environmental conditions for conducting SKG research and testing mitigation solutions. Configurations comprised of multiple 3 m and 4.3 m diameter modules have been studied, but the most attractive solution uses elements of the human Mars launch vehicle or Space Launch System (SLS) for a Mars proving ground laboratory. A shortened version of an SLS hydrogen propellant tank creates a Skylab-like pressure vessel that flies fully outfitted on a single launch. This not only offers significant savings by incorporating SLS pressure vessel development costs but avoids the expensive ISS approach of using many launches with substantial on-orbit assembly before becoming operational. One of the most challenging SKGs is crew radiation protection; this is why SKG laboratory research is combined with Mars transit habitat systems development. Fundamentally, the two cannot be divorced, because using the habitat systems for protection requires actual hardware geometry and material properties intended to contribute to shielding effectiveness. The SKGs are difficult problems. The solutions to these problems are not obvious; they require integrated, iterative, and multi-disciplinary development.

  1. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space.

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-07-01

    UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities allows us to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
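
    For orientation, standard in-memory UPGMA is exactly SciPy's average-linkage clustering; the MC-UPGMA machinery of the paper replaces this step when the dissimilarity matrix no longer fits in memory. A minimal sketch on toy vectors, not protein similarities:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(1)
        points = rng.normal(size=(8, 4))            # toy feature vectors
        condensed = pdist(points)                   # condensed dissimilarity matrix
        tree = linkage(condensed, method="average") # "average" linkage == UPGMA
        print(fcluster(tree, t=2, criterion="maxclust"))  # cut into 2 clusters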

  2. A wheat cold resistance mutant derived from space mutagenesis

    Li Peng; Sun Mingzhu; Zhang Fengyun; Gao Guoqiang; Qiu Denglin; Li Xinhua

    2012-01-01

    A cold-resistance mutant was obtained by spaceflight mutagenesis of seeds of the wheat variety Han6172, and the DNA of the mutant and of the Han6172 control were compared by the SRAP technique. Of 380 primer pairs screened, 6 showed polymorphisms between the mutant and the control (a rate of 1.58%), indicating that there are no obvious DNA differences between them. Six specific fragments were obtained, 3 of which were amplified in the mutant. Homology analysis in GenBank showed that Me3-Em7-Mt, Me4-Em11-CK, Me7-Em19-CK and Me6-Em9-Mt all had homologous sequences in the wheat chromosome 3B-specific BAC library, and this result indicated that the gene and regulator sequences associated with the mutant's cold resistance might be located on chromosome 3B. It was speculated that space mutation induced changes in the primary structure of chromosome 3B and influenced the expression of cold-resistance genes, which resulted in the change in cold-resistance ability. (authors)

  4. Algorithms

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  5. The derivation of distributed termination detection algorithms from garbage collection schemes

    Tel, G.; Mattern, F.

    1990-01-01

    It is shown that the termination detection problem for distributed computations can be modelled as an instance of the garbage collection problem. Consequently, algorithms for the termination detection problem are obtained by applying transformations to garbage collection algorithms.

  6. A Gaussian process and derivative spectral-based algorithm for red blood cell segmentation

    Xue, Yingying; Wang, Jianbiao; Zhou, Mei; Hou, Xiyue; Li, Qingli; Liu, Hongying; Wang, Yiting

    2017-07-01

    As an imaging technology used in remote sensing, hyperspectral imaging can provide more information than traditional optical imaging of blood cells. In this paper, an AOTF-based microscopic hyperspectral imaging system is used to capture hyperspectral images of blood cells. To segment red blood cells, a Gaussian process with a squared-exponential kernel is first applied, after data preprocessing, to produce a preliminary segmentation. The derivative spectrum with a spectral angle mapping algorithm is then applied to the original image to delineate the cell boundaries, and these boundaries are used to cut the cells obtained from the Gaussian process, separating adjacent cells. Morphological processing, including closing, erosion and dilation, is then applied to keep adjacent cells apart, and median filtering to remove noise points and fill holes inside the cells yields the final segmentation result. The experimental results show that this method achieves a better segmentation of human red blood cells.

  7. Conformational Space Annealing explained: A general optimization algorithm, with diverse applications

    Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung

    2018-02-01

    Many problems in science and engineering can be formulated as optimization problems. One way to solve these problems is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally-applicable algorithms. Such algorithms are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, called Conformational Space Annealing (CSA), along with its Python version, PyCSA. We previously applied it to many optimization problems, including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, in order to demonstrate the complete generality of PyCSA with respect to the type of objective function, we show how PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions, we found that L-BFGS-B was the best-performing local optimization method, while for a discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at the coarse-grained level by calculating multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
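
    PyCSA itself is available from the authors' site; as a rough stand-in for the benchmark above, the sketch below minimizes the standard Ackley test function with SciPy's generic annealing-style optimizer, dual_annealing. The dimension and bounds are arbitrary illustrative choices, and dual_annealing is a comparison baseline, not CSA.

        import numpy as np
        from scipy.optimize import dual_annealing

        def ackley(x):
            # Standard d-dimensional Ackley function; global minimum 0 at the origin.
            x = np.asarray(x)
            a, b, c = 20.0, 0.2, 2.0 * np.pi
            return (-a * np.exp(-b * np.sqrt(np.mean(x**2)))
                    - np.exp(np.mean(np.cos(c * x))) + a + np.e)

        res = dual_annealing(ackley, bounds=[(-32.768, 32.768)] * 5, seed=0)
        print(res.x, res.fun)   # should be near the origin with value near 0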

  8. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Yu Wang

    2014-01-01

    Feature space heterogeneity often exists in many real-world data sets, so that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH), to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, the relevance of features in each cluster is calculated based on their variations in that cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and an application to a database marketing problem show the efficiency and effectiveness of the proposed approach.

  9. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter-resolution stereoscopic satellite imagery with high geometric and radiometric quality and wide spatial coverage is becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low-contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.

  10. Quality Control Algorithms for the Kennedy Space Center 50-Megahertz Doppler Radar Wind Profiler Winds Database

    Barbre, Robert E., Jr.

    2012-01-01

    This paper presents the process used by the Marshall Space Flight Center Natural Environments Branch (EV44) to quality control (QC) data from the Kennedy Space Center's 50-MHz Doppler Radar Wind Profiler (DRWP) for use in vehicle wind loads and steering commands. The database has been built to mitigate limitations of the currently archived databases from weather balloons. The DRWP database contains wind measurements from approximately 2.7-18.6 km altitude at roughly five-minute intervals for the August 1997 to December 2009 period of record, and the extensive QC process was designed to remove spurious data arising from various atmospheric and non-atmospheric artifacts. The QC process is largely based on DRWP literature, but two new algorithms have been developed to remove data contaminated by convection and by excessive first-guess propagations from the Median Filter First Guess Algorithm. In addition to describing the automated and manual QC process in detail, this paper describes the extent of the data retained. Roughly 58% of all possible wind observations exist in the database, with approximately 100 times as many complete profile sets existing relative to the EV44 balloon databases. This increased sample of near-continuous wind profile measurements may help increase launch availability by reducing the uncertainty of wind changes during launch countdown.

  11. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Saleh Lashkari

    2016-06-01

    Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrent Quantification Analysis (RQA) does not require any assumptions about the stationarity or the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed. In order to choose the optimal threshold for each norm, ten values of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG: for the Minimum norm and 0.1 < ε < 1, the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.

  12. A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot

    Lingbo Cheng

    2014-12-01

    This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to handle the target affine transformations caused by microgravity, the strong reflection and refraction of sunlight and lamplight in the cabin, and occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is proposed to extract enough local feature points with full affine invariance, and stable matching points are obtained from these for target recognition by the Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, the effective and appropriate 3D grasping scope of the target is defined, and the grasping precision is determined and evaluated with the estimated affine transformation parameters presented in this paper. Finally, the threshold of RANSAC is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot obtains effective image data are evaluated by Root-Mean-Square Error (RMSE). An experimental system to simulate the illumination environment in a space station was established. Extensive experiments were carried out, and the experimental results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.

  13. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for Circular Current Loops in Cylindrical Coordinates

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-24

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r² in some of the expressions.
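
    For orientation, the textbook complete-elliptic-integral expressions for the loop field (which the report evaluates robustly via the Bulirsch cel routine) can be coded directly; note the explicit 1/r factor that makes this naive form ill-behaved near the axis, exactly the difficulty the report's algorithms avoid. A sketch in SI units for a loop of radius a in the z = 0 plane:

        import numpy as np
        from scipy.special import ellipk, ellipe

        MU0 = 4e-7 * np.pi

        def loop_field(r, z, a=1.0, current=1.0):
            # Br, Bz of a circular loop (radius a, z = 0 plane) at point (r, z).
            if r == 0.0:
                # On axis the field is purely axial (textbook special case).
                return 0.0, MU0 * current * a**2 / (2.0 * (a**2 + z**2) ** 1.5)
            m = 4.0 * a * r / ((a + r) ** 2 + z**2)   # elliptic parameter m = k^2
            K, E = ellipk(m), ellipe(m)
            pre = MU0 * current / (2.0 * np.pi * np.sqrt((a + r) ** 2 + z**2))
            d = (a - r) ** 2 + z**2
            br = pre * (z / r) * ((a**2 + r**2 + z**2) / d * E - K)  # 1/r factor
            bz = pre * (K + (a**2 - r**2 - z**2) / d * E)
            return br, bz

        print(loop_field(0.3, 0.2))   # field 0.3 m off-axis, 0.2 m above the plane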

  14. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  15. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.

  16. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day, with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have the unique possibility of a relatively low-cost updating procedure. The algorithms were implemented on general-purpose computers at Kennedy Space Center and tested against current data.

  17. Cryptographic analysis on the key space of optical phase encryption algorithm based on the design of discrete random phase mask

    Lin, Chao; Shen, Xueju; Li, Zengyan

    2013-07-01

    The key space of a phase encryption algorithm using a discrete random phase mask is investigated by numerical simulation in this paper. A random phase mask with finite and discrete phase levels is the core component in most practical optical encryption architectures. The key space analysis is based on the design criteria of the discrete random phase mask. The roles of the random amplitude mask and the random phase mask in an optical encryption system are identified from the perspective of confusion and diffusion. The properties of the discrete random phase mask in a practical double random phase encoding scheme working in both amplitude encoding (AE) and phase encoding (PE) modes are comparatively analyzed. The key space of the random phase encryption algorithm is evaluated considering both the encryption quality and the resistance to brute-force attack. A method for enlarging the key space of the phase encryption algorithm is also proposed to enhance the security of optical phase encryption techniques.
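
    The brute-force key space of a discrete random phase mask is a simple counting exercise: with L phase levels and an M x N mask, every pixel is chosen independently, giving L**(M*N) keys. The mask size and level count below are assumed for illustration, not taken from the paper.

        from math import log2

        M, N, L = 64, 64, 4                  # mask size and phase levels (assumed)
        keys = L ** (M * N)                  # every pixel chosen independently
        print(f"{log2(keys):.0f}-bit equivalent key space")   # 8192 bits here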

  18. MATLAB-based algorithm to estimate depths of isolated thin dike-like sources using higher-order horizontal derivatives of magnetic anomalies.

    Ekinci, Yunus Levent

    2016-01-01

    This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent of the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using synthetic simulations with and without noise. The algorithm was then used to estimate the depths of ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and with drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced results comparable to those of the higher-order horizontal derivative analyses in both the synthetic and the real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may serve as an alternative processing technique. The open source code can be easily modified and adapted to suit the needs of other researchers.

  19. Space transportation. [user needs met by information derived from satellites and the interface with space transportation systems

    1975-01-01

    User-oriented panels were formed to examine practical applications of information or services derived from earth orbiting satellites. Topics discussed include: weather and climate; uses of communication; land use planning; agriculture, forest, and range; inland water resources; extractable resources; environmental quality; marine and maritime uses; and materials processing in space. Emphasis was placed on the interface of the space transportation system (STS) with the applications envisioned by the user panels. User requirements were compared with expected STS capabilities in terms of availability, carrying payload to orbit, and estimated costs per launch. Conclusions and recommendations were reported.

  20. A Self Consistent Multiprocessor Space Charge Algorithm that is Almost Embarrassingly Parallel

    Nissen, Edward; Erdelyi, B.; Manikonda, S.L.

    2012-01-01

    We present a space charge code that is self-consistent, massively parallelizable, and requires very little communication between computer nodes, making the calculation almost embarrassingly parallel. The method is implemented in the code COSY Infinity, where the differential algebras used in this code are important to the algorithm's proper functioning. The method works by calculating the self-consistent space charge distribution using the statistical moments of the test particles and converting them into polynomial series coefficients. These coefficients are combined with differential algebraic integrals to form the potential and electric fields. The result is a map which contains the effects of space charge. This method allows for massive parallelization since its statistics-based solver does not require any binning of particles, and only a vector containing the partial sums of the statistical moments needs to be passed between the different nodes. All other calculations are done independently. The resulting maps can be used to analyze the system using normal form analysis, as well as to advance particles in numbers and at speeds that were previously impossible.

  1. Derivation and validation of the automated search algorithms to identify cognitive impairment and dementia in electronic health records.

    Amra, Sakusic; O'Horo, John C; Singh, Tarun D; Wilson, Gregory A; Kashyap, Rahul; Petersen, Ronald; Roberts, Rosebud O; Fryer, John D; Rabinstein, Alejandro A; Gajic, Ognjen

    2017-02-01

    Long-term cognitive impairment is a common and important problem in survivors of critical illness. We developed electronic search algorithms to identify cognitive impairment and dementia from electronic medical records (EMRs) that provide an opportunity for big data analysis. Eligible patients met 2 criteria. First, they had a formal cognitive evaluation by The Mayo Clinic Study of Aging. Second, they were hospitalized in the intensive care unit at our institution between 2006 and 2014. The "criterion standard" for diagnosis was formal cognitive evaluation supplemented by input from an expert neurologist. Using all available EMR data, we developed and improved our algorithms in the derivation cohort and validated them in the independent validation cohort. Of 993 participants who underwent formal cognitive testing and were hospitalized in the intensive care unit, we selected 151 participants at random to form the derivation and validation cohorts. The automated electronic search algorithm for cognitive impairment was 94.3% sensitive and 93.0% specific. The search algorithms for dementia achieved respective sensitivity and specificity of 97% and 99%. EMR search algorithms significantly outperformed International Classification of Diseases codes. Automated EMR data extractions for cognitive impairment and dementia are reliable and accurate and can serve as acceptable and efficient alternatives to time-consuming manual data review.
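
    The study does not publish its search rules, but the flavour of a keyword-based EMR screen can be sketched with a couple of regular expressions. The terms below are hypothetical and far simpler than the validated algorithm, which drew on richer EMR context and expert review.

        import re

        # Hypothetical, simplified keyword rules (illustration only):
        COGNITIVE_IMPAIRMENT = re.compile(
            r"\b(cognitive (impairment|decline)|memory loss|MCI)\b", re.IGNORECASE)
        DEMENTIA = re.compile(r"\b(dementia|alzheimer'?s)\b", re.IGNORECASE)

        def flag_note(note: str) -> dict:
            # Screen one free-text clinical note for either condition.
            return {"cognitive_impairment": bool(COGNITIVE_IMPAIRMENT.search(note)),
                    "dementia": bool(DEMENTIA.search(note))}

        print(flag_note("Assessment: mild cognitive impairment, no dementia."))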

  2. Parallel field line and stream line tracing algorithms for space physics applications

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

    Field line and stream line tracing is required in various space physics applications, such as the coupling of global magnetosphere and inner magnetosphere models, the coupling of solar energetic particle and heliosphere models, or the modeling of comets, where the multispecies chemical equations are solved along stream lines of a steady state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize. This is especially true when the data corresponding to the vector field are distributed over a large number of processors. We designed algorithms for the various applications which scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks, each residing on a single processor. The algorithm follows the vector field inside the blocks and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When a field line leaves the local subdomain, the position and other information are stored in a buffer. Periodically the processors exchange the buffers and continue integration of the field lines until they reach a boundary. At that point the results are sent back to the originating processor. Efficiency is achieved by a careful phasing of computation and communication. In the third algorithm the results of a steady state simulation are stored on a hard drive. The vector field is contained in blocks. All processors read in all the grid and vector field data, and the stream lines are integrated in parallel. If a stream line enters a block which has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be increased.
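
    The per-processor kernel that all three schemes coordinate is an ordinary field line integration. A minimal RK4 version is sketched below; the inside() predicate is a hypothetical stand-in for the block-boundary test at which the parallel schemes hand the line off to a neighbour.

        import numpy as np

        def trace_field_line(field, x0, ds=0.01, n_steps=1000,
                             inside=lambda x: True):
            # Serial RK4 integration of dx/ds = B(x)/|B(x)|: the building
            # block coordinated across blocks/processors in the text above.
            def unit(x):
                b = field(x)
                return b / np.linalg.norm(b)
            x = np.array(x0, dtype=float)
            path = [x.copy()]
            for _ in range(n_steps):
                k1 = unit(x)
                k2 = unit(x + 0.5 * ds * k1)
                k3 = unit(x + 0.5 * ds * k2)
                k4 = unit(x + ds * k3)
                x = x + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
                if not inside(x):   # left the local block: hand off to a neighbour
                    break
                path.append(x.copy())
            return np.array(path)

        # Toy rotational field standing in for B:
        toy_field = lambda x: np.array([-x[1], x[0], 0.1])
        print(trace_field_line(toy_field, [1.0, 0.0, 0.0])[-1])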

  3. Comparison Of Quantitative Precipitation Estimates Derived From Rain Gauge And Radar Derived Algorithms For Operational Flash Flood Support.

    Streubel, D. P.; Kodama, K.

    2014-12-01

    To provide continuous flash flood situational awareness and to better differentiate the severity of individual ongoing precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, as these are most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume relative to the radar-derived analysis. However, this result was not consistent across all 20 events, nor for a few of the rainfall events where an annual peak flow was recorded at more than one USGS gage. This implies that a more robust QPE forcing, with the inclusion of uncertainty derived from the three analyses, may provide a better input for simulating extreme peak flow events.

  4. Algorithm for calculating synthetic seismograms in a layered half-space with application of matrix impedance

    Pavlov, V. M.

    2013-01-01

    A new algorithm is proposed for calculating complete synthetic seismograms from a point source, in the form of the sum of a single force and a dipole with an arbitrary seismic moment tensor, in a plane-layered medium composed of homogeneous elastic isotropic layers. Following the idea of Alekseev and Mikhailenko (1978), an artificial cylindrical boundary is introduced, on which the boundary conditions are specified. For this modified problem, the exact solution (in terms of the displacements and stresses on the horizontal plane areal element) in the frequency domain is derived and substantiated. The unknown depth-dependent coefficients form the motion-stress vector, whose components satisfy the known system of ordinary differential equations. This system is solved by a method that involves the matrix impedance and propagator for the vector of motion, as previously suggested by the author (Pavlov, 2009). In relation to the initial problem, the reflections from the artificial boundary are noise, which can be suppressed to a certain degree by selecting a long enough distance to this boundary and owing to the presence of a purely imaginary addition to the frequency. The algorithm is not constrained by the thickness of the layers, is applicable for any frequency range, and is suitable for computing the static offset.

  5. An algorithm for the estimation of road traffic space mean speeds from double loop detector data

    Martinez-Diaz, M.; Perez Perez, I.

    2016-07-01

    Most algorithms trying to analyze or forecast road traffic rely on many inputs, but in practice, calculations are usually limited by the available data and measurement equipment. Generally, some of these inputs are substituted by raw or even inappropriate estimations, which in some cases come into conflict with the fundamentals of traffic flow theory. This paper refers to one common example of these bad practices. Many traffic management centres depend on the data provided by double loop detectors, which supply, among other quantities, vehicle speeds. The common data treatment is to compute the arithmetic mean of these speeds over different aggregation periods (i.e. the time mean speeds). Time mean speed is not consistent with Edie's generalized definitions of traffic variables, and therefore it is not the average speed which relates flow to density. This means that current practice begins with an error that can have negative effects in later studies and applications. The algorithm introduced in this paper enables easy estimation of space mean speeds from the data provided by the loops. It is based on two key hypotheses: stationarity of traffic and log-normal distribution of the individual speeds in each aggregation interval. It could also be used in the case of transient traffic as part of any data fusion methodology. (Author)
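
    The distinction is easy to demonstrate on synthetic speeds (not loop data): for stationary traffic the space mean speed is the harmonic mean of the individual speeds, while the loops report the arithmetic (time) mean; the classical approximation v_s ≈ v_t - σ²/v_t, which recovers the space mean from aggregated statistics alone, is shown alongside. This is an illustration of the underlying identity, not the paper's estimation algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        # Individual vehicle speeds (km/h), log-normal as the paper assumes:
        v = rng.lognormal(mean=np.log(90.0), sigma=0.2, size=500)

        time_mean = v.mean()                  # arithmetic mean: what loops report
        space_mean = 1.0 / np.mean(1.0 / v)   # harmonic mean (stationary traffic)
        approx = time_mean - v.var() / time_mean   # classical approximation

        print(time_mean, space_mean, approx)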

  6. Color segmentation in the HSI color space using the K-means algorithm

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image; their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo 2π nature of the hue component makes its segmentation difficult: for example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper will show the importance the hue component plays in the segmentation of color images.
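
    One common way to cope with the modulo 2π hue wrap-around discussed above is to embed hue on the unit circle as (cos h, sin h) before clustering: hues of 0 and 2π then map to the same point and an ordinary K-means applies. A synthetic two-cluster sketch (not the paper's full HSI pipeline):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Two hue clusters, one straddling the 0/2*pi wrap-around:
        hue = np.concatenate([rng.normal(0.1, 0.2, 200) % (2 * np.pi),
                              rng.normal(np.pi, 0.2, 200)])
        # Circular embedding removes the discontinuity at 0/2*pi:
        features = np.column_stack([np.cos(hue), np.sin(hue)])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(labels))   # both clusters recovered intact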

  7. On-board event processing algorithms for a CCD-based space-borne X-ray spectrometer

    Chun, H.J.; Bowles, J.A.; Branduardi-Raymont, G.; Gowen, R.A.

    1996-01-01

    This paper describes two alternative algorithms which are applied to reduce the telemetry requirements of a Charge Coupled Device (CCD) based, space-borne X-ray spectrometer by on-board reconstruction of X-ray events split over two or more adjacent pixels. The algorithms have been developed for the Reflection Grating Spectrometer (RGS) on the X-ray Multi-Mirror (XMM) mission, the second cornerstone project in the European Space Agency's Horizon 2000 programme. The overall instrument and the criteria that shaped the development of the algorithms, implemented in Tartan Ada on an MA31750 microprocessor, are described. The on-board processing constraints and requirements are discussed, and the performance of the algorithms is compared. Test results are presented which show that the recursive implementation is faster and has a smaller executable file, although it uses more memory because of its stack requirements. (orig.)

  8. A semi-supervised classification algorithm using the TAD-derived background as training data

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters, such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.

  9. Space station automation study: Automation requirements derived from space manufacturing concepts, volume 2

    1984-01-01

    Automation requirements were developed for two manufacturing concepts: (1) a Gallium Arsenide Electroepitaxial Crystal Production and Wafer Manufacturing Facility, and (2) a Gallium Arsenide VLSI Microelectronics Chip Processing Facility. A functional overview of the ultimate design concept incorporating the two manufacturing facilities on the space station is provided. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, sensors, and artificial intelligence. While the cost-effectiveness of these facilities was not analyzed, both appear entirely feasible for the year 2000 timeframe.

  10. Derivation of space groups in mm2, 222 and mmm crystal classes

    Nigam, G.D.

    1987-08-01

    An algebraic approach is developed to derive space groups using 4x4 Seitz matrices for the crystal classes mm2, 222 and mmm in the orthorhombic system. The method is relatively simple, can be adapted to introduce space groups to beginners, and admits a geometrical visualization of the symmetry elements of a space group. It can be extended to other crystal classes in a straightforward way. 16 refs, 1 fig., 2 tabs
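
    The 4x4 Seitz formalism is easy to reproduce; the sketch below (illustrative, not taken from the paper) builds two generators of the class mm2 and composes them, showing that the product of the twofold axis along z and the mirror normal to x is the mirror normal to y:

        import numpy as np

        def seitz(R, t):
            """Build the 4x4 Seitz matrix {R|t} from a 3x3 point operation R
            and a translation t (in fractions of the cell)."""
            S = np.eye(4)
            S[:3, :3] = R
            S[:3, 3] = t
            return S

        # generators of point group mm2: twofold rotation about z, mirror normal to x
        C2z = seitz(np.diag([-1, -1, 1]), [0, 0, 0])
        m_x = seitz(np.diag([-1, 1, 1]), [0, 0, 0])

        # composing the generators yields the mirror normal to y: m_x * C2z = m_y
        m_y = m_x @ C2z
        print(m_y[:3, :3])  # diag(1, -1, 1)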

  11. Uncertainty Quantification of the Reverse Taylor Impact Test and Localized Asynchronous Space-Time Algorithm

    Subber, Waad; Salvadori, Alberto; Lee, Sangmin; Matous, Karel

    2017-06-01

    The reverse Taylor impact is a common experiment to investigate the dynamical response of materials at high strain rates. To better understand the physical phenomena and to provide a platform for code validation and Uncertainty Quantification (UQ), a co-designed simulation and experimental paradigm is investigated. For validation under uncertainty, quantities of interest (QOIs) within subregions of the computational domain are introduced. For such simulations, where regions of interest can be identified, the computational cost of UQ can be reduced by confining the random variability within these regions. This observation inspired us to develop an asynchronous space and time computational algorithm with localized UQ. In the region of interest, high-resolution space and time discretization schemes are used for a stochastic model. Outside the region of interest, low spatial and temporal resolutions are allowed for a stochastic model with a low-dimensional representation of uncertainty. The model is exercised on linear elastodynamics and shows potential for reducing the UQ computational cost. Although we consider wave propagation in solids, the proposed framework is general and can be used for fluid flow problems as well. Department of Energy, National Nuclear Security Administration (PSAAP-II).

  12. Review of the different methods to derive average spacing from resolved resonance parameters sets

    Fort, E.; Derrien, H.; Lafond, D.

    1979-12-01

    The average spacing of resonances is an important parameter for statistical model calculations, especially concerning non-fissile nuclei. The different methods to derive this average value from resonance parameter sets have been reviewed and analyzed in order to tentatively detect their respective weaknesses and propose recommendations. Possible improvements are suggested

  13. A direct derivation of the exact Fisher information matrix of Gaussian vector state space models

    Klein, A.A.B.; Neudecker, H.

    2000-01-01

    This paper deals with a direct derivation of Fisher's information matrix of vector state space models for the general case, by which is meant the establishment of the matrix as a whole and not element by element. The method to be used is matrix differentiation, see [4]. We assume the model to be

  14. Space station automation study: Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    1984-01-01

    The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.

  15. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. When multiple simulations are available, the dispersion variances of blocks can be thought of as capturing technical uncertainties. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances across blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) to solve the problem. The technique has two spaces: feasible drill-hole configurations, over which interpolation variance is minimized, and drill-hole simulations, over which interpolation variance is maximized. The two spaces interact to find a minmax solution.
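
    A skeleton of the two-space idea, under stated assumptions: interp_variance and mutate below are hypothetical stand-ins for the paper's interpolation-variance computation and its variation operator. The outer space ranks drill-hole configurations by their worst case over the inner space of simulations:

        import random

        def two_space_ga(configs, scenarios, interp_variance, mutate, gens=50):
            """Skeleton of a two-space GA for the minmax drilling problem.

            configs         : population of candidate drill-hole configurations
            scenarios       : population of orebody simulations
            interp_variance : callable (config, scenario) -> float  (hypothetical)
            mutate          : callable (config) -> new config       (hypothetical)
            """
            def worst_case(config):
                # inner space: the scenario that maximizes interpolation variance
                return max(interp_variance(config, s) for s in scenarios)

            for _ in range(gens):
                # outer space: keep the configurations with the best worst case
                configs.sort(key=worst_case)
                survivors = configs[: max(2, len(configs) // 2)]
                offspring = [mutate(random.choice(survivors)) for _ in survivors]
                configs = survivors + offspring
            return min(configs, key=worst_case)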

  16. Space-Time Fractional Diffusion-Advection Equation with Caputo Derivative

    José Francisco Gómez Aguilar

    2014-01-01

    Full Text Available An alternative construction of the space-time fractional diffusion-advection equation for sedimentation phenomena is presented. The order of the derivative is considered as 0 < β, γ ≤ 1 for the space and time domains, respectively. The fractional derivative of Caputo type is considered. In the spatial case we obtain the fractional solution for the underdamped, undamped, and overdamped cases. In the temporal case we show that the concentration has an amplitude which exhibits an algebraic decay at asymptotically large times, and we also show numerical simulations where both derivatives are taken in simultaneous form. In order that the equation preserve the physical units of the system, two auxiliary parameters σx and σt are introduced, characterizing the existence of fractional space and time components, respectively. A physical relation between these parameters is reported, and the solutions in space-time are given in terms of the Mittag-Leffler function depending on the parameters β and γ. The generalization of the fractional diffusion-advection equation in space-time exhibits anomalous behavior.
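
    For readers unfamiliar with Caputo derivatives of order 0 < γ ≤ 1, the standard L1 finite-difference approximation (a textbook scheme, not the paper's analytical solution) can be sketched as:

        import math
        import numpy as np

        def caputo_l1(f, gamma, dt):
            """L1 approximation of the Caputo derivative of order 0 < gamma <= 1
            for samples f on a uniform grid with spacing dt."""
            n = len(f)
            c = dt ** (-gamma) / math.gamma(2.0 - gamma)
            k = np.arange(n - 1, dtype=float)
            b = (k + 1.0) ** (1.0 - gamma) - k ** (1.0 - gamma)  # L1 weights
            df = np.diff(f)
            out = np.zeros(n)
            for m in range(1, n):
                # sum_{j=0}^{m-1} b[m-1-j] * (f[j+1] - f[j])
                out[m] = c * np.dot(b[:m][::-1], df[:m])
            return out

        # sanity check: for f(t) = t the Caputo derivative is t**(1-g)/Gamma(2-g)
        t = np.linspace(0.0, 1.0, 101)
        d = caputo_l1(t, gamma=0.5, dt=t[1] - t[0])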

  17. Scattering of targets over layered half space using a semi-analytic method in conjunction with FDTD algorithm.

    Cao, Le; Wei, Bing

    2014-08-25

    The finite-difference time-domain (FDTD) algorithm with a new method of plane-wave excitation is used to investigate the RCS (Radar Cross Section) characteristics of targets over a layered half space. Compared with the traditional plane-wave excitation method, the memory and time requirements of the calculation are greatly decreased. The FDTD calculation is performed with a plane-wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, the methods available for extrapolation have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of scattering and radiation from targets over a layered half space.

  18. How Similar Are Forest Disturbance Maps Derived from Different Landsat Time Series Algorithms?

    Warren B. Cohen

    2017-03-01

    Full Text Available Disturbance is a critical ecological process in forested systems, and disturbance maps are important for understanding forest dynamics. Landsat data are a key remote sensing dataset for monitoring forest disturbance, and there has recently been major growth in the development of disturbance mapping algorithms. Many of these algorithms take advantage of the high temporal data volume to mine subtle signals in Landsat time series, but as those signals become subtler, they are more likely to be mixed with noise in Landsat data. This study examines the similarity among seven different algorithms in their ability to map the full range of magnitudes of forest disturbance over six different Landsat scenes distributed across the conterminous US. The maps agreed very well in terms of the amount of undisturbed forest over time; however, for the ~30% of forest mapped as disturbed in a given year by at least one algorithm, there was little agreement about which pixels were affected. Algorithms that targeted higher-magnitude disturbances exhibited higher omission errors but lower commission errors than those targeting a broader range of disturbance magnitudes. These results suggest that a user of any given forest disturbance map should understand the map's strengths and weaknesses (in terms of omission and commission error rates) with respect to the disturbance targets of interest.

  19. An Improved Seeding Algorithm of Magnetic Flux Lines Based on Data in 3D Space

    Jia Zhong

    2015-05-01

    Full Text Available This paper will propose an approach to increase the accuracy and efficiency of seeding algorithms of magnetic flux lines in magnetic field visualization. To obtain accurate and reliable visualization results, the density of the magnetic flux lines should map the magnetic induction intensity, and seed points should determine the density of the magnetic flux lines. However, the traditional seeding algorithm, which is a statistical algorithm based on data, will produce errors when computing magnetic flux through subdivision of the plane. To achieve higher accuracy, more subdivisions should be made, which will reduce efficiency. This paper analyzes the errors made when the traditional seeding algorithm is used and gives an improved algorithm. It then validates the accuracy and efficiency of the improved algorithm by comparing the results of the two algorithms with results from the equivalent magnetic flux algorithm.

  20. A Markov Chain Monte Carlo version of the genetic algorithm Differential Evolution: easy Bayesian computing for real parameter spaces

    Braak, ter C.J.F.

    2006-01-01

    Differential Evolution (DE) is a simple genetic algorithm for numerical optimization in real parameter spaces. In a statistical context one would not just want the optimum but also its uncertainty. The uncertainty distribution can be obtained by a Bayesian analysis (after specifying prior and
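
    For context, plain DE is compact enough to sketch in full; the following DE/rand/1/bin minimizer is illustrative only, and it is the optimizer, not ter Braak's MCMC variant:

        import numpy as np

        def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=0):
            """Minimal DE/rand/1/bin minimizer (a sketch, not the DE-MC sampler)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            d = len(lo)
            pop = lo + rng.random((np_, d)) * (hi - lo)
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(np_):
                    # mutate: a + F*(b - c) from three random population members
                    a, b, c = pop[rng.choice(np_, 3, replace=False)]
                    # binomial crossover with the current member
                    trial = np.where(rng.random(d) < CR, a + F * (b - c), pop[i])
                    ft = f(trial)
                    if ft < fit[i]:  # greedy selection
                        pop[i], fit[i] = trial, ft
            return pop[fit.argmin()], fit.min()

        # usage: minimize a 3-dimensional sphere function on [-5, 5]^3
        best, val = differential_evolution(lambda x: (x ** 2).sum(), [(-5, 5)] * 3)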

  1. Phytoplankton global mapping from space with a support vector machine algorithm

    de Boissieu, Florian; Menkes, Christophe; Dupouy, Cécile; Rodier, Martin; Bonnet, Sophie; Mangeas, Morgan; Frouin, Robert J.

    2014-11-01

    In recent years great progress has been made in the global mapping of phytoplankton from space. Two main trends have emerged: the recognition of phytoplankton functional types (PFT) based on reflectance normalized to chlorophyll-a concentration, and the recognition of phytoplankton size classes (PSC) based on the relationship between cell size and chlorophyll-a concentration. However, PFTs and PSCs are not decorrelated, and one approach can complement the other in a recognition task. In this paper, we explore the recognition of several dominant PFTs by combining reflectance anomalies, chlorophyll-a concentration and other environmental parameters, such as sea surface temperature and wind speed. Remote sensing pixels are labeled thanks to coincident in situ pigment data from the GeP&CO, NOMAD and MAREDAT datasets, covering various oceanographic environments. The recognition is made with a supervised Support Vector Machine classifier trained on the labeled pixels. This algorithm enables a non-linear separation of the classes in the input space and is especially adapted to small training datasets such as those available here. Moreover, it provides a class probability estimate, allowing one to enhance the robustness of the classification results through the choice of a minimum probability threshold. A greedy feature selection associated with a 10-fold cross-validation procedure is applied to select the most discriminative input features and evaluate the classification performance. The best classifiers are finally applied to daily remote sensing datasets (SeaWiFS, MODIS-Aqua) and the resulting dominant PFT maps are compared with other studies. Several conclusions are drawn: (1) the feature selection highlights the weight of the temperature, chlorophyll-a and wind speed variables in phytoplankton recognition; (2) the classifiers show good results and dominant PFT maps in agreement with knowledge of phytoplankton distribution; (3) classification on MODIS-Aqua data seems to perform better than on SeaWiFS data.
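
    The probability-threshold step translates directly into code; a toy sketch with scikit-learn (random stand-in features, not the GeP&CO/NOMAD/MAREDAT data):

        import numpy as np
        from sklearn.svm import SVC

        # stand-ins for the real inputs: per-pixel feature vectors (e.g.,
        # reflectance anomalies, chl-a, SST, wind speed) and PFT labels
        rng = np.random.default_rng(0)
        X, y = rng.random((200, 5)), rng.integers(0, 3, 200)

        clf = SVC(kernel="rbf", probability=True).fit(X, y)
        proba = clf.predict_proba(X)

        # keep a prediction only where the class probability passes a minimum
        # threshold, flagging low-confidence pixels as -1 (unclassified)
        threshold = 0.6
        best = proba.argmax(axis=1)
        labels = np.where(proba.max(axis=1) >= threshold, clf.classes_[best], -1)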

  2. Simulation of obstacles’ effect on industrial robots’ working space using genetic algorithm

    M.F. Aly

    2014-07-01

    Full Text Available The study of robot workspace is an interesting problem since its applications are directly related to industry. However, it involves several mathematical complications; thus, many of the arising questions are left without a definite answer. With the motivation of industrial demand, the need for finding better answers than the existing ones persists. The workspace (WS) determination of a robot with general structural parameters is a complex problem, which cannot be solved in an explicit way. Closed-form solutions are only available in some particular cases; otherwise, computational algorithms and numerical techniques are used. The task becomes much more complicated in the presence of obstacles in the robot's accessible region. Obstacle presence not only excludes points from the original WS but also affects the whole workspace's shape and size, to the extent that it sometimes divides the working space into two or more separate regions that cannot be linked by the same robot. Much research work in the literature is directed toward path planning in the presence of obstacles without having to determine the robot WS. However, a real situation in industry occurs when knowledge of the WS is important in facility layout. This paper presents an approach for estimating the WS of a generic open-chain robot in the presence of obstacles, with any desired number of prismatic and/or revolute joints of any order. Joints' axes may have any orientation relative to each other. The robot can be placed in free space or in a work cell consisting of a set of Computer Numerically Controlled (CNC) machines and some obstacles.

  3. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  4. ALGORITHM FOR GENERALIZED GARMAN EQUATION IN OPTION PRICING OF A FINANCIAL DERIVATIVE WITH STOCHASTIC VOLATILITY MODELS

    Maxim Ioan

    2009-05-01

    Full Text Available In our paper we build a recurrence from the generalized Garman equation and a discretization of the 3-dimensional domain. From the recurrence we build an algorithm for computing the values of an option based on time, the instantaneous volatility of the underlying and the value of the underlying on a

  5. Time and Space Efficient Algorithms for Two-Party Authenticated Data Structures

    Papamanthou, Charalampos; Tamassia, Roberto

    Authentication is increasingly relevant to data management. Data are being outsourced to untrusted servers and clients want to securely update and query their data. For example, in database outsourcing, a client's database is stored and maintained by an untrusted server. Also, in simple storage systems, clients can store very large amounts of data, but at the same time they want to be assured of their integrity when they retrieve them. In this paper, we present a model and protocol for two-party authentication of data structures. Namely, a client outsources its data structure and verifies that the answers to the queries have not been tampered with. We provide efficient algorithms to securely outsource a skip list with logarithmic time overhead at the server and client and logarithmic communication cost, thus providing an efficient authentication primitive for outsourced data, both structured (e.g., relational databases) and semi-structured (e.g., XML documents). In our technique, the client stores only a constant amount of space, which is optimal. Our two-party authentication framework can be deployed on top of existing storage applications, thus providing an efficient authentication service. Finally, we present experimental results that demonstrate the practical efficiency and scalability of our scheme.

  6. A trust-based sensor allocation algorithm in cooperative space search problems

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the sensor resources used, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem as a trust-in-the-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.

  7. PSCAD modeling of a two-level space vector pulse width modulation algorithm for power electronics education

    Ahmet Mete Vural

    2016-09-01

    Full Text Available This paper presents the design details of a two-level space vector pulse width modulation algorithm in PSCAD that is able to generate pulses for three-phase two-level DC/AC converters with two different switching patterns. The presented FORTRAN code is generic and can easily be modified to meet many other kinds of space vector modulation strategies. The code is also editable for hardware programming. The new component is tested and verified by comparing its output, as six gating signals, with those of a similar component in the MATLAB library. Moreover, the component is used to generate digital signals for closed-loop control of a STATCOM for reactive power compensation in PSCAD. This add-on can be an effective tool to give students a better understanding of the space vector modulation algorithm for different control tasks in the power electronics area, and can motivate them to learn.
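
    The core of two-level SVPWM is the sector and dwell-time computation; a textbook sketch (in Python for illustration, not the PSCAD/FORTRAN component) is:

        import math

        def svpwm(v_ref, theta, v_dc, t_s):
            """Two-level SVPWM dwell times (textbook formulas, illustrative only).

            v_ref : amplitude of the reference voltage vector
            theta : its angle in radians
            v_dc  : DC-link voltage
            t_s   : switching period
            """
            theta %= 2 * math.pi
            sector = int(theta // (math.pi / 3)) + 1      # sectors 1..6
            th = theta - (sector - 1) * math.pi / 3       # angle inside the sector
            m = math.sqrt(3) * v_ref / v_dc               # modulation index
            t1 = t_s * m * math.sin(math.pi / 3 - th)     # first adjacent active vector
            t2 = t_s * m * math.sin(th)                   # second adjacent active vector
            t0 = t_s - t1 - t2                            # remaining time on zero vectors
            return sector, t1, t2, t0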

  8. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

    Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization, with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform, which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements

  9. A satellite digital controller or 'play that PID tune again, Sam'. [Position, Integral, Derivative feedback control algorithm for design strategy]

    Seltzer, S. M.

    1976-01-01

    The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
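
    A minimal discrete position-integral-derivative update, as a generic sketch of the control law rather than the paper's parameter-plane design:

        def pid_step(error, state, kp, ki, kd, dt):
            """One sample of a discrete PID law; state carries the integral
            and the previous error between calls."""
            integral, prev_error = state
            integral += error * dt
            derivative = (error - prev_error) / dt
            u = kp * error + ki * integral + kd * derivative
            return u, (integral, error)

        # usage: called once per sample period dt of the sampled-data controller
        state = (0.0, 0.0)
        u, state = pid_step(0.5, state, kp=2.0, ki=0.1, kd=0.4, dt=0.05)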

  10. A Modified Groundwater Flow Model Using the Space Time Riemann-Liouville Fractional Derivatives Approximation

    Abdon Atangana

    2014-01-01

    Full Text Available The notion of uncertainty in groundwater hydrology is of great importance, as it is known to result in misleading output when neglected or not properly accounted for. In this paper we examine this effect in groundwater flow models. To achieve this, we first introduce an uncertainty function u as a function of time and space. The function u accounts for the lack of knowledge or variability of the geological formations in which flow occurs (the aquifer) in time and space. We next make use of the Riemann-Liouville fractional derivatives that were introduced by Kobelev and Romano in 2000, and their approximation, to modify the standard version of the groundwater flow equation. Some properties of the modified Riemann-Liouville fractional derivative approximation are presented. The classical model for groundwater flow, in the case of density-independent flow in a uniform homogeneous aquifer, is reformulated by replacing the classical derivative with the Riemann-Liouville fractional derivative approximations. The modified equation is solved via the Green function technique and the variational iteration method.

  11. A Prognostic Scoring Tool for Cesarean Organ/Space Surgical Site Infections: Derivation and Internal Validation.

    Assawapalanggool, Srisuda; Kasatpibal, Nongyao; Sirichotiyakul, Supatra; Arora, Rajin; Suntornlimsiri, Watcharin

    Organ/space surgical site infections (SSIs) are serious complications after cesarean delivery. However, no scoring tool to predict these complications has yet been developed. This study sought to develop and validate a prognostic scoring tool for cesarean organ/space SSIs. Data for cases and non-cases of cesarean organ/space SSI between January 1, 2007 and December 31, 2012 from a tertiary care hospital in Thailand were analyzed. Stepwise multivariable logistic regression was used to select the best predictor combination, and the coefficients were transformed into a risk scoring tool. The positive likelihood ratio for each risk category and the area under the receiver operating characteristic (AUROC) curve were analyzed on total scores. Internal validation using bootstrap re-sampling was performed to test reproducibility. The predictors of 243 organ/space SSIs among 4,988 eligible cesarean delivery cases comprised the presence of foul-smelling amniotic fluid (four points), vaginal examination five or more times before incision (two points), wound class III or greater (two points), being referred from a local setting (two points), hemoglobin less than 11 g/dL (one point), and ethnic minorities (one point). The likelihood ratios of cesarean organ/space SSIs with 95% confidence intervals among the low (total score of 0-1 point), medium (total score of 2-5 points), and high risk (total score of ≥6 points) categories were 0.11 (0.07-0.19), 1.03 (0.89-1.18), and 13.25 (10.87-16.14), respectively. Both AUROCs of the derivation and validation data were comparable (87.57% versus 86.08%; p = 0.418). This scoring tool showed a high predictive ability regarding cesarean organ/space SSIs on the derivation data, and reproducibility was demonstrated on internal validation. It could assist practitioners in prioritizing patient care and management depending on risk category, and decrease SSI rates in cesarean deliveries.
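
    The published point values transcribe directly into a small scoring function (illustrative only, not clinical software):

        def cesarean_ssi_score(foul_amniotic_fluid, vaginal_exams_ge5, wound_class_ge3,
                               referred, hemoglobin_lt11, ethnic_minority):
            """Total the published point values and map the sum to the paper's
            risk categories (low 0-1, medium 2-5, high >= 6)."""
            score = (4 * foul_amniotic_fluid + 2 * vaginal_exams_ge5
                     + 2 * wound_class_ge3 + 2 * referred
                     + 1 * hemoglobin_lt11 + 1 * ethnic_minority)
            category = "low" if score <= 1 else "medium" if score <= 5 else "high"
            return score, category

        print(cesarean_ssi_score(True, False, False, True, True, False))  # (7, 'high')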

  12. 2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm

    Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.

    2017-08-01

    Laser-induced Breakdown Spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries. In order to extract information about a specific phase of a material, an objective evaluation method is needed. This paper introduces a cluster algorithm for heterogeneous building materials (concrete) to separate the spectral information of non-relevant aggregates from that of the cement matrix. In civil engineering, information about the quantitative ingress of harmful species such as Cl⁻, Na⁺ and SO₄²⁻ is of great interest for evaluating the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger different damage processes such as the alkali-silica reaction (ASR) or the chloride-induced corrosion of the reinforcement. Therefore, discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method is introduced and different figures of merit are presented according to recommendations given in Haddad et al. (2014). Advantages of this method are highlighted. After phase separation, non-relevant information can be excluded and only the phase of interest displayed. Using a set of samples with known and unknown composition, the EM clustering method has been validated according to Gustavo González and Ángeles Herrador (2007).

  13. Unified Framework for Deriving Simultaneous Equation Algorithms for Water Distribution Networks

    The known formulations for steady state hydraulics within looped water distribution networks are re-derived in terms of linear and non-linear transformations of the original set of partly linear and partly non-linear equations that express conservation of mass and energy. All of ...

  14. Algorithm for Financial Derivatives Evaluation in a Generalized Multi-Heston Model

    Dan Negura

    2013-02-01

    Full Text Available In this paper we show how a financial derivative could be estimated based on an assumed Multi-Heston model. Keywords: Euler-Maruyama discretization method, Monte Carlo simulation, Heston model, Double-Heston model, Multi-Heston model

  15. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartiles ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
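
    The quartile-based filter described above is a few lines of code; a sketch of the thresholding step (not the full R-script):

        import numpy as np

        def diffusion_mask(conc):
            """Flag diffusion-dominated samples in a closed-chamber CH4 record.

            Concentration increments outside [Q1 - IQR, Q3 + IQR] are treated as
            ebullition events; the rest as diffusion.
            """
            d = np.diff(conc)
            q1, q3 = np.percentile(d, [25, 75])
            iqr = q3 - q1
            return (d >= q1 - iqr) & (d <= q3 + iqr)  # boolean mask over increments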

  16. An improved single sensor parity space algorithm for sequential probability ratio test

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1995-12-01

    In our paper we propose a modification of the single sensor parity algorithm in order to make the statistical properties of the generated residual determinable in advance. The algorithm is tested via a computer-simulated ramp failure in the temperature readings of the pressurizer. (author).

  17. Algorithms for polynomial spectral factorization and bounded-real balanced state space representations

    Rapisarda, P.; Trentelman, H.L.; Minh, H.B.

    We illustrate an algorithm that starting from the image representation of a strictly bounded-real system computes a minimal balanced state variable, from which a minimal balanced state realization is readily obtained. The algorithm stems from an iterative procedure to compute a storage function,

  18. Algoritmo para el cálculo de la transformada Z inversa utilizando DERIVE // Algorithm for the calculation of the inverse Z transform using the mathematical assistant DERIVE.

    D. Galán Martínez

    2000-07-01

    Full Text Available One of the mathematical tools most used in engineering for the study of so-called sampled-data control systems is the Z transform. As an operational method, the Z transform can be used to solve finite difference equations, which formulate the dynamics of sampled-data control systems. This transform plays a role similar to that of the Laplace transform in the analysis of continuous-time control systems. The objective of the present work is the development of a digital computer program, using the mathematical assistant DERIVE, for determining the inverse Z transform of a rational algebraic function; such functions mathematically model the linear sampled-data control systems that appear very frequently in the study of engineering processes. Keywords: algorithm, inverse Z transform, DERIVE, rational algebraic function, mathematical model.
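
    Independently of DERIVE, the inverse Z transform of a rational function can be sketched numerically by polynomial long division (power-series method); the routine below is illustrative:

        def inverse_z_series(b, a, n_terms=10):
            """First n_terms samples of the sequence whose Z transform is
            H(z) = (b[0] + b[1] z^-1 + ...) / (a[0] + a[1] z^-1 + ...),
            obtained by polynomial long division in powers of z^-1."""
            h = []
            for n in range(n_terms):
                acc = b[n] if n < len(b) else 0.0
                for k in range(1, min(n, len(a) - 1) + 1):
                    acc -= a[k] * h[n - k]
                h.append(acc / a[0])
            return h

        # example: X(z) = 1 / (1 - 0.5 z^-1)  ->  x[n] = 0.5**n
        print(inverse_z_series([1.0], [1.0, -0.5], 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]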

  19. Derivation of Land Surface Temperature for Landsat-8 TIRS Using a Split Window Algorithm

    Offer Rozenstein

    2014-03-01

    Full Text Available Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimation of the input parameters is performed. The accuracy of the current development was assessed using simulated MODTRAN data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development represents progress in the determination of LST from Landsat-8 TIRS.

  20. A brief summary on formalizing parallel tensor distributions, redistributions, and algorithm derivations.

    Schatz, Martin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kolda, Tamara G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); van de Geijn, Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Large-scale datasets in computational chemistry typically require distributed-memory parallel methods to perform a special operation known as tensor contraction. Tensors are multidimensional arrays, and a tensor contraction is akin to matrix multiplication with special types of permutations. Creating an efficient algorithm and optimized implementation in this domain is complex, tedious, and error-prone. To address this, we develop a notation to express data distributions so that we can apply automated methods to find optimized implementations for tensor contractions. We consider the spin-adapted coupled cluster singles and doubles method from computational chemistry and use our methodology to produce an efficient implementation. Experiments performed on the IBM Blue Gene/Q and Cray XC30 demonstrate both improved performance and reduced memory consumption.

  1. Total ozone column derived from GOME and SCIAMACHY using KNMI retrieval algorithms: Validation against Brewer measurements at the Iberian Peninsula

    Antón, M.; Kroon, M.; López, M.; Vilaplana, J. M.; Bañón, M.; van der A, R.; Veefkind, J. P.; Stammes, P.; Alados-Arboledas, L.

    2011-11-01

    This article focuses on the validation of the total ozone column (TOC) data set acquired by the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite remote sensing instruments using the Total Ozone Retrieval Scheme for the GOME Instrument Based on the Ozone Monitoring Instrument (TOGOMI) and Total Ozone Retrieval Scheme for the SCIAMACHY Instrument Based on the Ozone Monitoring Instrument (TOSOMI) retrieval algorithms developed by the Royal Netherlands Meteorological Institute. In this analysis, spatially colocated, daily averaged ground-based observations performed by five well-calibrated Brewer spectrophotometers on the Iberian Peninsula are used. The period of study runs from January 2004 to December 2009. The agreement between satellite and ground-based TOC data is excellent (R² higher than 0.94). Nevertheless, the TOC data derived from both satellite instruments underestimate the ground-based data. On average, this underestimation is 1.1% for GOME and 1.3% for SCIAMACHY. The SCIAMACHY-Brewer TOC differences show a significant solar zenith angle (SZA) dependence which causes a systematic seasonal dependence. By contrast, GOME-Brewer TOC differences show no significant SZA dependence, and hence no seasonality, although processed with exactly the same algorithm. The satellite-Brewer TOC differences for the two satellite instruments show a clear and similar dependence on the viewing zenith angle under cloudy conditions. In addition, both the GOME-Brewer and SCIAMACHY-Brewer TOC differences reveal a very similar behavior with respect to the satellite cloud properties, namely cloud fraction and cloud-top pressure, which originate from the same cloud algorithm (Fast Retrieval Scheme for Clouds from the Oxygen A-Band, FRESCO+) in both the TOSOMI and TOGOMI retrieval algorithms.

  2. Algorithm Development and Validation for Satellite-Derived Distributions of DOC and CDOM in the US Middle Atlantic Bight

    Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.

    2007-01-01

    In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. In order to develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a(sub cdom)) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then correlated DOC to Rrs band ratios through the CDOM to DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors, with mean absolute percent differences from field measurements of 1.6% for a(sub cdom)(443) and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a(sub cdom), DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a(sub cdom) products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and monitor impacts of climate change and anthropogenic activities on coastal ecosystems.

  3. An Efficient Return Algorithm For Non-Associated Plasticity With Linear Yield Criteria In Principal Stress Space

    Clausen, Johan Christian; Damkilde, Lars; Andersen, Lars

    2007-01-01

    An efficient return algorithm for stress update in numerical plasticity computations is presented. The yield criterion must be linear in principal stress space and can be composed of any number of yield planes. Each of these yield planes may have an associated or non-associated flow rule. The stress return and the formation of the constitutive matrix are carried out in principal stress space, where the manipulations simplify and rely on geometrical arguments. The singularities arising at the intersection of yield planes are dealt with in a straightforward way, also based on geometrical arguments.

  4. Automated guidance algorithms for a space station-based crew escape vehicle.

    Flanary, R; Hammen, D G; Ito, D; Rabalais, B W; Rishikof, B H; Siebold, K H

    2003-04-01

    An escape vehicle was designed to provide an emergency evacuation for crew members living on a space station. For maximum escape capability, the escape vehicle needs to have the ability to safely evacuate a station in a contingency scenario such as an uncontrolled (e.g., tumbling) station. This emergency escape sequence will typically be divided into three events: the first separation event (SEP1), the navigation reconstruction event, and the second separation event (SEP2). SEP1 is responsible for taking the spacecraft from its docking port to a distance greater than the maximum radius of the rotating station. The navigation reconstruction event takes place prior to the SEP2 event and establishes the orbital state to within the tolerance limits necessary for SEP2. The SEP2 event calculates and performs an avoidance burn to prevent station recontact during the next several orbits. This paper presents the tools and results for the whole separation sequence with an emphasis on the two separation events. The first challenge includes collision avoidance during the escape sequence while the station is in an uncontrolled rotational state, with rotation rates of up to 2 degrees per second. The task of avoiding a collision may require the use of the vehicle's de-orbit propulsion system for maximum thrust and minimum dwell time in the vicinity of the station. The thrust of the propulsion system is in a single direction, and can be controlled only by the attitude of the spacecraft. Escape algorithms based on a look-up table or analytical guidance can be implemented, since the rotation rate and the angular momentum vector can be sensed onboard and a priori knowledge of the position and relative orientation is available. In addition, crew intervention has been provided for in the event of unforeseen obstacles in the escape path. The purpose of the SEP2 burn is to avoid re-contact with the station over an extended period of time. Performing this maneuver requires

  5. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less

  6. Climatology 2011: An MLS and Sonde Derived Ozone Climatology for Satellite Retrieval Algorithms

    McPeters, Richard D.; Labow, Gordon J.

    2012-01-01

    The ozone climatology used as the a priori for the version 8 Solar Backscatter Ultraviolet (SBUV) retrieval algorithms has been updated. The Microwave Limb Sounder (MLS) instrument on Aura has excellent latitude coverage and measures ozone daily from the upper troposphere to the lower mesosphere. The new climatology consists of monthly average ozone profiles for ten degree latitude zones covering pressure altitudes from 0 to 65 km. The climatology was formed by combining data from Aura MLS (2004-2010) with data from balloon sondes (1988-2010). Ozone below 8 km (below 12 km at high latitudes) is based on balloons sondes, while ozone above 16 km (21 km at high latitudes) is based on MLS measurements. Sonde and MLS data are blended in the transition region. Ozone accuracy in the upper troposphere is greatly improved because of the near uniform coverage by Aura MLS, while the addition of a large number of balloon sonde measurements improves the accuracy in the lower troposphere, in the tropics and southern hemisphere in particular. The addition of MLS data also improves the accuracy of climatology in the upper stratosphere and lower mesosphere. The revised climatology has been used for the latest reprocessing of SBUV and TOMS satellite ozone data.

  7. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-01-01

    Motivation: UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. Application: We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without ex...

  8. Multiobjective Economic Load Dispatch in 3-D Space by Genetic Algorithm

    Jain, N. K.; Nangia, Uma; Singh, Iqbal

    2017-10-01

    This paper presents the application of a genetic algorithm to the Multiobjective Economic Load Dispatch (MELD) problem, considering fuel cost, transmission losses and environmental pollution as objective functions. The MELD problem has been formulated using the constraint method. The non-inferior sets for the IEEE 5-, 14- and 30-bus systems have been generated using a genetic algorithm, and the target point has been obtained by maximization of minimum relative attainments.

  9. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    Shaeen Kalathil

    2015-11-01

    Full Text Available This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. The CMFB has an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with less implementation complexity, power consumption and area requirements when compared with those of conventional continuous-coefficient non-uniform CMFB.
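
    CSD recoding itself, computed here directly rather than via the paper's look-up table, can be sketched as follows for an integer-scaled coefficient (digits least significant first):

        def to_csd(n):
            """Canonic signed digit recoding of an integer: digits in {-1, 0, +1},
            least significant first, with no two adjacent non-zero digits."""
            digits = []
            while n != 0:
                if n % 2:               # odd: emit +1 or -1 so the remainder is even
                    d = 2 - (n % 4)     # n % 4 == 1 -> +1, n % 4 == 3 -> -1
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n //= 2
            return digits

        print(to_csd(23))  # [-1, 0, 0, -1, 0, 1]  i.e. 32 - 8 - 1 = 23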

  10. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  11. 4D-QSAR Study of Some Pyrazole Pyridine Carboxylic Acid Derivatives by the Electron Conformational-Genetic Algorithm Method.

    Tuzun, Burak; Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin

    2018-05-13

    In the present work, pharmacophore identification and biological activity prediction for 86 pyrazole pyridine carboxylic acid derivatives were made using the electron conformational-genetic algorithm approach, which was introduced by us as a 4D-QSAR analysis in recent years. In the light of the data obtained from quantum chemical calculations at the HF/6-311G** level, the electron conformational matrices of congruity (ECMC) were constructed with the EMRE software. By comparing the matrices, the electron conformational submatrix of activity (ECSA, Pha) that is common to these compounds within a minimum tolerance was revealed. A parameter pool was generated considering the obtained pharmacophore. To determine the theoretical biological activity of the molecules and identify the best subset of variables affecting bioactivities, we used the nonlinear least squares regression method and a genetic algorithm. The results obtained in this study are in good agreement with the experimental data presented in the literature. The model for the training and test sets attained with the optimum 12 parameters gave highly satisfactory results: R²(training) = 0.889, q² = 0.839, SE(training) = 0.066, q²(ext1) = 0.770, q²(ext2) = 0.750, q²(ext3) = 0.824, ccc(tr) = 0.941, ccc(test) = 0.869 and ccc(all) = 0.927.

  12. First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms

    Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.

    2013-08-01

    We present an orbit determination method based on genetic algorithms. Contrary to usual estimation methods, which are mainly based on least squares, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. They can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained for an SLR satellite, for which tracking data acquired by the ILRS network enable accurate orbital arcs to be built at the few-centimeter level and used as a reference orbit; in this case, the basic observations are made up of time series of ranges, obtained from various tracking stations. We show as well the results obtained from the observations acquired by the two TAROT telescopes on the Telecom-2D satellite operated by CNES; in that case, the observations are made up of time series of azimuths and elevations, seen from the two TAROT telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, and (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial Keplerian element, has to be searched within an interval that is chosen beforehand. The algorithm is supposed to converge towards an optimum over a reasonable computational time.

  13. Examination of Regional Trends in Cloud Properties over Surface Sites Derived from MODIS and AVHRR using the CERES Cloud Algorithm

    Smith, W. L., Jr.; Minnis, P.; Bedka, K. M.; Sun-Mack, S.; Chen, Y.; Doelling, D. R.; Kato, S.; Rutan, D. A.

    2017-12-01

    Recent studies analyzing long-term measurements of surface insolation at ground sites suggest that decadal-scale trends of increasing (brightening) and decreasing (dimming) downward solar flux have occurred at various times over the last century. Regional variations have been reported that range from near 0 Wm-2/decade to as large as 9 Wm-2/decade depending on the location and time period analyzed. The more significant trends have been attributed to changes in overhead clouds and aerosols, although quantifying their relative impacts using independent observations has been difficult, owing in part to a lack of consistent long-term measurements of cloud properties. This paper examines new satellite based records of cloud properties derived from MODIS (2000-present) and AVHRR (1981- present) data to infer cloud property trends over a number of surface radiation sites across the globe. The MODIS cloud algorithm was developed for the NASA Clouds and the Earth's Radiant Energy System (CERES) project to provide a consistent record of cloud properties to help improve broadband radiation measurements and to better understand cloud radiative effects. The CERES-MODIS cloud algorithm has been modified to analyze other satellites including the AVHRR on the NOAA satellites. Compared to MODIS, obtaining consistent cloud properties over a long period from AVHRR is a much more significant challenge owing to the number of different satellites, instrument calibration uncertainties, orbital drift and other factors. Nevertheless, both the MODIS and AVHRR cloud properties will be analyzed to determine trends, and their level of consistency and correspondence with surface radiation trends derived from the ground-based radiometer data. It is anticipated that this initial study will contribute to an improved understanding of surface solar radiation trends and their relationship to clouds.

  14. Evaluation of Fog and Low Stratus Cloud Microphysical Properties Derived from In Situ Sensor, Cloud Radar and SYRSOC Algorithm

    Jean-Charles Dupont

    2018-05-01

    The microphysical properties of low stratus and fog are analyzed here based on simultaneous measurements from an in situ sensor installed on board a tethered balloon and active remote-sensing instruments deployed at the Instrumented Site for Atmospheric Remote Sensing Research (SIRTA) observatory (south of Paris, France). The study focuses on the analysis of 3 case studies where the tethered balloon was deployed for several hours in order to derive the relationship between the liquid water content (LWC), effective radius (Re) and cloud droplet number concentration (CDNC) measured by a Light Optical Aerosol Counter (LOAC) in situ granulometer and the Bistatic Radar System for Atmospheric Studies (BASTA) cloud radar reflectivity. The well-known relationship Z = α × LWC^β has been optimized with α ∈ [0.02, 0.097] and β ∈ [1.91, 2.51]. A similar analysis is done to optimize the relationships Re = f(Z) and CDNC = f(Z). Two methodologies have been applied to normalize the particle-size distribution measured by the LOAC granulometer with a visible extinction closure (R² ∈ [0.73, 0.93]) and to validate the LWC profile with a liquid water closure using the Humidity and Temperature Profiler (HATPRO) microwave radiometer (R² ∈ [0.83, 0.91]). In a second step, these relationships are used to derive the spatial and temporal variability of the vertical profiles of LWC, Re and CDNC starting from the BASTA measurements. Finally, the synergistic remote sensing of clouds (SYRSOC) algorithm has been tested on three tethered balloon flights. Generally, the SYRSOC CDNC and Re profiles agreed well with the LOAC in situ and BASTA profiles for the studied fog layers. A systematic overestimation of LWC by SYRSOC in the top half of the fog layer was found, due to fog processes that are not accounted for in the SYRSOC cloud algorithm.
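
    The power-law calibration step, Z = α × LWC^β, reduces to a linear least-squares fit in log-log space. A minimal sketch under that assumption (synthetic data; the α and β values are illustrative, not the paper's fitted coefficients):

        import numpy as np

        def fit_power_law(lwc, z):
            """Least-squares fit of Z = alpha * LWC**beta in log-log space.

            lwc and z are matched in-situ / radar samples (positive values).
            """
            beta, log_alpha = np.polyfit(np.log(lwc), np.log(z), 1)
            return np.exp(log_alpha), beta

        # synthetic check: recovers alpha = 0.05, beta = 2.2 exactly
        lwc = np.random.default_rng(1).uniform(0.01, 0.5, 200)
        alpha, beta = fit_power_law(lwc, 0.05 * lwc ** 2.2)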

  15. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: either (i) a linear or quadratic response function is assumed, or (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defendable; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and we propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
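
    For intuition, the greedy routine below builds a space-filling subset of a compound descriptor matrix by farthest-point sampling, the kind of design the paper associates with the closest-packing limit for very rough response surfaces. It is a toy stand-in, not the authors' integrated-mean-square-error minimizer:

        import numpy as np

        def greedy_maximin_subset(X, k, seed=0):
            """Greedy space-filling selection of k rows from descriptor matrix X.

            Each step adds the candidate whose nearest already-selected
            neighbour is farthest away (farthest-point sampling).
            """
            rng = np.random.default_rng(seed)
            chosen = [int(rng.integers(len(X)))]      # arbitrary first pick
            d = np.linalg.norm(X - X[chosen[0]], axis=1)
            for _ in range(k - 1):
                nxt = int(np.argmax(d))               # farthest from the subset
                chosen.append(nxt)
                d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
            return chosen

        # toy usage: pick 20 diverse compounds from 500 random 6-D descriptors
        X = np.random.default_rng(2).normal(size=(500, 6))
        subset = greedy_maximin_subset(X, 20)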

  16. The global kernel k-means algorithm for clustering in feature space.

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
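
    A compact sketch of the incremental scheme, assuming a precomputed kernel matrix K: each stage re-runs kernel k-means once per candidate seed point and keeps the best split. This exhaustive version is suitable only for small data sets; the paper's fast modifications avoid the full search.

        import numpy as np

        def kernel_kmeans(K, labels, iters=100):
            """Lloyd-style kernel k-means on a precomputed kernel matrix K."""
            n = len(K)
            for _ in range(iters):
                k = labels.max() + 1
                D = np.empty((n, k))
                for c in range(k):
                    idx = np.flatnonzero(labels == c)
                    if idx.size == 0:
                        D[:, c] = np.inf       # empty cluster: never assigned
                        continue
                    # squared feature-space distance to the centroid of cluster c
                    D[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                               + K[np.ix_(idx, idx)].mean())
                new = D.argmin(axis=1)
                if np.array_equal(new, labels):
                    break
                labels = new
            return labels, D[np.arange(n), labels].sum()

        def global_kernel_kmeans(K, k):
            """Incrementally add clusters; try every point as the new seed."""
            n = len(K)
            labels = np.zeros(n, dtype=int)    # the 1-cluster solution
            for stage in range(1, k):
                best_err, best_labels = np.inf, None
                for seed in range(n):          # exhaustive candidate seeds
                    init = labels.copy()
                    init[seed] = stage         # split the seed point off
                    out, err = kernel_kmeans(K, init)
                    if err < best_err and out.max() == stage:
                        best_err, best_labels = err, out
                labels = best_labels
            return labels

        # toy usage: three Gaussian blobs, RBF kernel
        X = np.random.default_rng(3).normal(size=(60, 2))
        X[20:40] += 4.0
        X[40:] -= 4.0
        K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
        print(global_kernel_kmeans(K, 3))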

  17. A Novel Algorithm for Color Space Conversion Model from CMYK to LAB

    Juan-li Hu; Jia-bing Deng; Shan-shan Zou

    2010-01-01

    Color space conversion plays a very important role in image acquisition, display, and the transmission of color information in replication. Printers and imagesetters express color in the CMYK space. But if color is edited and corrected in the CMYK space, it will cause a greater loss of color, and computation will also be slowed down. So it often needs to be converted to the LAB uniform color space. Among the previous conversion methods, there is a widespread problem th...
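
    For orientation, a naive device-independent conversion chain (CMYK to sRGB to XYZ to CIELAB) is sketched below. Production print workflows use ICC device profiles, and this is not the algorithm proposed in the paper; it only shows the sequence of transforms being discussed.

        import numpy as np

        def cmyk_to_lab(c, m, y, k):
            """Naive CMYK -> sRGB -> XYZ -> CIELAB chain (D65 white point)."""
            rgb = (1.0 - np.array([c, m, y])) * (1.0 - k)     # complement model
            lin = np.where(rgb <= 0.04045, rgb / 12.92,
                           ((rgb + 0.055) / 1.055) ** 2.4)    # undo sRGB gamma
            M = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])          # sRGB -> XYZ
            xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])
            f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                         xyz / (3 * (6 / 29) ** 2) + 4 / 29)
            return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

        print(cmyk_to_lab(0.2, 0.4, 0.6, 0.1))   # (L, a, b)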

  18. Algorithms for Brownian first-passage-time estimation

    Adib, Artur B.

    2009-09-01

    A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
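
    In discrete space and continuous time, the MFPT vector of a lattice walk follows from a single linear solve. The sketch below uses one common detailed-balance discretisation of the hopping rates (not necessarily the paper's exact scheme), with a reflecting left and an absorbing right boundary:

        import numpy as np

        def lattice_mfpt(U, h=0.1, D=1.0, kT=1.0):
            """Mean first-passage times to an absorbing right boundary.

            Continuous-time random walk on a 1-D lattice whose hopping rates
            obey detailed balance for the potential U. Site 0 reflects; the
            last site absorbs. Solves w-_i T_{i-1} - (w+_i + w-_i) T_i
            + w+_i T_{i+1} = -1 for all interior sites.
            """
            n = len(U)
            wp = (D / h ** 2) * np.exp(-(np.roll(U, -1) - U) / (2 * kT))
            wm = (D / h ** 2) * np.exp(-(np.roll(U, 1) - U) / (2 * kT))
            A = np.zeros((n - 1, n - 1))      # unknowns: T[0], ..., T[n-2]
            b = -np.ones(n - 1)
            for i in range(n - 1):
                A[i, i] = -(wp[i] + (wm[i] if i > 0 else 0.0))
                if i > 0:
                    A[i, i - 1] = wm[i]
                if i + 1 < n - 1:
                    A[i, i + 1] = wp[i]       # T[n-1] = 0 at the absorber
            return np.linalg.solve(A, b)

        # linear potential U(x) = F x, where exact results are available
        x = np.arange(0.0, 5.0, 0.1)
        T = lattice_mfpt(1.5 * x)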

  19. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, thus leading to an underdetermined linear system that is hard to uniquely recover. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
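
    The computational core, B-spline reparameterisation followed by regularized least squares, can be sketched as follows. The measurement matrix A, knot placement, and regularisation weight are placeholders, and a curvature penalty on the coefficients stands in for the paper's spatial-smoothness term:

        import numpy as np
        from scipy.interpolate import BSpline

        def bspline_basis(lam, knots, degree=3):
            """B-spline design matrix: one column per spline coefficient."""
            cols = [BSpline.basis_element(knots[i:i + degree + 2],
                                          extrapolate=False)(lam)
                    for i in range(len(knots) - degree - 1)]
            return np.nan_to_num(np.column_stack(cols))

        def reconstruct(A, y, B, reg=1e-2):
            """Minimize ||A B c - y||^2 + reg ||D c||^2 over coefficients c."""
            AB = A @ B
            D = np.diff(np.eye(B.shape[1]), 2, axis=0)    # curvature penalty
            c = np.linalg.solve(AB.T @ AB + reg * D.T @ D, AB.T @ y)
            return B @ c                                  # recovered spectrum

        # toy usage: 60 spectral samples represented by 8 spline coefficients
        lam = np.linspace(400.0, 700.0, 60)
        B = bspline_basis(lam, np.linspace(380.0, 720.0, 12))
        A = np.random.default_rng(4).normal(size=(40, 60))   # toy system matrix
        s_true = np.exp(-0.5 * ((lam - 550.0) / 40.0) ** 2)  # smooth spectrum
        s_hat = reconstruct(A, A @ s_true, B)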

  20. An efficient algorithm for 3D space time kinetics simulations for large PHWRs

    Jain, Ishi; Fernando, M.P.S.; Kumar, A.N.

    2012-01-01

    In nuclear reactor physics and allied areas like shielding, various forms of the neutron transport equation or its approximation, namely the diffusion equation, have to be solved to estimate the neutron flux distribution. This paper presents an efficient algorithm yielding accurate results along with a promising gain in computational work. (author)

  1. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    Chen, Bin; Maddumage, Prasad [Research Computing Center, Department of Scientific Computing, Florida State University, Tallahassee, FL 32306 (United States); Kantowski, Ronald; Dai, Xinyu; Baron, Eddie, E-mail: bchen3@fsu.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  3. Optimal trajectory planning of free-floating space manipulator using differential evolution algorithm

    Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping

    2018-03-01

    The existence of path-dependent dynamic singularities limits the volume of the available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. In order to overcome this drawback, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while the joint motion laws are delineated by applying the concept of the reaction null-space. A Bézier curve, in conjunction with the null-space column vectors, is applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization issue, with the control points that construct the Bézier curve as the design variables. A constrained differential evolution (DE) scheme with a premature-handling strategy is implemented to find the optimal values of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
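
    A toy version of the optimization setup, assuming SciPy's differential evolution over the free Bézier control ordinates of a single joint (endpoints fixed). The cost function here is only a placeholder for the paper's reaction-null-space objective:

        import numpy as np
        from math import comb
        from scipy.optimize import differential_evolution

        q0, qf = 0.0, 1.2                 # fixed boundary joint angles [rad]
        t = np.linspace(0.0, 1.0, 101)    # normalized trajectory time

        def bezier(ctrl):
            """Bernstein-form Bezier curve through the given control ordinates."""
            n = len(ctrl) - 1
            basis = np.array([comb(n, i) * t ** i * (1 - t) ** (n - i)
                              for i in range(n + 1)])
            return ctrl @ basis

        def objective(free_ctrl):
            # Toy cost: peak joint velocity; a free-floating-robot version
            # would instead score base disturbance via the reaction null-space.
            q = bezier(np.r_[q0, free_ctrl, qf])
            return np.abs(np.gradient(q, t)).max()

        result = differential_evolution(objective, bounds=[(-2.0, 2.0)] * 4,
                                        seed=1)
        print(result.x, result.fun)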

  4. SP_Ace: a new code to derive stellar parameters and elemental abundances

    Boeche, C.; Grebel, E. K.

    2016-03-01

    Context. Ongoing and future massive spectroscopic surveys will collect large numbers (10^6-10^7) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise ratios and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters.

  5. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique at no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). For the algorithm, we added a parallel structure, improved the computational efficiency, reduced the computer memory required, and added topographic and marine factors, so that the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. From the comparison in Figure 1, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the type of data (impedance/tipper/impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, so it is very useful for the study of the continental shelf with continuous exploration of land, marine and underground domains. The three-dimensional electrical model of the ore zone reflects the basic information of strata, rock and structure. Although it cannot indicate the ore body position directly, important clues are provided for prospecting work by the delineation of the diorite pluton uplift range. The test results show that the high quality of

  6. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192^3 simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on

  7. Derivation of a configuration space Hamiltonian for heavy atoms: three body potentials

    Mittleman, M.H.

    1981-01-01

    A brief history of the difficulties associated with the derivation of a configuration space Hamiltonian is presented. One of the problems encountered is the definition of the projection operators which must occur. A variational definition is obtained and, with simplifying assumptions, the optimum projection operators are those which project onto Hartree-Fock orbitals. This puts many previously performed numerical calculations on a firm footing. The form of the two-body interactions is discussed in the context of the gauge freedom. The Coulomb gauge is the favored one, but it is pointed out that it has never been proven to be the best one. Finally, a form for the relativistic three-electron potential is given and the possibility of its observation is discussed.

  8. A new visco-elasto-plastic model via time-space fractional derivative

    Hei, X.; Chen, W.; Pang, G.; Xiao, R.; Zhang, C.

    2018-02-01

    To characterize the visco-elasto-plastic behavior of metals and alloys we propose a new constitutive equation based on a time-space fractional derivative. The rheological representative of the model can be analogous to that of the Bingham-Maxwell model, while the dashpot element and sliding friction element are replaced by the corresponding fractional elements. The model is applied to describe the constant strain rate, stress relaxation and creep tests of different metals and alloys. The results suggest that the proposed simple model can describe the main characteristics of the experimental observations. More importantly, the model can also provide more accurate predictions than the classic Bingham-Maxwell model and the Bingham-Norton model.

  9. AMTD: update of engineering specifications derived from science requirements for future UVOIR space telescopes

    Stahl, H. Philip; Postman, Marc; Mosier, Gary; Smith, W. Scott; Blaurock, Carl; Ha, Kong; Stark, Christopher C.

    2014-08-01

    The Advanced Mirror Technology Development (AMTD) project is in Phase 2 of a multiyear effort, initiated in FY12, to mature, by at least a half TRL step, six critical technologies required to enable 4 meter or larger UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND provide a high-performance, low-cost, low-risk system. To give the science community options, we are pursuing multiple technology paths. A key task is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast exoplanet observation missions as a function of potential launch vehicles and their mass and volume constraints. A key finding of this effort is that the science requires an 8 meter or larger aperture telescope.

  10. Synthesis of Greedy Algorithms Using Dominance Relations

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
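
    Activity selection, one of the examples treated in the paper, illustrates the dominance idea concretely: among compatible candidates, the earliest-finishing activity dominates any alternative. A minimal sketch:

        def activity_selection(intervals):
            """Greedy activity selection justified by a dominance relation.

            Among mutually compatible candidates, the earliest-finishing
            activity dominates: any solution using a later-finishing one can
            be rewritten to use it with no loss of feasibility or size.
            """
            chosen, last_finish = [], float("-inf")
            for start, finish in sorted(intervals, key=lambda ab: ab[1]):
                if start >= last_finish:        # compatible with the chosen set
                    chosen.append((start, finish))
                    last_finish = finish
            return chosen

        print(activity_selection(
            [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10)]))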

  11. An image-space parallel convolution filtering algorithm based on shadow map

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

    Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. First, the method generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then described as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous work.

  12. Space-Derived Imagery and a Commercial Remote Sensing Industry: Impossible Dream or Inevitable Reality?

    Felsher, Murray

    Landsat-1 was launched in 1972 as a research satellite. Many of us viewed this satellite as a precursor to remote sensing "commercialization." Indeed, since that time, the birth, growth and maturation of a remote sensing "industry" has been an ongoing objective for much of the U.S. private sector engaged in space and ground-segment activities related to the acquisition, analysis, and dissemination of imagery. In September 1999 a U.S. commercial entity, Space Imaging, Inc., launched its 1-meter pan/4-meter multispectral IKONOS sensor. DigitalGlobe, Inc. (nee EarthWatch, Inc.) matched this feat in October 2001. Thus, a full 30 years later, we are finally on the brink of building a true remote sensing information industry based on the global availability of competitively-priced space-derived imagery of the Earth. The upcoming availability of similar imagery from non-U.S. sources such as ImageSat and U.S. sources such as ORBIMAGE will only strengthen that reality. However, a remote sensing industry can only grow by allowing these entities (in times of peace) unencumbered access to a world market. And that market continues to expand: up 11% in 2001, with gross revenues of U.S. commercial remote sensing firms alone reaching 2.44 billion, according to a joint NASA/ASPRS industry survey. However, the 30-year gap between the research-labeled Landsat-1 and our current commercial successes was not technology-driven. That lacuna was purely political, driven by valid concerns related to national security. Although the world's governments have cooperated thoroughly and completely in areas related to satellite telecommunications, cooperation in space-derived image information is still today done cautiously and on a case-by-case basis, and then only for science-based undertakings. It is still a fact that, except for the United States, all other Earth-imaging satellites/sensors flying today are owned, operated, and their products disseminated, by national governments, and not private

  13. A Wire Antenna Designed for Space Wave Radiation Over the Earth Using a Genetic Algorithm

    1997-12-01

  14. A Receiver for Differential Space-Time -Shifted BPSK Modulation Based on Scalar-MSDD and the EM Algorithm

    Kim Jae H

    2005-01-01

    In this paper, we consider the issue of blind detection of Alamouti-type differential space-time (ST) modulation in static Rayleigh fading channels. We focus our attention on a -shifted BPSK constellation, introducing a novel transformation of the received signal such that this binary ST modulation, which has second-order transmit diversity, is equivalent to QPSK modulation with second-order receive diversity. This equivalent representation allows us to apply a low-complexity detection technique specifically designed for receive diversity, namely, scalar multiple-symbol differential detection (MSDD). To further increase receiver performance, we apply an iterative expectation-maximization (EM) algorithm which performs joint channel estimation and sequence detection. This algorithm uses minimum mean square estimation to obtain channel estimates and the maximum-likelihood principle to detect the transmitted sequence, followed by differential decoding. With receiver complexity proportional to the observation window length, our receiver can achieve the performance of a coherent maximal ratio combining receiver (with differential decoding) in as few as a single EM receiver iteration, provided that the window size of the initial MSDD is sufficiently long. To further demonstrate that the MSDD is a vital part of this receiver setup, we show that an initial ST conventional differential detector would lead to strange convergence behavior in the EM algorithm.

  15. Multigrid Algorithms for the Fast Calculation of Space-Charge Effects in Accelerator Design

    Pöplau, G.; Rienen, van U.; Geer, van der S.B.; Loos, de M.J.

    2004-01-01

    Numerical prediction of charged particle dynamics in accelerators is essential for the design and understanding of these machines. Methods to calculate the self-fields of the bunch, the so-called space-charge forces, become increasingly important as the demand for high-quality bunches increases. We

  16. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  17. A dynamic programming algorithm for the space allocation and aisle positioning problem

    Bodnar, Peter; Lysgaard, Jens

    2014-01-01

    The space allocation and aisle positioning problem (SAAPP) in a material handling system with gravity flow racks is the problem of minimizing the total number of replenishments over a period subject to practical constraints related to the need for aisles granting safe and easy access to storage...

  18. Improved identification of the solution space of aerosol microphysical properties derived from the inversion of profiles of lidar optical data, part 3: case studies.

    Kolgotin, Alexei; Müller, Detlef; Chemyakin, Eduard; Romanov, Anton; Alehnovich, Valentin

    2018-04-01

    We conclude our series of publications on the development of the gradient correlation method (GCM), which can be used for an improved stabilization of the solution space of particle microphysical parameters derived from measurements with multiwavelength Raman and high-spectral-resolution lidar (3 backscatter + 2 extinction coefficients). We show results of three case studies. The data were taken with a ground-based multiwavelength Raman lidar during the Saharan Mineral Dust Experiment in the Cape Verde Islands (North Atlantic). These cases describe mixtures of dust with smoke. For our data analysis we separated the contribution of smoke to the total signal and used only these optical profiles for the test of GCM. The results show a significant stabilization of the solution space of the particle microphysical parameter retrieval on the particle radius domain from 0.03 to 10 μm, the real part of the complex refractive index domain from 1.3 to 1.8, and the imaginary part from 0 to 0.1. This new method will be included in the Tikhonov Advanced Regularization Algorithm, a fully automated, unsupervised algorithm that is used for the analysis of data collected with the world's first airborne 3 backscatter + 2 extinction high-spectral-resolution lidar, developed by NASA Langley Research Center.

  19. Innovative GOCI algorithm to derive turbidity in highly turbid waters: a case study in the Zhejiang coastal area.

    Qiu, Zhongfeng; Zheng, Lufei; Zhou, Yan; Sun, Deyong; Wang, Shengqiang; Wu, Wei

    2015-09-21

    An innovative algorithm is developed and validated to estimate turbidity in the Zhejiang coastal area (highly turbid waters) using data from the Geostationary Ocean Color Imager (GOCI). First, satellite-ground synchronous data (n = 850) were collected from 2014 to 2015 using 11 buoys equipped with Yellow Spring Instrument (YSI) multi-parameter sondes capable of taking hourly turbidity measurements. The GOCI-derived Rayleigh-corrected reflectance (R(rc)) was used in place of the widely used remote sensing reflectance (R(rs)) to model turbidity. Various band characteristics, including single bands, band ratios, band subtractions, and selected band combinations, were analyzed to identify correlations with turbidity. The results indicated that band 6 had the closest relationship to turbidity; however, the model combining bands 3 and 6 simulated turbidity most accurately (R² = 0.821, p < …), indicating that estimating turbidity in highly turbid coastal waters is feasible. As an example, the developed model was applied to 8 hourly GOCI images on 30 December 2014. Three cross sections were selected to identify the spatiotemporal variation of turbidity in the study area. Turbidity generally decreased from near-shore to offshore and from morning to afternoon. Overall, the findings of this study provide a simple and practical method, based on GOCI data, to estimate turbidity in highly turbid coastal waters at high temporal resolution.

  20. New algorithms for parallel MRI

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic resonance imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a nonlinear inverse problem. However, in order to avoid cost-intensive derivatives, we use the Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
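
    A minimal sketch of a Landweber iteration with a soft-thresholding step to impose sparsity (this is the ISTA form, shown only to illustrate the idea; the authors' Landweber-Kaczmarz scheme cycles over subproblems, e.g. one per receiver coil):

        import numpy as np

        def landweber_sparse(A, y, steps=500, lam=1e-3):
            """Landweber iteration with a soft-threshold step (ISTA form).

            x <- S_{lam*tau}( x + tau * A^T (y - A x) ). Real k-space data
            are complex; this sketch keeps real arithmetic for brevity.
            """
            tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step below 1/||A||^2
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                x = x + tau * A.T @ (y - A @ x)               # Landweber step
                x = np.sign(x) * np.maximum(np.abs(x) - lam * tau, 0.0)
            return x

        # toy usage: recover a sparse vector from 30 random measurements
        A = np.random.default_rng(5).normal(size=(30, 80))
        x_true = np.zeros(80)
        x_true[[5, 17, 42]] = [1.0, -0.5, 0.8]
        x_hat = landweber_sparse(A, A @ x_true)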

  1. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    Rash, James

    2014-01-01

    NASA's space data-communications infrastructure (the Space Network and the Ground Network) provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure (the relay satellites and the ground stations) can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial

  2. Strong Convergence Iterative Algorithms for Equilibrium Problems and Fixed Point Problems in Banach Spaces

    Shenghua Wang

    2013-01-01

    We first introduce the concept of Bregman asymptotically quasinonexpansive mappings and prove that the fixed point set of this kind of mappings is closed and convex. Then we construct an iterative scheme to find a common element of the set of solutions of an equilibrium problem and the set of common fixed points of a countable family of Bregman asymptotically quasinonexpansive mappings in reflexive Banach spaces and prove strong convergence theorems. Our results extend the recent ones of some others.

  3. An analytic algorithm for the space-time fractional reaction-diffusion equation

    M. G. Brikaa

    2015-11-01

    In this paper, we solve the space-time fractional reaction-diffusion equation by the fractional homotopy analysis method. Solutions for different examples of the reaction term will be computed and investigated. The approximate solutions of the studied models will be put in the form of convergent series that can be easily computed and simulated. Comparison with the approximate solution of the classical case of the studied models, together with their approximation errors, will also be presented.

  4. A Robust Planning Algorithm for Groups of Entities in Discrete Spaces

    Igor Wojnicki

    2015-07-01

    Automated planning is a well-established field of artificial intelligence (AI), with applications in route finding, robotics and operational research, among others. The task of developing a plan is often solved by finding a path in a graph representing the search domain; a robust plan consists of numerous paths that can be chosen if the execution of the best (optimal) one fails. While robust planning for a single entity is rather simple, the development of a robust plan for multiple entities in a common environment can lead to combinatorial explosion. This paper proposes a novel hybrid approach, joining heuristic search and the wavefront algorithm, to provide a plan featuring robustness in areas where it is needed while maintaining a low level of computational complexity.
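
    The wavefront component is essentially a breadth-first expansion from the goal; the resulting distance field supports robustness, since any strictly descending neighbour is a ready-made fallback move. A minimal sketch on a 4-connected grid (the grid encoding is an assumption for illustration):

        from collections import deque

        def wavefront(grid, goal):
            """Breadth-first wavefront expansion from the goal cell.

            grid[r][c] == 1 marks an obstacle. The returned distance field
            lets every entity descend toward the goal; any strictly
            descending neighbour is an alternative (robust) move.
            """
            rows, cols = len(grid), len(grid[0])
            dist = [[None] * cols for _ in range(rows)]
            dist[goal[0]][goal[1]] = 0
            queue = deque([goal])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and dist[nr][nc] is None):
                        dist[nr][nc] = dist[r][c] + 1
                        queue.append((nr, nc))
            return dist

        field = wavefront([[0, 0, 0, 0],
                           [0, 1, 1, 0],
                           [0, 0, 0, 0]], goal=(0, 3))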

  5. A space-time hybrid hourly rainfall model for derived flood frequency analysis

    U. Haberlandt

    2008-12-01

    For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network with recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series.

    First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step a multi-site resampling procedure is applied on the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study synthetic precipitation is generated for some locations with short observation records in two mesoscale catchments of the Bode river basin located in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as in

  6. Observation planning algorithm of a Japanese space-borne sensor: Hyperspectral Imager SUIte (HISUI) onboard International Space Station (ISS) as platform

    Ogawa, Kenta; Konno, Yukiko; Yamamoto, Satoru; Matsunaga, Tsuneo; Tachikawa, Tetsushi; Komoda, Mako

    2017-09-01

    Hyperspectral Imager Suite (HISUI) is a Japanese future space-borne hyperspectral instrument being developed by the Ministry of Economy, Trade, and Industry (METI). HISUI will be launched in 2019 or later with the International Space Station (ISS) as its platform. HISUI has 185 spectral bands from 0.4 to 2.5 μm, with 20 m by 30 m spatial resolution and a swath of 20 km. Although the swath is thus limited, observations over continental-scale areas are requested within the HISUI mission lifetime of three years. We are therefore developing a scheduling algorithm to generate effective observation plans. The HISUI scheduling algorithm generates observation plans automatically based on the platform orbit, observation area maps (called DAR, "Data Acquisition Requests", in the HISUI project), their priorities, and the available resources and limitations of the HISUI system, such as instrument operation time per orbit and data transfer capability. Adequate DARs must be set before HISUI observations start, because years of observations are needed to cover continental-scale areas, which makes the DARs difficult to change after the mission has started. To address these issues, we have developed an observation simulator. The simulator's critical inputs are the DARs, the ISS orbit, the HISUI limitations in observation minutes per orbit, the data storage, and past cloud coverage data for the term of HISUI observations (3 years). The outputs of the simulator are coverage maps for each day; areas with cloud-free images are accumulated over the term of observation, up to three years. We have successfully tested the simulator with tentative DARs and found that it is possible to estimate the coverage of each request over the mission lifetime.

  7. The projection operator in a Hilbert space and its directional derivative. Consequences for the theory of projected dynamical systems

    George Isac

    2004-01-01

    In the first part of this paper we present a representation theorem for the directional derivative of the metric projection operator in an arbitrary Hilbert space. As a consequence of the representation theorem, we present in the second part the development of the theory of projected dynamical systems in infinite dimensional Hilbert space. We show that this development is possible if we use the viable solutions of differential inclusions. We also use pseudomonotone operators.

  8. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be
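
    For reference, the standard LMS update at the heart of such filters takes only a few lines; the filter order and step size below are illustrative, not the values tuned for GOCE gradients:

        import numpy as np

        def lms_filter(d, x, order=16, mu=0.01):
            """Least-mean-squares adaptive filter (standard LMS update).

            d: desired (noisy) signal, x: reference input; returns the
            filter output, the error signal, and the final weights.
            """
            w = np.zeros(order)
            y = np.zeros(len(d))
            e = np.zeros(len(d))
            for k in range(order, len(d)):
                window = x[k - order:k][::-1]   # most recent sample first
                y[k] = w @ window
                e[k] = d[k] - y[k]
                w = w + mu * e[k] * window      # LMS weight update
            return y, e, w

        # toy usage: a smooth signal buried in correlated noise; the error
        # signal e approximates the clean signal once the filter adapts
        rng = np.random.default_rng(6)
        x = rng.normal(size=4000)                        # noise reference
        d = np.sin(np.arange(4000) / 50.0) + np.convolve(
            x, [0.5, 0.3, 0.2], mode="same")             # signal + filtered noise
        y, e, w = lms_filter(d, x)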

  9. Synchronized Scheme of Continuous Space-Vector PWM with the Real-Time Control Algorithms

    Oleschuk, V.; Blaabjerg, Frede

    2004-01-01

    This paper describes in detail the basic peculiarities of a new method of feedforward synchronous pulsewidth modulation (PWM) of three-phase voltage source inverters for adjustable-speed ac drives. It is applied to a continuous scheme of voltage space-vector modulation. The method is based ... their position inside clock-intervals. In order to provide smooth shock-less pulse-ratio changing and quarter-wave symmetry of the voltage waveforms, special synchronising signals are formed on the boundaries of the 60 clock-intervals. The process of gradual transition from continuous to discontinuous

  10. Blast resistance of space-induced variants derived from rice cultivar Hanghui 7

    Zhang Jingxin; Yang Qiyun; Zhu Xiaoyuan; Wang Hui; Zeng Liexian; Liu Yongzhu; Guo Tao; Chen Zhiqiang

    2010-01-01

    To screen for lines resistant to rice blast, the blast resistance of SP3 and SP4 progenies derived from the rice variety Hanghui 7 was evaluated after satellite flight, and the genomic DNA polymorphism of the resistant variants selected from SP3 was compared with the wild type using microsatellite markers. The results indicated that the SP3 variant line H24, which was selected from the 250 space-induced lines (SP3) for its excellent agronomic and economic characters, showed resistance segregation (119R:108S) against blast isolate GD3286. It was demonstrated that the resistance of H24 might be controlled by two dominant and complementary resistance genes. The resistance of H24 was still segregating in SP4, but the resistance spectrum of H24 was 84.4% in SP5, much higher than that of the wild type (40.6%), and H24 especially showed resistance against some blast isolates of broad pathogenic spectrum or specialized pathogenicity. Furthermore, no DNA polymorphism was detected between H24 and its wild type by 229 SSR (simple sequence repeat) markers covering the rice genome evenly. (authors)

  11. Validation of Cloud Parameters Derived from Geostationary Satellites, AVHRR, MODIS, and VIIRS Using SatCORPS Algorithms

    Minnis, P.; Sun-Mack, S.; Bedka, K. M.; Yost, C. R.; Trepte, Q. Z.; Smith, W. L., Jr.; Painemal, D.; Chen, Y.; Palikonda, R.; Dong, X.; hide

    2016-01-01

    Validation is a key component of remote sensing that can take many different forms. The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) is applied to many different imager datasets, including those from the geostationary satellites Meteosat, Himawari-8, INSAT-3D, GOES, and MTSAT, as well as from the low-Earth-orbiting satellite imagers MODIS, AVHRR, and VIIRS. While each of these imagers has a similar set of channels with wavelengths near 0.65, 3.7, 11, and 12 micrometers, many differences among them can lead to discrepancies in the retrievals. These differences include spatial resolution, spectral response functions, viewing conditions, and calibrations, among others. Even when analyzed with nearly identical algorithms, it is necessary, because of those discrepancies, to validate the results from each imager separately in order to assess the uncertainties in the individual parameters. This paper presents comparisons of various SatCORPS-retrieved cloud parameters with independent measurements and retrievals from a variety of instruments. These include surface- and space-based lidar and radar data from CALIPSO and CloudSat, respectively, to assess cloud fraction, height, base, optical depth, and ice water path; satellite and surface microwave radiometers to evaluate cloud liquid water path; surface-based radiometers to evaluate optical depth and effective particle size; and airborne in situ data to evaluate ice water content, effective particle size, and other parameters. The results of these comparisons are contrasted, and the factors influencing the differences are discussed.

  12. Solution for the multigroup neutron space kinetics equations by the modified Picard algorithm

    Tavares, Matheus G.; Petersen, Claudio Z., E-mail: matheus.gulartetavares@gmail.com [Universidade Federal de Pelotas (UFPEL), Capao do Leao, RS (Brazil). Departamento de Matematica e Estatistica; Schramm, Marcelo, E-mail: schrammmarcelo@gmail.com [Universidade Federal de Pelotas (UFPEL), RS (Brazil). Centro de Engenharias; Zanette, Rodrigo, E-mail: rodrigozanette@hotmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Instituto de Matematica e Estatistica

    2017-07-01

    In this work, we used a modified Picard method to solve the Multigroup Neutron Space Kinetics Equations (MNSKE) in Cartesian geometry. The method consists in assuming an initial guess for the neutron flux and using it to calculate a fictitious source term in the MNSKE. A new source term is calculated by applying its solution, and so on, iteratively, until a stopping criterion is satisfied. For the solution of the fast and thermal neutron flux equations, the Laplace transform technique is used in the time variable, resulting in a first-order linear differential matrix equation, which is solved by classical methods from the literature. After each iteration, the scalar neutron flux and the delayed neutron precursors are reconstructed by polynomial interpolation. We obtain the fluxes and precursors through numerical inverse Laplace transform using the Stehfest method. We present numerical simulations and comparisons with results available in the literature. (author)
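
    The structure of the iteration can be sketched on a generic linear system u' = Au by splitting A = A0 + (A - A0) and treating the second term as a fictitious source frozen at the previous iterate. In this sketch each sweep is integrated numerically; the paper instead solves each sweep analytically and inverts the Laplace transform with the Stehfest method:

        import numpy as np
        from scipy.integrate import solve_ivp

        def picard_source_iteration(A, A0, u0, T, sweeps=20, npts=201):
            """Fictitious-source Picard iteration for u' = A u on [0, T].

            Each sweep solves u' = A0 u + S(t) with the fictitious source
            S(t) = (A - A0) u_prev(t) taken from the previous iterate.
            """
            t = np.linspace(0.0, T, npts)
            u_prev = np.tile(np.asarray(u0, float), (npts, 1))  # initial guess
            for _ in range(sweeps):
                def rhs(tt, u):
                    i = np.searchsorted(t, tt, side="right") - 1
                    return A0 @ u + (A - A0) @ u_prev[i]
                sol = solve_ivp(rhs, (0.0, T), u0, t_eval=t, max_step=T / npts)
                u_new = sol.y.T
                if np.max(np.abs(u_new - u_prev)) < 1e-10:      # stop criterion
                    return t, u_new
                u_prev = u_new
            return t, u_prev

        # toy two-group-like system: diagonal part solved, coupling iterated
        A = np.array([[-1.0, 0.5], [0.4, -0.8]])
        t, u = picard_source_iteration(A, np.diag(np.diag(A)), [1.0, 0.0], 5.0)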

  14. Which benefits and limits derive from ESA membership for European Countries owning ;medium-sized; space agencies?

    Petroni, Giorgio; Bigliardi, Barbara; Galati, Francesco; Petroni, Alberto

    2018-01-01

    This study investigates the benefits and limits deriving from membership with ESA of six medium-sized space agencies in terms of strengthening and development (or not) of space technologies, as well as their contribution to the growth of productive activities and to the increase of services for citizens. This research contributes to the more general issue of the usefulness of space activities, not only for scientific or military-political purposes but also for economic and social development. Results show that, on the one hand, the membership with ESA has allowed smaller Countries to access space programs, to develop advanced technologies and to support the growth of their firms in some significant markets, but, on the other hand, the membership has also limited the access to space to few companies, without encouraging the broad dissemination of technological knowledge.

  15. High-Speed Rail Train Timetabling Problem: A Time-Space Network Based Method with an Improved Branch-and-Price Algorithm

    Bisheng He

    2014-01-01

    A time-space network based optimization method is designed for the high-speed rail train timetabling problem to improve the service level of high-speed rail. A general time-space path cost is presented that considers both the train travel time and the high-speed rail operation requirements: (1) service frequency requirements; (2) stopping plan adjustments; and (3) priority of train types. The train timetabling problem based on time-space paths aims to minimize the total general time-space path cost of all trains. An improved branch-and-price algorithm is applied to solve this large-scale integer programming problem. Within the algorithm, rapid branching and node selection for the branch-and-price tree and a heuristic train time-space path generation for column generation are adopted to speed up the computation. The computational results of a set of experiments on China's high-speed rail system are presented, with discussions of the model validation, the effectiveness of the general time-space path cost, and the improved branch-and-price algorithm.
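
    The time-space path generation step can be illustrated with a small dynamic program over (station, minute) nodes connected by running and dwell arcs; the network and the costs below are toy placeholders for the paper's general time-space path cost:

        import math

        def best_time_space_path(run_time, wait_cost, horizon, t0=0):
            """Shortest path in a time-space network of (station, minute) nodes.

            Arcs: 'run to next station' (cost = running minutes) and 'dwell
            one minute' (cost = wait_cost). A routine of this shape could
            serve as the pricing step that generates candidate train paths.
            """
            n = len(run_time) + 1                        # number of stations
            cost = [[math.inf] * (horizon + 1) for _ in range(n)]
            pred = {}
            cost[0][t0] = 0.0
            for t in range(horizon):                     # arcs go forward in time
                for s in range(n):
                    c = cost[s][t]
                    if c == math.inf:
                        continue
                    if c + wait_cost < cost[s][t + 1]:   # dwell arc
                        cost[s][t + 1] = c + wait_cost
                        pred[(s, t + 1)] = (s, t)
                    if s < n - 1 and t + run_time[s] <= horizon:
                        nt, nc = t + run_time[s], c + run_time[s]
                        if nc < cost[s + 1][nt]:         # running arc
                            cost[s + 1][nt] = nc
                            pred[(s + 1, nt)] = (s, t)
            t_best = min(range(horizon + 1), key=lambda tt: cost[n - 1][tt])
            node, path = (n - 1, t_best), []
            while node in pred:                          # backtrack the path
                path.append(node)
                node = pred[node]
            return cost[n - 1][t_best], [node] + path[::-1]

        print(best_time_space_path([5, 7, 4], wait_cost=0.5, horizon=30))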

  16. Validation of case-finding algorithms derived from administrative data for identifying adults living with human immunodeficiency virus infection.

    Tony Antoniou

    OBJECTIVE: We sought to validate a case-finding algorithm for human immunodeficiency virus (HIV) infection using administrative health databases in Ontario, Canada. METHODS: We constructed 48 case-finding algorithms using combinations of physician billing claims, hospital and emergency room separations and prescription drug claims. We determined the test characteristics of each algorithm over various time frames for identifying HIV infection, using data abstracted from the charts of 2,040 randomly selected patients receiving care at two medical practices in Toronto, Ontario as the reference standard. RESULTS: With the exception of algorithms using only a single physician claim, the specificity of all algorithms exceeded 99%. An algorithm consisting of three physician claims over a three-year period had a sensitivity and specificity of 96.2% (95% CI 95.2%-97.9%) and 99.6% (95% CI 99.1%-99.8%), respectively. Application of the algorithm to the province of Ontario identified 12,179 HIV-infected patients in care for the period spanning April 1, 2007 to March 31, 2009. CONCLUSIONS: Case-finding algorithms generated from administrative data can accurately identify adults living with HIV. A relatively simple "3 claims in 3 years" definition can be used for assembling a population-based cohort and facilitating future research examining trends in health service use and outcomes among HIV-infected adults in Ontario.
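
    The validated "3 claims in 3 years" definition is straightforward to apply to a single patient's claim history; a minimal sketch (date handling only, no database layer, and a 365-day year is an assumption):

        from datetime import date, timedelta

        def meets_case_definition(claim_dates, n_claims=3, window_years=3):
            """Apply a '3 physician claims in 3 years' case-finding rule.

            claim_dates: the dates of one patient's HIV-coded physician
            billing claims (window uses 365-day years, ignoring leap days).
            """
            dates = sorted(set(claim_dates))
            window = timedelta(days=365 * window_years)
            for i in range(len(dates) - n_claims + 1):
                if dates[i + n_claims - 1] - dates[i] <= window:
                    return True
            return False

        print(meets_case_definition(
            [date(2007, 5, 1), date(2008, 9, 30), date(2009, 12, 15)]))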

  17. The (G′/G)-expansion method using modified Riemann–Liouville derivative for some space-time fractional differential equations

    Ahmet Bekir

    2014-09-01

    In this paper, the fractional partial differential equations are defined by the modified Riemann–Liouville fractional derivative. With the help of the fractional derivative and a traveling wave transformation, these equations can be converted into nonlinear, non-fractional ordinary differential equations. The (G′/G)-expansion method is then applied to obtain exact solutions of the space-time fractional Burgers equation, the space-time fractional KdV-Burgers equation and the space-time fractional coupled Burgers' equations. As a result, many exact solutions are obtained, including hyperbolic function solutions, trigonometric function solutions and rational solutions. These results reveal that the proposed method is effective and simple for solving fractional partial differential equations.
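
    The reduction step can be summarized as follows (a standard statement of the method; the particular fractional wave variable shown is an assumption consistent with the modified Riemann–Liouville calculus):

        % Traveling-wave reduction of a space-time fractional PDE (0 < alpha <= 1)
        u(x,t) = U(\xi), \qquad
        \xi = \frac{k\,x^{\alpha}}{\Gamma(1+\alpha)} - \frac{c\,t^{\alpha}}{\Gamma(1+\alpha)},
        % which turns the fractional PDE into an ODE for U(xi); solutions are sought as
        U(\xi) = \sum_{i=0}^{m} a_i \left(\frac{G'}{G}\right)^{i}, \qquad
        G'' + \lambda G' + \mu G = 0,
        % with hyperbolic solutions for lambda^2 - 4mu > 0, trigonometric for < 0,
        % and rational for = 0, matching the three solution families in the abstract.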

  18. Spatial Rotation of the Fractional Derivative in Two-Dimensional Space

    Ehab Malkawi

    2015-01-01

    The transformations of the partial fractional derivatives under spatial rotation in R2 are derived for the Riemann-Liouville and Caputo definitions. These transformation properties link the observation of physical quantities, expressed through fractional derivatives, with respect to different coordinate systems (observers). It is hoped that such understanding could shed light on the physical interpretation of fractional derivatives. It is also necessary for constructing interaction terms that are invariant with respect to equivalent observers.

  19. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.
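
    The covariance-based rotation is straightforward to illustrate; the sketch below, with synthetic data and NumPy (both assumptions here), shows how the population's interparameter covariance defines a decorrelated frame for the genetic operators:

        import numpy as np

        rng = np.random.default_rng(0)
        population = rng.normal(size=(50, 4))          # 50 trial parameter vectors
        population[:, 1] += 0.9 * population[:, 0]     # tight interparameter dependence

        cov = np.cov(population, rowvar=False)         # interparameter covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)         # principal axes of the population

        rotated = (population - population.mean(axis=0)) @ eigvecs
        # In the rotated frame the parameters are decorrelated, so coordinate-wise
        # crossover and mutation no longer fight the parameter coupling.
        print(np.round(np.cov(rotated, rowvar=False), 2))   # approximately diagonal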

  20. A Multi-Band Analytical Algorithm for Deriving Absorption and Backscattering Coefficients from Remote-Sensing Reflectance of Optically Deep Waters

    Lee, Zhong-Ping; Carder, Kendall L.

    2001-01-01

    A multi-band analytical (MBA) algorithm is developed to retrieve absorption and backscattering coefficients for optically deep waters, which can be applied to data from past and current satellite sensors, as well as data from hyperspectral sensors. This MBA algorithm applies a remote-sensing reflectance model derived from the Radiative Transfer Equation, and values of absorption and backscattering coefficients are analytically calculated from values of remote-sensing reflectance. There are only limited empirical relationships involved in the algorithm, which implies that this MBA algorithm could be applied to a wide dynamic range of waters. Applying the algorithm to a simulated non-"Case 1" data set, which has no relation to the development of the algorithm, the percentage error for the total absorption coefficient at 440 nm, a(440), is approximately 12% for a range of 0.012-2.1 per meter (approximately 6% for a(440) less than approximately 0.3 per meter), while a traditional band-ratio approach returns a percentage error of approximately 30%. Applying it to a field data set ranging from 0.025 to 2.0 per meter, the result for a(440) is very close to that using a full spectrum optimization technique (9.6% difference). Compared to the optimization approach, the MBA algorithm cuts the computation time dramatically with only a small sacrifice in accuracy, making it suitable for processing large data sets such as satellite images. Significant improvements over empirical algorithms have also been achieved in retrieving the optical properties of optically deep waters.
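
    The analytical character of the retrieval can be illustrated with the widely used quadratic reflectance model; the single-band inversion and the g0, g1 values below are a simplified stand-in for the multi-band algorithm, not its actual implementation:

        import math

        g0, g1 = 0.0895, 0.1247     # example model constants; published values vary

        def invert_rrs(rrs, bb):
            """Absorption a from below-surface reflectance rrs and backscattering bb,
            using rrs = g0*u + g1*u**2 with u = bb / (a + bb)."""
            u = (-g0 + math.sqrt(g0**2 + 4.0 * g1 * rrs)) / (2.0 * g1)  # quadratic root
            return bb * (1.0 - u) / u                                   # from u = bb/(a+bb)

        print(round(invert_rrs(rrs=0.004, bb=0.005), 4))   # total absorption (per meter)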

  1. Mapping the Conformation Space of Wildtype and Mutant H-Ras with a Memetic, Cellular, and Multiscale Evolutionary Algorithm.

    Rudy Clausen

    2015-09-01

    An important goal in molecular biology is to understand functional changes upon single-point mutations in proteins. Doing so through a detailed characterization of structure spaces and underlying energy landscapes is desirable but continues to challenge methods based on Molecular Dynamics. In this paper we propose a novel algorithm, SIfTER, which is based instead on stochastic optimization to circumvent the computational challenge of exploring the breadth of a protein's structure space. SIfTER is a data-driven evolutionary algorithm, leveraging experimentally-available structures of wildtype and variant sequences of a protein to define a reduced search space from where to efficiently draw samples corresponding to novel structures not directly observed in the wet laboratory. The main advantage of SIfTER is its ability to rapidly generate conformational ensembles, thus allowing mapping and juxtaposing landscapes of variant sequences and relating observed differences to functional changes. We apply SIfTER to variant sequences of the H-Ras catalytic domain, due to the prominent role of the Ras protein in signaling pathways that control cell proliferation, its well-studied conformational switching, and abundance of documented mutations in several human tumors. Many Ras mutations are oncogenic, but detailed energy landscapes have not been reported until now. Analysis of SIfTER-computed energy landscapes for the wildtype and two oncogenic variants, G12V and Q61L, suggests that these mutations cause constitutive activation through two different mechanisms. G12V directly affects binding specificity while leaving the energy landscape largely unchanged, whereas Q61L has pronounced, starker effects on the landscape. An implementation of SIfTER is made available at http://www.cs.gmu.edu/~ashehu/?q=OurTools. We believe SIfTER is useful to the community to answer the question of how sequence mutations affect the function of a protein, when there is an

  2. Verification of the coupled space-angle adaptivity algorithm for the finite element-spherical harmonics method via the method of manufactured solutions

    Park, H.; De Oliveira, C. R. E.

    2007-01-01

    This paper describes the verification of the recently developed space-angle self-adaptive algorithm for the finite element-spherical harmonics method via the Method of Manufactured Solutions. This method provides a simple, yet robust way of verifying the theoretical properties of the adaptive algorithm and interfaces very well with the underlying second-order, even-parity transport formulation. Simple analytic solutions in both spatial and angular variables are manufactured to assess the theoretical performance of the a posteriori error estimates. The numerical results confirm the reliability of the developed space-angle error indicators.
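
    The essence of the Method of Manufactured Solutions is easy to show on a toy problem; the 1-D diffusion operator and finite-difference solver below are illustrative stand-ins for the even-parity transport formulation verified in the paper:

        import numpy as np

        def manufactured_u(x):                 # chosen analytic solution
            return np.sin(np.pi * x)

        def source_for(x):                     # apply L to u: here L u = -u''
            return np.pi**2 * np.sin(np.pi * x)

        def solve(n):                          # solve -u'' = q, u(0) = u(1) = 0
            x = np.linspace(0.0, 1.0, n + 1)
            h = x[1] - x[0]
            A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
                 - np.diag(np.ones(n - 2), -1)) / h**2
            u = np.zeros(n + 1)
            u[1:-1] = np.linalg.solve(A, source_for(x[1:-1]))
            return x, u

        for n in (8, 16, 32):                  # error should drop ~4x per refinement
            x, u = solve(n)
            print(n, np.max(np.abs(u - manufactured_u(x))))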

  3. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
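
    The block-decomposition idea can be sketched on a CPU; NumPy broadcasting below stands in for the CUDA kernel, and the random positions are illustrative rather than SGP4/SDP4-propagated states:

        import numpy as np

        rng = np.random.default_rng(1)
        positions = rng.uniform(-7000.0, 7000.0, size=(1382, 3))   # km, single epoch
        threshold_km = 10.0
        batch = 256                                                # block decomposition

        close_pairs = []
        for start in range(0, len(positions), batch):
            block = positions[start:start + batch]                 # one batch of debris
            d = np.linalg.norm(block[:, None, :] - positions[None, :, :], axis=-1)
            i, j = np.nonzero(d < threshold_km)
            close_pairs += [(start + a, b) for a, b in zip(i, j) if start + a < b]

        print(len(close_pairs), "candidate conjunctions at this epoch")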

  4. Optimum Design of Braced Steel Space Frames including Soil-Structure Interaction via Teaching-Learning-Based Optimization and Harmony Search Algorithms

    Ayse T. Daloglu; Musa Artar; Korhan Ozgan; Ali İ. Karakas

    2018-01-01

    Optimum design of braced steel space frames including soil-structure interaction is studied by using harmony search (HS) and teaching-learning-based optimization (TLBO) algorithms. A three-parameter elastic foundation model is used to incorporate the soil-structure interaction effect. A 10-storey braced steel space frame example taken from literature is investigated according to four different bracing types for the cases with/without soil-structure interaction. X, V, Z, and eccentric V-shaped...

  5. Prognostic value of MRI-derived masticator space involvement in IMRT-treated nasopharyngeal carcinoma patients

    Xiao, Youping; Pan, Jianji; Chen, Yunbin; Lin, Shaojun; Chen, Ying; Zong, Jingfeng; Fang, Yanhong; Guo, Qiaojuan; Chen, Bijuan; Tang, Linbo

    2015-01-01

    This retrospective study reassessed nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiation therapy (IMRT) to determine how magnetic resonance imaging (MRI)-derived masticator space involvement (MSI) affects patients' prognosis. One thousand one hundred ninety-seven NPC patients who had a complete set of MRI and medical records were enrolled. Based on their MRI findings, the T-categories of tumors were identified according to the seventh edition of the American Joint Committee on Cancer staging system, which considers MSI a prognostic indicator for NPC. Rates of overall survival (OS), local relapse-free survival (LRFS), regional relapse-free survival (RRFS) and distant metastasis-free survival (DMFS) were analyzed by the Kaplan-Meier method, and the Log-Rank test compared their differences. Cox regression analysis was employed to evaluate various prognostic factors systematically. Statistical analyses were conducted with SPSS 18.0 software; a P value < 0.05 was considered statistically significant. The medial pterygoid muscle (MPM) was involved in 283 (23.64 %) cases, of which the lateral pterygoid muscle (LPM) was concurrently affected in 181 (15.12 %) and the infratemporal fossa (ITF) in 19 (1.59 %). Generally, MSI correlated with an OS, LRFS, and DMFS consistent with a T4-stage diagnosis (P > 0.05). Although different degrees of MSI presented a similar OS and DMFS (P > 0.1), tumors involving the LPM had a relatively poorer LRFS than those affecting the MPM only (P = 0.027), even for the subgroup of patients composed of T3 and T4 classifications (P = 0.035). A tumor involving the MPM brought an LRFS consistent with a T2 or T3-stage disease (P > 0.1). If the tumor affected the LPM or ITF concurrently, the survival outcomes were more consistent with a T4-stage disease (P > 0.1). Nevertheless, compared to tumors infiltrating the MPM, those invading the LPM or ITF more frequently spread into other concurrent sites that earned higher T-staging categories. Moreover

  6. Isoprene emission potentials from European oak forests derived from canopy flux measurements: an assessment of uncertainties and inter-algorithm variability

    B. Langford

    2017-12-01

    Biogenic emission algorithms predict that oak forests account for ∼ 70 % of the total European isoprene budget. Yet the isoprene emission potentials (IEPs) that underpin these model estimates are calculated from a very limited number of leaf-level observations and hence are highly uncertain. Increasingly, micrometeorological techniques such as eddy covariance are used to measure whole-canopy fluxes directly, from which isoprene emission potentials can be calculated. Here, we review five observational datasets of isoprene fluxes from a range of oak forests in the UK, Italy and France. We outline procedures to correct the measured net fluxes for losses from deposition and chemical flux divergence, which were found to be on the order of 5-8 and 4-5 %, respectively. The corrected observational data were used to derive isoprene emission potentials at each site in a two-step process. Firstly, six commonly used emission algorithms were inverted to back out time series of isoprene emission potential, and then an average isoprene emission potential was calculated for each site with an associated uncertainty. We used these data to assess how the derived emission potentials change depending upon the specific emission algorithm used and, importantly, on the particular approach adopted to derive an average site-specific emission potential. Our results show that isoprene emission potentials can vary by up to a factor of 4 depending on the specific algorithm used and whether or not it is used in a big-leaf or canopy environment (CE) model format. When using the same algorithm, the calculated average isoprene emission potential was found to vary by as much as 34 % depending on how the average was derived. Using a consistent approach with version 2.1 of the Model for Emissions of Gases and Aerosols from Nature (MEGAN), we derive new ecosystem-scale isoprene emission potentials for the five measurement sites: Alice Holt, UK (10 500 ± 2500 µg m-2 h-1)
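
    The inversion step amounts to dividing each measured canopy flux by the algorithm's environmental activity factor; the sketch below uses Guenther-type light and temperature responses with the commonly quoted G93 constants, which are assumptions here rather than the site-specific canopy-environment models reviewed in the paper:

        import numpy as np

        R = 8.314                                   # J mol-1 K-1

        def c_light(par, alpha=0.0027, c_l1=1.066):
            return alpha * c_l1 * par / np.sqrt(1.0 + alpha**2 * par**2)

        def c_temp(t_k, t_s=303.0, t_m=314.0, c_t1=95000.0, c_t2=230000.0):
            num = np.exp(c_t1 * (t_k - t_s) / (R * t_s * t_k))
            den = 1.0 + np.exp(c_t2 * (t_k - t_m) / (R * t_s * t_k))
            return num / den

        flux = np.array([4200.0, 6100.0, 2500.0])   # measured fluxes, ug m-2 h-1
        par = np.array([1200.0, 1600.0, 700.0])     # PAR, umol m-2 s-1
        t_k = np.array([299.0, 302.0, 294.0])       # canopy temperature, K

        iep = flux / (c_light(par) * c_temp(t_k))   # time series of emission potentials
        print(iep.mean(), iep.std())                # site-average IEP and its spread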

  7. Isoprene emission potentials from European oak forests derived from canopy flux measurements: an assessment of uncertainties and inter-algorithm variability

    Langford, Ben; Cash, James; Acton, W. Joe F.; Valach, Amy C.; Hewitt, C. Nicholas; Fares, Silvano; Goded, Ignacio; Gruening, Carsten; House, Emily; Kalogridis, Athina-Cerise; Gros, Valérie; Schafers, Richard; Thomas, Rick; Broadmeadow, Mark; Nemitz, Eiko

    2017-12-01

    Biogenic emission algorithms predict that oak forests account for ˜ 70 % of the total European isoprene budget. Yet the isoprene emission potentials (IEPs) that underpin these model estimates are calculated from a very limited number of leaf-level observations and hence are highly uncertain. Increasingly, micrometeorological techniques such as eddy covariance are used to measure whole-canopy fluxes directly, from which isoprene emission potentials can be calculated. Here, we review five observational datasets of isoprene fluxes from a range of oak forests in the UK, Italy and France. We outline procedures to correct the measured net fluxes for losses from deposition and chemical flux divergence, which were found to be on the order of 5-8 and 4-5 %, respectively. The corrected observational data were used to derive isoprene emission potentials at each site in a two-step process. Firstly, six commonly used emission algorithms were inverted to back out time series of isoprene emission potential, and then an average isoprene emission potential was calculated for each site with an associated uncertainty. We used these data to assess how the derived emission potentials change depending upon the specific emission algorithm used and, importantly, on the particular approach adopted to derive an average site-specific emission potential. Our results show that isoprene emission potentials can vary by up to a factor of 4 depending on the specific algorithm used and whether or not it is used in a big-leaf or canopy environment (CE) model format. When using the same algorithm, the calculated average isoprene emission potential was found to vary by as much as 34 % depending on how the average was derived. Using a consistent approach with version 2.1 of the Model for Emissions of Gases and Aerosols from Nature (MEGAN), we derive new ecosystem-scale isoprene emission potentials for the five measurement sites: Alice Holt, UK (10 500 ± 2500 µg m-2 h-1); Bosco Fontana, Italy (1610

  8. Schwarzian derivative, geometric conditions and Qκ spaces

    周继振

    2012-01-01

    For a univalent function Ψ : D → Ω, we use the Schwarzian derivative to obtain a necessary and sufficient condition for log Ψ′ to belong to the space QK. We also apply a geometric condition to characterize the space QK.

  9. Siberia snow depth climatology derived from SSM/I data using a combined dynamic and static algorithm

    Grippa, M.; Mognard, N.; Le, Toan T.; Josberger, E.G.

    2004-01-01

    One of the major challenges in determining snow depth (SD) from passive microwave measurements is to take into account the spatiotemporal variations of the snow grain size. Static algorithms based on a constant snow grain size cannot provide accurate estimates of snowpack thickness, particularly over large regions where the snowpack is subject to large spatial temperature variations. A recent dynamic algorithm that accounts for the dependence of microwave scattering on the snow grain size has been developed to estimate snow depth from the Special Sensor Microwave/Imager (SSM/I) over the Northern Great Plains (NGP) in the US. In this paper, we develop a combined dynamic and static algorithm to estimate snow depth from 13 years of SSM/I observations over Central Siberia. This region is characterised by extremely cold surface air temperatures and by the presence of permafrost that significantly affects the ground temperature. The dynamic algorithm is implemented to take these effects into account, and it yields accurate snow depths early in the winter, when thin snowpacks combine with cold air temperatures to generate rapid crystal growth. However, it is not applicable later in the winter, when grain size growth slows. Combining the dynamic algorithm with a static algorithm that uses a temporally constant but spatially varying coefficient, we obtain reasonable snow depth estimates throughout the entire snow season. Validation is carried out by comparing the satellite snow depth monthly averages to monthly climatological data. We show that the location of the snow depth maxima and minima is improved when applying the combined algorithm, since its dynamic portion explicitly incorporates the thermal gradient through the snowpack. The results obtained are presented and evaluated for five different vegetation zones of Central Siberia. Comparison with in situ measurements is also shown and discussed.
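
    The contrast between the two retrievals can be sketched as follows; the Chang-type static coefficient and the schematic grain-growth factor in the dynamic version are illustrative assumptions, not the paper's calibrated values:

        import numpy as np

        def sd_static(t19h, t37h, coeff_cm_per_k=1.59):
            """Static retrieval: depth (cm) from the 19-37 GHz brightness gradient."""
            return coeff_cm_per_k * (t19h - t37h)

        def sd_dynamic(t19h, t37h, days_since_snowfall, c0=0.8, growth=0.05):
            """Dynamic retrieval: coefficient grows with early-winter crystal growth."""
            coeff = c0 * (1.0 + growth * np.minimum(days_since_snowfall, 60))
            return coeff * (t19h - t37h)

        print(sd_static(245.0, 225.0))          # cm, fixed grain size
        print(sd_dynamic(245.0, 225.0, 10))     # cm, rapid early-winter grain growth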

  10. Space microgravity drives transdifferentiation of human bone marrow-derived mesenchymal stem cells from osteogenesis to adipogenesis.

    Zhang, Cui; Li, Liang; Jiang, Yuanda; Wang, Cuicui; Geng, Baoming; Wang, Yanqiu; Chen, Jianling; Liu, Fei; Qiu, Peng; Zhai, Guangjie; Chen, Ping; Quan, Renfu; Wang, Jinfu

    2018-03-13

    Bone formation is linked with osteogenic differentiation of mesenchymal stem cells (MSCs) in the bone marrow. Microgravity in spaceflight is known to reduce bone formation. In this study, we used a real microgravity environment of the SJ-10 Recoverable Scientific Satellite to examine the effects of space microgravity on the osteogenic differentiation of human bone marrow-derived mesenchymal stem cells (hMSCs). hMSCs were induced toward osteogenic differentiation for 2 and 7 d in a cell culture device mounted on the SJ-10 Satellite. The satellite returned to Earth after going through space experiments in orbit for 12 d, and cell samples were harvested and analyzed for differentiation potentials. The results showed that space microgravity inhibited osteogenic differentiation and resulted in adipogenic differentiation, even under osteogenic induction conditions. Under space microgravity, the expression of 10 genes specific for osteogenesis decreased, including collagen family members, alkaline phosphatase (ALP), and runt-related transcription factor 2 (RUNX2), whereas the expression of 4 genes specific for adipogenesis increased, including adipsin (CFD), leptin (LEP), CCAAT/enhancer binding protein β (CEBPB), and peroxisome proliferator-activated receptor-γ (PPARG). In the analysis of signaling pathways specific for osteogenesis, we found that the expression and activity of RUNX2 was inhibited, expression of bone morphogenetic protein-2 (BMP2) and activity of SMAD1/5/9 were decreased, and activity of focal adhesion kinase (FAK) and ERK-1/2 declined significantly under space microgravity. These data indicate that space microgravity plays a dual role by decreasing RUNX2 expression and activity through the BMP2/SMAD and integrin/FAK/ERK pathways. In addition, we found that space microgravity increased p38 MAPK and protein kinase B (AKT) activities, which are important for the promotion of adipogenic differentiation of hMSCs. Space microgravity significantly

  11. Citizen-Centric Urban Planning through Extracting Emotion Information from Twitter in an Interdisciplinary Space-Time-Linguistics Algorithm

    Bernd Resch

    2016-07-01

    Traditional urban planning processes typically happen in offices and behind desks. Modern types of civic participation can enhance those processes by acquiring citizens' ideas and feedback in participatory sensing approaches like "People as Sensors". As such, citizen-centric planning can be achieved by analysing Volunteered Geographic Information (VGI) data such as Twitter tweets and posts from other social media channels. These user-generated data comprise several information dimensions, such as spatial and temporal information, and textual content. However, in previous research, these dimensions were generally examined separately in single-disciplinary approaches, which does not allow for holistic conclusions in urban planning. This paper introduces TwEmLab, an interdisciplinary approach towards extracting citizens' emotions in different locations within a city. More concretely, we analyse tweets in three dimensions (space, time, and linguistics), based on similarities between each pair of tweets as defined by a specific set of functional relationships in each dimension. We use a graph-based semi-supervised learning algorithm to classify the data into discrete emotions (happiness, sadness, fear, anger/disgust, none). Our proposed solution allows tweets to be classified into emotion classes in a multi-parametric approach. Additionally, we created a manually annotated gold standard that can be used to evaluate TwEmLab's performance. Our experimental results show that we are able to identify tweets carrying emotions and that our approach bears extensive potential to reveal new insights into citizens' perceptions of the city.
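
    The propagation step itself is standard; the sketch below combines the three similarity dimensions into one graph weight and spreads the few labelled tweets across it (the toy weights, decay constants and two-class setup are all illustrative assumptions):

        import numpy as np

        def combined_similarity(dist_km, dt_h, text_sim):
            """Product of spatial, temporal and linguistic affinities (assumed decays)."""
            return np.exp(-dist_km / 5.0) * np.exp(-dt_h / 12.0) * text_sim

        # toy graph over 4 tweets (precomputed combined similarities)
        W = np.array([[0.0, 0.8, 0.1, 0.0],
                      [0.8, 0.0, 0.2, 0.1],
                      [0.1, 0.2, 0.0, 0.9],
                      [0.0, 0.1, 0.9, 0.0]])

        labels = np.array([[1.0, 0.0],   # tweet 0 labelled "happiness"
                           [0.0, 0.0],   # unlabelled
                           [0.0, 1.0],   # tweet 2 labelled "sadness"
                           [0.0, 0.0]])  # unlabelled

        F = labels.copy()
        D_inv = np.diag(1.0 / W.sum(axis=1))
        for _ in range(50):              # iterative label spreading along strong edges
            F = 0.9 * D_inv @ W @ F + 0.1 * labels
        print(F.argmax(axis=1))          # predicted emotion class per tweet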

  12. On isochronous derivatives of the first and second order in space dynamics tasks

    Bakshiyan, B. T.; Sukhanov, A. A.

    1979-01-01

    The first and second isochronous derivatives of the state vector of a dynamic system are calculated with respect to its initial value. Use is made of the method of finding a fundamental solution of the conjugate variational equations. This solution and the corresponding universal relationship for isochronous derivatives are found for the two-body problem in a form which is simple and suitable for computer programming. The form of these relationships was obtained for motion which differs from parabolic motion. Formulas are also given for isochronous derivatives with respect to the gravitational parameter in the two-body problem.

  13. Deriving causes of child mortality by re-analyzing national verbal autopsy data applying a standardized computer algorithm in Uganda, Rwanda and Ghana

    Li Liu

    2015-06-01

    Background: To accelerate progress toward Millennium Development Goal 4, reliable information on causes of child mortality is critical. With more national verbal autopsy (VA) studies becoming available, how to improve the consistency of national VA-derived child causes of death should be considered for the purpose of global comparison. We aimed to adapt a standardized computer algorithm to re-analyze national child VA studies conducted recently in Uganda, Rwanda and Ghana, and compare our results with those derived from physician review to explore issues surrounding the application of the standardized algorithm in place of physician review. Methods and Findings: We adapted the standardized computer algorithm considering the disease profile in Uganda, Rwanda and Ghana. We then derived cause-specific mortality fractions applying the adapted algorithm and compared the results with those ascertained by physician review by examining the individual- and population-level agreement. Our results showed that the leading causes of child mortality in Uganda, Rwanda and Ghana were pneumonia (16.5-21.1%) and malaria (16.8-25.6%) among children below five years, and intrapartum-related complications (6.4-10.7%) and preterm birth complications (4.5-6.3%) among neonates. The individual-level agreement was poor to substantial across causes (kappa statistics: -0.03 to 0.83), with moderate to substantial agreement observed for injury, congenital malformation, preterm birth complications, malaria and measles. At the population level, despite fairly different cause-specific mortality fractions, the ranking of the leading causes was largely similar. Conclusions: The standardized computer algorithm produced an internally consistent distribution of causes of child mortality. The results were also qualitatively comparable to those based on physician review from the perspective of public health policy. The standardized computer algorithm has the advantage of

  14. The Non-Symmetric s-Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization-Reducing Variants of BiCG and QMR

    Feuerriegel Stefan

    2015-12-01

    The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.

  15. Derivation and Validation of a Biomarker-Based Clinical Algorithm to Rule Out Sepsis From Noninfectious Systemic Inflammatory Response Syndrome at Emergency Department Admission: A Multicenter Prospective Study.

    Mearelli, Filippo; Fiotti, Nicola; Giansante, Carlo; Casarsa, Chiara; Orso, Daniele; De Helmersen, Marco; Altamura, Nicola; Ruscio, Maurizio; Castello, Luigi Mario; Colonetti, Efrem; Marino, Rossella; Barbati, Giulia; Bregnocchi, Andrea; Ronco, Claudio; Lupia, Enrico; Montrucchio, Giuseppe; Muiesan, Maria Lorenza; Di Somma, Salvatore; Avanzi, Gian Carlo; Biolo, Gianni

    2018-05-07

    To derive and validate a predictive algorithm integrating a nomogram-based prediction of the pretest probability of infection with a panel of serum biomarkers, which could robustly differentiate sepsis/septic shock from noninfectious systemic inflammatory response syndrome. Design: multicenter prospective study. Setting: emergency department admission in five university hospitals. Patients: nine hundred forty-seven adults in the inception cohort and 185 adults in the validation cohort. Interventions: none. A nomogram, including age, Sequential Organ Failure Assessment score, recent antimicrobial therapy, hyperthermia, leukocytosis, and high C-reactive protein values, was built using data from 716 infected patients and 120 patients with noninfectious systemic inflammatory response syndrome to predict the pretest probability of infection. Then, the best combination of procalcitonin, soluble phospholipase A2 group IIA, presepsin, soluble interleukin-2 receptor α, and soluble triggering receptor expressed on myeloid cells-1 was applied to categorize patients as "likely" or "unlikely" to be infected. The predictive algorithm required only procalcitonin, backed up with soluble phospholipase A2 group IIA determined in 29% of the patients, to rule out sepsis/septic shock with a negative predictive value of 93%. In a validation cohort of 158 patients, the predictive algorithm reached a 100% negative predictive value, requiring biomarker measurements in 18% of the population. We have developed and validated a high-performing, reproducible, and parsimonious algorithm to assist emergency department physicians in distinguishing sepsis/septic shock from noninfectious systemic inflammatory response syndrome.

  16. A simple derivation and analysis of a helical cone beam tomographic algorithm for long object imaging via a novel definition of region of interest

    Hu Jicun; Tam, Kwok; Johnson, Roger H

    2004-01-01

    We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom.

  17. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    TRIFINA, L.

    2011-02-01

    This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-cancelling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.
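
    The scaling itself is a one-line operation inside the iteration loop; the sketch below shows only that step, with the surrounding max-log-APP decoder omitted:

        import numpy as np

        SCALE = 0.7     # 0.7-0.75 reported to yield ~0.5 dB gain over no scaling

        def exchange(extrinsic_llr):
            """Scale extrinsic LLRs before handing them to the companion SISO decoder
            (and likewise before the interference-cancelling block's input)."""
            return SCALE * np.asarray(extrinsic_llr)

        print(exchange([2.4, -1.1, 0.3]))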

  18. An Unsupervised Algorithm for Change Detection in Hyperspectral Remote Sensing Data Using Synthetically Fused Images and Derivative Spectral Profiles

    Youkyung Han

    2017-01-01

    Multitemporal hyperspectral remote sensing data have the potential to detect altered areas on the earth's surface. However, dissimilar radiometric and geometric properties between the multitemporal data, due to the acquisition time or position of the sensors, must be resolved before hyperspectral imagery can be used for detecting changes in natural and human-impacted areas. In addition, data noise in the hyperspectral imagery spectrum decreases the change-detection accuracy when general change-detection algorithms are applied to hyperspectral images. To address these problems, we present an unsupervised change-detection algorithm based on statistical analyses of spectral profiles; the profiles are generated from a synthetic image fusion method for multitemporal hyperspectral images. This method aims to minimize the noise between the spectra corresponding to identical locations, increasing the change-detection rate and decreasing the false-alarm rate without reducing the dimensionality of the original hyperspectral data. Using a quantitative comparison on an actual dataset acquired by airborne hyperspectral sensors, we demonstrate that the proposed method provides superior change-detection results relative to state-of-the-art unsupervised change-detection algorithms.

  19. Monitoring carbon dioxide from space: Retrieval algorithm and flux inversion based on GOSAT data and using CarbonTracker-China

    Yang, Dongxu; Zhang, Huifang; Liu, Yi; Chen, Baozhang; Cai, Zhaonan; Lü, Daren

    2017-08-01

    Monitoring atmospheric carbon dioxide (CO2) from space-borne state-of-the-art hyperspectral instruments can provide a high precision global dataset to improve carbon flux estimation and reduce the uncertainty of climate projection. Here, we introduce a carbon flux inversion system for estimating carbon flux with satellite measurements under the support of "The Strategic Priority Research Program of the Chinese Academy of Sciences—Climate Change: Carbon Budget and Relevant Issues". The carbon flux inversion system is composed of two separate parts: the Institute of Atmospheric Physics Carbon Dioxide Retrieval Algorithm for Satellite Remote Sensing (IAPCAS), and CarbonTracker-China (CT-China), developed at the Chinese Academy of Sciences. The Greenhouse gases Observing SATellite (GOSAT) measurements are used in the carbon flux inversion experiment. To improve the quality of the IAPCAS-GOSAT retrieval, we have developed a post-screening and bias correction method, resulting in 25%-30% of the data remaining after quality control. Based on these data, the seasonal variation of XCO2 (column-averaged CO2 dry-air mole fraction) is studied, and a strong relation with vegetation cover and population is identified. Then, the IAPCAS-GOSAT XCO2 product is used in carbon flux estimation by CT-China. The net ecosystem CO2 exchange is -0.34 Pg C yr-1 (±0.08 Pg C yr-1), with a large error reduction of 84%, which is a significant improvement on the error reduction when compared with in situ-only inversion.

  20. A new technique for observationally derived boundary conditions for space weather

    Pagano, Paolo; Mackay, Duncan Hendry; Yeates, Anthony Robinson

    2018-04-01

    Context: In recent years, space weather research has focused on developing modelling techniques to predict the arrival time and properties of coronal mass ejections (CMEs) at the Earth. The aim of this paper is to propose a new modelling technique suitable for the next generation of space weather predictive tools that is both efficient and accurate. The aim of the new approach is to provide interplanetary space weather forecasting models with accurate time dependent boundary conditions of erupting magnetic flux ropes in the upper solar corona. Methods: To produce boundary conditions, we couple two different modelling techniques, MHD simulations and a quasi-static non-potential evolution model. Both are applied on a spatial domain that covers the entire solar surface, although they extend over a different radial distance. The non-potential model uses a time series of observed synoptic magnetograms to drive the non-potential quasi-static evolution of the coronal magnetic field. This allows us to follow the formation and loss of equilibrium of magnetic flux ropes. Following this a MHD simulation captures the dynamic evolution of the erupting flux rope, when it is ejected into interplanetary space. Results: The present paper focuses on the MHD simulations that follow the ejection of magnetic flux ropes to 4 R⊙. We first propose a technique for specifying the pre-eruptive plasma properties in the corona. Next, time dependent MHD simulations describe the ejection of two magnetic flux ropes, that produce time dependent boundary conditions for the magnetic field and plasma at 4 R⊙ that in future may be applied to interplanetary space weather prediction models. Conclusions: In the present paper, we show that the dual use of quasi-static non-potential magnetic field simulations and full time dependent MHD simulations can produce realistic inhomogeneous boundary conditions for space weather forecasting tools. Before a fully operational model can be produced there are a

  1. NASA space communications R and D (Research and Development): Issues, derived benefits, and future directions

    1989-02-01

    Space communication has made immense strides since ECHO was launched in 1962. It was a simple passive reflector of signals that demonstrated the concept. Today, satellites incorporating transponders, sophisticated high-gain antennas, and stabilization systems provide voice, video, and data communications to millions of people nationally and worldwide. Applications of emerging technology, typified by NASA's Advanced Communications Technology Satellite (ACTS) to be launched in 1992, will use newer portions of the frequency spectrum (the Ka-band at 30/20 GHz), along with antennas and signal processing that could open yet new markets and services. Government programs, directly or indirectly, are responsible for many space communications accomplishments; they have been sponsored and funded in part by NASA and the U.S. Department of Defense since the early 1950s. The industry is growing rapidly and is achieving international preeminence under joint private and government sponsorship. Now, however, the U.S. space communications industry - satellite manufacturers and users, launch services providers, and communications services companies - is being forced to adapt to a different environment. International competition is growing, and terrestrial technologies such as fiber optics are claiming markets until recently dominated by satellites. At the same time, advancing technology is opening up opportunities for new applications and new markets in space exploration, for defense, and for commercial applications of several types. Space communications research, development, and applications (RD and A) programs need to adjust to these realities, be better coordinated and more efficient, and be more closely attuned to commercial markets. The programs must take advantage of RD and A results in other agencies - and in other nations.

  2. A Direct Algorithm for Pole Placement by State-derivative Feedback for Single-input Linear Systems

    Taha H. S. Abdelaziz

    2003-01-01

    This paper deals with the direct solution of the pole placement problem for single-input linear systems using state-derivative feedback. This pole placement problem is always solvable for any controllable system if all eigenvalues of the original system are nonzero. Then any arbitrary closed-loop poles can be placed in order to achieve the desired system performance. The solving procedure results in a formula similar to the Ackermann formula. Its derivation is based on the transformation of a linear single-input system into Frobenius canonical form by a special coordinate transformation, then solving the pole placement problem by state-derivative feedback. Finally, the solution is extended to single-input time-varying control systems. Simulation results are included to show the effectiveness of the proposed approach.
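
    For comparison, the classical Ackermann construction for plain state feedback u = -Kx can be sketched as below; the paper derives the analogous formula for state-derivative feedback u = -K dx/dt, which this sketch does not reproduce:

        import numpy as np

        def ackermann(A, b, poles):
            n = A.shape[0]
            C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
            p = np.real(np.poly(poles))            # desired characteristic polynomial
            phiA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(p))
            e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
            return e_last @ np.linalg.inv(C) @ phiA

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        b = np.array([[0.0], [1.0]])
        K = ackermann(A, b, poles=[-4.0, -5.0])
        print(np.linalg.eigvals(A - b @ K))        # approximately [-4, -5]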

  3. Path Planning for Unmanned Underwater Vehicle in 3D Space with Obstacles Using Spline-Imperialist Competitive Algorithm and Optimal Interval Type-2 Fuzzy Logic Controller

    Ehsan Zakeri

    In this research, the generation of a short and smooth path in three-dimensional space with obstacles for guiding an Unmanned Underwater Vehicle (UUV) without collision is investigated. This is done by utilizing a spline technique, in which the spline control point positions are determined by the Imperialist Competitive Algorithm (ICA) in three-dimensional space such that the shortest possible path from the starting point to the target point, without colliding with obstacles, is achieved. Furthermore, for guiding the UUV along the generated path, an Interval Type-2 Fuzzy Logic Controller (IT2FLC), whose coefficients are optimized by considering an objective function that includes quadratic terms of the input forces and the state error of the system, is used. Selecting such an objective function reduces the control error and also the force applied to the UUV, which consequently leads to a reduction in energy consumption. Therefore, by using a special method, desired signals of the UUV state are obtained from the generated three-dimensional optimal path, such that tracking these signals by the controller leads to the tracking of this path by the UUV. In this paper, the dynamical model of the UUV, entitled "mUUV-WJ-1", is derived and its hydrodynamic coefficients are calculated by CFD in order to be used in the simulations. For simulation by the method presented in this study, three environments with different obstacles are considered in order to check the performance of the IT2FLC controller in generating optimal paths for guiding the UUV. In this article, in addition to ICA, Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) are also used for generation of the paths and the results are compared with each other. The results show the better performance of ICA compared with ABC and PSO. Moreover, to evaluate the performance of the IT2FLC, an optimal Type-1 Fuzzy Logic Controller (T1FLC) and a Proportional Integrator Differentiator (PID) controller are designed

  4. MM Algorithms for Geometric and Signomial Programming.

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the derived MM algorithm can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
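
    The mechanics are easy to demonstrate on a tiny unconstrained posynomial; the example below majorizes the coupled term x*y by the arithmetic-geometric mean inequality, leaving separable one-dimensional updates with closed forms (a toy instance of the generic recipe, not the paper's full signomial machinery):

        # Minimize f(x, y) = x*y + 8/x + 8/y over x, y > 0 (minimum f = 12 at x = y = 2).
        # AM-GM at the current iterate (xc, yc):
        #   x*y <= (yc/(2*xc))*x**2 + (xc/(2*yc))*y**2, with equality at (xc, yc),
        # so the surrogate is separable and each 1-D minimizer is closed-form.
        def mm_step(xc, yc):
            ax, ay = yc / (2.0 * xc), xc / (2.0 * yc)
            # d/dx [a*x**2 + 8/x] = 0  =>  x = (4/a)**(1/3)
            return (4.0 / ax) ** (1.0 / 3.0), (4.0 / ay) ** (1.0 / 3.0)

        x, y = 5.0, 0.5                            # arbitrary positive start
        for _ in range(30):
            x, y = mm_step(x, y)
        print(x, y, x * y + 8.0 / x + 8.0 / y)     # converges to (2, 2), f = 12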

  5. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...

  6. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    Schach Von Wittenau, Alexis E.

    2003-01-01

    A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.

  7. Packet Classification by Multilevel Cutting of the Classification Space: An Algorithmic-Architectural Solution for IP Packet Classification in Next Generation Networks

    Motasem Aldiab

    2008-01-01

    Traditionally, the Internet provides only a "best-effort" service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is increasingly becoming a demanding issue. For this, routers need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their nondeterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors due to their deterministic high lookup rates, they suffer from the problems of high power consumption and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multilevel cutting of the classification space into smaller spaces. The provided solution utilizes the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
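
    The cutting idea can be sketched as a small decision-tree builder; the toy rule set, the two-field 0-255 space, and the leaf threshold (where a small CAM or a linear search would take over) are illustrative assumptions:

        # Each rule: ((src_lo, src_hi), (dst_lo, dst_hi), action) over a toy 8-bit space.
        RULES = [((0, 63), (0, 255), "A"), ((64, 127), (0, 127), "B"),
                 ((64, 127), (128, 255), "C"), ((128, 255), (0, 255), "D")]
        LEAF_SIZE = 2          # few enough rules to hand off to a tiny CAM

        def overlaps(rule, box):
            (slo, shi), (dlo, dhi), _ = rule
            (bslo, bshi), (bdlo, bdhi) = box
            return slo <= bshi and shi >= bslo and dlo <= bdhi and dhi >= bdlo

        def build(box, rules, dim=0):
            rules = [r for r in rules if overlaps(r, box)]
            if len(rules) <= LEAF_SIZE:
                return ("leaf", rules)
            lo, hi = box[dim]
            mid = (lo + hi) // 2                   # cut the current dimension in half
            left, right = list(box), list(box)
            left[dim], right[dim] = (lo, mid), (mid + 1, hi)
            return ("node", dim, mid,
                    build(tuple(left), rules, 1 - dim),
                    build(tuple(right), rules, 1 - dim))

        print(build(((0, 255), (0, 255)), RULES))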

  8. Deriving Sight Distance on a Compound Sag and Circular Curve in a Three Dimensional Space

    Chiu Liu, PhD, PE, PTOE

    2012-09-01

    Insufficient roadway sight distance (SD) may become a contributing factor to traffic collisions or other unsafe traffic maneuvers. The sight distance for a two-dimensional (2-d) sag or circular curve has been addressed in detail in the traffic engineering literature. Although three-dimensional (3-d) compound sag and circular curves are often found along ramps, connectors, and mountain roads, the sight distances for these compound curves have yet to be analyzed in an exact analytic setting. By considering human-vehicle-roadway interaction, formulas for computing the SD on a 3-d curve are derived for the first time within a unified analytic framework. The 2-d sag curve SD can also be deduced from these derived formulas as special limiting cases. Practitioners can easily program these formulas or equations in a user-friendly Microsoft Excel spreadsheet to calculate 3-d SD on most roadways with roadside clearance. This framework can be extended to estimate SD on roadways with obstacles partially blocking vehicle headlight beams.

  9. Analytical Solutions of a Space-Time Fractional Derivative of Groundwater Flow Equation

    Abdon Atangana

    2014-01-01

    The classical Darcy law is generalized by regarding the water flow as a function of a noninteger-order derivative of the piezometric head. This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow. Two methods, the Frobenius and Adomian decomposition methods, are used to obtain an asymptotic analytical solution to the generalized groundwater flow equation. The solution obtained via the Frobenius method is valid in the vicinity of the borehole. This solution is in perfect agreement with the data observed from the pumping test performed by the Institute for Groundwater Studies on one of their boreholes on the test site of the University of the Free State. The test consisted of pumping the borehole at the constant discharge rate Q and monitoring the piezometric head for 350 minutes. Numerical solutions obtained via the Adomian method are compared with the Barker generalized radial flow model, for which a fractal dimension of the flow is assumed. A proposition for handling uncertainties in groundwater studies is given.

  10. Canonical path integral measures for Holst and Plebanski gravity: I. Reduced phase space derivation

    Engle, Jonathan; Han Muxin; Thiemann, Thomas

    2010-01-01

    An important aspect in defining a path integral quantum theory is the determination of the correct measure. For interacting theories and theories with constraints, this is non-trivial, and is normally not the heuristic 'Lebesgue measure' usually used. There have been many determinations of a measure for gravity in the literature, but none for the Palatini or Holst formulations of gravity. Furthermore, the relations between the different resulting measures for different formulations of gravity are usually not discussed. In this paper we use the reduced phase space technique in order to derive the path-integral measure for the Palatini and Holst formulations of gravity, which is different from the Lebesgue measure up to local measure factors which depend on the spacetime volume element and the spatial volume element. From this path integral for the Holst formulation of general relativity we can also give a new derivation of the Plebanski path integral and discover a discrepancy with the result due to Buffenoir, Henneaux, Noui and Roche whose origin we resolve. This paper is the first in a series that aims at better understanding the relation between canonical loop quantum gravity and the spin-foam approach.

  11. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, addressed in part by focusing on the specific subsystems' handling of off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  12. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim

    2017-01-01

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended

  13. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
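
    As a concrete miniature of the approach, the sketch below runs EM on a scalar linear-Gaussian state-space model, alternating a Kalman filter/RTS smoother (E-step) with a closed-form update of the model error variance Q (M-step); the scalar model and all numbers are illustrative, standing in for the extended/ensemble smoothers and the Lorenz-63 system used in the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        M, H, R, Q_true, T = 0.9, 1.0, 0.5, 0.2, 500
        x = np.zeros(T); y = np.zeros(T)
        for t in range(1, T):                           # simulate truth and observations
            x[t] = M * x[t-1] + rng.normal(0, np.sqrt(Q_true))
            y[t] = H * x[t] + rng.normal(0, np.sqrt(R))

        Q = 1.0                                         # initial guess for model error
        for _ in range(30):                             # EM iterations
            # E-step: Kalman filter ...
            xf = np.zeros(T); Pf = np.zeros(T); xa = np.zeros(T); Pa = np.zeros(T)
            xa[0], Pa[0] = 0.0, 1.0
            for t in range(1, T):
                xf[t], Pf[t] = M * xa[t-1], M**2 * Pa[t-1] + Q
                K = Pf[t] * H / (H**2 * Pf[t] + R)
                xa[t], Pa[t] = xf[t] + K * (y[t] - H * xf[t]), (1 - K * H) * Pf[t]
            # ... then Rauch-Tung-Striebel smoother with lag-one cross-covariances
            xs, Ps = xa.copy(), Pa.copy()
            Cs = np.zeros(T)
            for t in range(T - 2, -1, -1):
                J = Pa[t] * M / Pf[t+1]
                xs[t] += J * (xs[t+1] - xf[t+1])
                Ps[t] += J**2 * (Ps[t+1] - Pf[t+1])
                Cs[t+1] = J * Ps[t+1]                   # Cov(x_{t+1}, x_t | all data)
            # M-step: closed-form update of Q from smoothed moments
            Q = np.mean([Ps[t] + xs[t]**2 - 2*M*(Cs[t] + xs[t]*xs[t-1])
                         + M**2 * (Ps[t-1] + xs[t-1]**2) for t in range(1, T)])
        print(Q)                                        # approaches Q_true = 0.2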

  14. Sol-gel derived C-SiC composites and protective coatings for sustained durability in the space environment

    Haruvy, Yair; Liedtke, Volker

    2003-09-01

    Composites and coatings were produced via the fast sol-gel process of a mixture of alkoxysilane precursors. The composites were comprised of carbon fibers, fabrics, or their precursors as reinforcement, and sol-gel-derived silicon carbide as matrix, aiming at high-temperature-stable ceramics that can be utilized for re-entry structures. The protective coatings were comprised of fluorine-rich sol-gel derived resins, which exhibit high flexibility and coherence to provide the sustained ATOX protection necessary for LEO space-exposed elements. For producing the composites, the sol-gel-derived resin is cast onto the reinforcement fibers/fabrics mat (carbon or its precursors) to produce a 'green' composite that is then cured. The 'green' composite is converted into a C-SiC composite via a gradual heat-pressure process under inert atmosphere, during which the organic substituents on the silicon atoms undergo internal oxidative pyrolysis via the schematic reaction (SiRO3/2)n → SiC + CO2 + H2O. The composition of the resultant silicon-oxi-carbide is tailorable by modifying the composition of the sol-gel reactants. The reinforcement, when made of carbon precursors, is converted into carbon during the heat-and-pressure processing as well. The C-SiC composites thus derived exhibit superior thermal stability and comparable thermal conductivity, combined with good mechanical strength and failure resistance, which render them highly applicable for re-entry shielding, heat-exchange pipes, and the like. Fluorine-rich sol-gel derived coatings were developed as well, via the use of an HF-rich sol-gel process. These coatings provide oxidation protection via the silica formation process, together with flexibility that allows 18,000 repetitive foldings of the coating without cracking.

  15. Protecting Neural Structures and Cognitive Function During Prolonged Space Flight by Targeting the Brain Derived Neurotrophic Factor Molecular Network

    Schmidt, M. A.; Goodwin, T. J.

    2014-01-01

    Brain derived neurotrophic factor (BDNF) is the main activity-dependent neurotrophin in the human nervous system. BDNF is implicated in production of new neurons from dentate gyrus stem cells (hippocampal neurogenesis), synapse formation, sprouting of new axons, growth of new axons, sprouting of new dendrites, and neuron survival. Alterations in the amount or activity of BDNF can produce significant detrimental changes to cortical function and synaptic transmission in the human brain. This can result in glial and neuronal dysfunction, which may contribute to a range of clinical conditions, spanning a number of learning, behavioral, and neurological disorders. There is an extensive body of work surrounding the BDNF molecular network, including BDNF gene polymorphisms, methylated BDNF gene promoters, multiple gene transcripts, varied BDNF functional proteins, and different BDNF receptors (whose activation differentially drives the neuron to neurogenesis or apoptosis). BDNF is also closely linked to mitochondrial biogenesis through PGC-1alpha, which can influence brain and muscle metabolic efficiency. BDNF as a human space flight countermeasure target: Earth-based studies reveal that BDNF is negatively impacted by many of the conditions encountered in the space environment, including oxidative stress, radiation, psychological stressors, sleep deprivation, and many others. A growing body of work suggests that the BDNF network is responsive to a range of diet, nutrition, exercise, drug, and other types of influences. This section explores the BDNF network in the context of (1) protecting the brain and nervous system in the space environment, (2) optimizing neurobehavioral performance in space, and (3) reducing the residual effects of space flight on the nervous system on return to Earth.

  16. Refuse-derived fuel as a secondary energy in Taiwan - Using Hotelling space allocation model

    Kwodong Wey; Sheueching Hong

    2006-01-01

    Most Taiwanese farmers blend rice straw into the soil after harvest. However, rice straw possesses great thermal energy, which can be used to produce refuse-derived fuel (RDF-5). As Taiwan faces an energy shortage, the development benefits of an RDF-5 industry are tempting. This study employs the Hotelling model to design the locations and the optimal number of RDF-5 plants. From the technology and material supply point of view, turning waste rice straw into RDF-5 is feasible in Taiwan. Nevertheless, from the business operation perspective, even if an RDF-5 plant is willing to lower its profit rate, its selling price still cannot compete with that of imported RDF-5. If the Taiwanese government decides to pursue RDF-5 as an alternative energy source, it may need to take a step further and subsidize entrepreneurs or provide appropriate tax benefits. Otherwise, the RDF-5 industry will be hard pressed to survive in Taiwan. (Author)

  17. Validation of ozone profile retrievals derived from the OMPS LP version 2.5 algorithm against correlative satellite measurements

    Kramarova, Natalya A.; Bhartia, Pawan K.; Jaross, Glen; Moy, Leslie; Xu, Philippe; Chen, Zhong; DeLand, Matthew; Froidevaux, Lucien; Livesey, Nathaniel; Degenstein, Douglas; Bourassa, Adam; Walker, Kaley A.; Sheese, Patrick

    2018-05-01

    The Limb Profiler (LP) is part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow for the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes implemented in this new algorithm, including a pointing correction, new cloud height detection, explicit aerosol correction and a reduction of the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and the Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and correlative measurements are well within the required ±10 % between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have improved significantly in version 2.5 compared to version 2 thanks to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to a remaining thermal sensitivity issue. Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing vertical, spatial and temporal

  18. A space-jump derivation for non-local models of cell-cell adhesion and non-local chemotaxis.

    Buttenschön, Andreas; Hillen, Thomas; Gerisch, Alf; Painter, Kevin J

    2018-01-01

    Cellular adhesion provides one of the fundamental forms of biological interaction between cells and their surroundings, yet the continuum modelling of cellular adhesion has remained mathematically challenging. In 2006, Armstrong et al. proposed a mathematical model in the form of an integro-partial differential equation. Although successful in applications, a derivation from an underlying stochastic random walk has remained elusive. In this work we develop a framework by which non-local models can be derived from a space-jump process. We show how the notions of motility and a cell polarization vector can be naturally included. With this derivation we are able to include microscopic biological properties into the model. We show that particular choices yield the original Armstrong model, while others lead to more general models, including a doubly non-local adhesion model and non-local chemotaxis models. Finally, we use random walk simulations to confirm that the corresponding continuum model represents the mean field behaviour of the stochastic random walk.
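
    To make the mean-field connection tangible, here is a toy one-dimensional space-jump simulation in Python in which each hop is biased by a non-local sample of neighbour density within a sensing radius; this is a sketch under assumed dynamics, not the authors' formulation, and all parameter names are invented for illustration.

        import numpy as np

        def space_jump_adhesion(n_cells=200, n_sites=100, radius=5,
                                alpha=0.2, steps=500, seed=1):
            """Cells on a ring hop left/right; the hop probability is biased
            toward the side carrying more neighbour mass within `radius`,
            mimicking adhesion. The mean-field limit of such processes is a
            non-local (integro-)PDE of the kind discussed above."""
            rng = np.random.default_rng(seed)
            x = rng.integers(0, n_sites, n_cells)
            for _ in range(steps):
                dens = np.bincount(x, minlength=n_sites).astype(float)
                for i in range(n_cells):
                    right = sum(dens[(x[i] + r) % n_sites] for r in range(1, radius + 1))
                    left = sum(dens[(x[i] - r) % n_sites] for r in range(1, radius + 1))
                    bias = alpha * (right - left) / (right + left + 1e-12)
                    p_right = 0.5 * (1.0 + np.clip(bias, -1.0, 1.0))
                    x[i] = (x[i] + (1 if rng.random() < p_right else -1)) % n_sites
            return x  # aggregated positions; clusters emerge for adhesive bias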

  19. First space-based derivation of the global atmospheric methanol emission fluxes

    T. Stavrakou

    2011-05-01

    is unaccounted for in the MEGANv2.1 inventory. The most significant error reductions achieved by the optimization concern the derived biogenic emissions over the Amazon and over the Former Soviet Union. The robustness of the derived fluxes to changes in convective updraft fluxes, in methanol removal processes, and in the choice of the biogenic a priori inventory is assessed through sensitivity inversions. Detailed comparisons of the model with a number of aircraft and surface observations of methanol, as well as new methanol measurements in Europe and on Réunion Island, show that the satellite-derived methanol emissions significantly improve the agreement with the independent data, thus giving credence to the IASI dataset.

  20. A nonlinear QSAR study using oscillating search and SVM as an efficient algorithm to model the inhibition of reverse transcriptase by HEPT derivatives

    Ferkous, F.; Saihi, Y.

    2018-01-01

    Quantitative structure-activity relationships were constructed for 107 inhibitors of HIV-1 reverse transcriptase that are derivatives of 1-[(2-hydroxyethoxy)methyl]-6-(phenylthio)thymine (HEPT). A combination of a support vector machine (SVM) and an oscillating search (OS) algorithm for feature selection was adopted to select the most appropriate descriptors. The application was optimized to obtain an SVM model that predicts the biological activity EC50 of the HEPT derivatives with a minimum number of descriptors (SpMax4_Bh(e), MLOGP, MATS5m) and high values of R2 and Q2 (0.8662, 0.8769). The statistical results showed good correlation between the activity and the three best descriptors included in the SVM model. The values of R2 and Q2 confirmed the stability and good predictive ability of the model. The SVM technique was adequate to produce an effective QSAR model and outperformed models in the literature in predicting the inhibitory activity of reverse transcriptase by HEPT derivatives. (author)
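
    The feature-selection loop can be sketched with scikit-learn; note that the true oscillating search alternately grows and shrinks the subset around a target size, whereas the simplified stand-in below only sweeps pairwise swaps, and the cross-validated R2 plays the role of Q2 (all of this is an illustrative assumption, not the authors' implementation).

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def q2(X, y, subset):
            # cross-validated R^2 of an SVR built on the chosen descriptors
            return cross_val_score(SVR(kernel="rbf"), X[:, subset], y,
                                   cv=5, scoring="r2").mean()

        def swap_search(X, y, k=3, sweeps=10):
            """Greedy stand-in for oscillating search: keep a subset of k
            descriptors and accept any single swap that improves Q2."""
            n = X.shape[1]
            subset = list(range(k))          # arbitrary starting subset
            best = q2(X, y, subset)
            for _ in range(sweeps):
                improved = False
                for i in range(k):
                    for j in set(range(n)) - set(subset):
                        trial = subset.copy()
                        trial[i] = j
                        score = q2(X, y, trial)
                        if score > best:
                            subset, best, improved = trial, score, True
                if not improved:
                    break                    # local optimum reached
            return subset, best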

  1. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  2. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  3. Space-Derived Phenology, Retrieval and Use for Drought and Food Security Monitoring

    Meroni, M.; Kayitakire, F.; Rembold, F.; Urbano, F.; Schucknecht, A.; LEO, O.

    2014-12-01

    Monitoring vegetation conditions is a critical activity for assessing food security in Africa. Rural populations relying on rain-fed agriculture and livestock grazing are highly exposed to large seasonal and inter-annual fluctuations in water availability. Monitoring the state, evolution, and productivity of vegetation, crops and pastures in particular, is important to conduct food emergency responses and plan for a long-term, resilient development strategy in this area. The timing of onset, the duration, and the intensity of vegetation growth can be retrieved from space observations and used for food security monitoring to assess seasonal vegetation development and forecast the likely seasonal outcome while the season is ongoing. In this contribution we present a set of phenology-based remote sensing studies in support of food security analysis. Key phenological indicators are retrieved using a model-fit approach applied to SPOT-VEGETATION FAPAR time series. Remote-sensing phenology is first used to estimate (i) the impact of the drought in the Horn of Africa, (ii) crop yield in Tunisia and (iii) rangeland biomass production in Niger. Then the impact of the start and length of the vegetation growing period on total biomass production is assessed over the Sahel. Finally, a probabilistic approach using phenological information to forecast the occurrence of an end-of-season biomass production deficit is applied over the Sahel to map hot-spots of drought-related risk.

  4. Fractional Killing-Yano Tensors and Killing Vectors Using the Caputo Derivative in Some One- and Two-Dimensional Curved Space

    Ehab Malkawi

    2014-01-01

    The classical free Lagrangian admitting a constant of motion, in one- and two-dimensional space, is generalized using the Caputo derivative of fractional calculus. The corresponding metric is obtained and the fractional Christoffel symbols, Killing vectors, and Killing-Yano tensors are derived. Some exact solutions of these quantities are reported.
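
    For reference, the Caputo fractional derivative of order alpha underlying this generalization is conventionally defined (a standard textbook form, not quoted from the paper) as

        \[
          {}^{C}\!D^{\alpha} f(t)
            \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
                  \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau,
          \qquad n-1 < \alpha < n, \; n \in \mathbb{N},
        \]

    which recovers the ordinary n-th derivative as alpha -> n and, unlike the Riemann-Liouville form, sends constants to zero, a property that makes it convenient for generalizing classical Lagrangian mechanics.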

  5. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for recasting traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  6. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  7. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    Shieh, Chun-Chien [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006, Australia and Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia); Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Kuncic, Zdenka [Institute of Medical Physics, School of Physics, University of Sydney, NSW 2006 (Australia)

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  8. Imaging Formation Algorithm of the Ground and Space-Borne Hybrid BiSAR Based on Parameters Estimation from Direct Signal

    Qingjun Zhang

    2014-01-01

    This paper proposes a novel image formation algorithm for the bistatic synthetic aperture radar (BiSAR) with the configuration of a noncooperative transmitter and a stationary receiver, in which traditional imaging algorithms fail because the necessary imaging parameters cannot be estimated from the limited information available from the noncooperative data provider. In the new algorithm, the essential parameters for imaging, such as squint angle, Doppler centroid, and Doppler chirp-rate, are estimated by fully exploiting the recorded direct signal (the signal arriving at the stationary receiver directly from the transmitter). The Doppler chirp-rate is retrieved by modeling the peak phase of the direct signal as a quadratic polynomial. The Doppler centroid frequency and the squint angle are derived from image contrast optimization. Range focusing, range cell migration correction (RCMC), and azimuth focusing are then implemented via secondary range compression (SRC). Finally, the proposed algorithm is validated by imaging a BiSAR experiment configured with China's YAOGAN-10 SAR as the transmitter and a receiver platform located on a building at a height of 109 m in Jiangsu province. The geometrically corrected experimental image shows good agreement with local Google images.
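
    The chirp-rate step reduces to a quadratic fit of the unwrapped peak phase, which can be sketched in a few lines (an illustrative reconstruction; the paper additionally obtains the Doppler centroid and squint angle from image-contrast optimization):

        import numpy as np

        def doppler_from_direct_signal(t, peak_phase):
            """Fit phi(t) ~ a0 + a1*t + a2*t^2 to the unwrapped peak phase of
            the direct signal; then f_dc = a1/(2*pi) and k_a = a2/pi."""
            phi = np.unwrap(peak_phase)
            a2, a1, _ = np.polyfit(t, phi, 2)
            return a1 / (2.0 * np.pi), a2 / np.pi   # centroid, chirp rate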

  9. Genetic algorithm as a variable selection procedure for the simulation of 13C nuclear magnetic resonance spectra of flavonoid derivatives using multiple linear regression.

    Ghavami, Raoof; Najafi, Amir; Sajadi, Mohammad; Djannaty, Farhad

    2008-09-01

    In order to accurately simulate 13C NMR spectra of hydroxy, polyhydroxy and methoxy substituted flavonoids, a quantitative structure-property relationship (QSPR) model, relating atom-based calculated descriptors to 13C NMR chemical shifts (ppm, TMS = 0), is developed. A dataset consisting of 50 flavonoid derivatives was employed for the present analysis. A set of 417 topological, geometrical, and electronic descriptors representing various structural characteristics was calculated, and separate multilinear QSPR models were developed between each carbon atom of the flavonoid and the calculated descriptors. A genetic algorithm (GA) and multiple linear regression analysis (MLRA) were used to select the descriptors and to generate the correlation models. Analysis of the results revealed a correlation coefficient and root mean square error (RMSE) of 0.994 and 2.53 ppm, respectively, for the prediction set.
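
    A minimal GA-plus-MLR descriptor selection of this kind might look as follows in Python (a generic sketch under assumed encodings, not the authors' code: individuals are fixed-size descriptor subsets, fitness is the regression RMSE):

        import numpy as np

        def rmse_mlr(X, y, idx):
            # RMSE of an ordinary least-squares fit on the chosen descriptors
            A = np.c_[np.ones(len(y)), X[:, idx]]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

        def ga_select(X, y, n_keep=5, pop=40, gens=100, seed=0):
            rng = np.random.default_rng(seed)
            n = X.shape[1]
            P = np.array([rng.permutation(n)[:n_keep] for _ in range(pop)])
            for _ in range(gens):
                fit = np.array([rmse_mlr(X, y, ind) for ind in P])
                P = P[np.argsort(fit)]                       # lower RMSE = fitter
                children = []
                for _ in range(pop // 2):
                    a, b = P[rng.integers(0, pop // 2, 2)]   # parents from top half
                    child = rng.choice(np.union1d(a, b), n_keep, replace=False)
                    if rng.random() < 0.2:                   # mutate one gene
                        pool = np.setdiff1d(np.arange(n), child)
                        child[rng.integers(n_keep)] = rng.choice(pool)
                    children.append(child)
                P[pop // 2:] = np.array(children)            # replace bottom half
            fit = np.array([rmse_mlr(X, y, ind) for ind in P])
            return P[np.argmin(fit)], float(fit.min())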

  10. Spectral Decomposition Algorithm (SDA)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  11. Interactive genetic algorithm for user-centered design of distributed conservation practices in a watershed: An examination of user preferences in objective space and user behavior

    Piemonti, Adriana Debora; Babbar-Sebens, Meghna; Mukhopadhyay, Snehasis; Kleinberg, Austin

    2017-05-01

    Interactive Genetic Algorithms (IGA) are advanced human-in-the-loop optimization methods that enable humans to give feedback, based on their subjective and unquantified preferences and knowledge, during the algorithm's search process. While these methods are gaining popularity in multiple fields, there is a critical lack of data and analyses on (a) the nature of interactions of different humans with interfaces of decision support systems (DSS) that employ IGA in water resources planning problems and on (b) the effect of human feedback on the algorithm's ability to search for design alternatives desirable to end-users. In this paper, we present results and analyses of observational experiments in which different human participants (surrogates and stakeholders) interacted with an IGA-based, watershed DSS called WRESTORE to identify plans of conservation practices in a watershed. The main goal of this paper is to evaluate how the IGA adapts its search process in the objective space to a user's feedback, and identify whether any similarities exist in the objective space of plans found by different participants. Some participants focused on the entire watershed, while others focused only on specific local subbasins. Additionally, two different hydrology models were used to identify any potential differences in interactive search outcomes that could arise from differences in the numerical values of benefits displayed to participants. Results indicate that stakeholders, in comparison to their surrogates, were more likely to use multiple features of the DSS interface to collect information before giving feedback, and dissimilarities existed among participants in the objective space of design alternatives.

  12. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    Naseem Cassim

    2017-02-01

    Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidean distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
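
    The set-coverage step at the core of such an allocation can be illustrated with a standard greedy heuristic (a generic sketch; the RACL formulation itself is relational-algebraic and may differ in detail, and all names below are invented):

        import numpy as np

        def allocate_sites(drive_time, T):
            """Greedy set cover: choose candidate laboratory/POC sites until
            every facility cluster is within T hours of a chosen site.
            drive_time[i, j] = hours from candidate site i to cluster j."""
            covers = drive_time <= T                     # boolean coverage matrix
            uncovered = np.ones(drive_time.shape[1], dtype=bool)
            chosen = []
            while uncovered.any():
                gain = (covers & uncovered).sum(axis=1)  # clusters each site adds
                i = int(gain.argmax())
                if gain[i] == 0:
                    raise ValueError("some clusters are unreachable within T")
                chosen.append(i)
                uncovered &= ~covers[i]
            return chosen

    Rerunning the same routine with T = 4, 3 and 2 hours reproduces the flavour of the three scenarios compared in the study.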

  13. High-efficiency space-based software radio architectures & algorithms (a minimum size, weight, and power TeraOps processor)

    Dunham, Mark Edward [Los Alamos National Laboratory; Baker, Zachary K [Los Alamos National Laboratory; Stettler, Matthew W [Los Alamos National Laboratory; Pigue, Michael J [Los Alamos National Laboratory; Schmierer, Eric N [Los Alamos National Laboratory; Power, John F [Los Alamos National Laboratory; Graham, Paul S [Los Alamos National Laboratory

    2009-01-01

    Los Alamos has recently completed the latest in a series of Reconfigurable Software Radios, which incorporates several key innovations in both hardware design and algorithms. Due to our focus on satellite applications, each design must extract the best size, weight, and power performance possible from the ensemble of Commodity Off-the-Shelf (COTS) parts available at the time of design. In this case we have achieved 1 TeraOps/second signal processing on a 1920 Megabit/second datastream, while using only 53 Watts mains power, 5.5 kg, and 3 liters. This processing capability enables very advanced algorithms such as our wideband RF compression scheme to operate remotely, allowing network bandwidth constrained applications to deliver previously unattainable performance.

  14. A Climatology of Tropospheric CO over the Central and Southeastern United States and the Southwestern Pacific Ocean Derived from Space, Air, and Ground-based Infrared Interferometer Spectra

    McMillian, W. Wallace; Strow, L. Larrabee; Revercomb, H.; Knuteson, R.; Thompson, A.

    2003-01-01

    This final report summarizes all research activities and publications undertaken as part of NASA Atmospheric Chemistry and Modeling Analysis Program (ACMAP) Grant NAG-1-2022, 'A Climatology of Tropospheric CO over the Central and Southeastern United States and the Southwestern Pacific Ocean Derived from Space, Air, and Ground-based Infrared Interferometer Spectra'. Major project accomplishments include: (1) analysis of more than 300,000 AERI spectra from the ARM SGP site, yielding a 5-year (1998-2002) timeseries of CO retrievals from the Lamont, OK AERI; (2) development of a prototype CO profile retrieval algorithm for AERI spectra; (3) validation and publication of the first CO retrievals from the Scanning High-resolution Interferometer Sounder (SHIS); and (4) development of a prototype AERI tropospheric O3 retrieval algorithm. Compilation and publication of the 5-year Lamont, OK timeseries is underway, including a new collaboration with scientists at the Lawrence Berkeley National Laboratory. Public access to these data will be provided upon article submission. A comprehensive CO analysis of the archive of HIS spectra remains the only originally proposed activity with little progress. The greatest challenge faced in this project was motivating the University of Wisconsin Co-Investigators to deliver their archived HIS and AERIOO data along with the requisite temperature and water vapor profiles in a timely manner. Part of the supplied HIS dataset from ASHOE may be analyzed as part of a Master's thesis under a separate project. Our success with the SAFARI 2000 SHIS CO analysis demonstrates the utility of such aircraft remote sensing data given the proper support from the instrument investigators. In addition to the PI and Co-Is, personnel involved in this CO climatology project include one postdoctoral fellow, one research scientist, two graduate students, and two undergraduate students. A total of fifteen presentations regarding research related to this

  15. The Algorithmic Imaginary

    Bucher, Taina

    2017-01-01

    ...of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...

  16. The BR eigenvalue algorithm

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
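
    For contrast, the plain unshifted QR iteration that the BR method is benchmarked against fits in a few lines (a didactic sketch only: practical QR adds shifts and deflation, and BR replaces the orthogonal factors with banded Gauss-like eliminations to preserve the narrow band structure):

        import numpy as np

        def qr_eigenvalues(H, iters=1000):
            # Repeatedly factor A = QR and reform A = RQ; for generic matrices
            # with real eigenvalues of distinct modulus the iterates approach
            # triangular form, so the diagonal approaches the eigenvalues.
            A = np.array(H, dtype=float)
            for _ in range(iters):
                Q, R = np.linalg.qr(A)
                A = R @ Q
            return np.diag(A)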

  17. Understanding urban green space as a health resource: a qualitative comparison of visit motivation and derived effects among park users in Sheffield, UK.

    Irvine, Katherine N; Warber, Sara L; Devine-Wright, Patrick; Gaston, Kevin J

    2013-01-22

    With increasing interest in the use of urban green space to promote human health, there is a need to understand the extent to which park users conceptualize these places as a resource for health and well-being. This study sought to examine park users' own reasons for and benefits from green space usage and compare these with concepts and constructs in existing person-environment-health theories and models of health. Conducted in 13 public green spaces in Sheffield, UK, we undertook a qualitative content analysis of 312 park users' responses to open-ended interview questions and identified a breadth, depth and salience of visit motivators and derived effects. Findings highlight a discrepancy between reasons for visiting and derived effects from the use of urban green space. Motivations emphasized walking, green space qualities, and children. Derived effects highlighted relaxation, positive emotions within the self and towards the place, and spiritual well-being. We generate a taxonomy of motivations and derived effects that could facilitate operationalization within empirical research and articulate a conceptual framework linking motivators to outcomes for investigating green space as a resource for human health and well-being.

  18. Understanding Urban Green Space as a Health Resource: A Qualitative Comparison of Visit Motivation and Derived Effects among Park Users in Sheffield, UK

    Kevin J. Gaston

    2013-01-01

    With increasing interest in the use of urban green space to promote human health, there is a need to understand the extent to which park users conceptualize these places as a resource for health and well-being. This study sought to examine park users’ own reasons for and benefits from green space usage and compare these with concepts and constructs in existing person-environment-health theories and models of health. Conducted in 13 public green spaces in Sheffield, UK, we undertook a qualitative content analysis of 312 park users’ responses to open-ended interview questions and identified a breadth, depth and salience of visit motivators and derived effects. Findings highlight a discrepancy between reasons for visiting and derived effects from the use of urban green space. Motivations emphasized walking, green space qualities, and children. Derived effects highlighted relaxation, positive emotions within the self and towards the place, and spiritual well-being. We generate a taxonomy of motivations and derived effects that could facilitate operationalization within empirical research and articulate a conceptual framework linking motivators to outcomes for investigating green space as a resource for human health and well-being.

  19. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    Aldoghaither, Abeer

    2015-12-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.
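
    The mechanics of the modulating functions method are easiest to see on a scalar toy problem (a sketch of the general mechanism only, not the paper's space-fractional setting): to estimate a in y' = a*y from noisy samples, multiply by a function phi that vanishes at both endpoints and integrate by parts, which moves the derivative off the data and onto phi and leaves a linear equation for a.

        import numpy as np

        def trap(f, t):
            # trapezoidal rule, written out to stay NumPy-version independent
            return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

        def modulating_estimate(t, y):
            """Estimate a in y'(t) = a*y(t) without differentiating the data:
            integral(phi*y') = -integral(phi'*y) = a * integral(phi*y)."""
            T = t[-1] - t[0]
            s = (t - t[0]) / T
            estimates = []
            for n in (2, 3, 4):                    # several modulating functions
                phi = (s * (1.0 - s)) ** n         # vanishes at both endpoints
                dphi = n * (s * (1.0 - s)) ** (n - 1) * (1.0 - 2.0 * s) / T
                estimates.append(-trap(dphi * y, t) / trap(phi * y, t))
            return float(np.mean(estimates))

    Stacking such equations for several unknowns and several modulating functions yields the linear algebraic system referred to in the abstract.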

  20. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  1. The mathematics of some tomography algorithms used at JET

    Ingesson, L

    2000-03-01

    Mathematical details are given of various tomographic reconstruction algorithms that are in use at JET. These algorithms include constrained optimization (CO) with local basis functions, the Cormack method, methods with natural basis functions and the iterative projection-space reconstruction method. Topics discussed include: derivation of the matrix equation for constrained optimization, variable grid size, basis functions, line integrals, derivative matrices, smoothness matrices, analytical expression of the CO solution, sparse matrix storage, projection-space coordinates, the Cormack method in elliptical coordinates, interpolative generalized natural basis functions and some details of the implementation of the filtered backprojection method. (author)
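
    In its simplest Tikhonov form, the constrained-optimisation solution reduces to a single linear solve (a schematic sketch; the JET implementation adds the basis-function, grid and sparse-storage machinery detailed in the report):

        import numpy as np

        def co_reconstruct(K, g, L, lam=1e-2):
            """Minimise ||K f - g||^2 + lam * ||L f||^2, where K holds the
            line integrals of the local basis functions, g the measured
            projections, and L a smoothness (e.g. discrete Laplacian) matrix."""
            A = K.T @ K + lam * (L.T @ L)
            return np.linalg.solve(A, K.T @ g)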

  2. A space-time tomography algorithm for the five-camera soft X-ray diagnostic at RTP

    Lyadina, E.S.; Tanzi, C.P.; Cruz, D.F. da; Donne, A.J.H. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands)

    1993-12-31

    A five-camera soft X-ray diagnostic with 80 detector channels has been installed on the RTP tokamak with the aim of studying MHD processes with a relatively high poloidal mode number (m=4). Numerical tomographic reconstruction algorithms used to reconstruct the plasma emissivity profile are constrained by the characteristics of the system. In particular, high poloidal harmonics, which can be resolved thanks to the high number of cameras, can be strongly distorted by stochastic and systematic errors. Furthermore, small uncertainties in the relative position of the cameras in a multiple-camera system can lead to strong artefacts in the reconstruction. (author) 6 refs., 4 figs.

  3. Multilevel Balancing Domain Decomposition by Constraints Deluxe Algorithms with Adaptive Coarse Spaces for Flow in Porous Media

    Zampini, Stefano; Tu, Xuemin

    2017-01-01

    Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.

  4. Multilevel Balancing Domain Decomposition by Constraints Deluxe Algorithms with Adaptive Coarse Spaces for Flow in Porous Media

    Zampini, Stefano

    2017-08-03

    Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.

  5. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  6. Preliminary report on the rice blast resistance of space-induced mutants derived from rice cultivar 'Taihang-68'

    Zhang Jingxin; Sun Dayuan; Wang Hui; Liu Yongzhu; Guo Tao; Chen Zhiqiang; Yang Qiyun; Zhu Xiaoyuan

    2012-01-01

    To screen for blast-resistant mutants, the resistance of SP1 progenies derived from the rice variety Taihang-68 after satellite flight was evaluated with the representative blast isolate GD0193, which has a broad pathogenic spectrum. Primary genetic analysis of the resistant mutants, mapping of the resistance gene, and assessment of the resistance spectra at the seedling stage and of neck blast resistance at maturity were then performed. The results showed that space mutation is an effective method for changing the blast resistance of Taihang-68. Resistance to isolate GD0193 showed no segregation in the screened mutant TH1 and did segregate in TH2; the segregation ratio of TH2 indicated that its resistance is controlled by a single pair of major genes, which was preliminarily mapped on the long arm of chromosome 11. The blast resistance spectra and neck blast resistance of TH1 and TH2 were both remarkably enhanced compared with the wild type at the seedling stage and at maturity, and this resistance was heritable; the blast resistance of the two mutants was also increased compared with several main cultivars in South China. (authors)

  7. In yeast redistribution of Sod1 to the mitochondrial intermembrane space provides protection against respiration derived oxidative stress.

    Klöppel, Christine; Michels, Christine; Zimmer, Julia; Herrmann, Johannes M; Riemer, Jan

    2010-12-03

    The antioxidative enzyme copper-zinc superoxide dismutase (Sod1) is an important cellular defence system against reactive oxygen species (ROS). While the majority of this enzyme is localized to the cytosol, about 1% of the cellular Sod1 is present in the intermembrane space (IMS) of mitochondria. These amounts of mitochondrial Sod1 are increased for certain Sod1 mutants that are linked to the neurodegenerative disease amyotrophic lateral sclerosis (ALS). To date, only little is known about the physiological function of mitochondrial Sod1. Here, we use the model system Saccharomyces cerevisiae to generate cells in which Sod1 is exclusively localized to the IMS. We find that IMS-localized Sod1 can functionally substitute wild type Sod1 and that it even exceeds the protective capacity of wild type Sod1 under conditions of mitochondrial ROS stress. Moreover, we demonstrate that upon expression in yeast cells the common ALS-linked mutant Sod1(G93A) becomes enriched in the mitochondrial fraction and provides an increased protection of cells from mitochondrial oxidative stress. Such an effect cannot be observed for the catalytically inactive mutant Sod1(G85R). Our observations suggest that the targeting of Sod1 to the mitochondrial IMS provides an increased protection against respiration-derived ROS. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further insure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  9. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    Franz Konstantin Fuss

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal’s time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  10. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  11. A Robust Algorithm of Multiquadric Method Based on an Improved Huber Loss Function for Interpolating Remote-Sensing-Derived Elevation Data Sets

    Chuanfa Chen

    2015-03-01

    Remote-sensing-derived elevation data sets often suffer from noise and outliers due to various reasons, such as the physical limitations of sensors, multiple reflectance, occlusions and low contrast of texture. Outliers generally have a seriously negative effect on DEM construction. Some interpolation methods like ordinary kriging (OK) are capable of smoothing noise inherent in sample points, but are sensitive to outliers. In this paper, a robust algorithm of the multiquadric method (MQ) based on an improved Huber loss function (MQ-IH) has been developed to decrease the impact of outliers on DEM construction. Theoretically, the improved Huber loss function is null for outliers, quadratic for small errors, and linear for others. Simulated data sets drawn from a mathematical surface with different error distributions were employed to analyze the robustness of MQ-IH. Results indicate that MQ-IH obtains a good balance between efficiency and robustness. Namely, the performance of MQ-IH is comparable to those of the classical MQ and MQ based on the classical Huber loss function (MQ-CH) when sample points follow a normal distribution, and the former outperforms the latter two when sample points are subject to outliers. For example, for the Cauchy error distribution with a location parameter of 0 and a scale parameter of 1, the root mean square errors (RMSEs) of MQ-CH and the classical MQ are 0.3916 and 1.4591, respectively, whereas that of MQ-IH is 0.3698. The performance of MQ-IH is further evaluated by qualitative and quantitative analysis through a real-world example of DEM construction with stereo-images-derived elevation points. Results demonstrate that compared with the classical interpolation methods, including natural neighbor (NN), OK and ANUDEM (a program that calculates regular grid digital elevation models (DEMs) with sensible shape and drainage structure from arbitrarily large topographic data sets), and two versions of MQ, including the
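
    The robust-multiquadric idea can be sketched as iteratively reweighted least squares with Huber-type weights (an illustrative stand-in closer to MQ-CH: classical Huber weights are used, whereas the paper's improved loss additionally zeroes out flagged outliers; all names are assumptions):

        import numpy as np

        def mq_huber_fit(pts, z, c=1.0, delta=1.345, iters=10):
            """Fit multiquadric coefficients to scattered elevations, with
            weights w = 1 for small residuals and delta/|r| for large ones."""
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            Phi = np.sqrt(d ** 2 + c ** 2)                  # multiquadric basis
            w = np.ones(len(z))
            for _ in range(iters):
                W = np.diag(w)
                coef = np.linalg.lstsq(W @ Phi, W @ z, rcond=None)[0]
                r = Phi @ coef - z
                s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale
                u = np.abs(r) / s
                w = np.where(u <= delta, 1.0, delta / u)    # Huber weights
            return coef                                     # interpolation weights

    Replacing the down-weighting branch with w = 0 beyond a second threshold would mimic the improved loss that is null for outliers.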

  12. Optimum Design of Braced Steel Space Frames including Soil-Structure Interaction via Teaching-Learning-Based Optimization and Harmony Search Algorithms

    Ayse T. Daloglu

    2018-01-01

    Optimum design of braced steel space frames including soil-structure interaction is studied by using harmony search (HS) and teaching-learning-based optimization (TLBO) algorithms. A three-parameter elastic foundation model is used to incorporate the soil-structure interaction effect. A 10-storey braced steel space frame example taken from the literature is investigated for four different bracing types, for the cases with and without soil-structure interaction. X, V, Z, and eccentric V-shaped bracing types are considered in the study. Optimum solutions of the examples are carried out by a computer program coded in MATLAB interacting with SAP2000-OAPI for two-way data exchange. The stress constraints according to AISC-ASD (American Institute of Steel Construction-Allowable Stress Design), maximum lateral displacement constraints, interstorey drift constraints, and beam-to-column connection constraints are taken into consideration in the optimum design process. The parameters of the foundation model are calculated depending on soil surface displacements by using an iterative approach. The results obtained in the study show that bracing types and soil-structure interaction play very important roles in the optimum design of steel space frames. Finally, the techniques used in the optimum design seem to be quite suitable for practical applications.

  13. Algorithms for the mitigation of space weather threats at low latitudes, contributing to the extension of EGNOS over Africa

    Forte, B.; Da Dalt, F.; Panicciari, T.

    GNSS is already a technology that pervades modern lifestyles and over the last decade has become integral to many of our transport systems. One of the major barriers to the development of GNSS for safety-critical services such as aviation comes from the unknown threats from Space Weather. In order...... of the function of an SBAS system is to map the ionised regions of the upper atmosphere (the ionosphere) to enable specialist aviation GNSS receivers to make corrections for the ionospheric delay and hence to achieve a more accurate position. This is very important but more critical still is the capability...

  14. A parallel implementation of the Wuchty algorithm with additional experimental filters to more thoroughly explore RNA conformational space.

    Jonathan W Stone

    Full Text Available We present new modifications to the Wuchty algorithm in order to better define and explore possible conformations for an RNA sequence. The new features, including parallelization, energy-independent lonely pair constraints, context-dependent chemical probing constraints, helix filters, and optional multibranch loops, provide useful tools for exploring the landscape of RNA folding. Chemical probing alone may not necessarily define a single unique structure. The helix filters and optional multibranch loops are global constraints on RNA structure that are an especially useful tool for generating models of encapsidated viral RNA for which cryoelectron microscopy or crystallography data may be available. The computations generate a combinatorially complete set of structures near a free energy minimum and thus provide data on the density and diversity of structures near the bottom of a folding funnel for an RNA sequence. The conformational landscapes for some RNA sequences may resemble a low, wide basin rather than a steep funnel that converges to a single structure.

  15. A kernel adaptive algorithm for quaternion-valued inputs.

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
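
    The sketch below shows the kernel LMS update that Quat-KLMS builds on, simplified to a real-valued desired signal with quaternion inputs treated as vectors in R^4; the Gaussian kernel, the step size and the ever-growing dictionary are illustrative assumptions, not the authors' full widely linear quaternion formulation.

        import numpy as np

        def gauss_kernel(x, y, sigma=1.0):
            # Gaussian kernel on quaternions represented as points in R^4
            return np.exp(-np.sum((np.asarray(x) - np.asarray(y))**2) / (2 * sigma**2))

        class KLMS:
            # Kernel LMS: each training sample becomes a dictionary centre
            # whose coefficient is the step size times the prediction error.
            def __init__(self, eta=0.5, sigma=1.0):
                self.eta, self.sigma = eta, sigma
                self.centers, self.alphas = [], []

            def predict(self, x):
                return sum(a * gauss_kernel(c, x, self.sigma)
                           for c, a in zip(self.centers, self.alphas))

            def update(self, x, d):
                e = d - self.predict(x)   # instantaneous error
                self.centers.append(x)    # grow the kernel expansion
                self.alphas.append(self.eta * e)
                return e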

  16. An incomplete assembly with thresholding algorithm for systems of reaction-diffusion equations in three space dimensions

    Moore, Peter K.

    2003-01-01

    Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, the algebraic solver GMRES and the preconditioner ILUT is studied.
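
    A minimal sketch of the row-wise thresholding idea follows: as in ILUT, each row keeps at most a fixed number of largest-magnitude entries and drops those below a relative tolerance. The parameter names and defaults are illustrative, and the paper applies the rule during finite element assembly rather than to dense rows as shown here.

        import numpy as np

        def incomplete_assemble_row(row, keep=10, droptol=1e-3):
            # Drop entries below droptol * ||row|| and keep at most `keep`
            # of the remaining largest-magnitude entries, mirroring how
            # ILUT limits fill-in during incomplete factorization.
            thresh = droptol * np.linalg.norm(row)
            candidates = np.where(np.abs(row) >= thresh)[0]
            if len(candidates) > keep:
                order = np.argsort(np.abs(row[candidates]))[::-1]
                candidates = candidates[order[:keep]]
            sparse_row = np.zeros_like(row)
            sparse_row[candidates] = row[candidates]
            return sparse_row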

  17. Analysis and Speed Ripple Mitigation of a Space Vector Pulse Width Modulation-Based Permanent Magnet Synchronous Motor with a Particle Swarm Optimization Algorithm

    Xing Liu

    2016-11-01

    Full Text Available A method is proposed for reducing the speed ripple of permanent magnet synchronous motors (PMSMs) controlled by space vector pulse width modulation (SVPWM). A flux graph and mathematical analysis are used to characterize the speed ripple of the PMSM. The analysis indicates that the 6P (where P is the number of pole pairs) time harmonic of rotor mechanical speed is the main harmonic component in the SVPWM-controlled PMSM system. To reduce the speed ripple, harmonics are superposed on the SVPWM reference signal. A particle swarm optimization (PSO) algorithm is proposed to determine the optimal phase and multiplier coefficient of the superposed harmonics. The results of a Fourier decomposition and of an optimized simulation model verified the accuracy of the analysis and the effectiveness of the speed ripple reduction method, respectively.
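
    The sketch below illustrates the optimization step: a generic PSO searches for the phase and multiplier coefficient of a superposed harmonic that minimizes a residual-ripple objective. The cost function is a stand-in for the actual PMSM speed-ripple model, and the PSO constants are conventional defaults rather than the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def ripple_cost(params):
            # Placeholder objective: residual power of the 6P harmonic after
            # superposing a compensating harmonic with the given phase and
            # multiplier coefficient (purely illustrative).
            phase, coef = params
            t = np.linspace(0, 2 * np.pi, 400)
            return np.mean((np.sin(6 * t) + coef * np.sin(6 * t + phase))**2)

        def pso(cost, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
            x = rng.uniform(lo, hi, (n, len(lo)))
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pval.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([cost(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()]
            return g, pval.min()

        # Optimum is phase = pi, coef = 1: the superposed harmonic cancels the ripple.
        best, fmin = pso(ripple_cost, np.array([0.0, 0.0]), np.array([2 * np.pi, 2.0]))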

  18. Optimization of the p-xylene oxidation process by a multi-objective differential evolution algorithm with adaptive parameters co-derived with the population-based incremental learning algorithm

    Guo, Zhan; Yan, Xuefeng

    2018-04-01

    Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.

  19. Space-time-frequency Algorithm Based on FrFT-MUSIC

    张声杰; 刘梅

    2011-01-01

    A fractional Fourier transform (FrFT)-multiple signal classification (MUSIC) space-time-frequency algorithm is proposed to address the difficulty that traditional time-frequency analysis methods have in accurately estimating the azimuth speed of ground targets in spaceborne distributed synthetic aperture radar (SAR) systems. The signal vector is obtained with the FrFT and used to construct a space-time-frequency distribution matrix. This matrix replaces the covariance matrix of the traditional MUSIC algorithm to form the FrFT-MUSIC space-time-frequency spectrum, and the target speed is estimated by searching the spectral function. Simulation results show that the algorithm estimates target speed with high accuracy and strong resistance to noise.

  20. The Quality Control Algorithms Used in the Creation of NASA Kennedy Space Center Lightning Protection System Towers Meteorological Database

    Orcutt, John M.; Brenton, James C.

    2016-01-01

    An accurate database of meteorological data is essential for designing any aerospace vehicle and for preparing launch commit criteria. Meteorological instrumentation was recently placed on the three Lightning Protection System (LPS) towers at Kennedy Space Center (KSC) launch complex 39B (LC-39B), providing a unique meteorological dataset at the launch complex over an extensive altitude range. Data records of temperature, dew point, relative humidity, wind speed, and wind direction are produced at 40, 78, 116, and 139 m on each tower. The Marshall Space Flight Center Natural Environments Branch (EV44) received an archive that consists of one-minute averaged measurements for the period of record of January 2011 - April 2015. However, before the received database could be used, EV44 needed to remove any erroneous data through a comprehensive quality control (QC) process. The QC process applied to the LPS towers' meteorological data is similar to other QC processes developed by EV44, which were used in the creation of meteorological databases for other towers at KSC. The QC process utilized in this study has been modified specifically for use with the LPS tower database. The QC process first includes a check of each individual sensor. This check includes removing any unrealistic data and checking the temporal consistency of each variable. Next, data from all three sensors at each height are checked against each other, checked against climatology, and checked for sensors that erroneously report a constant value. Then, a vertical consistency check of each variable at each tower is completed. Last, the upwind sensor at each level is selected to minimize the influence of the towers and other structures at LC-39B on the measurements. The selection process for the upwind sensor implemented a study of tower-induced turbulence. This paper describes in detail the QC process, QC results, and the attributes of the LPS towers meteorological

  1. Coupling Algorithms for Calculating Sensitivities of Population Balances

    Man, P. L. W.; Kraft, M.; Norris, J. R.

    2008-01-01

    We introduce a new class of stochastic algorithms for calculating parametric derivatives of the solution of the space-homogeneous Smoluchowski's coagulation equation. Currently, it is very difficult to produce low-variance estimates of these derivatives in reasonable amounts of computational time through the use of stochastic methods. These new algorithms consider a central difference estimator of the parametric derivative, calculated by evaluating the coagulation equation at two different parameter values simultaneously and achieving variance reduction by maximising the covariance between the two evaluations. Two different coupling strategies ('Single' and 'Double') have been compared to the case with no coupling ('Independent'). Both coupling algorithms converge, and the Double coupling is the most efficient. For the numerical example chosen, we obtain a factor of about 100 in efficiency in the best case (small system evolution time and small parameter perturbation).
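
    A minimal sketch of the coupling idea, assuming a user-supplied stochastic solver: running the simulations at theta + h and theta - h on identical random streams maximizes their covariance and thereby reduces the variance of the central-difference estimator. The seeding scheme below is an illustrative stand-in for the paper's 'Single' and 'Double' couplings.

        import numpy as np

        def coupled_central_difference(simulate, theta, h, n_runs=1000, seed=0):
            # simulate(theta, rng) is a stochastic solver (e.g. a coagulation
            # simulation) returning a scalar functional of the solution.
            diffs = []
            for i in range(n_runs):
                rng_plus = np.random.default_rng(seed + i)   # identical streams:
                rng_minus = np.random.default_rng(seed + i)  # this is the coupling
                diffs.append((simulate(theta + h, rng_plus)
                              - simulate(theta - h, rng_minus)) / (2 * h))
            diffs = np.asarray(diffs)
            return diffs.mean(), diffs.std(ddof=1) / np.sqrt(n_runs)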

  2. Benzoporphyrin derivative and light-emitting diode for use in photodynamic therapy: Applications of space light-emitting diode technology

    Whelan, Harry T.; Houle, John M.; Bajic, Dawn M.; Schmidt, Meic H.; Reichert, Kenneth W. II; Meyer, Glenn A.

    1998-01-01

    Photodynamic therapy (PDT) is a cancer treatment modality that recently has been applied as adjuvant therapy for brain tumors. PDT consists of intravenously injecting a photosensitizer, which preferentially accumulates in tumor cells, into a patient and then activating the photosensitizer with a light source. This results in free radical generation followed by cell death. The development of more effective light sources for PDT of brain tumors has been facilitated by applications of space light-emitting diode array technology; thus permitting deeper tumor penetration of light and use of better photosensitizers. Currently, the most commonly used photosensitizer for brain tumor PDT is Photofrin registered . Photofrin registered is a heterogeneous mixture of compounds derived from hematoporphyrin. Photofrin registered is activated with a 630 nm laser light and does destroy tumor cells in animal models and humans. However, treatment failure does occur using this method. Most investigators attribute this failure to the limited penetration of brain tissue by a 630 nm laser light and to the fact that Photofrin registered has only a minor absorption peak at 630 nm, meaning that only a small fraction of the chemical is activated. Benzoporphyrin Derivative Monoacid Ring A (BPD) is a new, second generation photosensitizer that can potentially improve PDT for brain tumors. BPD has a major absorption peak at 690 nm, which gives it two distinct advantages over Photofrin registered . First, longer wavelengths of light penetrate brain tissue more easily so that larger tumors could be treated, and second, the major absorption peak means that a larger fraction of the drug is activated upon exposure to light. In the first part of this project we have studied the tumoricidal effects of BPD in vitro using 2A9 canine glioma and U373 human glioblastoma cell cultures. Using light emitting diodes (LED) with a peak emission of 688 nm as a light source, cell kill of up to 86 percent was

  3. Magnet sorting algorithms

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
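
    A generic sketch of such a random search follows: pairwise swaps of the magnet ordering are accepted only when they lower the smear goal function. The simple swap move and fixed trial budget are illustrative; the algorithms described above employ an appropriate metrization of the state space rather than blind swaps.

        import random

        def sort_magnets(magnets, smear, n_trials=10000, seed=1):
            # Random pairwise-swap search minimizing the phase-space
            # distortion returned by smear(ordering).
            random.seed(seed)
            order = list(magnets)
            best = smear(order)
            for _ in range(n_trials):
                i, j = random.sample(range(len(order)), 2)
                order[i], order[j] = order[j], order[i]
                s = smear(order)
                if s < best:
                    best = s                                 # keep improving swap
                else:
                    order[i], order[j] = order[j], order[i]  # undo
            return order, best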

  4. Effective medium super-cell approximation for interacting disordered systems: an alternative real-space derivation of generalized dynamical cluster approximation

    Moradian, Rostam

    2006-01-01

    We develop a generalized real-space effective medium super-cell approximation (EMSCA) method to treat the electronic states of interacting disordered systems. This method is general and allows randomness both in the on-site energies and in the hopping integrals. For a non-interacting disordered system, in the special case of randomness in the on-site energies, this method is equivalent to the non-local coherent potential approximation (NLCPA) derived previously. Also, for an interacting system the EMSCA method leads to the real-space derivation of the generalized dynamical cluster approximation (DCA) for a general lattice structure. We find that the original DCA and the NLCPA are two simple cases of this technique, so the EMSCA is equivalent to the generalized DCA with interaction and randomness included in both the on-site energies and the hopping integrals. All of the equations of this formalism are derived by using effective medium theory in real space.

  5. Assessment of Protective Properties of Optimized Flagellin Derivative Against Biologically Harmful Effects of Ionizing Irradiation During Space Flight, Phase I

    National Aeronautics and Space Administration — The goal of this proposal is to explore a novel proprietary biopharmaceutical agent, named deltaFL-AA', a first in the series of innovative radioprotectors to act as...

  6. Recursive forgetting algorithms

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...

  7. ASAS modification of the Berlin algorithm for diagnosing axial spondyloarthritis: results from the SPondyloArthritis Caught Early (SPACE)-cohort and from the Assessment of SpondyloArthritis international Society (ASAS)-cohort

    van den Berg, Rosaline; de Hooge, Manouk; Rudwaleit, Martin; Sieper, Joachim; van Gaalen, Floris; Reijnierse, Monique; Landewé, Robert; Huizinga, Tom; van der Heijde, Désirée

    2013-01-01

    Objective To compare the original Berlin algorithm for diagnosing axial Spondyloarthritis (axSpA) with two modifications in the SPondyloArthritis Caught Early (SPACE)-cohort and the Assessment of SpondyloArthritis international Society (ASAS) axSpA criteria validation (ASAS)-cohort. Methods Patients

  8. Longitudinal Phase Space Tomography with Space Charge

    Hancock, S; Lindroos, M

    2000-01-01

    Tomography is now a very broad topic with a wealth of algorithms for the reconstruction of both qualitative and quantitative images. In an extension in the domain of particle accelerators, one of the simplest algorithms has been modified to take into account the non-linearity of large-amplitude synchrotron motion. This permits the accurate reconstruction of longitudinal phase space density from one-dimensional bunch profile data. The method is a hybrid one which incorporates particle tracking. Hitherto, a very simple tracking algorithm has been employed because only a brief span of measured profile data is required to build a snapshot of phase space. This is one of the strengths of the method, as tracking for relatively few turns relaxes the precision to which input machine parameters need to be known. The recent addition of longitudinal space charge considerations as an optional refinement of the code is described. Simplicity suggested an approach based on the derivative of bunch shape with the properties of...

  9. Algorithming the Algorithm

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  10. 26 CFR 1.863-8 - Source of income derived from space and ocean activity under section 863(d).

    2010-04-01

    ..., DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Regulations Applicable to Taxable... from sources without the United States to the extent the income, based on all the facts and... income derived by a CFC is income from sources without the United States to the extent the income, based...

  11. Derivation and validation of the Personal Support Algorithm: an evidence-based framework to inform allocation of personal support services in home and community care.

    Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P

    2017-11-25

    Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.

  12. Derivation and validation of the Personal Support Algorithm: an evidence-based framework to inform allocation of personal support services in home and community care

    Chi-Ling Joanna Sinn

    2017-11-01

    Full Text Available Abstract Background Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. Methods The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. Results The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. Conclusions The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.

  13. Generalized Grover's Algorithm for Multiple Phase Inversion States

    Byrnes, Tim; Forster, Gary; Tessler, Louis

    2018-02-01

    Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on N states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity of √(D/M^α) for this case, where D is the search space dimension, M is the number of target states, and α ≈ 1, which is close to the optimal scaling.
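
    For concreteness, a small sketch of the corresponding iteration count: with M target states in a D-dimensional search space and α close to 1, the number of Grover iterations scales as √(D/M), with the familiar π/4 prefactor of the standard algorithm (applying that prefactor to the generalized case is an assumption here).

        import numpy as np

        def grover_iterations(D, M):
            # Approximately (pi/4) * sqrt(D/M) iterations are needed to rotate
            # the initial state onto the subspace of the M marked states.
            return int(np.round(np.pi / 4 * np.sqrt(D / M)))

        print(grover_iterations(2**20, 4))   # -> 402 for D = 2^20, M = 4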

  14. Effect of space flight on meiosis of pollen mother cells and its derived pollens in impatiens balsamina

    Tang Zesheng; Yang Jun; Yuan Haiyun; Zhao Yan

    2005-01-01

    Effects of space flight on the meiosis of pollen mother cells and the mitosis of microspores in Impatiens balsamina were investigated. It was found that meiosis was abnormal in most plants germinated from seeds after space flight; chromosome fragments, chromosomal bridges and lagging chromosomes were observed during meiosis in these plants. Disproportional segregation of chromosomes, multipolar division and multinucleate cells were also observed in most plants, which developed into paraspores with different chromosome numbers. Mitosis of microspores was abnormal in most plants, and the chromosome number in microspores was unequal. The fertility of the pollens was tested with iodine solution and was found to vary among plants. (authors)

  15. A new parallel algorithm for simulation of spin glasses on scales of space-time periods of external fields with consideration of relaxation effects

    Gevorkyan, A.S.; Abajyan, H.G.

    2011-01-01

    We have investigated the statistical properties of an ensemble of disordered 1D spatial spin chains (SSCs) of finite length, placed in an external field, with consideration of relaxation effects. The short-range interaction complex-classical Hamiltonian was used for the first time for solving this problem. A system of recurrent equations is obtained on the nodes of the spin-chain lattice. An efficient mathematical algorithm is developed on the basis of these equations, with consideration of the advanced Sylvester conditions, which allows the step-by-step construction of a huge number of stable spin chains in parallel. The distribution functions of different parameters of the spin-glass system are constructed from the first principles of complex classical mechanics by analyzing the calculation results for the 1D SSC ensemble. It is shown that the behavior of the parameter distributions differs markedly depending on the external field. The ensemble energies and the spin-spin interaction constants change smoothly with the external field in the limit of statistical equilibrium, while other quantities, such as the mean polarization of the ensemble and its ordering parameters, are frustrated. We have also studied some critical properties of the ensemble, such as catastrophes in the Clausius-Mossotti equation, depending on the value of the external field. We have shown that the generalized complex-classical approach excludes these catastrophes, allowing one to organize continuous parallel computing over the whole range of values of the external field, including critical points. A new representation of the partition function based on these investigations is suggested. As opposed to the usual definition, this function is complex and its derivatives are everywhere defined, including at critical points.

  16. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  17. Sound algorithms

    De Götzen, Amalia; Mion, Luca; Tache, Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  18. Genetic algorithms

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  19. Evolution of the phase-space density and the Jeans scale for dark matter derived from the Vlasov-Einstein equation

    Piattella, O.F.; Rodrigues, D.C.; Fabris, J.C.; Pacheco, J.A. de Freitas

    2013-01-01

    We discuss solutions of the Vlasov-Einstein equation for collisionless dark matter particles in the context of a flat Friedmann universe. We show that, after decoupling from the primordial plasma, the dark matter phase-space density indicator Q = ρ/(σ_1D^2)^(3/2) remains constant during the expansion of the universe, prior to structure formation. This well known result is valid for non-relativistic particles and is not "observer dependent" as in solutions derived from the Vlasov-Poisson system. In the linear regime, the inclusion of velocity dispersion effects permits the definition of a physical Jeans length for collisionless matter as a function of the primordial phase-space density indicator: λ_J = (5π/G)^(1/2) Q^(-1/3) ρ_dm^(-1/6). The comoving Jeans wavenumber at matter-radiation equality is smaller by a factor of 2-3 than the comoving wavenumber due to free-streaming, contributing to the cut-off of the density fluctuation power spectrum at the smallest scales. We discuss the physical differences between these two scales. For dark matter particles of mass equal to 200 GeV, the derived Jeans mass is 4.3 × 10^-6 M_⊙
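
    A worked evaluation of the quoted Jeans length formula in SI units follows; the input values of Q and ρ_dm are placeholders for illustration, not values taken from the paper.

        import numpy as np

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

        def jeans_length(Q, rho_dm):
            # lambda_J = (5*pi/G)^(1/2) * Q^(-1/3) * rho_dm^(-1/6);
            # Q in kg s^3 m^-6 (density over velocity dispersion cubed),
            # rho_dm in kg m^-3, result in metres.
            return np.sqrt(5 * np.pi / G) * Q**(-1 / 3) * rho_dm**(-1 / 6)

        print(jeans_length(Q=1e-10, rho_dm=1e-19))   # placeholder inputs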

  20. New Applications for Detecting Natural Hazards Using Ground and Space-Based GNSS-Derived Ionospheric Measurements

    Komjathy, A.; Butala, M.; Verkhoglyadova, O. P.; Wilson, B. D.; Iijima, B.; Akopian, V.; Mannucci, A.

    2012-12-01

    The NASA Jet Propulsion Laboratory (JPL) and the University of Southern California (USC) have jointly developed the Global Assimilative Ionospheric Model (GAIM) to monitor space weather, study storm effects, and provide ionospheric calibration for various customers including NASA flight projects. JPL/USC GAIM is a physics-based 3D data assimilation model using 4DVAR and Kalman filter approaches to solve for ion and electron density states and other key ionospheric drivers. The JPL/USC GAIM technologies, now operating in real-time and post-processing modes, can routinely accept as input ground GPS TEC data from 1200+ sites, including streaming and hourly GPS stations, occultation links from the CHAMP, SAC-C, COSMIC and C/NOFS satellites, and UV limb and nadir scans. In the presentation, we will first discuss recent advances in assimilating ground-based GPS, C/NOFS and COSMIC occultation measurements into our GAIM system, characterizing the ionosphere in 3D. We will elaborate on our improved space-based bias estimation techniques for generating high-precision calibrated TEC measurements to be assimilated into GAIM. We will discuss the benefits of adding GLONASS measurements to our GIM and GAIM processing technologies. New and upcoming applications and first results will be shown for estimating very high precision TEC perturbations using real-time and post-processed GNSS observations from the GEONET and IGS networks. We will demonstrate initial steps toward integrating this GNSS ionosphere-based technology into a global tsunami warning system. Additional potential applications include the remote sensing of ionospheric TEC perturbations generated by other natural hazards, such as earthquakes and volcanic eruptions, and by human-made events such as nuclear tests.

  1. Seismic noise attenuation using an online subspace tracking algorithm

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with the state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
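
    The sketch below shows one incremental-gradient update of an orthonormal basis on the Grassmannian, in the style of the GROUSE family of online subspace trackers that matches the description above; the paper's exact update rule, step-size schedule and rank selection are not reproduced here, so this version is an assumption.

        import numpy as np

        def subspace_tracking_step(U, v, step=0.1):
            # U: d x k orthonormal basis of the current subspace estimate;
            # v: new data column. Rotate U towards v along the geodesic
            # defined by the residual direction (GROUSE-style update).
            w = U.T @ v                    # coordinates of v in the subspace
            p = U @ w                      # projection of v onto the subspace
            r = v - p                      # residual orthogonal to the subspace
            rn, pn = np.linalg.norm(r), np.linalg.norm(p)
            if rn < 1e-12 or pn < 1e-12:
                return U
            t = step * rn * pn             # geodesic step length
            d = (np.cos(t) - 1) * p / pn + np.sin(t) * r / rn
            return U + d[:, None] @ (w / pn)[None, :]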

  2. Exploration of a capability-focused aerospace system of systems architecture alternative with bilayer design space, based on RST-SOM algorithmic methods.

    Li, Zhifei; Qin, Dongliang; Yang, Feng

    2014-01-01

    In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, for the design space exploration of an aerospace system of systems, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining techniques of rough sets theory (RST) and self-organized mapping (SOM), the aerospace system-of-systems architecture alternative was mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.

  3. Volumetrically-Derived Global Navigation Satellite System Performance Assessment from the Earth's Surface through the Terrestrial Service Volume and the Space Service Volume

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user, from the Earth's surface through the Terrestrial Service Volume (TSV) to the edge of the Space Service Volume (SSV), when a multi-GNSS solution space approach is utilized. The ICG Working Group on Enhancement of GNSS Performance, New Services and Capabilities has started a three-phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that analysis initiative, of increasing complexity and fidelity, was recently expanded to compare nadir-facing and zenith-facing user hemispherical antenna coverage with omnidirectional antenna coverage at altitudes of 8,000 km and 36,000 km. This report summarizes the performance of these antenna coverage techniques at altitudes ranging from 100 km to 36,000 km, to be all-encompassing, as well as the volumetrically-derived system availability metrics.

  4. Second-order accurate volume-of-fluid algorithms for tracking material interfaces

    Pilliod, James Edward; Puckett, Elbridge Gerry

    2004-01-01

    We introduce two new volume-of-fluid interface reconstruction algorithms and compare their accuracy to that of four other widely used volume-of-fluid interface reconstruction algorithms. We find that when the interface is smooth (e.g., continuous with two continuous derivatives) the new methods are second-order accurate and the other algorithms are first-order accurate. We propose a design criterion for a volume-of-fluid interface reconstruction algorithm to be second-order accurate: namely, that it reproduce lines in two space dimensions or planes in three space dimensions exactly. We also introduce a second-order, unsplit, volume-of-fluid advection algorithm that is based on a second-order finite difference method for scalar conservation laws due to Bell, Dawson and Shubin. We test this advection algorithm by modeling several different interface shapes propagating in two simple incompressible flows and compare the results with the standard second-order, operator-split advection algorithm. Although both methods are second-order accurate when the interface is smooth, we find that the unsplit algorithm exhibits noticeably better resolution in regions where the interface has discontinuous derivatives, such as at corners.

  5. Coronary CT angiography-derived fractional flow reserve correlated with invasive fractional flow reserve measurements - initial experience with a novel physician-driven algorithm

    Baumann, Stefan; Wang, Rui; Schoepf, U.J.; Steinberg, Daniel H.; Spearman, James V.; Bayer, Richard R.; Hamm, Christian W.; Renker, Matthias

    2015-01-01

    The present study aimed to determine the feasibility of a novel fractional flow reserve (FFR) algorithm based on coronary CT angiography (cCTA) that permits point-of-care assessment, without data transfer to core laboratories, for the evaluation of potentially ischemia-causing stenoses. To obtain CT-based FFR, anatomical coronary information and ventricular mass extracted from cCTA datasets were integrated with haemodynamic parameters. CT-based FFR was assessed for 36 coronary artery stenoses in 28 patients in a blinded fashion and compared to catheter-based FFR. Haemodynamically relevant stenoses were defined by an invasive FFR ≤0.80. The time required for processing each cCTA dataset and computing CT-based FFR was measured. Assessment of cCTA image quality was performed using a 5-point scale. The mean total time for CT-based FFR determination was 51.9 ± 9.0 min. Per-vessel analysis for the identification of lesion-specific myocardial ischemia demonstrated good correlation (Pearson's product-moment r = 0.74, p < 0.0001) between the prototype CT-based FFR algorithm and invasive FFR. Subjective image quality analysis resulted in a median score of 4 (interquartile range, 3-4). Our initial data suggest that the CT-based FFR method for the detection of haemodynamically significant stenoses, evaluated in the selected population, correlates well with invasive FFR and renders time-efficient point-of-care assessment possible. (orig.)

  6. A structure-preserving method for a class of nonlinear dissipative wave equations with Riesz space-fractional derivatives

    Macías-Díaz, J. E.

    2017-12-01

    In this manuscript, we consider an initial-boundary-value problem governed by a (1 + 1)-dimensional hyperbolic partial differential equation with constant damping that generalizes many nonlinear wave equations from mathematical physics. The model considers the presence of a spatial Laplacian of fractional order which is defined in terms of Riesz fractional derivatives, as well as the inclusion of a generic continuously differentiable potential. It is known that the undamped regime has an associated positive energy functional, and we show here that it is preserved throughout time under suitable boundary conditions. To approximate the solutions of this model, we propose a finite-difference discretization based on fractional centered differences. Some discrete quantities are proposed in this work to estimate the energy functional, and we show that the numerical method is capable of conserving the discrete energy under the same boundary conditions for which the continuous model is conservative. Moreover, we establish suitable computational constraints under which the discrete energy of the system is positive. The method is second-order consistent, and is both stable and convergent. The numerical simulations shown here illustrate the most important features of our numerical methodology.
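
    A minimal sketch of the fractional centered differences used to discretize the Riesz derivative, intended for non-integer orders α; the zero-boundary assumption and the convolution-based evaluation are illustrative simplifications of the paper's full structure-preserving scheme.

        import numpy as np
        from scipy.special import gamma

        def fc_coeffs(alpha, K):
            # g_k = (-1)^k Gamma(alpha+1) / (Gamma(alpha/2 - k + 1) * Gamma(alpha/2 + k + 1))
            k = np.arange(-K, K + 1)
            return ((-1.0)**k * gamma(alpha + 1)
                    / (gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1)))

        def riesz_derivative(u, h, alpha):
            # Approximate -(-Laplacian)^(alpha/2) u on a uniform grid of
            # spacing h, assuming u vanishes outside the grid.
            n = len(u)
            g = fc_coeffs(alpha, n - 1)
            full = np.convolve(u, g)           # full[j + n - 1] = sum_k g_k u_{j-k}
            return -full[n - 1:2 * n - 1] / h**alpha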

  7. Evidence of quantum phase transition in real-space vacuum entanglement of higher derivative scalar quantum field theories.

    Kumar, S Santhosh; Shankaranarayanan, S

    2017-11-17

    In a bipartite set-up, the vacuum state of a free Bosonic scalar field is entangled in real space and satisfies the area law: entanglement entropy scales linearly with the area of the boundary between the two partitions. In this work, we show that the area law is violated in a two-spatial-dimension model Hamiltonian having dynamical critical exponent z = 3. The model physically corresponds to next-to-next-to-next nearest neighbour coupling terms on a lattice. The result reported here is the first violation of the area law of its kind in Bosonic systems in higher dimensions and signals the evidence of a quantum phase transition. We provide evidence for the quantum phase transition both numerically and analytically using quantum information tools such as entanglement spectra, quantum fidelity, and the gap in the energy spectra. We identify the cause of this transition as the accumulation of a large number of angular zero modes around the critical point, which catalyses the change in the ground state wave function due to the next-to-next-to-next nearest neighbour coupling. Lastly, using the Hubbard-Stratonovich transformation, we show that the effective Bosonic Hamiltonian can be obtained from an interacting fermionic theory and provide possible implications for condensed matter systems.

  8. Derivation of Human Chromatic Discrimination Ability from an Information-Theoretical Notion of Distance in Color Space.

    da Fonseca, María; Samengo, Inés

    2016-12-01

    The accuracy with which humans detect chromatic differences varies throughout color space. For example, we are far more precise when discriminating two similar orange stimuli than two similar green stimuli. In order for two colors to be perceived as different, the neurons representing chromatic information must respond differently, and the difference must be larger than the trial-to-trial variability of the response to each separate color. Photoreceptors constitute the first stage in the processing of color information; many more stages are required before humans can consciously report whether two stimuli are perceived as chromatically distinguishable. Therefore, although photoreceptor absorption curves are expected to influence the accuracy of conscious discriminability, there is no reason to believe that they should suffice to explain it. Here we develop information-theoretical tools based on the Fisher metric that demonstrate that photoreceptor absorption properties explain about 87% of the variance of human color discrimination ability, as tested by previous behavioral experiments. In the context of this theory, the bottleneck in chromatic information processing is determined by photoreceptor absorption characteristics. Subsequent encoding stages modify only marginally the chromatic discriminability at the photoreceptor level.

  9. Algorithmic cryptanalysis

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  10. Validation of CALIPSO space-borne-derived attenuated backscatter coefficient profiles using a ground-based lidar in Athens, Greece

    R. E. Mamouri

    2009-09-01

    Full Text Available We present initial aerosol validation results for the space-borne lidar CALIOP (onboard the CALIPSO satellite) Level 1 attenuated backscatter coefficient profiles, using coincident observations performed with a ground-based lidar in Athens, Greece (37.9° N, 23.6° E). A multi-wavelength ground-based backscatter/Raman lidar system has been operating since 2000 at the National Technical University of Athens (NTUA) in the framework of the European Aerosol Research LIdar NETwork (EARLINET), the first lidar network for tropospheric aerosol studies on a continental scale. Since July 2006, a total of 40 coincident ground-based aerosol lidar measurements were performed over Athens during CALIPSO overpasses. The ground-based measurements were performed each time CALIPSO overpassed the station location within a maximum distance of 100 km. The duration of the ground-based lidar measurements was approximately two hours, centred on the satellite overpass time. From the analysis of the ground-based/satellite correlative lidar measurements, a mean bias with respect to the CALIPSO profiles of the order of 22% for daytime measurements and of 8% for nighttime measurements was found for altitudes between 3 and 10 km. The mean bias becomes much larger for altitudes lower than 3 km (of the order of 60%), which is attributed to the increase of aerosol horizontal inhomogeneity within the Planetary Boundary Layer, resulting in the observation of possibly different air masses by the two instruments. In cases of aerosol layers underlying Cirrus clouds, the comparison results for aerosol tropospheric profiles become worse. This is attributed to the significant multiple scattering effects in Cirrus clouds experienced by CALIPSO, which result in an attenuation that is less than that measured by the ground-based lidar.

  11. Space space space

    Trembach, Vera

    2014-01-01

    Space is an introduction to the mysteries of the Universe. Included are Task Cards for independent learning, Journal Word Cards for creative writing, and Hands-On Activities for reinforcing skills in Math and Language Arts. Space is a perfect introduction to further research of the Solar System.

  12. Algorithmic mathematics

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  13. Gels and gel-derived glasses in the Na2O-B2O3-SiO2 system. [containerless melting in space

    Mukherjee, S. P.

    1982-01-01

    The containerless melting of high-purity multicomponent homogeneous gels and gel-monoliths offers a unique approach to making ultrapure multicomponent optical glasses in the reduced gravity environment of space. Procedures for preparing and characterizing gels and gel-derived glasses in the Na2O-B2O3-SiO2 system are described. Preparation is based on the polymerization reactions of alkoxysilane with trimethyl borate or boric acid and a suitable sodium compound. The chemistry of the gelling process is discussed in terms of the process parameters and the gel compositions. The physicochemical nature of gels prepared by three different procedures was found to be significantly different. IR absorption spectra indicate distinct differences in the molecular structures of the different gels. The melting of the gel powders and the transformation of porous gel-monoliths to transparent 'glass' without melting are described.

  14. An improved beam-space MUSIC time delay estimation algorithm based on the Teager-Kaiser operator

    周非; 王路凯; 范馨月

    2011-01-01

    To reduce the computational complexity of the beam-space multiple signal classification (MUSIC) algorithm for estimating the time of arrival (TOA) of received signals, and to improve its robustness to noise, an improved algorithm based on the Teager-Kaiser (TK) operator is presented. Exploiting the TK operator's sensitivity to instantaneous changes in the data, the correlation function of the received and reference signals is processed with the TK operator, from which the beam-space transformation matrix and the beam-space output data are estimated; the TOA is then obtained with the MUSIC time delay estimation algorithm. Simulation results show that this method requires less computation than the beam-space MUSIC time delay estimation algorithm, better suppresses multipath signal noise, and clearly improves the high-resolution estimation performance.
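
    For reference, a real-valued sketch of the discrete Teager-Kaiser energy operator applied as the preprocessing step described above; the handling of complex-valued correlation sequences and the subsequent beam-space MUSIC stages are omitted.

        import numpy as np

        def teager_kaiser(x):
            # Discrete Teager-Kaiser energy operator:
            # psi[x](n) = x(n)^2 - x(n-1) * x(n+1),
            # which reacts strongly to instantaneous changes in the data.
            x = np.asarray(x, dtype=float)
            psi = np.zeros_like(x)
            psi[1:-1] = x[1:-1]**2 - x[:-2] * x[2:]
            return psi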

  15. Total algorithms

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  16. Validation of ozone profile retrievals derived from the OMPS LP version 2.5 algorithm against correlative satellite measurements

    N. A. Kramarova

    2018-05-01

    Full Text Available The Limb Profiler (LP) is a part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes that have been implemented in this new algorithm, including a pointing correction, new cloud height detection, explicit aerosol correction and a reduction of the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and the Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and the correlative measurements are well within the required ±10 % between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have significantly improved in version 2.5 compared to version 2 due to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to the remaining thermal sensitivity issue. Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing

  17. A Cloud Top Pressure Algorithm for DSCOVR-EPIC

    Min, Q.; Morgan, E. C.; Yang, Y.; Marshak, A.; Davis, A. B.

    2017-12-01

    The Earth Polychromatic Imaging Camera (EPIC) sensor on the Deep Space Climate Observatory (DSCOVR) satellite presents unique opportunities to derive cloud properties of the entire daytime Earth. In particular, the Oxygen A- and B-band and corresponding reference channels provide cloud top pressure information. In order to address the in-cloud penetration depth issue—and ensuing retrieval bias—a comprehensive sensitivity study has been conducted to simulate satellite-observed radiances for a wide variety of cloud structures and optical properties. Based on this sensitivity study, a cloud top pressure algorithm for DSCOVR-EPIC has been developed. Further, the algorithm has been applied to EPIC measurements.

  18. Algorithmic chemistry

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  19. Determining frequentist confidence limits using a directed parameter space search

    Daniel, Scott F.; Connolly, Andrew J.; Schneider, Jeff

    2014-01-01

    We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the seven-year Wilkinson Microwave Anisotropy Probe cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.

  20. Composite Differential Search Algorithm

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search schemes, namely "DS/rand/1," "DS/rand/2," "DS/current to rand/1," and "DS/current to rand/2," to explore the search space and enhance the convergence rate for global optimization problems. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed schemes perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, "DS/rand/1," "DS/rand/2," and "DS/current to rand/1," with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
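
    A sketch of one of the named schemes follows, written in the DE/rand/1 style that "DS/rand/1" mirrors; the random choice among three schemes and three control parameters that defines the full CDS is omitted, so this illustrates a single trial-generation rule rather than the published algorithm.

        import numpy as np

        rng = np.random.default_rng(42)

        def ds_rand_1(pop, scale=0.8):
            # For each individual, combine three distinct randomly chosen
            # members r1, r2, r3: trial = x_r1 + scale * (x_r2 - x_r3).
            n = len(pop)
            trials = np.empty_like(pop)
            for i in range(n):
                r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                        3, replace=False)
                trials[i] = pop[r1] + scale * (pop[r2] - pop[r3])
            return trials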

  1. A sub-cubic time algorithm for computing the quartet distance between two general trees

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background: When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results: We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions: We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...
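
    For readers new to the measure itself, here is a naive O(n^4) counting sketch based on the four-point condition; it assumes precomputed leaf-to-leaf path-length tables (the nested dicts dist1 and dist2 are hypothetical) and is far slower than the record's sub-cubic algorithm.

    ```python
    from itertools import combinations

    def quartet_topology(dist, a, b, c, d):
        """Split of quartet {a,b,c,d} via the four-point condition on path
        lengths; returns a frozenset pairing, or None for a star quartet."""
        sums = {
            frozenset([frozenset([a, b]), frozenset([c, d])]): dist[a][b] + dist[c][d],
            frozenset([frozenset([a, c]), frozenset([b, d])]): dist[a][c] + dist[b][d],
            frozenset([frozenset([a, d]), frozenset([b, c])]): dist[a][d] + dist[b][c],
        }
        best = min(sums.values())
        winners = [k for k, v in sums.items() if v == best]
        return winners[0] if len(winners) == 1 else None

    def quartet_distance(dist1, dist2, leaves):
        """Count quartets whose topology differs between the two trees."""
        return sum(quartet_topology(dist1, *q) != quartet_topology(dist2, *q)
                   for q in combinations(leaves, 4))
    ```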

  2. The algorithm of self-interaction processes in quantum electrodynamics based on the causal Green functions ordered in time and space

    Cernobai, V. A.

    2004-01-01

    An algorithm based on the analytical extension and application of the contour integration method is presented, as well as the example of the vacuum polarization effect calculation. The analytical extensions are required to respect the symmetry relationships specific to real physical processes. Only integrals with finite values of generalized analytic functions (distributions) are used in all calculation stages. (author)

  3. Geometric differential evolution for combinatorial and programs spaces.

    Moraglio, A; Togelius, J; Silva, S

    2013-01-01

    Geometric differential evolution (GDE) is a recently introduced formal generalization of traditional differential evolution (DE) that can be used to derive specific differential evolution algorithms for both continuous and combinatorial spaces retaining the same geometric interpretation of the dynamics of the DE search across representations. In this article, we first review the theory behind the GDE algorithm, then, we use this framework to formally derive specific GDE for search spaces associated with binary strings, permutations, vectors of permutations and genetic programs. The resulting algorithms are representation-specific differential evolution algorithms searching the target spaces by acting directly on their underlying representations. We present experimental results for each of the new algorithms on a number of well-known problems comprising NK-landscapes, TSP, and Sudoku, for binary strings, permutations, and vectors of permutations. We also present results for the regression, artificial ant, parity, and multiplexer problems within the genetic programming domain. Experiments show that overall the new DE algorithms are competitive with well-tuned standard search algorithms.

  4. Automatic differentiation algorithms in model analysis

    Huiskes, M.J.

    2002-01-01

    In this thesis automatic differentiation algorithms and derivative-based methods
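
    As background on the thesis topic, a minimal forward-mode automatic differentiation sketch built on dual numbers (the class and its operator coverage are illustrative):

    ```python
    class Dual:
        """Forward-mode AD value: val carries f(x), dot carries f'(x)."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
        __rmul__ = __mul__

    # derivative of f(x) = x*x + 3*x at x = 2 is 2*2 + 3 = 7
    x = Dual(2.0, 1.0)      # seed dx/dx = 1
    y = x * x + 3 * x
    print(y.val, y.dot)     # 10.0 7.0
    ```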

  5. Deriving a light use efficiency estimation algorithm using in situ hyperspectral and eddy covariance measurements for a maize canopy in Northeast China.

    Zhang, Feng; Zhou, Guangsheng

    2017-07-01

    We estimated the light use efficiency (LUE) via vegetation canopy chlorophyll content (CCC_canopy) based on in situ measurements of spectral reflectance, biophysical characteristics, ecosystem CO2 fluxes and micrometeorological factors over a maize canopy in Northeast China. The results showed that among the common chlorophyll-related vegetation indices (VIs), CCC_canopy had the most obviously exponential relationships with the red edge position (REP) (R^2 = .97, p < .001) and normalized difference vegetation index (NDVI) (R^2 = .91, p < .001). In a comparison of the indicating performances of NDVI, ratio vegetation index (RVI), wide dynamic range vegetation index (WDRVI), and 2-band enhanced vegetation index (EVI2) when estimating CCC_canopy using all of the possible combinations of two separate wavelengths in the range 400-1300 nm, EVI2[1214, 1259] and EVI2[726, 1248] were better indicators, with R^2 values of .92 and .90 (p < .001). Remotely monitoring LUE through estimating CCC_canopy derived from field spectrometry data provided accurate prediction of midday gross primary productivity (GPP) in a rainfed maize agro-ecosystem (R^2 = .95, p < .001). This study provides a new paradigm for monitoring vegetation GPP based on the combination of LUE models with plant physiological properties.
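
    The two index families compared above have simple closed forms; a sketch using the usual NDVI definition and the common two-band EVI2 coefficients (2.5, 2.4, 1), applicable to whichever band pair is selected, such as the nonstandard pairs reported in the study:

    ```python
    def ndvi(nir, red):
        # Normalized difference vegetation index from band reflectances.
        return (nir - red) / (nir + red)

    def evi2(nir, red):
        # Two-band enhanced vegetation index with the standard coefficients.
        return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)
    ```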

  6. de Casteljau's Algorithm Revisited

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
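
    The step viewed as an operator in the abstract is one round of linear interpolation between consecutive control points; a minimal evaluation sketch (the function name is illustrative):

    ```python
    def de_casteljau(ctrl, t):
        """Evaluate a Bezier curve at parameter t by repeated linear
        interpolation of the control polygon."""
        pts = list(ctrl)
        while len(pts) > 1:
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # quadratic Bezier with control points (0,0), (1,2), (2,0) at t = 0.5
    print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))   # (1.0, 1.0)
    ```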

  7. Efficient graph algorithms

    Shortest path problems: a road network on cities, where we want to navigate between cities. ... Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. ...
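
    As a concrete instance of the shortest-path problems mentioned, a standard Dijkstra sketch over an adjacency-list graph (the graph representation is an assumption):

    ```python
    import heapq

    def dijkstra(graph, source):
        """Single-source shortest paths; graph is {u: [(v, weight), ...]}."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                      # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist
    ```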

  8. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance its security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
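
    To illustrate the keystream-plus-XOR structure common to such chaotic image ciphers, here is a toy sketch that substitutes a logistic map for the paper's fractional-order hyperchaotic Lorenz system; it is illustrative only and not cryptographically secure.

    ```python
    def logistic_keystream(x0, r, n):
        """Toy chaotic keystream: iterate the logistic map, quantize to bytes."""
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1.0 - x)
            out.append(int(x * 256) % 256)
        return out

    def xor_cipher(data, key):
        # XOR each data byte with the corresponding keystream byte.
        return bytes(b ^ k for b, k in zip(data, key))

    cipher = xor_cipher(b"secret", logistic_keystream(0.3141, 3.99, 6))
    ```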

  9. Utilization of O4 Slant Column Density to Derive Aerosol Layer Height from a Space-Borne UV-Visible Hyperspectral Sensor: Sensitivity and Case Study

    Park, Sang Seo; Kim, Jhoon; Lee, Hanlim; Torres, Omar; Lee, Kwang-Mog; Lee, Sang Deok

    2016-01-01

    The sensitivities of oxygen-dimer (O4) slant column densities (SCDs) to changes in aerosol layer height are investigated using the simulated radiances by a radiative transfer model, the linearized pseudo-spherical vector discrete ordinate radiative transfer (VLIDORT), and the differential optical absorption spectroscopy (DOAS) technique. The sensitivities of the O4 index (O4I), which is defined as the O4 SCD divided by 10^40 molecules^2 cm^-5, to aerosol types and optical properties are also evaluated and compared. Among the O4 absorption bands at 340, 360, 380, and 477 nanometers, the O4 absorption band at 477 nanometers is found to be the most suitable to retrieve the aerosol effective height. However, the O4I at 477 nanometers is significantly influenced not only by the aerosol layer effective height but also by aerosol vertical profiles, optical properties including single scattering albedo (SSA), aerosol optical depth (AOD), particle size, and surface albedo. Overall, the error of the retrieved aerosol effective height is estimated to be 1276, 846, and 739 meters for dust, non-absorbing, and absorbing aerosol, respectively, assuming knowledge on the aerosol vertical distribution shape. Using radiance data from the Ozone Monitoring Instrument (OMI), a new algorithm is developed to derive the aerosol effective height over East Asia after the determination of the aerosol type and AOD from the MODerate resolution Imaging Spectroradiometer (MODIS). About 80 percent of retrieved aerosol effective heights are within the error range of 1 kilometer compared to those obtained from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) measurements on thick aerosol layer cases.

  10. Quantum algorithms and learning theory

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) Consider a search space of N elements; one of these elements is "marked" and our goal is to find it. We describe a quantum algorithm to solve this problem

  11. An Aerosol Extinction-to-Backscatter Ratio Database Derived from the NASA Micro-Pulse Lidar Network: Applications for Space-based Lidar Observations

    Welton, Ellsworth J.; Campbell, James R.; Spinhime, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony

    2004-01-01

    Backscatter lidar signals are a function of both backscatter and extinction. Hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4*pi divided by the product of the single scatter albedo and the phase function at 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET). MPLNET is a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented. A framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications for using the new S look-up table to reprocess aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.
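
    The quoted definition of S translates directly into code; the normalization convention assumed for the phase function is stated in the comment.

    ```python
    import math

    def lidar_ratio(ssa, p180):
        """Extinction-to-backscatter ratio S = 4*pi / (ssa * P(180 deg)).

        ssa: single scatter albedo; p180: phase function at 180 degrees,
        normalized so its integral over the sphere equals 4*pi."""
        return 4.0 * math.pi / (ssa * p180)
    ```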

  12. Agency and Algorithms

    Hanns Holger Rutz

    2016-11-01

    Although the concept of algorithms was established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  13. An optimal iterative algorithm to solve Cauchy problem for Laplace equation

    Majeed, Muhammad Usman

    2015-05-25

    An optimal mean square error minimizer algorithm is developed to solve the severely ill-posed Cauchy problem for the Laplace equation on an annulus domain. The mathematical problem is presented as a first order state space-like system and an optimal iterative algorithm is developed that minimizes the mean square error in the states. Finite difference discretization schemes are used to discretize the first order system. After numerical discretization, the algorithm equations are derived taking inspiration from the Kalman filter, but using one of the space variables as a time-like variable. The given Dirichlet and Neumann boundary conditions are used on the Cauchy data boundary, and fictitious points are introduced on the unknown solution boundary. The algorithm is run for a number of iterations, using the solution of the previous iteration as a guess for the next one. The method developed proves highly robust to noise in the Cauchy data, and numerically efficient results are illustrated.

  14. In-Place Algorithms for Computing (Layers of) Maxima

    Blunck, Henrik; Vahrenhold, Jan

    2010-01-01

    We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal time and occupy only constant extra space.
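
    For orientation, a simple 2-D maxima computation that is optimal in time but, unlike the paper's in-place algorithms, uses linear output space:

    ```python
    def maxima_2d(points):
        """Maximal points of a 2-D set: those dominated by no other point
        in both x and y.  Sweeping by x descending, a point is maximal iff
        its y exceeds every y seen so far; O(n log n) time."""
        best_y = float('-inf')
        out = []
        for x, y in sorted(points, reverse=True):
            if y > best_y:
                out.append((x, y))
                best_y = y
        return out
    ```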

  15. The diagnosis of urinary tract infections in young children (DUTY): protocol for a diagnostic and prospective observational study to derive and validate a clinical algorithm for the diagnosis of UTI in children presenting to primary care with an acute illness

    Downing Harriet

    2012-07-01

    Background: Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. Methods/design: DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least >10^3, but usually >10^5 CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result. We will

  16. The diagnosis of urinary tract infections in young children (DUTY): protocol for a diagnostic and prospective observational study to derive and validate a clinical algorithm for the diagnosis of UTI in children presenting to primary care with an acute illness.

    Downing, Harriet; Thomas-Jones, Emma; Gal, Micaela; Waldron, Cherry-Ann; Sterne, Jonathan; Hollingworth, William; Hood, Kerenza; Delaney, Brendan; Little, Paul; Howe, Robin; Wootton, Mandy; Macgowan, Alastair; Butler, Christopher C; Hay, Alastair D

    2012-07-19

    Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least >10^3, but usually >10^5 CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result. We will then use economic evaluation to compare the cost

  17. Variation in primary and culture-expanded cells derived from connective tissue progenitors in human bone marrow space, bone trabecular surface and adipose tissue.

    Qadan, Maha A; Piuzzi, Nicolas S; Boehm, Cynthia; Bova, Wesley; Moos, Malcolm; Midura, Ronald J; Hascall, Vincent C; Malcuit, Christopher; Muschler, George F

    2018-03-01

    Connective tissue progenitors (CTPs) embody the heterogeneous stem and progenitor cell populations present in native tissue. CTPs are essential to the formation and remodeling of connective tissue and represent key targets for tissue-engineering and cell-based therapies. To better understand and characterize CTPs, we aimed to compare the (i) concentration and prevalence, (ii) early in vitro biological behavior and (iii) expression of surface-markers and transcription factors among cells derived from marrow space (MS), trabecular surface (TS), and adipose tissues (AT). Cancellous-bone and subcutaneous-adipose tissues were collected from 8 patients. Cells were isolated and cultured. Colony formation was assayed using Colonyze software based on ASTM standards. Cell concentration ([Cell]), CTP concentration ([CTP]) and CTP prevalence (P_CTP) were determined. Attributes of culture-expanded cells were compared based on (i) effective proliferation rate and (ii) expression of surface-markers CD73, CD90, CD105, SSEA-4, SSEA-3, SSEA-1/CD15, Cripto-1, E-Cadherin/CD324, Ep-CAM/CD326, CD146, hyaluronan and transcription factors Oct3/4, Sox-2 and Nanog using flow cytometry. Mean [Cell], [CTP] and P_CTP were significantly different between MS and TS samples (P = 0.03, P = 0.008 and P = 0.0003, respectively). AT-derived cells generated the highest mean total cell yield at day 6 of culture: 4-fold greater than TS and more than 40-fold greater than MS per million cells plated. TS colonies grew with higher mean density than MS colonies (290 ± 11 versus 150 ± 11 cells per mm^2; P = 0.0002). Expression of classical mesenchymal stromal cell (MSC) markers was consistently recorded (>95%) from all tissue sources, whereas all the other markers were highly variable. The prevalence and biological potential of CTPs differ between patients and tissue sources, and this variation is not captured by the classical MSC markers. Other markers are more likely to discriminate differences

  18. Algorithmic complexity of quantum capacity

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  19. A superlinear interior points algorithm for engineering design optimization

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  20. Formal derivation of a stable marriage algorithm.

    Bijlsma, A.

    1991-01-01

    In this paper the well-known Stable Marriage Problem is considered once again. The name of this programming problem comes from the terms in which it was first described [2]: A certain community consists of n men and n women. Each person ranks those of the opposite sex in accordance with his or
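
    The classical constructive solution to this problem is the Gale-Shapley deferred acceptance procedure; a compact sketch follows (the dict-based preference representation is an assumption, and this is not the paper's calculational derivation itself).

    ```python
    def gale_shapley(men_prefs, women_prefs):
        """Deferred acceptance; returns a stable matching {man: woman}.

        men_prefs / women_prefs: dicts mapping each person to a
        preference-ordered list of the opposite group."""
        rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
        next_proposal = {m: 0 for m in men_prefs}
        engaged_to = {}                      # woman -> man
        free_men = list(men_prefs)
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_proposal[m]]
            next_proposal[m] += 1
            if w not in engaged_to:
                engaged_to[w] = m
            elif rank[w][m] < rank[w][engaged_to[w]]:
                free_men.append(engaged_to[w])   # she trades up; old fiance freed
                engaged_to[w] = m
            else:
                free_men.append(m)               # rejected; proposes again later
        return {m: w for w, m in engaged_to.items()}
    ```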

  1. A steady state solution for the ditch drainage problem with special reference to seepage face and unsaturated zone flow contribution: Derivation of a new drainage spacing equation

    Yousfi, Ammar; Mechergui, Mohammed

    2016-04-01

    al. (2001). In this work, a novel solution based on a theoretical approach is adapted to incorporate both the seepage face and the unsaturated zone flow contribution for solving ditch-drained aquifer problems. This problem is tackled on the basis of the approximate 2D solution given by Castro-Orgaz et al. (2012). This solution yields the generalized water table profile function with a suitable boundary condition to be determined, and provides a modified DF theory which permits, as an outcome, the analytical determination of the seepage face. To assess the ability of the developed equation for water-table estimation, the obtained results were compared with numerical solutions to the 2-D problem under different conditions. It is shown that the results are in fair agreement and thus the resulting model can be used for designing ditch drainage systems. With respect to drainage design, the spacings calculated with the newly derived equation are compared with those computed from the DF theory. It is shown that the effect of the unsaturated zone flow contribution is limited to sandy soils, and the calculated maximum increase in drain spacing is about 30%. Keywords: subsurface ditch drainage; unsaturated zone; seepage face; water-table; ditch spacing equation

  2. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  3. Genetic algorithm essentials

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalism and thus opens the subject to a broader audience in comparison to manuscripts overloaded with notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  4. Masticator space abscess derived from odontogenic infection: imaging manifestation and pathways of extension depicted by CT and MR in 30 patients

    Schuknecht, B.; Stergiou, G.; Graetz, K.

    2008-01-01

    Propagation of odontogenic masticator space abscesses is insufficiently understood. The purpose was to analyse pathways of spread in 30 patients with odontogenic masticator space abscess. The imaging findings in 30 patients (CT in 30, MR in 16 patients) were retrospectively analysed. CT and MR imaging depicted a masticator space abscess within: medial pterygoid muscle in 13 patients (43.3%), lateral masseter and/or pterygoid muscle in 14 (46.7%) and superficial temporal muscle in 3 patients (10%). In the lateral masticator space intra-spatial abscess extension occurred in 7 of 14 patients (50%). The sub-masseteric space provided a pathway in seven (70%). Extra-spatial extension involved the submandibular space only in 3 of 14 patients (21.4%). Medial masticator space abscesses exhibited extra-spatial spread only. Extension affected the parapharyngeal space and/or soft palate in 7 of 13 lesions (53.8%). MR imaging in comparison to CT increased the number of abscess locations from 18 to 23 (27.8%) and regions affected by a cellular infiltrate from 12 to 16 (33.3%). The sub-masseteric space served as a previously underestimated pathway for intra-spatial propagation of lateral masticator abscesses. Medial masticator space abscesses tend to display early extra-spatial parapharyngeal space and/or soft palate extension. (orig.)

  5. Chebyshev blossoming in Müntz spaces: Toward shaping with Young diagrams

    Ait-Haddou, Rachid

    2013-08-01

    The notion of a blossom in extended Chebyshev spaces offers adequate generalizations and extra-utilities to the tools for free-form design schemes. Unfortunately, such advantages are often overshadowed by the complexity of the resulting algorithms. In this work, we show that for the case of Müntz spaces with integer exponents, the notion of a Chebyshev blossom leads to elegant algorithms whose complexities are embedded in the combinatorics of Schur functions. We express the blossom and the pseudo-affinity property in Müntz spaces in terms of Schur functions. We derive an explicit expression for the Chebyshev-Bernstein basis via an inductive argument on nested Müntz spaces. We also reveal a simple algorithm for dimension elevation. Free-form design schemes in Müntz spaces with Young diagrams as shape parameters are discussed. © 2013 Elsevier Ltd. All rights reserved.

  6. Chebyshev blossoming in Müntz spaces: Toward shaping with Young diagrams

    Ait-Haddou, Rachid; Sakane, Yusuke; Nomura, Taishin

    2013-01-01

    The notion of a blossom in extended Chebyshev spaces offers adequate generalizations and extra-utilities to the tools for free-form design schemes. Unfortunately, such advantages are often overshadowed by the complexity of the resulting algorithms. In this work, we show that for the case of Müntz spaces with integer exponents, the notion of a Chebyshev blossom leads to elegant algorithms whose complexities are embedded in the combinatorics of Schur functions. We express the blossom and the pseudo-affinity property in Müntz spaces in terms of Schur functions. We derive an explicit expression for the Chebyshev-Bernstein basis via an inductive argument on nested Müntz spaces. We also reveal a simple algorithm for dimension elevation. Free-form design schemes in Müntz spaces with Young diagrams as shape parameters are discussed. © 2013 Elsevier Ltd. All rights reserved.

  7. Analysis of the unstressed lattice spacing, d0, for the determination of the residual stress in a friction stir welded plate of an age-hardenable aluminum alloy – Use of equilibrium conditions and a genetic algorithm

    Cioffi, F.; Hidalgo, J.I.; Fernández, R.; Pirling, T.; Fernández, B.; Gesto, D.; Puente Orench, I.; Rey, P.; González-Doncel, G.

    2014-01-01

    Procedures based on equilibrium conditions (stress and bending moment) have been used to obtain an unstressed lattice spacing, d0, as a crucial requirement for calculating the residual stress (RS) profile across a joint conducted on a 10 mm thick plate of age-hardenable AA2024 alloy by friction stir welding (FSW). Two procedures have been used that take advantage of neutron diffraction measurements. First, equilibrium conditions were imposed on sections parallel to the weld so that a constant d0 value corresponding to the base material region could be calculated analytically. Second, balance conditions were imposed on a section transverse to the weld. Then, using the data and a genetic algorithm, suitable d0 values for the different regions of the weld have been found. For several reasons, the comb method has proved to be inappropriate for RS determination in the case of age-hardenable alloys. However, the equilibrium conditions, together with the genetic algorithm, have been shown to be very suitable for determining RS profiles in FSW joints of these alloys, where inherent microstructural variations of d0 across the weld are expected.

  8. Algorithmic alternatives

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).

  9. Combinatorial algorithms

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  10. Autodriver algorithm

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  11. A 3D-Space Vector Modulation Algorithm for Three Phase Four Wire Neutral Point Clamped Inverter Systems as Power Quality Compensator

    Palanisamy Ramasamy

    2017-11-01

    A Unified Power Quality Conditioner (UPQC) is designed using a Neutral Point Clamped (NPC) multilevel inverter to improve the power quality. When designed for high/medium voltage and power applications, the voltage stress across the switches and the harmonic content in the output voltage are increased. A 3-phase 4-wire NPC inverter system is developed as a power quality conditioner using an effectual three dimensional Space Vector Modulation (3D-SVM) technique. The proposed system behaves like a UPQC with shunt and series active filters under balanced and unbalanced loading conditions. In addition to addressing the power quality issues, it also balances the neutral point voltage and the voltage across the capacitors under unbalanced conditions. The hardware and simulation results of the proposed system are compared with 2D-SVM and 3D-SVM. The proposed system is simulated using MATLAB and the hardware is designed using an FPGA. From the results it is evident that the effectual 3D-SVM technique gives better performance compared to other control methods.

  12. The Quality Control Algorithms Used in the Process of Creating the NASA Kennedy Space Center Lightning Protection System Towers Meteorological Database

    Orcutt, John M.; Brenton, James C.

    2016-01-01

    The methodology and the results of the quality control (QC) process of the meteorological data from the Lightning Protection System (LPS) towers located at Kennedy Space Center (KSC) launch complex 39B (LC-39B) are documented in this paper. Meteorological data are used to design a launch vehicle, determine operational constraints, and to apply defined constraints on day-of-launch (DOL). To properly accomplish these tasks, a climatological database representative of the conditions the vehicle will encounter is needed. Numerous meteorological measurement towers exist at KSC; however, the engineering tasks need measurements at specific heights, some of which can only be provided by a few towers. Other than the LPS towers, Tower 313 is the only tower that provides observations up to 150 m. This tower is located approximately 3.5 km from LC-39B. In addition, the data need to be QC'ed to remove erroneous reports that could pollute the results of an engineering analysis, mislead the development of operational constraints, or provide a false image of the atmosphere at the tower's location.

  13. Trace spaces

    Fajstrup, Lisbeth; Goubault, Eric; Haucourt, Emmanuel

    2012-01-01

    in the interleaving semantics of a concurrent program, but rather some equivalence classes. The purpose of this paper is to describe a new algorithm to compute such equivalence classes, and a representative per class, which is based on ideas originating in algebraic topology. We introduce a geometric semantics of concurrent languages, where programs are interpreted as directed topological spaces, and study its properties in order to devise an algorithm for computing dihomotopy classes of execution paths. In particular, our algorithm is able to compute a control-flow graph for concurrent programs, possibly containing loops, which is “as reduced as possible” in the sense that it generates traces modulo equivalence. A preliminary implementation was achieved, showing promising results towards efficient methods to analyze concurrent programs, with very promising results compared to partial-order reduction techniques.

  14. Survey on nonlocal games and operator space theory

    Palazuelos, Carlos; Vidick, Thomas

    2016-01-01

    This review article is concerned with a recently uncovered connection between operator spaces, a noncommutative extension of Banach spaces, and quantum nonlocality, a striking phenomenon which underlies many of the applications of quantum mechanics to information theory, cryptography, and algorithms. Using the framework of nonlocal games, we relate measures of the nonlocality of quantum mechanics to certain norms in the Banach and operator space categories. We survey recent results that exploit this connection to derive large violations of Bell inequalities, study the complexity of the classical and quantum values of games and their relation to Grothendieck inequalities, and quantify the nonlocality of different classes of entangled states.

  15. Survey on nonlocal games and operator space theory

    Palazuelos, Carlos, E-mail: cpalazue@mat.ucm.es [Instituto de Ciencias Matemáticas (ICMAT), Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Madrid (Spain); Vidick, Thomas, E-mail: vidick@cms.caltech.edu [Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, California 91125 (United States)

    2016-01-15

    This review article is concerned with a recently uncovered connection between operator spaces, a noncommutative extension of Banach spaces, and quantum nonlocality, a striking phenomenon which underlies many of the applications of quantum mechanics to information theory, cryptography, and algorithms. Using the framework of nonlocal games, we relate measures of the nonlocality of quantum mechanics to certain norms in the Banach and operator space categories. We survey recent results that exploit this connection to derive large violations of Bell inequalities, study the complexity of the classical and quantum values of games and their relation to Grothendieck inequalities, and quantify the nonlocality of different classes of entangled states.

  16. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.
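
    The Thomas algorithm named above is Gaussian elimination specialized to tridiagonal systems: a forward elimination sweep followed by back substitution, O(n) overall. A serial sketch, with the array conventions stated in the docstring as assumptions:

    ```python
    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c and right-hand side d, all length n (a[0] and
        c[-1] are unused).  Forward elimination, then back substitution."""
        n = len(b)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```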

  17. A retrodictive stochastic simulation algorithm

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
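
    For contrast with the retrodictive variant, a minimal predictive direct-method stochastic simulation sketch (the interface and the birth-death example are illustrative; the paper's algorithm runs the inference in the reverse direction):

    ```python
    import math, random

    def gillespie(rates, update, x0, t_max):
        """Direct-method SSA: rates(x) -> list of propensities,
        update(x, j) -> state after reaction j fires."""
        t, x, path = 0.0, x0, [(0.0, x0)]
        while t < t_max:
            a = rates(x)
            a0 = sum(a)
            if a0 == 0:
                break
            t += -math.log(1.0 - random.random()) / a0   # exponential wait
            r, j, acc = random.random() * a0, 0, a[0]
            while acc < r:                               # pick reaction j
                j += 1
                acc += a[j]
            x = update(x, j)
            path.append((t, x))
        return path

    # simple birth-death process: birth rate 1.0, death rate 0.1 * x
    path = gillespie(lambda x: [1.0, 0.1 * x],
                     lambda x, j: x + 1 if j == 0 else x - 1, 10, 50.0)
    ```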

  18. Structural optimization via a design space hierarchy

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  19. Fast numerical algorithm for the linear canonical transform.

    Hennelly, Bryan M; Sheridan, John T

    2005-05-01

    The linear canonical transform (LCT) describes the effect of any quadratic phase system (QPS) on an input optical wave field. Special cases of the LCT include the fractional Fourier transform (FRT), the Fourier transform (FT), and the Fresnel transform (FST) describing free-space propagation. Currently there are numerous efficient algorithms used (for purposes of numerical simulation in the area of optical signal processing) to calculate the discrete FT, FRT, and FST. All of these algorithms are based on the use of the fast Fourier transform (FFT). In this paper we develop theory for the discrete linear canonical transform (DLCT), which is to the LCT what the discrete Fourier transform (DFT) is to the FT. We then derive the fast linear canonical transform (FLCT), an N log N algorithm for its numerical implementation by an approach similar to that used in deriving the FFT from the DFT. Our algorithm is significantly different from the FFT, is based purely on the properties of the LCT, and can be used for FFT, FRT, and FST calculations and, in the most general case, for the rapid calculation of the effect of any QPS.

  20. Controller tuning based on optimization algorithms of a novel spherical rolling robot

    Sadegjian, Rasou [Dept. of Electrical, Biomedical, and Mechatronics Engineering, Qazvin Branch, Islamic Azad University, Qazvin (Iran, Islamic Republic of); Masouleh, Mehdi Tale [Human and Robot Interaction Laboratory, Faculty of New Sciences and Technologies, University of Tehran, Tehran (Iran, Islamic Republic of)

    2016-11-15

    This study presents the construction process of a novel spherical rolling robot and the control strategies used to improve robot locomotion. The proposed robot drive mechanism is constructed based on a combination of the pendulum and wheel drive mechanisms. The control model of the proposed robot is developed, and the state space model is calculated based on the obtained control model. Two control strategies are defined to improve the synchronization performance of the proposed robot motors. The proportional-derivative and proportional-integral-derivative controllers are designed based on the pole placement method. The proportional-integral-derivative controller leads to a better step response than the proportional-derivative controller. The controller parameters are tuned with genetic and differential evolution algorithms. The proportional-integral-derivative controller tuned with the differential evolution algorithm leads to a better step response than the one tuned with the genetic algorithm. Fuzzy logic is used to reduce the time needed to synchronize the drive mechanism motors, with the aim of achieving a high-performance controller. The experimental implementation of the fuzzy-proportional-integral-derivative controller on the proposed spherical rolling robot resulted in a desirable synchronizing performance in a short time.

  1. Controller tuning based on optimization algorithms of a novel spherical rolling robot

    Sadegjian, Rasou; Masouleh, Mehdi Tale

    2016-01-01

    This study presents the construction process of a novel spherical rolling robot and the control strategies used to improve robot locomotion. The proposed robot drive mechanism is constructed based on a combination of the pendulum and wheel drive mechanisms. The control model of the proposed robot is developed, and the state space model is calculated based on the obtained control model. Two control strategies are defined to improve the synchronization performance of the proposed robot motors. The proportional-derivative and proportional-integral-derivative controllers are designed based on the pole placement method. The proportional-integral-derivative controller leads to a better step response than the proportional-derivative controller. The controller parameters are tuned with genetic and differential evolution algorithms. The proportional-integral-derivative controller tuned with the differential evolution algorithm leads to a better step response than the one tuned with the genetic algorithm. Fuzzy logic is used to reduce the time needed to synchronize the drive mechanism motors, with the aim of achieving a high-performance controller. The experimental implementation of the fuzzy-proportional-integral-derivative controller on the proposed spherical rolling robot resulted in a desirable synchronizing performance in a short time.
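
    The controllers being tuned share the standard discrete PID update; a sketch of a single step (the state layout and names are assumptions, and the genetic/differential-evolution loop that optimizes kp, ki, kd is omitted):

    ```python
    def pid_step(state, error, dt, kp, ki, kd):
        """One discrete PID update; state carries (integral, previous_error).
        Returns the control output and the updated state."""
        integral, prev_err = state
        integral += error * dt
        derivative = (error - prev_err) / dt
        u = kp * error + ki * integral + kd * derivative
        return u, (integral, error)
    ```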

  2. Algorithmic Self

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  3. Array architectures for iterative algorithms

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  4. Enhanced sampling algorithms.

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
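
    The replica-exchange method mentioned above swaps configurations between two temperatures with Metropolis probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]); a one-function sketch of the acceptance test:

    ```python
    import math, random

    def try_swap(beta_i, beta_j, E_i, E_j):
        """Accept or reject exchanging replicas at inverse temperatures
        beta_i, beta_j with potential energies E_i, E_j."""
        delta = (beta_i - beta_j) * (E_i - E_j)
        return delta >= 0 or random.random() < math.exp(delta)
    ```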

  5. A time reversal algorithm in acoustic media with Dirac measure approximations

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function of time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measure of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.

  6. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix is derived in the form of the Schur complement as M^-1 = C - B^* A^-1 B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^-1. For closed-chain systems, similar factorizations and O(n) algorithms for computation of the operational space mass matrix Lambda and its inverse Lambda^-1 are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.

  7. A polylogarithmic competitive algorithm for the k-server problem

    Bansal, N.; Buchbinder, N.; Madry, A.; Naor, J.

    2011-01-01

    We give the first polylogarithmic-competitive randomized online algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(log^3 n log^2 k log log n) for any metric space on n points. Our algorithm improves upon the

  8. Closely spaced coherent-source localization based on the MUSIC-Group Delay algorithm

    郑家芝

    2016-01-01

    In this paper, closely spaced coherent-source localization is considered, and an improved method based on the group delay of Multiple Signal Classification (MUSIC) is presented. First, we introduce the spatial smoothing technique into direction of arrival (DoA) estimation to remove the coherent part of the signals. Because the performance of subspace-based methods degrades when sources are close together, we then utilize the MUSIC-Group Delay algorithm to distinguish closely spaced sources; owing to its spatial additive property, it resolves spatially close sources using the group delay function computed from the MUSIC phase spectrum, giving efficient DoA estimation. Theoretical analysis and simulation results demonstrate that the proposed approach estimates the DoA of coherent, closely spaced signal sources more precisely and with higher resolution than subspace-based methods.
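
    To make the subspace step concrete, here is a minimal MUSIC pseudospectrum sketch for a uniform linear array in Python/NumPy; the group-delay variant of the record additionally differentiates the MUSIC phase spectrum, which this sketch omits, and the array geometry, names, and half-wavelength spacing are assumptions.

    ```python
    import numpy as np

    def music_spectrum(R, n_sources, angles_deg, d=0.5):
        """MUSIC pseudospectrum for an M-element uniform linear array.

        R: (M, M) sensor covariance matrix; d: element spacing in
        wavelengths.  Peaks of the spectrum indicate source directions."""
        M = R.shape[0]
        _, eigvecs = np.linalg.eigh(R)           # eigenvalues ascending
        En = eigvecs[:, :M - n_sources]          # noise subspace
        spec = np.empty(len(angles_deg))
        for idx, theta in enumerate(angles_deg):
            a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta)))
            spec[idx] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        return spec
    ```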

  9. Parallel algorithms

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  10. Algorithm 865

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
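
    As an illustration of packed triangular storage, a plain column-by-column Cholesky sketch over a lower triangle packed by columns; the indexing follows the standard column-major packed convention, while the article's block hybrid format is not reproduced here.

    ```python
    import math

    def cholesky_packed(ap, n):
        """Cholesky factor L (same packed layout) of a symmetric positive-
        definite matrix whose lower triangle is packed by columns in ap,
        length n*(n+1)/2, with ap[idx(i, j)] = A[i][j] for i >= j."""
        idx = lambda i, j: j * n - j * (j - 1) // 2 + (i - j)
        L = list(ap)
        for j in range(n):
            s = L[idx(j, j)] - sum(L[idx(j, k)] ** 2 for k in range(j))
            L[idx(j, j)] = math.sqrt(s)
            for i in range(j + 1, n):
                t = L[idx(i, j)] - sum(L[idx(i, k)] * L[idx(j, k)] for k in range(j))
                L[idx(i, j)] = t / L[idx(j, j)]
        return L
    ```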

  11. Application of Genetic Algorithms in Seismic Tomography

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet; Papazachos, Constantinos

    2010-05-01

    In the earth sciences several inverse problems that require data fitting and parameter estimation are nonlinear and can involve a large number of unknown parameters. Consequently, the application of analytical inversion or optimization techniques may be quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem in question, adopting an iterative procedure using partial derivatives to improve an initial model. This approach can lead to a dependence of the final model solution on the starting model and is prone to entrapment in local misfit minima. Moreover, the calculation of derivatives can be computationally inefficient and create instabilities when numerical approximations are used. In contrast to these local minimization methods, global techniques do not rely on partial derivatives, are independent of the form of the data misfit criterion, and are computationally robust. Such methods often use random processes to sample a selected wider span of the model space. In this situation, randomly generated models are assessed in terms of their data-fitting quality and the process may be stopped after a certain number of acceptable models is identified or continued until a satisfactory data fit is achieved. A new class of methods known as genetic algorithms achieves the aforementioned approximation through novel model representation and manipulations. Genetic algorithms (GAs) were originally developed in the field of artificial intelligence by John Holland more than 20 years ago, but even in that field the methodology has been generally applied for less than a decade, and only recently did it attract the attention of the earth sciences community. Applications have generally been concentrated in geophysics and in particular seismology. As awareness of genetic algorithms grows there surely will be many more and varied applications to earth science problems. In the present work, the
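
    A minimal real-coded genetic algorithm of the kind sketched in the abstract, with truncation selection, uniform crossover and Gaussian mutation; all parameter choices are illustrative, not those of any particular tomography code.

    ```python
    import random

    def genetic_search(fitness, dim, pop_size=40, gens=100, pm=0.1):
        """Maximize `fitness` over [0, 1]^dim with truncation selection,
        uniform crossover, and clipped Gaussian mutation."""
        pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(gens):
            elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(elite):
                p, q = random.sample(elite, 2)
                child = [a if random.random() < 0.5 else b for a, b in zip(p, q)]
                child = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                         if random.random() < pm else g for g in child]
                children.append(child)
            pop = elite + children
        return max(pop, key=fitness)
    ```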

  12. Application of diet-derived taste active components for clinical nutrition: perspectives from ancient Ayurvedic medical science, space medicine, and modern clinical nutrition.

    Kulkarni, Anil D; Sundaresan, Alamelu; Rashid, Muhammad J; Yamamoto, Shigeru; Karkow, Francisco

    2014-01-01

    The principal objective of this paper is to demonstrate the role of taste and flavor in health from the ancient science of Ayurveda to modern medicine; specifically, their mechanisms and roles in space medicine and their clinical relevance in modern health care. It also describes a brief history of the use of monosodium glutamate or flavor enhancers ("Umami substance") that improve the quality of food intake by stimulating chemosensory perception. In addition, dietary nucleotides are known to be components of the "Umami substance", and the benefit of their use has been proposed for various types of patients, including those with cancer, undergoing radiation therapy or organ transplantation, and for application in space medicine.

  13. Algebraic theory of locally nilpotent derivations

    Freudenburg, Gene

    2017-01-01

    This book explores the theory and application of locally nilpotent derivations, a subject motivated by questions in affine algebraic geometry and having fundamental connections to areas such as commutative algebra, representation theory, Lie algebras and differential equations. The author provides a unified treatment of the subject, beginning with 16 First Principles on which the theory is based. These are used to establish classical results, such as Rentschler's Theorem for the plane and the Cancellation Theorem for Curves. More recent results, such as Makar-Limanov's theorem for locally nilpotent derivations of polynomial rings, are also discussed. Topics of special interest include progress in classifying additive actions on three-dimensional affine space, finiteness questions (Hilbert's 14th Problem), algorithms, the Makar-Limanov invariant, and connections to the Cancellation Problem and the Embedding Problem. A lot of new material is included in this expanded second edition, such as canonical factoriza...

  14. Parameter and State Estimator for State Space Models

    Ruifeng Ding

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.

  15. New Insights into the RLS Algorithm

    Gänsler Tomas

    2004-01-01

    The recursive least squares (RLS) algorithm is one of the most popular adaptive algorithms that can be found in the literature, due to the fact that it is easily and exactly derived from the normal equations. In this paper, we give another interpretation of the RLS algorithm and show the importance of linear interpolation error energies in the RLS structure. We also give a very efficient way to recursively estimate the condition number of the input signal covariance matrix thanks to fast versions of the RLS algorithm. Finally, we quantify the misalignment of the RLS algorithm with respect to the condition number.
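
    For reference, the exactly-derived recursion the abstract alludes to is the textbook exponentially weighted RLS update; the forgetting factor and initialization below are conventional choices, not values from the paper:

    import numpy as np

    # Textbook exponentially weighted RLS: at each step, update the weight
    # vector w and the inverse correlation matrix P from the normal
    # equations. lam is the forgetting factor; delta initializes P = I/delta.

    def rls(xs, ds, lam=0.99, delta=1e-2):
        n = xs.shape[1]
        w = np.zeros(n)
        P = np.eye(n) / delta
        for x, d in zip(xs, ds):
            Px = P @ x
            k = Px / (lam + x @ Px)          # gain vector
            e = d - w @ x                    # a priori error
            w = w + k * e
            P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    d = X @ w_true + 0.01 * rng.normal(size=500)
    print(rls(X, d))                         # close to w_true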

  16. Algorithmic Verification of Linearizability for Ordinary Differential Equations

    Lyakhov, Dmitry A.; Gerdt, Vladimir P.; Michels, Dominik L.

    2017-01-01

    …one by a point transformation of the dependent and independent variables. The first algorithm is based on a construction of the Lie point symmetry algebra and on the computation of its derived algebra. The second algorithm exploits the differential

  17. A Scalable Local Algorithm for Distributed Multivariate Regression

    National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm can be used for distributed...

  18. An Efficient Local Algorithm for Distributed Multivariate Regression

    National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm is designed for distributed...

  19. The architecture of a video image processor for the space station

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high-level language programmability, modularity, and extensibility, and it can meet the required performance goals.

  20. A synthesis/design optimization algorithm for Rankine cycle based energy systems

    Toffolo, Andrea

    2014-01-01

    The algorithm presented in this work has been developed to search for the optimal topology and design parameters of a set of Rankine cycles forming an energy system that absorbs/releases heat at different temperature levels and converts part of the absorbed heat into electricity. This algorithm can deal with several applications in the field of energy engineering: e.g., steam cycles or bottoming cycles in combined/cogenerative plants, steam networks, low temperature organic Rankine cycles. The main purpose of this algorithm is to overcome the limitations of the search space introduced by the traditional mixed-integer programming techniques, which assume that possible solutions are derived from a single superstructure embedding them all. The algorithm presented in this work is a hybrid evolutionary/traditional optimization algorithm organized in two levels. A complex original codification of the topology and the intensive design parameters of the system is managed by the upper level evolutionary algorithm according to the criteria set by the HEATSEP method, which are used for the first time to automatically synthesize a “basic” system configuration from a set of elementary thermodynamic cycles. The lower SQP (sequential quadratic programming) algorithm optimizes the objective function(s) with respect to cycle mass flow rates only, taking into account the heat transfer feasibility constraint within the undefined heat transfer section. A challenging example of application is also presented to show the capabilities of the algorithm. - Highlights: • Energy systems based on Rankine cycles are used in many applications. • A hybrid algorithm is proposed to optimize the synthesis/design of such systems. • The topology of the candidate solutions is not limited by a superstructure. • Topology is managed by the genetic operators of the upper level algorithm. • The effectiveness of the algorithm is proved in a complex test case

  1. Design study of RL10 derivatives. Volume 3, part 2: Operational and flight support plan. [analysis of transportation requirements for rocket engine in support of space tug program

    Shubert, W. C.

    1973-01-01

    Transportation requirements are considered during the engine design layout reviews and maintenance engineering analyses. Where designs cannot be influenced to avoid transportation problems, the transportation representative is advised of the problems, permitting remedies early in the program. The transportation representative will monitor and be involved in the shipment of development engine and GSE hardware between FRDC and the vehicle manufacturing plant, and thereby will obtain an early evaluation of the transportation plans, methods and procedures to be used in the space tug support program. Unanticipated problems discovered in the shipment of development hardware will be known early enough to permit changes in packaging designs and transportation plans before the start of production hardware and engine shipments. All conventional transport media can be used for the movement of space tug engines. However, truck transport is recommended for ready availability, variety of routes, short transit time, and low cost.

  2. Modified Projection Algorithms for Solving the Split Equality Problems

    Qiao-Li Dong

    2014-01-01

    …proposed a CQ algorithm for solving it. In this paper, we propose a modification for the CQ algorithm, which computes the stepsize adaptively and performs an additional projection step onto two half-spaces in each iteration. We further propose a relaxation scheme for the self-adaptive projection algorithm by using projections onto half-spaces instead of those onto the original convex sets, which is much more practical. Weak convergence results for both algorithms are analyzed.
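
    The practicality of the relaxed scheme rests on the fact that projection onto a half-space has a closed form, unlike projection onto a general convex set. A minimal sketch of that elementary step:

    import numpy as np

    # Projection onto the half-space H = {x : <a, x> <= b} has a closed
    # form: if <a, x> <= b, x is already in H; otherwise subtract the
    # constraint violation along the normal direction a.

    def project_halfspace(x, a, b):
        viol = a @ x - b
        if viol <= 0:
            return x
        return x - (viol / (a @ a)) * a

    x = np.array([3.0, 4.0])
    a = np.array([1.0, 0.0])
    print(project_halfspace(x, a, 1.0))   # -> [1. 4.]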

  3. A new chaotic algorithm for image encryption

    Gao Haojiang; Zhang Yisheng; Liang Shuyun; Li Dequn

    2006-01-01

    Recent research on image encryption algorithms has increasingly been based on chaotic systems, but the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. This paper presents a new nonlinear chaotic algorithm (NCA) which uses power and tangent functions instead of a linear function. Its structural parameters are obtained by experimental analysis, and an image encryption algorithm in a one-time-one-password system is designed. The experimental results demonstrate that the image encryption algorithm based on NCA shows the advantages of large key space and high-level security while maintaining acceptable efficiency. Compared with general encryption algorithms such as DES, the encryption algorithm is more secure.
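
    The abstract does not reproduce the NCA map itself, so the sketch below shows only the generic chaos-based stream design such ciphers share: a keystream generated by iterating a chaotic map (here the classical logistic map rather than the paper's power/tangent construction) XORed with the data. It is for illustration only; ciphers this simple are known to be weak:

    # Generic chaos-based stream cipher, for illustration only. This is NOT
    # the paper's NCA, and simple logistic-map ciphers have known weaknesses.

    def keystream(x0, r, n):
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1.0 - x)           # logistic map iterate
            out.append(int(x * 256) % 256)  # quantize to a key byte
        return out

    def crypt(data: bytes, x0=0.3579, r=3.99):
        # the key is the map seed (x0, r); XOR makes encryption/decryption
        # the same operation
        ks = keystream(x0, r, len(data))
        return bytes(b ^ k for b, k in zip(data, ks))

    msg = b"image bytes..."
    enc = crypt(msg)
    assert crypt(enc) == msg                # XOR cipher is its own inverse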

  4. Large scale tracking algorithms

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  5. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    TAYSEER S. ATIA

    2014-08-01

    The Blowfish algorithm is a strong, simple block cipher used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or for applications that require changing the secret key frequently. In this paper, a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the new generation process requires relatively little time and a reasonably low amount of memory, which enhances the algorithm and gives it the possibility of different usages.

  6. The theory of hybrid stochastic algorithms

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied.

  7. Testing algorithms for critical slowing down

    Cossu Guido

    2018-01-01

    We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.

  8. Study of the convergence behavior of the complex kernel least mean square algorithm.

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used in finding solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves considering the circularity of the complex input signals and their effect on nonlinear learning. Simulations are used for verifying the analysis results.

  9. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    TANYILDIZI, E.

    2017-05-01

    In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function. In the algorithm, random individuals are created, as many as the number of search agents, with uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and converges faster, which increases the importance of this new method.
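
    Gold-SA's population update equations are not given in the abstract, but the golden-section narrowing it cites is the classical golden-section search, sketched here for a one-dimensional objective:

    import math

    # Classical golden-section search: each iteration shrinks the bracket
    # [lo, hi] by the golden ratio while reusing one interior evaluation.
    # (Gold-SA itself is population-based; its exact update equations are
    # not given in the abstract.)

    def golden_section(f, lo, hi, tol=1e-6):
        inv_phi = (math.sqrt(5) - 1) / 2               # ~0.618
        a = lo + (1 - inv_phi) * (hi - lo)
        b = lo + inv_phi * (hi - lo)
        fa, fb = f(a), f(b)
        while hi - lo > tol:
            if fa < fb:                                # minimum lies in [lo, b]
                hi, b, fb = b, a, fa
                a = lo + (1 - inv_phi) * (hi - lo)
                fa = f(a)
            else:                                      # minimum lies in [a, hi]
                lo, a, fa = a, b, fb
                b = lo + inv_phi * (hi - lo)
                fb = f(b)
        return (lo + hi) / 2

    print(golden_section(lambda x: (x - 1.3) ** 2, -10, 10))  # ~1.3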

  10. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms.

  11. Evolving temporal association rules with genetic algorithms

    Matthews, Stephen G.; Gongora, Mario A.; Hopgood, Adrian A.

    2010-01-01

    A novel framework for mining temporal association rules by discovering itemsets with a genetic algorithm is introduced. Metaheuristics have been applied to association rule mining; we show the efficacy of extending this to another variant: temporal association rule mining. Our framework is an enhancement to existing temporal association rule mining methods as it employs a genetic algorithm to simultaneously search the rule space and temporal space. A methodology for validating the ability of...

  12. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-01-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson–Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications

  14. Absence of a space-charge-derived enhancement of ionic conductivity in β|γ- heterostructured 7H- and 9R-AgI

    Morgan, B J; Madden, P A

    2012-01-01

    Extreme room temperature conductivity enhancements have been reported for nanocrystalline AgI of up to ×10⁴ relative to bulk β-AgI (Guo et al 2005 Adv. Mater. 17 2815-9). These samples were identified as possessing 7H and 9R polytype structures, which can be considered as heterostructures composed of thin, commensurate layers in the β (wurtzite) and γ (zincblende) phases. It has been proposed that space-charge layer formation at β|γ-interfaces causes near complete disordering of the Ag⁺ sublattice in these polytypes, resulting in a massive intrinsic enhancement of ionic conductivity. We have performed molecular dynamics simulations of β- and γ-AgI and mixed β|γ superlattices, to study the effect of heterostructuring on intrinsic defect populations and Ag⁺ transport. The ionic conductivities and Ag⁺ diffusion coefficients vary as β > 7H ≈ 9R ≈ 10L > γ. The β|γ-heterostructured polytypes show no enhancement in defect populations or Ag⁺ mobilities relative to the β-AgI phase, and instead behave as simple composites of β- and γ-AgI. This contradicts the proposal that the extreme conductivity enhancement observed for 7H and 9R polytypes is explained by extensive space-charge formation.

  15. The Vehicular Information Space Framework

    Prinz, Vivian; Schlichter, Johann; Schweiger, Benno

    Vehicular networks are distributed, self-organizing and highly mobile ad hoc networks. They allow for providing drivers with up-to-the-minute information about their environment. Therefore, they are expected to be a decisive future enabler for enhancing driving comfort and safety. This article introduces the Vehicular Information Space framework (VIS). Vehicles running the VIS form a kind of distributed database. It enables them to provide information like existing hazards, parking spaces or traffic densities in a location-aware and fully distributed manner. In addition, vehicles can retrieve, modify and delete these information items. The underlying algorithm is based on features derived from existing structured Peer-to-Peer algorithms and extended to suit the specific characteristics of highly mobile ad hoc networks. We present, implement and simulate the VIS using a motorway and an urban traffic environment. Simulation studies on VIS message occurrence show that the VIS implies reasonable traffic overhead. Also, overall VIS message traffic is independent of the number of information items provided.

  16. A Novel Algorithm of Surface Eliminating in Undersurface Optoacoustic Imaging

    Zhulina Yulia V

    2004-01-01

    This paper analyzes the task of optoacoustic imaging of objects located under a covering surface. We suggest an algorithm for eliminating the surface, based on the fact that the intensity of the image as a function of the spatial point should change slowly inside local objects and will suffer a discontinuity of the spatial gradients on their boundaries. The algorithm forms the 2-dimensional curves along which the discontinuity of the signal derivatives is detected. Then the algorithm divides the signal space into areas along these curves. The signals inside the areas with the maximum level of signal amplitudes and the maximal absolute values of the gradient on their edges are set equal to zero. The rest of the signals are used for the image restoration. This method permits reconstruction of the surface boundaries with a higher contrast than surface detection based on the maxima of the received signals. The algorithm does not require any prior knowledge of the signal statistics inside and outside the local objects. It may be used for reconstructing any images from signals representing an integral over the object's volume. Simulation and real data are provided to validate the proposed method.

  17. Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates

    National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) is an integral part of prognostic system development. As capabilities of prediction algorithms evolve, research...

  18. In-Place Algorithms for Computing (Layers of) Maxima

    Blunck, Henrik; Vahrenhold, Jan

    2006-01-01

    We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal O(n log₂ n) time and require O(1) space in addition to the representation of the input.
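
    Setting aside the in-place, O(1)-extra-space aspect that is the paper's actual contribution, the underlying two-dimensional maxima computation is a short sort-and-sweep; a sketch:

    # 2D maxima: a point is maximal if no other point dominates it in both
    # coordinates. After sorting by x descending, one sweep that keeps the
    # best y seen so far finds all maxima in O(n log n) time.

    def maxima(points):
        points = sorted(points, key=lambda p: (-p[0], -p[1]))
        best_y = float("-inf")
        out = []
        for x, y in points:
            if y > best_y:       # not dominated by any point with larger x
                out.append((x, y))
                best_y = y
        return out

    pts = [(1, 5), (2, 3), (4, 4), (3, 1), (4, 2)]
    print(maxima(pts))           # [(4, 4), (1, 5)]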

  19. Algorithmic causets

    Bolognesi, Tommaso

    2011-01-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  20. Quantum algorithm for support matrix machines

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and pq is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  1. Trajectory Planning with Pose Feedback for a Dual-Arm Space Robot

    Yicheng Liu

    2016-01-01

    In order to obtain high-precision path tracking for a dual-arm space robot, a trajectory planning method with pose feedback is introduced into the design process in this paper. Firstly, pose error kinematic models are derived from the related kinematics and the desired pose commands for the end-effector and the base, respectively. On this basis, trajectory planning with pose feedback is proposed from a control perspective. Theoretical analyses show that the proposed trajectory planning algorithm guarantees that the pose error converges to zero exponentially for both the end-effector and the base when the robot is away from singular configurations. Compared with existing algorithms, the proposed algorithm leads to higher-precision path tracking for the end-effector. Furthermore, the algorithm gives the system a good anti-interference property for the base. Simulation results demonstrate the effectiveness of the proposed trajectory planning algorithm.

  2. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm

    Joon Heo

    2009-06-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was found to be 134% in 2-dimensional space and 143% in 3-dimensional space.
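
    A minimal one-dimensional sketch of this setting: regularly spaced reference tags, a detection range of 125% of the spacing following the 1-dimensional result above, and the reader position estimated as the mean of the detected tags' coordinates. The scene and estimator are illustrative assumptions, not the paper's exact system:

    # 1-D illustration: tags at regular spacing d, a reader that detects
    # every tag within range r, and a k-NN-style position estimate as the
    # mean of detected tag coordinates. r = 1.25 * d follows the paper's
    # 1-D optimum; the rest of the setup is illustrative.

    def estimate_position(reader_x, spacing=1.0, range_factor=1.25, n_tags=100):
        r = range_factor * spacing
        tags = [i * spacing for i in range(n_tags)]
        detected = [t for t in tags if abs(t - reader_x) <= r]
        return sum(detected) / len(detected)

    for true_x in (10.0, 10.3, 10.5):
        est = estimate_position(true_x)
        print(true_x, est, abs(est - true_x))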

  3. Highlights of TOMS Version 9 Total Ozone Algorithm

    Bhartia, Pawan; Haffner, David

    2012-01-01

    The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithm have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One deficiency of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis. A side

  4. Improved identification of the solution space of aerosol microphysical properties derived from the inversion of profiles of lidar optical data, part 1: theory.

    Kolgotin, Alexei; Müller, Detlef; Chemyakin, Eduard; Romanov, Anton

    2016-12-01

    Multiwavelength Raman/high spectral resolution lidars that measure backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm can be used for the retrieval of particle microphysical parameters, such as effective and mean radius, number, surface-area and volume concentrations, and complex refractive index, with inversion algorithms. In this study, we carry out a correlation analysis in order to investigate the degree of dependence that may exist between the optical data taken with lidar and the underlying microphysical parameters. We also investigate if the correlation properties identified in our study can be used as a priori or a posteriori constraints for our inversion scheme so that the inversion results can be improved. We made the simplifying assumption of error-free optical data in order to find out what correlations exist in the best-case situation. Clearly, for practical applications, erroneous data need to be considered too. On the basis of simulations with synthetic optical data, we find the following results, which hold true for arbitrary particle size distributions, i.e., regardless of the modality or the shape of the size distribution function: surface-area concentrations and extinction coefficients are linearly correlated with a correlation coefficient above 0.99. We also find a correlation coefficient above 0.99 for the extinction coefficient versus (1) the ratio of the volume concentration to effective radius and (2) the product of the number concentration times the sum of the squares of the mean radius and standard deviation of the investigated particle size distributions. Besides that, we find that for particles of any mode fraction of the particle size distribution, the complex refractive index is uniquely defined by extinction- and backscatter-related Ångström exponents, lidar ratios at two wavelengths, and an effective radius.

  5. Stability and chaos of LMSER PCA learning algorithm

    Lv Jiancheng; Y, Zhang

    2007-01-01

    The LMSER PCA algorithm is a principal component analysis algorithm used to extract principal components online from input data. The algorithm exhibits both stable and chaotic dynamic behavior, depending on conditions. This paper studies the local stability of the LMSER PCA algorithm via a corresponding deterministic discrete-time system, and conditions for local stability are derived. The paper also explores the chaotic behavior of this algorithm, showing that the LMSER PCA algorithm can produce chaos. Waveform plots, Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior of this algorithm.

  6. Iterative algorithms for the input and state recovery from the approximate inverse of strictly proper multivariable systems

    Chen, Liwen; Xu, Qiang

    2018-02-01

    This paper proposes new iterative algorithms for the unknown input and state recovery from the system outputs using an approximate inverse of the strictly proper linear time-invariant (LTI) multivariable system. One unique advantage over previous system inverse algorithms is that output differentiation is not required. The approximate system inverse is stable due to the systematic optimal design of a dummy feedthrough D matrix in the state-space model via feedback stabilization. The optimal design procedure avoids trial and error in identifying such a D matrix, which saves a tremendous amount of effort. From the derived and proved convergence criteria, such an optimal D matrix also guarantees the convergence of the algorithms. Illustrative examples show significant improvement of the reference input signal tracking by the algorithms and the optimal D design over non-iterative counterparts on controllable or stabilizable LTI systems, respectively. Case studies of two Boeing-767 aircraft aerodynamic models further demonstrate the capability of the proposed methods.

  7. Algorithm for counting large directed loops

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.

  8. Quantum learning algorithms for quantum measurements

    Bisio, Alessandro, E-mail: alessandro.bisio@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); D'Ariano, Giacomo Mauro, E-mail: dariano@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Perinotti, Paolo, E-mail: paolo.perinotti@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Sedlak, Michal, E-mail: michal.sedlak@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)

    2011-09-12

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  9. Decoding algorithm for vortex communications receiver

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.

  11. Hydraulic Properties of Closely Spaced Dipping Open Fractures Intersecting a Fluid-Filled Borehole Derived From Tube Wave Generation and Scattering

    Minato, Shohei; Ghose, Ranajit; Tsuji, Takeshi; Ikeda, Michiharu; Onishi, Kozo

    2017-10-01

    Fluid-filled fractures and fissures often determine the pathways and volume of fluid movement. They are critically important in crustal seismology and in the exploration of geothermal and hydrocarbon reservoirs. We introduce a model for tube wave scattering and generation at dipping, parallel-wall fractures intersecting a fluid-filled borehole. A new equation reveals the interaction of the tube wavefield with multiple, closely spaced fractures, showing that the fracture dip significantly affects the tube waves. Numerical modeling demonstrates the possibility of imaging these fractures using a focusing analysis. The focused traces correspond well with the known fracture density, aperture, and dip angles. Testing the method on a VSP data set obtained at a fault-damaged zone in the Median Tectonic Line, Japan, presents evidence of tube waves being generated and scattered at open fractures and thin cataclasite layers. This finding leads to a new possibility for imaging, characterizing, and monitoring in situ hydraulic properties of dipping fractures using the tube wavefield.

  12. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    Elsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that, at each iteration, discovers the basis components most correlated with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
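
    The greedy skeleton NOMP extends is classical orthogonal matching pursuit with a Tikhonov solve on the active support. The sketch below is the plain linear version; the paper's nonlinear variant replaces the exact correlation step with ISEM-approximated gradients:

    import numpy as np

    # Classical (linear) orthogonal matching pursuit with a Tikhonov-
    # regularized solve on the active support. Shows only the greedy
    # support-augmentation skeleton, not the paper's ensemble machinery.

    def omp_tikhonov(D, y, n_iter=5, reg=1e-6):
        support, residual = [], y.copy()
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            corr = np.abs(D.T @ residual)          # correlate atoms with residual
            corr[support] = 0.0                    # skip already-selected atoms
            support.append(int(np.argmax(corr)))   # augment the support
            Ds = D[:, support]
            coef = np.linalg.solve(Ds.T @ Ds + reg * np.eye(len(support)),
                                   Ds.T @ y)       # Tikhonov solve on support
            x[:] = 0.0
            x[support] = coef
            residual = y - D @ x
        return x

    rng = np.random.default_rng(1)
    D = rng.normal(size=(50, 20))
    x_true = np.zeros(20)
    x_true[[3, 7]] = [2.0, -1.5]
    y = D @ x_true
    print(np.nonzero(omp_tikhonov(D, y, n_iter=2))[0])   # likely {3, 7}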

  13. Strong and Weak Convergence Criteria of Composite Iterative Algorithms for Systems of Generalized Equilibria

    Lu-Chuan Ceng

    2014-01-01

    We first introduce and analyze one iterative algorithm by using the composite shrinking projection method for finding a solution of the system of generalized equilibria with constraints of several problems: a generalized mixed equilibrium problem, finitely many variational inequalities, and the common fixed point problem of an asymptotically strict pseudocontractive mapping in the intermediate sense and infinitely many nonexpansive mappings in a real Hilbert space. We prove a strong convergence theorem for the iterative algorithm under suitable conditions. On the other hand, we also propose another iterative algorithm involving no shrinking projection method and derive its weak convergence under mild assumptions. Our results improve and extend the corresponding results in the earlier and recent literature.

  14. Simulation of quantum systems with random walks: A new algorithm for charged systems

    Ceperley, D.

    1983-01-01

    Random walks with branching have been used to calculate exact properties of the ground state of quantum many-body systems. In this paper, a more general Green's function identity is derived which relates the potential energy, a trial wavefunction, and a trial density matrix to the rules of a branched random walk. It is shown that an efficient algorithm requires a good trial wavefunction, a good trial density matrix, and a good sampling of this density matrix. An accurate density matrix is constructed for Coulomb systems using the path integral formula. The random walks from this new algorithm diffuse through phase space an order of magnitude faster than the previous Green's Function Monte Carlo method. In contrast to the simple diffusion Monte Carlo algorithm, it is an exact method. Representative results are presented for several molecules.

  15. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  16. Implementation schemes in NMR of quantum processors and the Deutsch-Jozsa algorithm by using virtual spin representation

    Kessel, Alexander R.; Yakovleva, Natalia M.

    2002-01-01

    Schemes for the experimental realization of the main two-qubit processors for quantum computers and of the Deutsch-Jozsa algorithm are derived in virtual spin representation. The results are applicable to any four quantum states that allow the required properties for quantum processor implementation when virtual spin representation is used for qubit encoding. A four-dimensional Hilbert space of nuclear spin 3/2 is considered in detail for this aim.

  17. Empirical study of parallel LRU simulation algorithms

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
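
    The serial computation these parallel algorithms target can be sketched directly: keep references in recency order, and each reference's depth in that stack is its stack distance, so a cache of size C hits exactly the references with distance at most C:

    # Serial baseline for LRU stack-distance computation. The paper studies
    # parallelizations of this computation; the sketch is the plain version.

    def stack_distances(trace):
        stack, dists = [], []
        for ref in trace:
            if ref in stack:
                d = stack.index(ref) + 1   # depth from the top, 1-based
                stack.remove(ref)
            else:
                d = float("inf")           # cold miss: infinite distance
            stack.insert(0, ref)           # move to most-recently-used
            dists.append(d)
        return dists

    trace = ["a", "b", "c", "a", "b", "b"]
    print(stack_distances(trace))          # [inf, inf, inf, 3, 3, 1]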

  18. Improved k-t PCA Algorithm Using Artificial Sparsity in Dynamic MRI.

    Wang, Yiran; Chen, Zhifeng; Wang, Jing; Yuan, Lixia; Xia, Ling; Liu, Feng

    2017-01-01

    The k-t principal component analysis (k-t PCA) is an effective approach for high spatiotemporal resolution dynamic magnetic resonance (MR) imaging. However, it suffers from larger residual aliasing artifacts and noise amplification when the reduction factor goes higher. To further enhance the performance of this technique, we propose a new method called sparse k-t PCA that combines the k-t PCA algorithm with an artificial sparsity constraint. It is a self-calibrated procedure that is based on the traditional k-t PCA method by further eliminating the reconstruction error derived from complex subtraction of the sampled k-t space from the original reconstructed k-t space. The proposed method is tested through both simulations and in vivo datasets with different reduction factors. Compared to the standard k-t PCA algorithm, the sparse k-t PCA can improve the normalized root-mean-square error performance and the accuracy of temporal resolution. It is thus useful for rapid dynamic MR imaging.

  19. Pseudo-deterministic Algorithms

    Goldwasser, Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  20. Algorithmic Relative Complexity

    Daniele Cerra

    2011-04-01

    Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
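
    A compression-based approximation in the spirit of these definitions can be sketched with zlib; the exact estimator used in the paper may differ, so treat the formulas below as one plausible reading of the abstract:

    import zlib

    def c(s: bytes) -> int:
        # approximate description length by compressed size
        return len(zlib.compress(s, 9))

    def cross_complexity(x: bytes, y: bytes) -> int:
        # cost of describing x once y has already been seen
        return c(y + x) - c(y)

    def relative_complexity(x: bytes, y: bytes) -> int:
        # compression power lost versus the shortest description of x
        return cross_complexity(x, y) - c(x)

    doc = b"the quick brown fox jumps over the lazy dog " * 20
    near = b"the quick brown fox " * 40
    far = bytes(range(256)) * 4
    print(relative_complexity(doc, near))  # smaller: near shares structure with doc
    print(relative_complexity(doc, far))   # larger: far is unrelated to doc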

  1. Algorithms for Protein Structure Prediction

    Paluszewski, Martin

    …-trace. Here we present three different approaches for reconstruction of C-traces from predictable measures. In our first approach [63, 62], the C-trace is positioned on a lattice and a tabu-search algorithm is applied to find minimum energy structures. The energy function is based on half-sphere-exposure (HSE) … is more robust than standard Monte Carlo search. In the second approach for reconstruction of C-traces, an exact branch and bound algorithm has been developed [67, 65]. The model is discrete and makes use of secondary structure predictions, HSE, CN and radius of gyration. We show how to compute good lower … bounds for partial structures very fast. Using these lower bounds, we are able to find global minimum structures in a huge conformational space in reasonable time. We show that many of these global minimum structures are of good quality compared to the native structure. Our branch and bound algorithm…

  2. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    Bang, Youngsuk

    …hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic natures of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.

  3. Seismic noise attenuation using an online subspace tracking algorithm

    Zhou, Yatong; Li, Shuhua; Zhang, D.; Chen, Yangkang

    2018-01-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient

  4. The Viterbi Algorithm expressed in Constraint Handling Rules

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    The Viterbi algorithm is a classical example of a dynamic programming algorithm, in which pruning reduces the search space drastically, so that an otherwise exponential time complexity is reduced to linearity. The central steps of the algorithm, expansion and pruning, can be expressed in a concis...
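
    For comparison with the CHR formulation, the conventional dynamic-programming form of the Viterbi algorithm makes expansion and pruning concrete: each layer keeps, per state, only the best-scoring path reaching it. A small sketch with a hypothetical two-state model:

    # Standard dynamic-programming Viterbi: expansion generates candidate
    # extensions, and keeping only the best predecessor per state is the
    # pruning that reduces exponential search to linear time.

    def viterbi(obs, states, start_p, trans_p, emit_p):
        V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
        for o in obs[1:]:
            prev = V[-1]
            layer = {}
            for s in states:
                p, path = max(
                    (prev[r][0] * trans_p[r][s] * emit_p[s][o], prev[r][1])
                    for r in states
                )
                layer[s] = (p, path + [s])     # keep only the best predecessor
            V.append(layer)
        return max(V[-1].values())

    states = ("rain", "dry")
    start = {"rain": 0.6, "dry": 0.4}
    trans = {"rain": {"rain": 0.7, "dry": 0.3}, "dry": {"rain": 0.3, "dry": 0.7}}
    emit = {"rain": {"walk": 0.1, "shop": 0.9}, "dry": {"walk": 0.8, "shop": 0.2}}
    print(viterbi(["walk", "shop", "walk"], states, start, trans, emit))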

  5. Classification algorithms using adaptive partitioning

    Binev, Peter; Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald

    2014-01-01

© 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335–1353; Mach. Learn. 66 (2007) 209–242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  7. A contrast-oriented algorithm for FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer: derivation from phantom measurements and validation in patient data

    Schaefer, Andrea; Hellwig, Dirk; Kirsch, Carl-Martin; Nestle, Ursula [Saarland University Medical Center, Department of Nuclear Medicine, Homburg (Germany); Kremp, Stephanie; Ruebe, Christian [Saarland University Medical Center, Department of Radiotherapy, Homburg (Germany)

    2008-11-15

An easily applicable algorithm for the FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer was developed by phantom measurements and validated in patient data. PET scans were performed (ECAT-ART tomograph) on two cylindrical phantoms (phan1, phan2) containing glass spheres of different volumes (7.4-258 ml) which were filled with identical FDG concentrations. Gradually increasing the activity of the fillable background, signal-to-background ratios from 33:1 to 2.5:1 were realised. The mean standardised uptake value (SUV) of the region-of-interest (ROI) surrounded by a 70% isocontour (mSUV70) was used to represent the FDG accumulation of each sphere (or tumour). Image contrast was defined as C = (mSUV70 - BG)/BG, where BG is the mean background SUV. For the spheres of phan1, the threshold SUVs (TS) best matching the known sphere volumes were determined. A regression function representing the relationship between TS/(mSUV70 - BG) and C was calculated and used for delineation of the spheres in phan2 and the gross tumour volumes (GTVs) of eight primary lung tumours. These GTVs were compared to those defined using CT. The relationship between TS/(mSUV70 - BG) and C is best described by an inverse regression function which can be converted to the linear relationship TS = a x mSUV70 + b x BG. Using this algorithm, the volumes delineated in phan2 differed by only -0.4 to +0.7 mm in radius from the true ones, whilst the PET-GTVs differed by only -0.7 to +1.2 mm compared with the values determined by CT. By the contrast-oriented algorithm presented in this study, a PET-based delineation of GTVs for primary tumours of lung cancer patients is feasible. (orig.)
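
    As a sketch of how the derived linear form would be applied, the snippet below computes a delineation threshold TS = a x mSUV70 + b x BG; the coefficients a and b are placeholders, not the values fitted in the phantom study:

    # Hypothetical coefficients a and b; the study fits these by regression
    # on phantom data, and the fitted values are not reproduced here.
    def threshold_suv(msuv70, bg, a=0.5, b=0.6):
        """Linear form of the contrast-oriented threshold: TS = a*mSUV70 + b*BG."""
        return a * msuv70 + b * bg

    # Example: a tumour with mSUV70 = 8.0 over a background SUV of 1.5.
    print(threshold_suv(8.0, 1.5))   # SUV threshold used to contour the GTV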

  8. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform
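
    The Bartels-Stewart algorithm solves Sylvester equations of the form AX + XB = C; SciPy exposes an implementation as scipy.linalg.solve_sylvester, so a minimal usage sketch (with arbitrary toy matrices) looks like this:

    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((3, 3))
    C = rng.standard_normal((4, 3))

    # Solve AX + XB = C via the Bartels-Stewart algorithm.
    X = solve_sylvester(A, B, C)
    print(np.allclose(A @ X + X @ B, C))   # True: residual is numerically zero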

  9. A note on the linear memory Baum-Welch algorithm

    Jensen, Jens Ledet

    2009-01-01

We demonstrate the simplicity and generality of the recently introduced linear space Baum-Welch algorithm for hidden Markov models. We also point to previous literature on the subject.

  10. The Chandra Source Catalog: Algorithms

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  11. Hamiltonian Algorithm Sound Synthesis

    大矢, 健一

    2013-01-01

Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  12. Progressive geometric algorithms

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  14. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    V. D. Sulimov

    2014-01-01

Full Text Available Modern methods for the optimization of complex systems are based on developing and updating mathematical models of those systems by solving the appropriate inverse problems. The input data needed for a solution are obtained from the analysis of experimentally determined characteristics of a system or process. The sought causal characteristics include the coefficients of the equations of the object's mathematical model, boundary conditions, etc. The optimization approach is one of the main routes to solving such inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in identification and computational diagnosis problems as well as in optimal control, computed tomography, image restoration, the training of neural networks, and other intelligent technologies. The increasingly complicated systems being optimized over the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extremal problems significantly more difficult. In a great deal of practical applications the problem conditions can restrict modeling; as a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating derivatives is difficult and unreliable, which leads to the use of optimization methods that do not require derivatives. The efficiency of deterministic algorithms of global optimization is significantly restricted by their dependence on the dimension of the extremal problem. When the number of variables is large, stochastic global optimization algorithms are used; as stochastic algorithms yield expensive solutions, this drawback restricts their application. This motivates the development of hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search
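
    A minimal generic Hooke-Jeeves pattern search of the kind used for the local phase of such hybrid algorithms (a textbook sketch on a toy quadratic, not the authors' implementation):

    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10000):
        """Derivative-free Hooke-Jeeves search: exploratory moves along the
        coordinate axes plus a pattern (extrapolation) move."""
        def explore(base, fb, h):
            x = base.copy()
            for i in range(len(x)):
                for d in (h, -h):
                    trial = x.copy(); trial[i] += d
                    ft = f(trial)
                    if ft < fb:
                        x, fb = trial, ft
                        break
            return x, fb

        x = np.asarray(x0, float); fx = f(x)
        for _ in range(max_iter):
            x1, f1 = explore(x, fx, step)
            if f1 < fx:
                # Pattern move: jump along the improving direction, re-explore.
                xp = x1 + (x1 - x)
                x2, f2 = explore(xp, f(xp), step)
                x, fx = (x2, f2) if f2 < f1 else (x1, f1)
            else:
                step *= shrink             # no improvement: refine the mesh
                if step < tol:
                    break
        return x, fx

    quad = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
    print(hooke_jeeves(quad, [0.0, 0.0]))  # converges to approximately (3, -1)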

  15. Algoritmo para generar formulas de características geométricas de las secciones planas, su implementación en DERIVE // Algorithm for the calculation of the geometric characteristics of the plane sections defined for polygonal, their implementation in DERI

    Rolando Rivero-Galán

    2010-01-01

Full Text Available In the design of structures and machine elements, one must frequently compute certain geometric characteristics or properties of plane sections, such as the area of a cross-section, the centre of gravity, a moment of inertia or, more generally, any geometric characteristic defined by a double integral extended over the plane region occupied by the section, piece or element. The objective of the present work is the construction of a computer program, using the mathematical assistant DERIVE, for determining the geometric characteristics of plane sections whose contour is made up of straight-line segments. Key words: algorithm, plane section, DERIVE, moments, centre of gravity.

  16. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA, called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses that information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster and with greater precision than the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  17. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    KeKe Gen

    2015-01-01

Full Text Available The article considers the problem of a data association algorithm for simultaneous localization and mapping (SLAM) used in determining the route of unmanned aerial vehicles (UAVs). Such vehicles are already widely used, but are mainly controlled by a remote operator, so an urgent task is to develop a control system that allows autonomous flight. The SLAM algorithm, which predicts the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem using an improved ant algorithm. Data association for SLAM is meant to establish a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem in SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations when updating the global pheromone helps avoid local optima, and setting pheromone limits on the routes increases the search space at a reasonable computational cost for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase computation speed, local data association is used instead of global data association. The first stage of the algorithm identifies targets in the matching space and the observed landmarks that can be associated by the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  18. Algorithmically specialized parallel computers

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster...

  19. Generalized phase retrieval algorithm based on information measures

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error-reduction algorithm and a MEM-type iterative algorithm presented here for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
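
    For illustration, a minimal error-reduction loop of the conventional kind that the generalized framework contains as a special case, alternating a Fourier-magnitude constraint with a non-negativity constraint in the object domain (the toy 1-D signal is invented):

    import numpy as np

    rng = np.random.default_rng(0)
    true = rng.random(64)                    # unknown non-negative object
    mag = np.abs(np.fft.fft(true))           # measured Fourier magnitudes

    x = rng.random(64)                       # random initial guess
    for _ in range(500):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))   # impose measured magnitudes
        x = np.real(np.fft.ifft(X))
        x[x < 0] = 0                         # impose object-domain constraint

    # Fourier-domain residual after iteration (should be small).
    print(np.linalg.norm(np.abs(np.fft.fft(x)) - mag))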

  20. An Algorithm for the Convolution of Legendre Series

    Hale, Nicholas; Townsend, Alex

    2014-01-01

An O(N²) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N²) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.

  1. Global Derivatives

    Andersen, Torben Juul

approaches to dealing in the global business environment." - Sharon Brown-Hruska, Commissioner, Commodity Futures Trading Commission, USA. "This comprehensive survey of modern risk management using derivative securities is a fine demonstration of the practical relevance of modern derivatives theory to risk management..." The book provides comprehensive coverage of different types of derivatives, including exchange-traded contracts and over-the-counter instruments as well as real options, with an equal emphasis on the practical application of derivatives and their actual uses in business transactions and corporate risk management situations. Its key features include: derivatives are introduced in a global market perspective; major derivative pricing models are described for practical use, extending these principles to the valuation of real options; practical applications of derivative instruments are richly illustrated...

  2. Estimating the size of the solution space of metabolic networks

    Mulet Roberto

    2008-05-01

Full Text Available Abstract Background Cellular metabolism is one of the most investigated systems of biological interactions. While the topological nature of individual reactions and pathways in the network is quite well understood, there is still a lack of comprehension regarding the global functional behavior of the system. In the last few years flux-balance analysis (FBA) has been the most successful and widely used technique for studying metabolism at the system level. This method strongly relies on the hypothesis that the organism maximizes an objective function. However, only under very specific biological conditions (e.g. maximization of biomass for E. coli in a rich nutrient medium) does the cell seem to obey such an optimization law. A more refined analysis not assuming extremization remains an elusive task for large metabolic systems due to algorithmic limitations. Results In this work we propose a novel algorithmic strategy that provides an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. Using a technique derived from the fields of statistical physics and information theory, we designed a message-passing algorithm to estimate the size of the affine space containing all possible steady-state flux distributions of metabolic networks. The algorithm, based on the well-known Bethe approximation, can be used to approximately compute the volume of a non-full-dimensional convex polytope in high dimensions. We first compare the accuracy of the predictions with an exact algorithm on small random metabolic networks. We also verify that the predictions of the algorithm match closely those of Monte Carlo based methods in the case of the Red Blood Cell metabolic network. Then we test the effect of gene knock-outs on the size of the solution space in the case of E. coli central metabolism. Finally we analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network. Conclusion We propose a
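
    As background, the flux-balance analysis step that the entry contrasts with can be written as a small linear program: maximize an objective flux subject to the steady-state constraint Sv = 0 and flux bounds. A minimal sketch on an invented three-reaction toy network:

    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass (objective).
    S = np.array([[ 1, -1,  0],    # metabolite A balance
                  [ 0,  1, -1]])   # metabolite B balance
    bounds = [(0, 10), (0, 10), (0, 10)]
    res = linprog(c=[0, 0, -1],    # maximize v3 == minimize -v3
                  A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)                   # optimal flux distribution, here [10, 10, 10]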

  3. Assessing semantic similarity of texts - Methods and algorithms

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining a structured representative model of the documents in a corpus by means of extracting and selecting the features that characterize their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, are presented.
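
    A minimal LSA sketch along these lines, using scikit-learn's TF-IDF vectorizer and truncated SVD and then comparing documents by cosine similarity in the latent space (the toy corpus is invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "the cat sat on the mat",
        "a cat rested on a mat",
        "stock markets fell sharply today",
    ]
    tfidf = TfidfVectorizer().fit_transform(docs)   # document-term matrix
    # Reduce to a 2-dimensional latent semantic space.
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    # The two cat sentences should score as most similar to each other.
    print(cosine_similarity(lsa))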

  4. Quantum computers in phase space

    Miquel, Cesar; Paz, Juan Pablo; Saraceno, Marcos

    2002-01-01

    We represent both the states and the evolution of a quantum computer in phase space using the discrete Wigner function. We study properties of the phase space representation of quantum algorithms: apart from analyzing important examples, such as the Fourier transform and Grover's search, we examine the conditions for the existence of a direct correspondence between quantum and classical evolutions in phase space. Finally, we describe how to measure directly the Wigner function in a given phase-space point by means of a tomographic method that, itself, can be interpreted as a simple quantum algorithm

  5. Higher derivatives in gauge transformations

    Gogilidze, S.A.; Sanadze, V.V.; Tkebuchava, F.G.

    1992-01-01

The mechanism of appearance of higher derivatives of coordinates in the symmetry transformation law of Noether's second theorem is established. It is shown that the corresponding transformations are canonical in the extended phase space. 15 refs

  6. The Diagnosis of Urinary Tract infection in Young children (DUTY): a diagnostic prospective observational study to derive and validate a clinical algorithm for the diagnosis of urinary tract infection in children presenting to primary care with an acute illness.

    Hay, Alastair D; Birnie, Kate; Busby, John; Delaney, Brendan; Downing, Harriet; Dudley, Jan; Durbaba, Stevo; Fletcher, Margaret; Harman, Kim; Hollingworth, William; Hood, Kerenza; Howe, Robin; Lawton, Michael; Lisles, Catherine; Little, Paul; MacGowan, Alasdair; O'Brien, Kathryn; Pickles, Timothy; Rumsby, Kate; Sterne, Jonathan Ac; Thomas-Jones, Emma; van der Voort, Judith; Waldron, Cherry-Ann; Whiting, Penny; Wootton, Mandy; Butler, Christopher C

    2016-07-01

    It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. Multicentre, prospective diagnostic cohort study. Children UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10(5) colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinician diagnosis' AUROC. Decision-analytic models were used to identify optimal urine sampling strategy compared with 'clinical judgement'. A total of 7163 children were recruited, of whom 50% were female and 49% were children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick

  7. SAW Classification Algorithm for Chinese Text Classification

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

Considering the explosive growth of text data and its effect on the performance of text categorization, existing classification methods cannot satisfy the resulting higher requirements. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm uses the special spatial effect of Chinese text, where words...

  8. Algorithms and file structures for computational geometry

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)

  9. Collective probabilities algorithm for surface hopping calculations

    Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto

    2003-01-01

General equations are derived that the transition probabilities of the hopping algorithms in surface hopping calculations must obey in order to assure equality between the average quantum and classical populations. These equations are solved for two particular cases. In the first it is assumed that probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations over all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is, then, a unique solution of the general equations assuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm which takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoided crossings, which shows the accuracy and computational efficiency of the proposed collective probabilities algorithm, the limitations of the FS algorithm and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method.

  10. Financial Derivatives

    Wigan, Duncan

    2013-01-01

Contemporary derivatives mark the development of capital and constitute a novel form of ownership. By reconfiguring the temporal, spatial and legal character of ownership, derivatives present a substantive challenge to the tax collecting state. While fiscal systems are nationally bounded and inherently static, capital itself is unprecedentedly mobile, fluid and fungible. As such, derivatives raise the specter of 'financial weapons of mass destruction'.

  11. Financial Derivatives

    Janečková, Alena

    2011-01-01

The purpose of this thesis is to provide an introduction to financial derivatives, which have so far been described from the legal perspective in an unsatisfactory manner, as quite little literature can be found on this topic. The main objectives of this thesis are to define the term "financial derivatives" and its particular types and to analyse the legal nature of these financial instruments. The last objective is to try to draft a future legal regulation of finan...

  12. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and to align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and subsequently comparing the structure of the graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion-MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures, and we also evaluate the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes, and the analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free-parcellation brain connectomes; the methodology was tested on several brain datasets.

  13. Quantum Computation and Algorithms

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
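
    The exact amplitude recursion for Grover's algorithm mentioned above is easy to verify numerically: one iteration acts linearly on the marked/unmarked amplitude pair, and the marked amplitude after k iterations matches the closed form sin((2k+1)θ) with θ = arcsin(1/√N). A sketch (toy sizes illustrative):

    import numpy as np

    N, k_iters = 64, 5                 # database size, number of iterations
    theta = np.arcsin(1 / np.sqrt(N))
    # Uniform start: marked amplitude a, common unmarked amplitude b.
    a, b = np.sin(theta), np.cos(theta) / np.sqrt(N - 1)

    for k in range(1, k_iters + 1):
        a = -a                                  # oracle flips the marked amplitude
        mean = (a + (N - 1) * b) / N            # diffusion reflects about the mean
        a, b = 2 * mean - a, 2 * mean - b
        assert np.isclose(a, np.sin((2 * k + 1) * theta))   # closed form holds

    print("recursion matches closed form")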

  14. Fast algorithms for transport models. Final report

    Manteuffel, T.A.

    1994-01-01

This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODE's. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))

  15. A Space-Mapping Framework for Engineering Optimization: Theory and Implementation

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

a region of interest. Output space mapping ensures the matching of responses and first-order derivatives between the mapped coarse model and the fine model at the current iteration point in the optimization process. We provide theoretical results that show the importance of the explicit use of sensitivity information to the convergence properties of our family of algorithms. Our algorithm is demonstrated on the optimization of a microstrip band-pass filter, a band-pass filter with double-coupled resonators and a seven-section impedance transformer. We describe the novel user-oriented software package SMF...

  16. Queue and stack sorting algorithm optimization and performance analysis

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

Sorting is one of the basic operations in all kinds of software development, and data structures courses cover many sorting algorithms. The performance of a sorting algorithm is directly related to the efficiency of the software, and much excellent research effort continues to go into optimizing sorting for efficiency. Here the authors further study sorting algorithms that combine a queue with stacks: the algorithm alternates operations on a queue and on stacks, exploiting their storage properties and thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, improvements and optimizations are proposed, focusing on the time complexity; the time complexity, space complexity and stability of the algorithm are studied accordingly. The experimental results show that the improvement and optimization of the algorithm are effective and improve its practicability.

  17. Gems of combinatorial optimization and graph algorithms

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  18. A combinational fast algorithm for image reconstruction

    Wu Zhongquan

    1987-01-01

A combinational fast algorithm has been developed in order to increase the speed of reconstruction. First, an interpolation method based on B-spline functions is used in image reconstruction. Next, the influence of the boundary conditions assumed here on the interpolation of filtered projections and on the image reconstruction is discussed. It is shown that this boundary condition has almost no influence on the image in the central region of the image space, because the error of interpolation rapidly decreases, by a factor of ten, when shifting two pixels from the edge toward the center. In addition, a fast algorithm for computing the detecting angle has been used together with the interpolation algorithm, and the cost of detecting-angle computation is reduced by a factor of two. The implementation results show that, at the same subjective and objective fidelity, the computational cost of interpolation using this algorithm is about one-twelfth that of the conventional algorithm.

  19. Learning theory of distributed spectral algorithms

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-01-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)

  20. Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm

    Burt, Adam O.; Hull, Patrick V.

    2014-01-01

This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design whose primary natural frequencies are at or above a target value while minimizing the total mass. Several iterations of the process are presented. First, a concept optimization is performed to determine what class of structure would produce suitable candidate designs. From this, a stiffened sheet-metal approach was selected, leading to optimization of beam placement by generating a two-dimensional mesh and varying the physical location of the reinforcing beams. Finally, the problem is reformulated as a binary one using 1-dimensional beam elements, truncating the design space to allow faster convergence and allowing additional mechanical failure criteria to be included in the optimization responses. Results are presented for each design space configuration; the final flight design was derived from these results.
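
    A generic sketch of the binary-encoded genetic algorithm loop this kind of study uses (beam on/off bit strings, tournament selection, single-point crossover, bit-flip mutation); the surrogate fitness, penalty weight and GA parameters below are placeholders, not the study's finite-element model:

    import numpy as np

    rng = np.random.default_rng(0)
    N_BITS, POP, GENS, MUT = 20, 40, 80, 0.02
    TARGET = 30.0                            # required first natural frequency

    def fitness(bits):
        # Toy surrogate: mass and frequency both grow with active beams;
        # a real study would run a modal finite-element solve here.
        mass = bits.sum()
        freq = 10.0 + 1.5 * bits.sum()
        return mass + 100.0 * max(0.0, TARGET - freq)   # penalized mass

    def tournament(pop, scores):
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if scores[i] < scores[j] else pop[j]

    pop = rng.integers(0, 2, (POP, N_BITS))
    for _ in range(GENS):
        scores = np.array([fitness(p) for p in pop])
        nxt = []
        while len(nxt) < POP:
            a, b = tournament(pop, scores), tournament(pop, scores)
            cut = rng.integers(1, N_BITS)           # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(N_BITS) < MUT         # bit-flip mutation
            nxt.append(np.where(flip, 1 - child, child))
        pop = np.array(nxt)

    best = min(pop, key=fitness)
    print(best, fitness(best))   # lightest design meeting the frequency target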