WorldWideScience

Sample records for surface integration algorithm

  1. Maritime over the Horizon Sensor Integration: High Frequency Surface-Wave-Radar and Automatic Identification System Data Integration Algorithm.

    Science.gov (United States)

    Nikolic, Dejan; Stojkovic, Nikola; Lekic, Nikola

    2018-04-09

    To obtain the complete operational picture of the maritime situation in the Exclusive Economic Zone (EEZ) which lies over the horizon (OTH) requires the integration of data obtained from various sensors. These sensors include: high frequency surface-wave-radar (HFSWR), satellite automatic identification system (SAIS) and land automatic identification system (LAIS). The algorithm proposed in this paper utilizes radar tracks obtained from the network of HFSWRs, which are already processed by a multi-target tracking algorithm, and associates SAIS and LAIS data to the corresponding radar tracks, thus forming an integrated data pair. During the integration process, all HFSWR targets in the vicinity of AIS data are evaluated, and the one with the highest matching factor is used for data association. On the other hand, if there are multiple AIS reports in the vicinity of a single HFSWR track, the algorithm still forms only one data pair, consisting of the AIS and HFSWR data with the highest mutual matching factor. During the design and testing, special attention is given to the latency of AIS data, which can be very high in the EEZs of developing countries. The algorithm was designed, implemented and tested in a real working environment. The testing environment is located in the Gulf of Guinea and includes a network of two HFSWRs, several coastal sites with LAIS receivers, and SAIS data obtained from an external SAIS provider.
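
The pairing rule described above — evaluate every radar track near an AIS report and keep only the pair with the highest mutual matching factor — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the matching factor here uses only position distance, and `d_scale` and `threshold` are invented parameters.

```python
import math

def match_factor(track_xy, ais_xy, d_scale=1000.0):
    """Toy matching factor: decays with position distance in metres.
    A real system would also weight course, speed and AIS latency."""
    d = math.hypot(track_xy[0] - ais_xy[0], track_xy[1] - ais_xy[1])
    return 1.0 / (1.0 + d / d_scale)

def associate(tracks, ais_reports, threshold=0.5):
    """Pair each radar track with at most one AIS report, keeping the
    pair with the highest mutual matching factor."""
    pairs = {}
    for aid, ais_xy in ais_reports.items():
        best = max(tracks, key=lambda tid: match_factor(tracks[tid], ais_xy))
        score = match_factor(tracks[best], ais_xy)
        if score < threshold:
            continue                      # no plausible radar track
        if best not in pairs or score > pairs[best][1]:
            pairs[best] = (aid, score)    # keep strongest AIS per track
    return {tid: aid for tid, (aid, _) in pairs.items()}
```

With two tracks and three AIS reports, each track keeps only its closest report, which mirrors the one-pair-per-track rule of the abstract.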

  2. An integrated study of surface roughness in EDM process using regression analysis and GSO algorithm

    Science.gov (United States)

    Zainal, Nurezayana; Zain, Azlan Mohd; Sharif, Safian; Nuzly Abdull Hamed, Haza; Mohamad Yusuf, Suhaila

    2017-09-01

    The aim of this study is to develop an integrated study of surface roughness (Ra) in the die-sinking electrical discharge machining (EDM) process of Ti-6Al-4V titanium alloy with positive polarity of a copper-tungsten (Cu-W) electrode. Regression analysis and the glowworm swarm optimization (GSO) algorithm were considered for the modelling and optimization process. Pulse on time (A), pulse off time (B), peak current (C) and servo voltage (D) were selected as the machining parameters, with various levels. The experiments were conducted based on a two-level full factorial design with an added center point (design of experiments, DOE). Moreover, mathematical models with linear and two-factor interaction (2FI) effects of the chosen parameters were developed. The validity of the fit and the adequacy of the developed mathematical models were tested using analysis of variance (ANOVA) and the F-test. The statistical analysis showed that the 2FI model performed best, yielding the lowest value of Ra compared with the linear model and the experimental result.
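
A two-level full factorial design with a 2FI (two-factor interaction) regression model, as described above, can be sketched with ordinary least squares. The data below are synthetic and the coefficient values are invented for illustration only.

```python
import itertools
import numpy as np

def design_2fi(X):
    """Design matrix for a 2FI model: intercept, main effects, and all
    pairwise (two-factor) interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

# coded two-level full factorial in four factors A, B, C, D
levels = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
# invented "true" response: Ra depends on A, C and the A*B interaction
ra = 3.0 + 0.5 * levels[:, 0] - 0.2 * levels[:, 2] + 0.3 * levels[:, 0] * levels[:, 1]

beta, *_ = np.linalg.lstsq(design_2fi(levels), ra, rcond=None)
# the coefficients are recovered exactly because the design is orthogonal
```

Because the coded full factorial is orthogonal, the least-squares fit recovers the main-effect and interaction coefficients exactly.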

  3. A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver

    KAUST Repository

    Liu, Yang

    2015-10-26

    © 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT)-based surface integral equation (SIE) solvers, it reduces the computational and memory costs of transient analysis from O(NtNs²) and O(Ns²) to O(NtNs log²Ns) and O(Ns^1.5), respectively, where Nt and Ns denote the number of temporal and spatial unknowns (Ergin et al., IEEE Antennas Propag. Mag., 41, 39-52, 1999). In the past, PWTD-accelerated MOT-SIE solvers have been applied to transient problems involving a half million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). Recently, a scalable parallel PWTD-accelerated MOT-SIE solver that leverages a hierarchical parallelization strategy has been developed and successfully applied to transient problems involving ten million spatial unknowns (Liu et al., in URSI Digest, 2013). We further enhanced the capabilities of this solver by implementing a compression scheme based on local cosine wavelet bases (LCBs) that exploits the sparsity in the temporal dimension (Liu et al., in URSI Digest, 2014). Specifically, the LCB compression scheme was used to reduce the memory requirement of the PWTD ray data and the computational cost of operations in the PWTD translation stage.

  4. Pattern recognition of concrete surface cracks and defects using integrated image processing algorithms

    Science.gov (United States)

    Balbin, Jessie R.; Hortinela, Carlos C.; Garcia, Ramon G.; Baylon, Sunnycille; Ignacio, Alexander Joshua; Rivera, Marco Antonio; Sebastian, Jaimie

    2017-06-01

    Pattern recognition of concrete surface crack defects is very important in determining the stability of structures like buildings, roads or bridges. Surface cracking is one of the subjects of inspection, diagnosis and maintenance, as well as of life prediction for the safety of structures. Traditionally, defects and cracks on concrete surfaces are determined manually by inspection. Moreover, any internal defects in the concrete would require destructive testing for detection. The researchers created an automated surface crack detection system for concrete using image processing techniques including the Hough transform, weighted Laplacian of Gaussian (LoG), dilation, grayscale conversion, Canny edge detection and the Haar wavelet transform. An automatic surface crack detection robot is designed to capture the concrete surface by a sectoring method. Surface crack classification was done with a Haar-trained cascade object detector that uses both positive and negative samples, which proved that it is possible to effectively identify surface crack defects.
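
As a hedged illustration of the edge-detection stage of such a pipeline (a stand-in for Canny edge detection, not the authors' exact processing chain), the sketch below flags pixels whose intensity gradient is large; the synthetic image and the threshold value are invented.

```python
import numpy as np

def crack_mask(img, thresh=0.4):
    """Flag pixels whose intensity-gradient magnitude exceeds thresh.
    img is a 2-D grayscale array scaled to [0, 1]."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# synthetic patch: uniform concrete with one dark vertical "crack"
img = np.ones((8, 8))
img[:, 4] = 0.0
mask = crack_mask(img)   # True along the crack edges (columns 3 and 5)
```

Central differencing places the detected edges on both sides of the one-pixel crack, which is the behavior a subsequent classification stage would consume.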

  5. Algorithm for Recovery of Integrated Water Vapor Content in the Atmosphere over Land Surfaces Based on Satellite Spectroradiometer Data

    Science.gov (United States)

    Lisenko, S. A.

    2017-05-01

    An algorithm is proposed for making charts of the distribution of water vapor in the atmosphere based on multispectral images of the earth by the Ocean and Land Color Instrument (OLCI) on board the European research satellite Sentinel-3. The algorithm is based on multiple regression fits of the spectral brightness coefficients at the upper boundary of the atmosphere, the geometric parameters of the satellite location (solar and viewing angles), and the total water vapor content in the atmosphere. A regression equation is derived from experimental data on the variation in the optical characteristics of the atmosphere and underlying surface, together with Monte Carlo calculations of the radiative transfer characteristics. The equation includes the brightness coefficients in the near-IR channels of the OLCI for the absorption bands of water vapor and oxygen, as well as for the transparency windows of the atmosphere. Together these make it possible to eliminate the effect of the reflection spectrum of the underlying surface and of air pressure on the accuracy of the measurements. The algorithm is tested using data from a prototype of the OLCI, the Medium Resolution Imaging Spectrometer (MERIS). A sample chart of the distribution of water vapor in the atmosphere over Eastern Europe is constructed without using subsatellite data or digital models of the surface relief. The water vapor contents in the atmosphere determined using MERIS images are compared with ground-based measurements from the Aerosol Robotic Network (AERONET), with a mean square deviation of 1.24 kg/m2.

  6. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.
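
The confidence-interval half-width for the mean response of a linear regression, which the integrated method estimates, follows the standard formula t · s · sqrt(x0ᵀ(XᵀX)⁻¹x0). Below is a minimal least-squares sketch of that formula alone (not the paper's neural-network integration); `t_crit=2.0` is a rough 95% placeholder rather than an exact Student-t quantile.

```python
import numpy as np

def confidence_halfwidth(X, y, x0, t_crit=2.0):
    """Half-width of the confidence interval for the mean response at x0.

    Standard linear-regression result: t * s * sqrt(x0' (X'X)^{-1} x0).
    t_crit should be the Student-t quantile for the desired level and
    the residual degrees of freedom; 2.0 is a rough 95% placeholder.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    s2 = (resid @ resid) / dof
    leverage = x0 @ np.linalg.solve(X.T @ X, x0)
    return t_crit * np.sqrt(s2 * leverage)

# noiseless demo data: y is exactly linear, so the interval collapses
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 2.0 + 3.0 * np.arange(5.0)
hw = confidence_halfwidth(X, y, np.array([1.0, 2.0]))
```

A prediction interval for a new observation would additionally add 1 to the leverage term under the square root, to account for the new observation's own noise.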

  7. Land Surface Temperature Retrieval from MODIS Data by Integrating Regression Models and the Genetic Algorithm in an Arid Region

    Directory of Open Access Journals (Sweden)

    Ji Zhou

    2014-06-01

    The land surface temperature (LST) is one of the most important parameters of surface-atmosphere interactions. Methods for retrieving LSTs from satellite remote sensing data are beneficial for modeling hydrological, ecological, agricultural and meteorological processes on Earth’s surface. Many split-window (SW) algorithms, which can be applied to satellite sensors with two adjacent thermal channels located in the atmospheric window between 10 μm and 12 μm, require auxiliary atmospheric parameters (e.g., water vapor content). In this research, the Heihe River basin, one of the most arid regions in China, is selected as the study area, and the Moderate Resolution Imaging Spectroradiometer (MODIS) is selected as a test case. The Global Data Assimilation System (GDAS) atmospheric profiles of the study area are used to generate the training dataset through radiative transfer simulation. Significant correlations between the atmospheric upwelling radiance in MODIS channel 31 and three other atmospheric parameters, namely the transmittance in channel 31 and the transmittance and upwelling radiance in channel 32, are trained on the simulation dataset and formulated with three regression models. Next, the genetic algorithm is used to estimate the LST. Validations of the RM-GA method are based on the simulation dataset generated from in situ measured radiosonde profiles and GDAS atmospheric profiles, the in situ measured LSTs, and a pair of daytime and nighttime MOD11A1 products in the study area. The results demonstrate that RM-GA can estimate LSTs directly from the MODIS data without any auxiliary atmospheric parameters. Although this research is a local application in the Heihe River basin, the findings and proposed method can easily be extended to other satellite sensors and to other regions with arid climates and high elevations.
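
The genetic-algorithm estimation step can be illustrated with a minimal real-coded GA inverting a made-up monotone forward model. This is a generic sketch under invented parameters (population size, mutation scale, forward model), not the RM-GA configuration used in the paper.

```python
import random

def genetic_minimize(cost, lo, hi, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, bounds clipping."""
    rng = random.Random(seed)
    sigma = 0.02 * (hi - lo)          # mutation scale (invented)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=cost)   # tournament of 3
            p2 = min(rng.sample(pop, 3), key=cost)
            child = 0.5 * (p1 + p2) + rng.gauss(0.0, sigma)
            new_pop.append(min(max(child, lo), hi))
        pop = new_pop
    return min(pop, key=cost)

# toy retrieval: find the temperature reproducing an observed value
# under a made-up monotone forward model (a stand-in for the paper's
# regression models)
forward = lambda temp_k: 0.02 * temp_k - 4.0
observed = forward(300.0)
best_t = genetic_minimize(lambda t: (forward(t) - observed) ** 2,
                          250.0, 350.0)
```

The GA minimizes the squared mismatch between modeled and observed values, which is the same inversion pattern the abstract describes for estimating LST from the regression models.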

  8. The TROPOMI surface UV algorithm

    Science.gov (United States)

    Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna

    2018-02-01

    The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these is available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.
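
The count of 36 parameters follows directly from the combinations listed above: 6 UV quantities × 3 time bases × 2 sky conditions. A quick enumeration (the parameter names are invented for illustration, not the product's actual field names):

```python
from itertools import product

quantities = ["irr305", "irr310", "irr324", "irr380", "erythemal", "vitamin_d"]
time_bases = ["daily", "overpass", "local_noon"]
sky_modes = ["actual_cloud", "clear_sky"]

# 6 quantities x 3 time bases x 2 sky conditions = 36 UV parameters
parameters = [f"{q}_{t}_{s}" for q, t, s in product(quantities, time_bases, sky_modes)]
```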

  10. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  11. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  12. New algorithms for one-loop integrals

    International Nuclear Information System (INIS)

    Oldenborgh, G.J. van; Vermaseren, J.A.M.

    1989-01-01

    New algorithms are presented for evaluating the scalar one-loop integrals for three- and four-point functions with arbitrary masses and external momenta. These formulations are useful both for analytic integration and for numerical evaluation in a computer program. The expressions are very compact and provide for an easy isolation of asymptotic behaviour and potential numerical problems. The tensor integrals have also been rewritten according to new algorithms, making it very easy to express amplitudes in terms of scalar loop integrals. (author). 2 figs.; 133 schemes

  13. Surface-Heating Algorithm for Water at Nanoscale.

    Science.gov (United States)

    Y D, Sumith; Maroo, Shalabh C

    2015-09-17

    A novel surface-heating algorithm for water is developed for molecular dynamics simulations. The validated algorithm can simulate the transient behavior of the evaporation of water when heated from a surface, which has been lacking in the literature. In this work, the algorithm is used to study the evaporation of water droplets on a platinum surface at different temperatures. The resulting contact angles of the droplets are compared to existing theoretical, numerical, and experimental studies. The evaporation profile along the droplet's radius and height is deduced along with the temperature gradient within the drop, and the evaporation behavior conforms to the Kelvin-Clapeyron theory. The algorithm captures the realistic differential thermal gradient in water heated at the surface and is promising for studying various heating/cooling problems, such as thin film evaporation, Leidenfrost effect, and so forth. The simplicity of the algorithm allows it to be easily extended to other surfaces and integrated into various molecular simulation software and user codes.

  14. Discrete Spectrum Reconstruction Using Integral Approximation Algorithm.

    Science.gov (United States)

    Sizikov, Valery; Sidorov, Denis

    2017-07-01

    An inverse problem in spectroscopy is considered. The objective is to restore the discrete spectrum from observed spectrum data, taking into account the spectrometer's line spread function. The problem is reduced to solution of a system of linear-nonlinear equations (SLNE) with respect to intensities and frequencies of the discrete spectral lines. The SLNE is linear with respect to lines' intensities and nonlinear with respect to the lines' frequencies. The integral approximation algorithm is proposed for the solution of this SLNE. The algorithm combines solution of linear integral equations with solution of a system of linear algebraic equations and avoids nonlinear equations. Numerical examples of the application of the technique, both to synthetic and experimental spectra, demonstrate the efficacy of the proposed approach in enabling an effective enhancement of the spectrometer's resolution.
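
The structure described above — linear in the intensities, nonlinear in the frequencies — suggests a variable-projection-style sketch: solve for the intensities by linear least squares at trial frequencies, and search over the frequencies nonlinearly. The Gaussian line shape, the grid, and the coarse pair search below are illustrative assumptions, not the authors' integral approximation algorithm.

```python
import numpy as np

def lineshape(f, f0, width=0.05):
    """Assumed Gaussian line spread function of the spectrometer."""
    return np.exp(-0.5 * ((f - f0) / width) ** 2)

def fit_intensities(freq_grid, line_freqs, observed):
    """Linear step: for fixed trial frequencies the system is linear in
    the intensities, so solve it by least squares."""
    A = np.column_stack([lineshape(freq_grid, f0) for f0 in line_freqs])
    amps, *_ = np.linalg.lstsq(A, observed, rcond=None)
    resid = observed - A @ amps
    return amps, float(resid @ resid)

# synthetic spectrum: two lines observed through the line spread function
grid = np.linspace(0.0, 1.0, 201)
true_freqs, true_amps = [0.30, 0.70], [2.0, 1.0]
obs = sum(a * lineshape(grid, f) for f, a in zip(true_freqs, true_amps))

# nonlinear step: coarse grid search over candidate frequency pairs
cands = np.linspace(0.05, 0.95, 19)
best_pair = min(((f1, f2) for f1 in cands for f2 in cands if f1 < f2),
                key=lambda pair: fit_intensities(grid, pair, obs)[1])
best_amps, _ = fit_intensities(grid, best_pair, obs)
```

Because the linear subproblem is solved exactly at every trial frequency pair, the search never has to handle nonlinear equations directly, which mirrors the abstract's stated advantage.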

  15. A Source Identification Algorithm for INTEGRAL

    Science.gov (United States)

    Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.

    2008-12-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the Transient Matrix. Finally, the performance of the network is assessed and discussed using the testing set and some illustrative source examples.

  16. A Novel Algorithm of Surface Eliminating in Undersurface Optoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Zhulina Yulia V

    2004-01-01

    This paper analyzes the task of optoacoustic imaging of objects located under the surface covering them. We suggest an algorithm for surface elimination based on the fact that the intensity of the image, as a function of the spatial point, should change slowly inside local objects and will suffer a discontinuity of the spatial gradients at their boundaries. The algorithm forms the two-dimensional curves along which the discontinuity of the signal derivatives is detected. Then, the algorithm divides the signal space into areas along these curves. The signals inside the areas with the maximum level of signal amplitudes and the maximal absolute gradient values on their edges are set to zero. The rest of the signals are used for the image restoration. This method permits reconstruction of the surface boundaries with a higher contrast than the surface detection technique based on the maxima of the received signals. The algorithm does not require any prior knowledge of the signal statistics inside or outside the local objects. It may be used for reconstructing any images from signals representing an integral over the object's volume. Simulation and real data are provided to validate the proposed method.

  17. ISINA: INTEGRAL Source Identification Network Algorithm

    Science.gov (United States)

    Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.

    2008-11-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain), Czech Republic and Poland, and the participation of Russia and the USA.

  18. Parallel Algorithm for Adaptive Numerical Integration

    International Nuclear Information System (INIS)

    Sujatmiko, M.; Basarudin, T.

    1997-01-01

    This paper presents an automatic algorithm for integration using the adaptive trapezoidal method. The interval is divided adaptively, so that the widths of the subintervals differ and fit the behavior of the function. For a function f, an integral over the interval [a,b] can be obtained, with maximum tolerance ε, using the estimate (f, a, b, ε). The estimated solution is valid if the error stays within a reasonable range, fulfilling certain criteria. If the error is large, however, the problem is solved by dividing it into two similar and independent subproblems on the separate intervals [a, (a+b)/2] and [(a+b)/2, b], i.e., the estimates (f, a, (a+b)/2, ε/2) and (f, (a+b)/2, b, ε/2). The problems are solved by two different kinds of processors: a root processor and worker processors. The root processor's function is to divide the main problem into subproblems and distribute them to the worker processors. The division mechanism may go further until all of the subproblems are resolved. The solution of each subproblem is then submitted to the root processor so that the solution of the main problem can be obtained. The algorithm is implemented on a C-programming-based distributed computer networking system under the Parallel Virtual Machine platform.
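
The recursive estimate-split-recurse scheme described above can be sketched sequentially (the paper distributes the subproblems to worker processors; that distribution is omitted here). The factor 3 in the acceptance test is the standard trapezoidal error-estimate constant, an assumption of this sketch rather than a detail taken from the paper.

```python
def adaptive_trapezoid(f, a, b, eps):
    """Adaptive trapezoidal estimate of the integral of f over [a, b]
    with tolerance eps: accept the two-panel estimate if it agrees with
    the one-panel estimate, otherwise split at the midpoint and recurse
    on each half with tolerance eps/2."""
    m = 0.5 * (a + b)
    one_panel = 0.5 * (b - a) * (f(a) + f(b))
    two_panel = 0.25 * (b - a) * (f(a) + 2.0 * f(m) + f(b))
    if abs(one_panel - two_panel) < 3.0 * eps:   # standard error factor
        return two_panel
    return (adaptive_trapezoid(f, a, m, 0.5 * eps) +
            adaptive_trapezoid(f, m, b, 0.5 * eps))
```

In a parallel version, each recursive call is an independent subproblem a root processor could hand to a worker, exactly as the abstract describes.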

  19. Integrated Association Rules Complete Hiding Algorithms

    Directory of Open Access Journals (Sweden)

    Mohamed Refaat Abdellah

    2017-01-01

    This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the needed database modifications and support complete hiding, while also reducing knowledge distortion and data distortion. The complete weighted hiding algorithms improve the hiding failure by 100%; they have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time as needed.

  20. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results by qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  1. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the performance of surface precision modeling under different external parameter conditions, integrated chip surfaces should be evaluated and assessed during topographic spatial modeling. The choice of surface fitting algorithm exerts a considerable influence on the topographic mathematical features. Analyzing the influence mechanisms of different surface fitting algorithms on the integrated chip surface facilitates the quantitative analysis of different external parameter conditions. By extracting coordinate information from selected physical control points with a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used to construct micro-topographic models from the obtained point cloud. After computing the newly proposed mathematical features on the surface models, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method. Through this method, the variation tendencies of the topographic feature values can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topographic spatial features, and the external parameter conditions can be analyzed quantitatively and in detail. In addition, the quantitative analysis provides conclusions on the inherent influence mechanism and the internal mathematical relations among the performance of different surface fitting algorithms, the topographic spatial features, and their parameter conditions in surface micro-modeling. The performance inspection of surface precision modeling is thereby facilitated and optimized as a new research approach for micro-surface reconstruction to be monitored in the modeling process.

  2. Models and algorithms for Integration of Vehicle and Crew Scheduling

    NARCIS (Netherlands)

    R. Freling (Richard); D. Huisman (Dennis); A.P.M. Wagelmans (Albert)

    2000-01-01

    textabstractThis paper deals with models, relaxations and algorithms for an integrated approach to vehicle and crew scheduling. We discuss potential benefits of integration and provide an overview of the literature, which considers mainly partial integration. Our approach is new in the sense that we

  3. Energy conservation in Newmark based time integration algorithms

    DEFF Research Database (Denmark)

    Krenk, Steen

    2006-01-01

    Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations… by the algorithm. The magnitude and character of these terms, as well as the associated damping terms, are discussed in relation to energy conservation and stability of the algorithms. It is demonstrated that the additional terms in the energy lead to periodic fluctuations of the mechanical energy and are the cause…, and that energy fluctuations take place for integration intervals close to the stability limit. (c) 2006 Elsevier B.V. All rights reserved.

  4. A New Algorithm for System of Integral Equations

    Directory of Open Access Journals (Sweden)

    Abdujabar Rasulov

    2014-01-01

    We develop a new algorithm to solve systems of integral equations. In this new method there is no need to use matrix weights; because of this, we reduce the computational complexity considerably. Using the new algorithm it is also possible to solve an initial boundary value problem for a system of parabolic equations. To verify its efficiency, the results of computational experiments are given.

  5. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tied to the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  6. Bianchi surfaces: integrability in an arbitrary parametrization

    International Nuclear Information System (INIS)

    Nieszporski, Maciej; Sym, Antoni

    2009-01-01

    We discuss integrability of normal field equations of arbitrarily parametrized Bianchi surfaces. A geometric definition of the Bianchi surfaces is presented as well as the Baecklund transformation for the normal field equations in an arbitrarily chosen surface parametrization.

  7. Integrated artificial intelligence algorithm for skin detection

    Directory of Open Access Journals (Sweden)

    Bush Idoko John

    2018-01-01

    Full Text Available The detection of skin colour has been a useful and renowned technique due to its wide range of applications in both diagnostic analyses and human-computer interaction. Various problems could be solved by simply providing an appropriate method for detecting skin-like pixels. Presented in this study is a colour segmentation algorithm that works directly in RGB colour space without converting the colour space. The genfis function used in this study formed the Sugeno fuzzy network and, utilizing the Fuzzy C-Means (FCM) clustering rule, clustered the data; for each cluster/class a rule is generated. Finally, the corresponding output from the pseudo-polynomial data mapping of the input dataset is obtained via the adaptive neuro-fuzzy inference system (ANFIS).

  8. Canonical algorithms for numerical integration of charged particle motion equations

    Science.gov (United States)

    Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

    2017-02-01

    A technique for numerically integrating the equations of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting-error accumulation. The integration algorithms contain the minimum possible amount of arithmetic and can be used to design accelerators and devices for electron and ion optics.
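The abstract does not spell out the concrete canonical transformations it uses; as an illustrative stand-in, the sketch below uses the well-known Boris rotation scheme (an assumption, not this paper's method). Its velocity update is an exact rotation, so kinetic energy is conserved in a pure magnetic field, which illustrates the kind of stability against error accumulation described above.

```python
import math

def boris_step(v, b, qm, dt):
    """Advance velocity v one step in magnetic field b (no E field).
    qm = charge/mass. The update is a pure rotation, so |v| is
    conserved exactly (up to rounding)."""
    tx, ty, tz = (0.5 * qm * dt * bi for bi in b)
    t2 = tx*tx + ty*ty + tz*tz
    sx, sy, sz = (2.0 * ti / (1.0 + t2) for ti in (tx, ty, tz))
    vx, vy, vz = v
    # v' = v + v x t
    px = vx + vy*tz - vz*ty
    py = vy + vz*tx - vx*tz
    pz = vz + vx*ty - vy*tx
    # v+ = v + v' x s
    return (vx + py*sz - pz*sy,
            vy + pz*sx - px*sz,
            vz + px*sy - py*sx)

v = (1.0, 0.0, 0.5)
speed0 = math.sqrt(sum(c*c for c in v))
for _ in range(10000):
    v = boris_step(v, (0.0, 0.0, 1.0), qm=1.0, dt=0.1)
speed = math.sqrt(sum(c*c for c in v))
```

After 10,000 steps the speed matches its initial value to rounding precision, in contrast with a naive explicit Euler update, whose speed grows every step.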

  9. Fast integral equation algorithms for the solution of electromagnetic wave propagation over general terrains

    Directory of Open Access Journals (Sweden)

    Ibrahim K. Abu Seif

    2015-01-01

    Full Text Available In this paper a fast numerical algorithm to solve an integral equation model for wave propagation along a perfectly conducting two-dimensional terrain is suggested. It is applied to different actual terrain profiles and the results indicate very good agreement with published work. In addition, the proposed algorithm has achieved considerable savings in processing time. The formulation is extended to solve propagation over lossy dielectric surfaces. A combined field integral equation (CFIE) for wave propagation over dielectric terrain is solved efficiently by utilizing the method of moments with complex basis functions. The numerical results for different cases of dielectric surfaces are compared with the results for a perfectly conducting surface evaluated by the conventional IE algorithm.

  10. An Integration Algorithm for Bistatic Radar Weak Target Detection

    Directory of Open Access Journals (Sweden)

    Chang Jiajun

    2016-01-01

    Full Text Available The bistatic radar weak target detection problem is considered in this paper. An effective way to detect a weak target is long-time integration. However, range migration (RM) will occur due to the target's high speed. Without knowing the target motion parameters, a long-time integration algorithm for bistatic radar is proposed in this paper. Firstly, the algorithm utilizes the second-order keystone transform (SKT) to remove range curvature. Then the quadratic phase term is compensated by the estimated acceleration. After that, SKT is used once more and the Doppler ambiguity phase term compensation is performed. At last, the target energy is integrated via the Fourier transform (FT). Simulations are provided to show the validity of the proposed algorithm.

  11. Algorithmic properties of the midpoint predictor-corrector time integrator.

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J.; Love, Edward; Scovazzi, Guglielmo

    2009-03-01

    Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
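A minimal sketch of the iteration-count dependence described above, using the implicit midpoint rule for a harmonic oscillator solved by an Euler predictor plus a fixed number of corrector passes (a generic illustration, not the paper's solid-mechanics formulation): with one corrector pass the quadratic invariant drifts, while with many passes it is conserved to rounding.

```python
def f(state):
    # harmonic oscillator: q' = p, p' = -q
    q, p = state
    return (p, -q)

def midpoint_pc_step(state, dt, iters):
    """Midpoint predictor-corrector: forward-Euler predictor, then
    `iters` fixed-point corrections of the implicit midpoint rule."""
    q, p = state
    dq, dp = f(state)
    new = (q + dt*dq, p + dt*dp)              # predictor
    for _ in range(iters):                     # corrector passes
        mid = (0.5*(q + new[0]), 0.5*(p + new[1]))
        dq, dp = f(mid)
        new = (q + dt*dq, p + dt*dp)
    return new

def energy(state):
    q, p = state
    return 0.5*(state[0]**2 + state[1]**2)

s_few, s_many = (1.0, 0.0), (1.0, 0.0)
for _ in range(1000):
    s_few = midpoint_pc_step(s_few, 0.1, iters=1)    # effectively explicit RK2
    s_many = midpoint_pc_step(s_many, 0.1, iters=50) # converged implicit midpoint
drift_few = abs(energy(s_few) - 0.5)
drift_many = abs(energy(s_many) - 0.5)
```

In the limit of converged iterations the implicit midpoint rule conserves quadratic invariants exactly, mirroring the abstract's observation that conservation errors are controlled by the number of iterations performed.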

  12. The Galileo Ground Segment Integrity Algorithms: Design and Performance

    Directory of Open Access Journals (Sweden)

    Carlos Hernández Medel

    2008-01-01

    Full Text Available Galileo, the European Global Navigation Satellite System, will provide its users with highly accurate global positioning services and the associated integrity information. The element in charge of the computation of integrity messages within the Galileo Ground Mission Segment is the integrity processing facility (IPF), which is developed by GMV Aerospace and Defence. The main objective of this paper is twofold: to present the integrity algorithms implemented in the IPF and to show the performance achieved with the IPF software prototype, including aspects such as the implementation of the Galileo overbounding concept, the impact of safety requirements on the algorithm design (including the threat models for the so-called feared events), and finally the performance achieved with real GPS and simulated Galileo scenarios.

  13. Building Integrated Ontological Knowledge Structures with Efficient Approximation Algorithms

    Science.gov (United States)

    2015-01-01

    The integration of ontologies builds knowledge structures which bring new understanding of existing terminologies and their associations. With the steady increase in the number of ontologies, automatic integration of ontologies is preferable over manual solutions in many applications. However, available works on ontology integration are largely heuristic, without guarantees on the quality of the integration results. In this work, we focus on the integration of ontologies with hierarchical structures. We identified optimal structures in this problem and proposed optimal and efficient approximation algorithms for integrating a pair of ontologies. Furthermore, we extend the basic problem to address the integration of a large number of ontologies, and correspondingly we propose an efficient approximation algorithm for integrating multiple ontologies. The empirical study on both real ontologies and synthetic data demonstrates the effectiveness of our proposed approaches. In addition, the results of integration between the Gene Ontology and the National Drug File Reference Terminology suggest that our method provides a novel way to perform association studies between biomedical terms. PMID:26550571

  14. Building Integrated Ontological Knowledge Structures with Efficient Approximation Algorithms

    Directory of Open Access Journals (Sweden)

    Yang Xiang

    2015-01-01

    Full Text Available The integration of ontologies builds knowledge structures which bring new understanding of existing terminologies and their associations. With the steady increase in the number of ontologies, automatic integration of ontologies is preferable over manual solutions in many applications. However, available works on ontology integration are largely heuristic, without guarantees on the quality of the integration results. In this work, we focus on the integration of ontologies with hierarchical structures. We identified optimal structures in this problem and proposed optimal and efficient approximation algorithms for integrating a pair of ontologies. Furthermore, we extend the basic problem to address the integration of a large number of ontologies, and correspondingly we propose an efficient approximation algorithm for integrating multiple ontologies. The empirical study on both real ontologies and synthetic data demonstrates the effectiveness of our proposed approaches. In addition, the results of integration between the Gene Ontology and the National Drug File Reference Terminology suggest that our method provides a novel way to perform association studies between biomedical terms.

  15. Firefly Algorithm for Polynomial Bézier Surface Parameterization

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    …reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subject to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.

  16. Molecular dynamics algorithms for path integrals at constant pressure

    Science.gov (United States)

    Martyna, Glenn J.; Hughes, Adam; Tuckerman, Mark E.

    1999-02-01

    Extended system path integral molecular dynamics algorithms have been developed that can efficiently generate averages in the quantum mechanical canonical ensemble [M. E. Tuckerman, B. J. Berne, G. J. Martyna, and M. L. Klein, J. Chem. Phys. 99, 2796 (1993)]. Here, the corresponding extended system path integral molecular dynamics algorithms appropriate to the quantum mechanical isothermal-isobaric ensembles with isotropic-only and full system cell fluctuations are presented. The former ensemble is employed to study fluid systems, which do not support shear modes, while the latter is employed to study solid systems. The algorithms are constructed by deriving appropriate dynamical equations of motion and developing reversible multiple time step algorithms to integrate the equations numerically. Effective parallelization schemes for distributed memory computers are presented. The new numerical methods are tested on model (a particle in a periodic potential) and realistic (liquid and solid para-hydrogen and liquid butane) systems. In addition, the methodology is extended to treat the path integral centroid dynamics scheme [J. Cao and G. A. Voth, J. Chem. Phys. 99, 10070 (1993)], a novel method which is capable of generating semiclassical approximations to quantum mechanical time correlation functions.

  17. Impact of surface integrity on machining productivity

    International Nuclear Information System (INIS)

    Koster, W.P.

    1975-01-01

    Surface integrity data developed in recent years have brought into critical focus many situations where component performance is dependent upon the characteristics of machined surfaces. The surface integrity behavior of important structural materials--steel, nickel, and titanium base alloys--is described. Surfaces characteristically produced by milling, grinding, electrical discharge machining, and electrochemical machining are illustrated. These same surfaces are characterized in terms of residual stress profiles and fatigue behavior. Specific instances where surface integrity information has been used to overcome production problems or to enhance the reliability of components are cited. Examples of situations where relative insensitivity to metal removal variables has permitted significant reductions in machining costs are also discussed. Finally, manufacturing methods necessary to achieve adequate surface integrity are described, along with suggestions for the control of processing through adequate specifications and inspection procedures. 14 figures

  18. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V

  19. A fuzzy simulated evolution algorithm for integrated manufacturing system design

    Directory of Open Access Journals (Sweden)

    Michael Mutingi

    2013-04-01

    Full Text Available The integrated cell formation and layout problem (CFLP) is an extended application of the group technology philosophy in which machine cells and cell layout are addressed simultaneously. The aim of this technological innovation is to improve both productivity and flexibility in modern manufacturing industry. However, due to its combinatorial complexity, the cell formation and layout problem is best solved by heuristic and metaheuristic approaches. As CFLP is prevalent in manufacturing industry, developing robust and efficient solution methods for the problem is imperative. This study seeks to develop a fuzzy simulated evolution algorithm (FSEA) that integrates fuzzy-set-theoretic concepts and the philosophy of constructive perturbation and evolution. Deriving from the classical simulated evolution algorithm, the search efficiency of the major phases of the algorithm is enhanced, including initialization, evaluation, selection, and reconstruction. Illustrative computational experiments based on existing problem instances from the literature demonstrate the utility and strength of the FSEA algorithm developed in this study. It is anticipated that the application of the algorithm can be extended to other complex combinatorial problems in industry.

  20. A genetic algorithm approach in interface and surface structure optimization

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jian [Iowa State Univ., Ames, IA (United States)

    2010-01-01

    The thesis is divided into two parts. In the first part a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: Si[001] symmetric tilted grain boundaries and the Ag/Au-induced Si(111) surface. It is found that the genetic algorithm is very efficient in finding the lowest-energy structures in both cases. Not only can existing structures from experiments be reproduced, but many new structures can also be predicted using the genetic algorithm. Thus it is shown that the genetic algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seem astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and can reproduce the experimental results well.
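As a generic illustration of the search strategy (not the thesis code, and with a toy quadratic energy function standing in for an atomistic potential), a minimal genetic algorithm with tournament selection, one-point crossover, Gaussian mutation, and elitism might look like:

```python
import random

def ga_minimize(energy, dim, pop_size=40, gens=200, seed=0):
    """Minimal genetic-algorithm sketch: tournament selection,
    one-point crossover, Gaussian mutation, elitism. A toy stand-in
    for the structure searches described above."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if energy(a) < energy(b) else b

    for _ in range(gens):
        new_pop = [min(pop, key=energy)]            # keep the elite
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = p1[:cut] + p2[cut:]             # one-point crossover
            child = [g + rng.gauss(0, 0.1) if rng.random() < 0.2 else g
                     for g in child]                # Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=energy)

# toy "energy": sum of squares, global minimum at the origin
best = ga_minimize(lambda x: sum(g*g for g in x), dim=3)
```

In a real structure search the energy function would be a tight-binding, empirical-potential, or first-principles evaluation, and the crossover would act on atomic coordinates rather than an abstract gene vector.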

  1. Non-integrability of geodesic flow on certain algebraic surfaces

    International Nuclear Information System (INIS)

    Waters, T.J.

    2012-01-01

    This Letter addresses an open problem recently posed by V. Kozlov: a rigorous proof of the non-integrability of the geodesic flow on the cubic surface xyz=1. We prove this is the case using the Morales–Ramis theorem and the Kovacic algorithm. We also consider some consequences and extensions of this result. -- Highlights: ► The behaviour of geodesics on surfaces defined by algebraic expressions is studied. ► The non-integrability of the geodesic equations is rigorously proved using differential Galois theory. ► Morales–Ramis theory and Kovacic's algorithm are used, and the normal variational equation is of Fuchsian type. ► Some extensions and limitations are discussed.

  2. SINS/CNS Nonlinear Integrated Navigation Algorithm for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Yong-jun Yu

    2015-01-01

    Full Text Available The Celestial Navigation System (CNS) has the characteristics of accurate orientation and strong autonomy and has been widely used in hypersonic vehicles. Since CNS location and orientation mainly depend upon an inertial reference that contains errors caused by gyro drifts and other error factors, the traditional Strap-down Inertial Navigation System (SINS)/CNS positioning algorithm, which sets the position error between SINS and CNS as the measurement, is not effective. A model of altitude azimuth, platform error angles, and horizontal position is designed, and a SINS/CNS tightly integrated algorithm is designed, in which the CNS altitude azimuth is set as the measurement information. A Gaussian particle filter (GPF) is introduced to solve the problem of nonlinear filtering. The simulation results show that the precision of the SINS/CNS algorithm, which reaches 130 m using three stars, is improved effectively.

  3. Global structual optimizations of surface systems with a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, Feng-Chuan [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, the global structural optimizations of neutral aluminum clusters Aln algorithm in combination with tight-binding and first-principles calculations were performed to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consist of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential were used to determine the lowest energy structure of high-index semiconductor surfaces. The lowest energy structures of Si(105) and Si(114) were determined successfully. The results of Si(105) and Si(114) are reported within the framework of highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials were used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model both for Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  4. Advanced computer algebra algorithms for the expansion of Feynman integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Round, Mark; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-10-15

    Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+ε-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all-n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products.

  5. Multifeature Fusion Vehicle Detection Algorithm Based on Choquet Integral

    Directory of Open Access Journals (Sweden)

    Wenhui Li

    2014-01-01

    Full Text Available Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of the vehicle detection methods. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measure and multifeature fusion. In the feature similarity measure phase, we first propose a taillight-based vehicle detection method, and then the vehicle taillight feature similarity measure is defined. Second, combining with the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused together by the Choquet integral. Evaluated on public test collections and our own test images, the experimental results show that our method achieves effective and robust multivehicle detection in complicated environments. Our method can not only improve the detection rate but also reduce the false alarm rate, which meets the engineering requirements of Advanced Driving Assistance Systems (ADAS).
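The fuzzy measure below is hypothetical, but the sketch shows how a discrete Choquet integral fuses the three similarity scores named in the abstract (taillight, symmetry, HOG + AdaBoost): sort the scores in ascending order, then weight each increment by the measure of the coalition of features at or above it.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of feature scores with respect to a
    fuzzy measure mu, given as a dict mapping frozensets of feature
    names to measures (mu[frozenset()] = 0, mu[all features] = 1)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending
    total, prev = 0.0, 0.0
    for i, (name, value) in enumerate(items):
        remaining = frozenset(n for n, _ in items[i:])
        total += (value - prev) * mu[remaining]  # weight the increment
        prev = value
    return total

# hypothetical similarity scores for one detection window
features = {"taillight": 0.9, "symmetry": 0.6, "hog": 0.8}
# hypothetical fuzzy measure encoding interactions between the cues
mu = {
    frozenset(features): 1.0,
    frozenset({"taillight", "hog"}): 0.9,
    frozenset({"taillight", "symmetry"}): 0.7,
    frozenset({"symmetry", "hog"}): 0.6,
    frozenset({"taillight"}): 0.5,
    frozenset({"symmetry"}): 0.2,
    frozenset({"hog"}): 0.4,
    frozenset(): 0.0,
}
fused = choquet_integral(features, mu)
```

Unlike a weighted average, the fuzzy measure lets pairs of cues reinforce each other non-additively, which is the stated motivation for using the Choquet integral here. The fused score always lies between the smallest and largest input scores.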

  6. Integrated mold/surface-micromachining process

    Energy Technology Data Exchange (ETDEWEB)

    Barron, C.C.; Fleming, J.G.; Montague, S.; Sniegowski, J.J.; Hetherington, D.L.

    1996-03-01

    We detail a new monolithically integrated silicon mold/surface-micromachining process which makes possible the fabrication of stiff, high-aspect-ratio micromachined structures integrated with finely detailed, compliant structures. An important example, which we use here as our process demonstration vehicle, is that of an accelerometer with a large proof mass and compliant suspension. The proof mass is formed by etching a mold into the silicon substrate, lining the mold with oxide, filling it with mechanical polysilicon, and then planarizing back to the level of the substrate. The resulting molded structure is recessed into the substrate, forming a planar surface ideal for subsequent processing. We then add surface-micromachined springs and sense contacts. The principal advantage of this new monolithically integrated mold/surface-micromachining process is that it decouples the design of the different sections of the device: In the case of a sensitive accelerometer, it allows us to optimize independently the proof mass, which needs to be as large, stiff, and heavy as possible, and the suspension, which needs to be as delicate and compliant as possible. The fact that the high-aspect-ratio section of the device is embedded in the substrate enables the monolithic integration of high-aspect-ratio parts with surface-micromachined mechanical parts, and, in the future, also electronics. We anticipate that such an integrated mold/surface micromachining/electronics process will offer versatile high-aspect-ratio micromachined structures that can be batch-fabricated and monolithically integrated into complex microelectromechanical systems.

  7. An analysis of 3D particle path integration algorithms

    International Nuclear Information System (INIS)

    Darmofal, D.L.; Haimes, R.

    1996-01-01

    Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields; however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms is determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow.
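A minimal sketch of particle-path integration with a multistage scheme (classical RK4, one of the scheme families analyzed above, though not necessarily the paper's exact formulation): the test field is a steady solid-body rotation whose critical point at the origin is a center, so exact particle paths are circles and any radius drift exposes the integration error.

```python
import math

def rk4_path(v, x0, t0, dt, steps):
    """Classical four-stage RK4 particle-path integration in a
    possibly unsteady velocity field v(x, t) -> velocity tuple."""
    x, t = x0, t0
    path = [x]
    for _ in range(steps):
        k1 = v(x, t)
        k2 = v(tuple(xi + 0.5*dt*ki for xi, ki in zip(x, k1)), t + 0.5*dt)
        k3 = v(tuple(xi + 0.5*dt*ki for xi, ki in zip(x, k2)), t + 0.5*dt)
        k4 = v(tuple(xi + dt*ki for xi, ki in zip(x, k3)), t + dt)
        x = tuple(xi + dt*(a + 2*b + 2*c + d)/6.0
                  for xi, a, b, c, d in zip(x, k1, k2, k3, k4))
        t += dt
        path.append(x)
    return path

# steady solid-body rotation: center-type critical point at the origin,
# so the exact path starting at radius 1 stays on the unit circle
swirl = lambda x, t: (-x[1], x[0])
path = rk4_path(swirl, (1.0, 0.0), 0.0, dt=0.05, steps=500)
r_end = math.hypot(*path[-1])
```

With dt well inside the stability bound the radius is preserved to high accuracy; repeating the experiment with a large fixed dt makes the computed path spiral, which is exactly the misclassification of a center as a spiral that the analysis above warns about.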

  8. ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems

    Science.gov (United States)

    Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  9. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen-space surface LIC algorithm for use in distributed-memory data-parallel sort-last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen-space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high-performance computing systems using data from turbulent plasma simulations.
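A toy LIC kernel (CPU-only, Euler streamline tracing; the GPU, screen-space, and parallel-decomposition aspects are the paper's contributions and are not reproduced here) illustrates the basic operation: averaging a noise texture along streamlines, so the output becomes smooth along the flow direction and streaky across it.

```python
import random

def lic(noise, vfield, length=10, h=1.0):
    """Minimal line integral convolution sketch: for each pixel,
    average the noise texture along the streamline of vfield traced
    forward and backward with Euler steps. (The seed pixel is sampled
    once per direction; fine for a sketch.)"""
    rows, cols = len(noise), len(noise[0])
    out = [[0.0]*cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):
                x, y = j + 0.5, i + 0.5          # pixel center
                for _ in range(length):
                    ci, cj = int(y), int(x)
                    if not (0 <= ci < rows and 0 <= cj < cols):
                        break                     # streamline left the image
                    total += noise[ci][cj]
                    count += 1
                    vx, vy = vfield(x, y)
                    x += sign*h*vx
                    y += sign*h*vy
            out[i][j] = total/count
    return out

rng = random.Random(1)
rows, cols = 12, 48
noise = [[rng.random() for _ in range(cols)] for _ in range(rows)]
out = lic(noise, lambda x, y: (1.0, 0.0))        # uniform horizontal field

def mean_adjacent_diff(img):
    pairs = [abs(row[j+1] - row[j]) for row in img for j in range(len(row)-1)]
    return sum(pairs)/len(pairs)

noise_diff = mean_adjacent_diff(noise)
out_diff = mean_adjacent_diff(out)
```

For the uniform horizontal field the convolution acts as a moving average along each row, so adjacent-pixel differences in the flow direction shrink sharply relative to the input noise, which is the visual streaking effect LIC exploits.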

  10. An Overview of the JPSS Ground Project Algorithm Integration Process

    Science.gov (United States)

    Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.

    2015-12-01

    The smooth transition, implementation, and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experience and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Services (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi National Polar-orbiting Partnership (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the very early delivery stages by the science teams to full operationalization in the Interface Data Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules) selected by the NOAA STAR Calibration and Validation (cal/val) teams to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package runs on one full orbit of Science Data Records (SDRs) using calibration look-up tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to …

  11. Surface solar irradiance from SCIAMACHY measurements: algorithm and validation

    Directory of Open Access Journals (Sweden)

    P. Wang

    2011-05-01

    Full Text Available Broadband surface solar irradiances (SSI) are, for the first time, derived from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY) satellite measurements. The retrieval algorithm, called FRESCO (Fast REtrieval Scheme for Clouds from the Oxygen A band) SSI, is similar to the Heliosat method. In contrast to the standard Heliosat method, the cloud index is replaced by the effective cloud fraction derived from the FRESCO cloud algorithm. The MAGIC (Mesoscale Atmospheric Global Irradiance Code) algorithm is used to calculate clear-sky SSI. The SCIAMACHY SSI product is validated against globally distributed BSRN (Baseline Surface Radiation Network) measurements and compared with ISCCP-FD (International Satellite Cloud Climatology Project Flux Dataset) surface shortwave downwelling fluxes (SDF). For one year of data in 2008, the mean difference between the instantaneous SCIAMACHY SSI and the hourly mean BSRN global irradiances is −4 W m⁻² (−1 %) with a standard deviation of 101 W m⁻² (20 %). The mean difference between the globally monthly mean SCIAMACHY SSI and ISCCP-FD SDF is less than −12 W m⁻² (−2 %) for every month in 2006 and the standard deviation is 62 W m⁻² (12 %). The correlation coefficient is 0.93 between SCIAMACHY SSI and BSRN global irradiances and is greater than 0.96 between SCIAMACHY SSI and ISCCP-FD SDF. The evaluation results suggest that the SCIAMACHY SSI product achieves similar mean bias error and root mean square error as the surface solar irradiances derived from polar orbiting satellites with higher spatial resolution.

  12. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r₁, r₂) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems.
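    The off-diagonal cell trick described above, where a separable integrand reduces each two-dimensional cell integral to a product of two one-dimensional Gaussian quadratures, can be sketched as follows (a minimal NumPy illustration; the function names and the toy integrand are ours, not from the paper):

```python
import numpy as np

def gauss_1d(f, a, b, n=8):
    """Gauss-Legendre quadrature of f over [a, b] with n nodes."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (a + b)   # map nodes from [-1, 1]
    return 0.5 * (b - a) * np.sum(w * f(t))

def cell_integral_separable(f, g, cell):
    """Integral of f(r1) * g(r2) over the rectangle [a1,b1] x [a2,b2]:
    for a separable integrand, the 2-D cell integral is the product
    of two 1-D quadratures."""
    a1, b1, a2, b2 = cell
    return gauss_1d(f, a1, b1) * gauss_1d(g, a2, b2)

# Toy integrand r1 * r2^2 over [0,1] x [0,2]: exact value (1/2)*(8/3) = 4/3.
val = cell_integral_separable(lambda r: r, lambda r: r ** 2,
                              (0.0, 1.0, 0.0, 2.0))
```

Since Gauss-Legendre quadrature with 8 nodes is exact for these low-degree polynomials, the product reproduces the 2-D integral to machine precision.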

  13. Fully automated algorithm for wound surface area assessment.

    Science.gov (United States)

    Deana, Alessandro Melo; de Jesus, Sérgio Henrique Costa; Sampaio, Brunna Pileggi Azevedo; Oliveira, Marcelo Tavares; Silva, Daniela Fátima Teixeira; França, Cristiane Miranda

    2013-01-01

    Worldwide, clinicians, dentists, nurses, researchers, and other health professionals need to monitor the wound healing progress and to quantify the rate of wound closure. The aim of this study is to demonstrate, step by step, a fully automated numerical method to estimate the size of the wound and the percentage damaged relative to the body surface area (BSA) in images, without the requirement for human intervention. We included the formula for BSA in rats in the algorithm. The methodology was validated in experimental wounds and human ulcers and was compared with the analysis of an experienced pathologist, with good agreement. Therefore, this algorithm is suitable for experimental wounds and burns and human ulcers, as they have a high contrast with adjacent normal skin. © 2013 by the Wound Healing Society.

  14. Integrated manufacturing of complex freeform surfaces

    Science.gov (United States)

    Niehaus, Frank; Huttenhuis, Stephan; Pisarski, Alex

    2013-09-01

    Innovative freeform optical systems such as head-up displays or LED headlights often require high quality and high volume optics. Injection-molded polymer optics offer a cost-effective solution. However, mold manufacturing for this process is extremely challenging, as the machining of freeform surfaces currently involves several independent production steps, which can limit surface accuracy. By integrating diamond turning, milling, and metrology on a single platform, the UPC 400 improves surface accuracy. Advanced software for machining and measurement data further reduces surface inaccuracies. This combination makes the UPC 400 efficient for prototyping freeform optics and manufacturing high-precision molds.

  15. Identifying the Right Surface for the Right Patient at the Right Time: Generation and Content Validation of an Algorithm for Support Surface Selection

    Science.gov (United States)

    McNichol, Laurie; Watts, Carolyn; Mackey, Dianne; Beitz, Janice M.

    2015-01-01

    Support surfaces are an integral component of pressure ulcer prevention and treatment, but there is insufficient evidence to guide clinical decision making in this area. In an effort to provide clinical guidance for selecting support surfaces based on individual patient needs, the Wound, Ostomy and Continence Nurses Society (WOCN®) set out to develop an evidence- and consensus-based algorithm. A Task Force of clinical experts was identified that: 1) reviewed the literature and identified evidence for support surface use in the prevention and treatment of pressure ulcers; 2) developed supporting statements for essential components of the algorithm; 3) developed a draft algorithm for support surface selection; and 4) determined its face validity. A consensus panel of 20 key opinion leaders was then convened that: 1) reviewed the draft algorithm and supporting statements; 2) reached consensus on statements lacking robust supporting evidence; and 3) modified the draft algorithm and evaluated its content validity. The Content Validity Index (CVI) for the algorithm was strong (0.95 out of 1.0), with an overall mean score of 3.72 (on a scale of 1 to 4), suggesting that the steps were appropriate to the purpose of the algorithm. To our knowledge, this is the first evidence- and consensus-based algorithm for support surface selection that has undergone content validation. PMID:25549306

  16. Identifying the right surface for the right patient at the right time: generation and content validation of an algorithm for support surface selection.

    Science.gov (United States)

    McNichol, Laurie; Watts, Carolyn; Mackey, Dianne; Beitz, Janice M; Gray, Mikel

    2015-01-01

    Support surfaces are an integral component of pressure ulcer prevention and treatment, but there is insufficient evidence to guide clinical decision making in this area. In an effort to provide clinical guidance for selecting support surfaces based on individual patient needs, the Wound, Ostomy and Continence Nurses Society (WOCN®) set out to develop an evidence- and consensus-based algorithm. A Task Force of clinical experts was identified that: 1) reviewed the literature and identified evidence for support surface use in the prevention and treatment of pressure ulcers; 2) developed supporting statements for essential components of the algorithm; 3) developed a draft algorithm for support surface selection; and 4) determined its face validity. A consensus panel of 20 key opinion leaders was then convened that: 1) reviewed the draft algorithm and supporting statements; 2) reached consensus on statements lacking robust supporting evidence; and 3) modified the draft algorithm and evaluated its content validity. The Content Validity Index (CVI) for the algorithm was strong (0.95 out of 1.0), with an overall mean score of 3.72 (on a scale of 1 to 4), suggesting that the steps were appropriate to the purpose of the algorithm. To our knowledge, this is the first evidence- and consensus-based algorithm for support surface selection that has undergone content validation.

  17. Integrable mappings via rational elliptic surfaces

    International Nuclear Information System (INIS)

    Tsuda, Teruhisa

    2004-01-01

    We present a geometric description of the QRT map (an integrable mapping introduced by Quispel, Roberts and Thompson) in terms of the addition formula of a rational elliptic surface. By this formulation, we classify all the cases in which the QRT map is periodic, and show that its period is 2, 3, 4, 5 or 6. A generalization of the QRT map which acts birationally on a pencil of K3 surfaces, or Calabi-Yau manifolds, is also presented.

  18. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    Science.gov (United States)

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms that estimate the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach incurs a large computational cost by processing unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules feasible. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
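    The surface-based estimation rests on Dijkstra's shortest path algorithm run over a graph of surface samples rather than volume samples. A minimal sketch of that core step, on a toy graph standing in for an SES mesh (the node names and edge weights are illustrative, not from the paper):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest path lengths over a weighted graph.
    adj maps node -> list of (neighbor, edge_length) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy stand-in for an SES vertex graph: depths are distances from the hull.
adj = {
    "hull":   [("a", 1.0), ("b", 2.5)],
    "a":      [("hull", 1.0), ("b", 1.0), ("pocket", 3.0)],
    "b":      [("hull", 2.5), ("a", 1.0), ("pocket", 1.5)],
    "pocket": [("a", 3.0), ("b", 1.5)],
}
depths = dijkstra(adj, "hull")  # travel depth of each surface vertex
```

Restricting the node set to surface vertices is what removes the cost of the unnecessary interior and exterior volume samples.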

  19. Monitoring Antarctic ice sheet surface melting with TIMESAT algorithm

    Science.gov (United States)

    Ye, Y.; Cheng, X.; Li, X.; Liang, L.

    2011-12-01

    The Antarctic ice sheet contributes significantly to the global heat budget by controlling the exchange of heat, moisture, and momentum at the surface-atmosphere interface, which directly influences the global atmospheric circulation and climate change. Ice sheet melting increases snow moisture, which accelerates the disintegration and movement of the ice sheet. Detecting Antarctic ice sheet melting is therefore essential for global climate change research. In the past decades, various methods have been proposed for extracting snowmelt information from multi-channel satellite passive microwave data. Some methods are based on brightness temperature values or a composite index of them, and others are based on edge detection. TIMESAT (Time-series of Satellite sensor data) is an algorithm for extracting seasonality information from time series of satellite sensor data. With TIMESAT, the long brightness-temperature time series (SSM/I 19H) is modeled by a double logistic function, and snow is classified as wet or dry with a generalized Gaussian model. The results were compared with those from a wavelet algorithm, and Antarctic automatic weather station data were used for ground verification. The comparison shows that this algorithm is effective in ice sheet melting detection. The spatial distribution of melting areas (Fig. 1) shows that the majority of melting areas are located on the edge of the Antarctic ice shelf region, affected by land cover type, surface elevation and geographic location (latitude). In addition, Antarctic ice sheet melting varies with the seasons: it is particularly pronounced in summer, peaking in December and January and staying low by March. In summary, from 1988 to 2008, the Ross Ice Shelf and Ronne Ice Shelf have the greatest interannual variability in the amount of melting, which largely determines the overall interannual variability in Antarctica. Other regions, especially the Larsen Ice Shelf and Wilkins Ice Shelf in the Antarctic Peninsula
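    The double logistic profile that TIMESAT fits to a brightness-temperature time series can be sketched as follows (a hedged illustration: the parameter values, threshold and wet/dry rule below are invented for the example, not taken from the study):

```python
import numpy as np

def double_logistic(t, base, amp, t_rise, k_rise, t_fall, k_fall):
    """Double logistic seasonal profile: a rising times a falling sigmoid."""
    rise = 1.0 / (1.0 + np.exp(-k_rise * (t - t_rise)))
    fall = 1.0 / (1.0 + np.exp(k_fall * (t - t_fall)))
    return base + amp * rise * fall

# Synthetic melt-season brightness temperature (K) over one year:
# melt onset near day 150, freeze-up near day 250 (invented values).
t = np.arange(365)
tb = double_logistic(t, base=180.0, amp=60.0,
                     t_rise=150.0, k_rise=0.3, t_fall=250.0, k_fall=0.3)
wet = tb > 210.0            # toy wet/dry threshold on the fitted curve
melt_days = int(wet.sum())  # length of the detected melt season
```

In TIMESAT itself, the six parameters are estimated by least-squares fitting to the observed series; here they are simply prescribed so the seasonal shape is visible.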

  20. Developing an integrated digitizing and display surface

    Science.gov (United States)

    Hipple, James D.; Wedding, Daniel K.; Wedding, Donald K., Sr.

    1995-04-01

    The development of an integrated digitizing and display surface, which utilizes touch entry and flat panel display (FPD) technology, is a significant hardware advance in the field of geographic information systems (GIS). Inherent qualities of the FPD, notably the ac gas plasma display, make such a marriage inevitable. Large diagonal sizes, high resolution color, screen flatness, and monitor thinness are desirable features of an integrated digitizing and display surface. Recently, the GIS literature has addressed the need for such an innovation. The development of graphics displays based on sophisticated technologies includes `photorealistic' (or high definition) imaging at resolutions of 2048 X 2048 or greater, palettes of 16.7 million colors, formats greater than 30 inches diagonal, and integrated touch entry. This paper evaluates FPDs and data input technologies in the development of such a product.

  1. Explicit symplectic integrators of molecular dynamics algorithms for rigid-body molecules in the canonical, isobaric-isothermal, and related ensembles.

    Science.gov (United States)

    Okumura, Hisashi; Itoh, Satoru G; Okamoto, Yuko

    2007-02-28

    The authors propose explicit symplectic integrators of molecular dynamics (MD) algorithms for rigid-body molecules in the canonical and isobaric-isothermal ensembles. They also present a symplectic algorithm in the constant normal pressure and lateral surface area ensemble, and one combined with the Parrinello-Rahman algorithm. When symplectic integrators are employed for MD algorithms, there is a conserved quantity that is close to the Hamiltonian, so an MD simulation can be performed more stably than with conventional nonsymplectic algorithms. They applied this algorithm to a TIP3P pure water system at 300 K and compared the time evolution of the Hamiltonian with that obtained by nonsymplectic algorithms. They found that the Hamiltonian was conserved well by the symplectic algorithm even for a time step of 4 fs, longer than the typical values of 0.5-2 fs used by conventional nonsymplectic algorithms.
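    As a minimal illustration of why symplectic integrators conserve a quantity close to the Hamiltonian, the velocity Verlet scheme (a standard explicit symplectic integrator, applied here to a harmonic oscillator rather than the rigid-body water system of the paper) keeps the energy error bounded over long runs:

```python
def velocity_verlet(x, v, force, dt, n_steps, m=1.0):
    """Symplectic (velocity Verlet) integration of m * x'' = force(x)."""
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / m) * dt ** 2
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
    return x, v

# Harmonic oscillator x'' = -x with exact energy 0.5; the symplectic
# scheme keeps the energy error bounded instead of drifting.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.05, n_steps=10000)
energy = 0.5 * v ** 2 + 0.5 * x ** 2
drift = abs(energy - 0.5)
```

A nonsymplectic scheme such as explicit Euler would show a systematic energy drift on the same problem, which is the behavior the authors contrast against.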

  2. Four (Algorithms) in One (Bag): An Integrative Framework of Knowledge for Teaching the Standard Algorithms of the Basic Arithmetic Operations

    Science.gov (United States)

    Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit

    2016-01-01

    In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…

  3. Integrated control algorithms for plant environment in greenhouse

    Science.gov (United States)

    Zhang, Kanyu; Deng, Lujuan; Gong, Youmin; Wang, Shengxue

    2003-09-01

    In this paper, a survey of plant environment control in artificial greenhouses is given and future developments are discussed. Plant environment control started with closed-loop control of the air temperature in the greenhouse. With the emergence of more powerful computers, adaptive control algorithms and system identification were integrated into the control system. Because adaptive control depends on sensor observation of variables, and many variables are unobservable or difficult to observe (especially crop growth status), model-based control algorithms were developed. To evade modeling difficulties, one approach simplifies the models, while another uses fuzzy logic and neural network techniques that realize the models through black-box and grey-box theory. Studies on control methods for the plant environment in greenhouses by means of expert systems (ES) and artificial intelligence (AI) have been initiated and developed. Nowadays, research on greenhouse environment control focuses on energy saving, optimal economic profit, environmental protection and continual development.

  4. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
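    The key MPS compression step, truncating to a bond dimension χ via a singular value decomposition, can be sketched generically (this is the standard DMRG-style truncation, not the authors' decoder code; the 8×8 random tensor stands in for a tensor arising during contraction):

```python
import numpy as np

def truncate_bond(theta, chi):
    """Keep at most chi singular values of a two-site tensor: the
    compression step used when contracting a 2-D tensor network with MPS."""
    u, s, vt = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, len(s))
    trunc_err = float(np.sum(s[keep:] ** 2))   # discarded spectral weight
    return u[:, :keep], s[:keep], vt[:keep, :], trunc_err

rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8))            # stand-in two-site tensor
u, s, vt, err = truncate_bond(theta, chi=4)
approx = u @ np.diag(s) @ vt                   # best rank-4 approximation
```

By the Eckart-Young theorem, the squared Frobenius error of the truncated product equals the discarded spectral weight, which is why the sum of dropped singular values squared is the natural precision control for χ.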

  5. An algorithm to use higher order invariants for modelling potential energy surface of nanoclusters

    Science.gov (United States)

    Jindal, Shweta; Bulusu, Satya S.

    2018-02-01

    In order to fit the potential energy surface (PES) of gold nanoclusters, we have integrated bispectrum features with an artificial neural network (ANN) learning technique in this work. We have also devised an algorithm for selecting the frequencies that need to be coupled for extracting the phase information between different frequency bands. We have found that a higher-order invariant like the bispectrum is highly efficient in exploring the PES compared to other invariants. The sensitivity of the bispectrum can also be exploited by using it as an order parameter for calculating many thermodynamic properties of nanoclusters.

  6. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    Science.gov (United States)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  7. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    Science.gov (United States)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, integrating surface, arrival and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternative method that models the integrated operations with a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer linear programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is

  8. Signal Integrity Applications of an EBG Surface

    Directory of Open Access Journals (Sweden)

    MATEKOVITS, L.

    2015-05-01

    Full Text Available Electromagnetic band-gap (EBG) surfaces have found applications in the mitigation of parallel-plate noise that occurs in high-speed circuits. A 2D periodic structure previously introduced by the same authors is dimensioned here to adjust the EBG parameters to application requirements by decreasing the phase velocity of the propagating waves, which corresponds to decreasing the lower bound of the EBG spectra. The positions of the EBGs in frequency are determined through full-wave simulation, by solving the corresponding eigenmode equation and imposing the appropriate boundary conditions on all faces of the unit cell. The operation of a device relying on a finite surface is also demonstrated. The obtained results show that the proposed structure is well suited to signal-integrity applications, as verified by comparing the transmission along a finite structure for an ideal signal line and for one with an induced discontinuity.

  9. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
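    For context, the closest-point projection that the trust-region scheme is compared against reduces, for 1-D plasticity with linear hardening, to the classical radial-return update below (a textbook sketch with invented material constants, not the authors' Hosford-surface implementation):

```python
def radial_return_1d(strain, E, H, sigma_y, eps_p=0.0):
    """Closest-point (radial return) stress update for 1-D rate-independent
    plasticity with linear isotropic hardening (closed-form consistency)."""
    sigma_trial = E * (strain - eps_p)
    f_trial = abs(sigma_trial) - (sigma_y + H * abs(eps_p))
    if f_trial <= 0.0:
        return sigma_trial, eps_p                 # elastic step
    dgamma = f_trial / (E + H)                    # plastic multiplier
    sign = 1.0 if sigma_trial > 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign       # return to yield surface
    return sigma, eps_p + sign * dgamma

# Invented constants: E = 1000, H = 100, yield stress 200; a strain of 0.5
# is well past yield, so the plastic corrector is active.
sigma, eps_p = radial_return_1d(0.5, E=1000.0, H=100.0, sigma_y=200.0)
```

In the linear-hardening 1-D case the consistency condition has a closed-form solution; it is precisely the general multi-dimensional, non-quadratic case (such as the Hosford surface) where the Newton-Raphson iteration can struggle and a trust-region solver pays off.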

  10. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  11. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    Science.gov (United States)

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to adaptive Kalman filtering algorithms, the H-infinity filter is able to address interference in the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm is presented, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real data from Global Positioning System (GPS) / Inertial Navigation System (INS) integrated navigation were conducted. The experimental results show that the proposed algorithm has multiple advantages compared to the other filtering algorithms.
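    For reference, the predict/update cycle of the underlying linear Kalman filter, which both the adaptive and H-infinity variants build on, can be sketched as follows (a generic textbook implementation with an invented constant-velocity example, not the authors' GPS/INS code):

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # predict with the state transition model
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Invented constant-velocity model with position-only measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for k in range(1, 21):
    z = np.array([float(k)])      # a track moving at 1 unit per step
    x, P = kalman_step(x, P, z, F, Q, H, R)
```

The adaptive and H-infinity variants in the paper modify how the gain and covariances are computed; the predict/update skeleton stays the same.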

  12. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  13. Near-Surface Engineered Environmental Barrier Integrity

    International Nuclear Information System (INIS)

    Piet, S.J.; Breckenridge, R.P.

    2002-01-01

    The INEEL Environmental Systems Research and Analysis (ESRA) program has launched a new R and D project on Near-Surface Engineered Environmental Barrier Integrity to increase knowledge and capabilities for using engineering and ecological components to improve the integrity of near-surface barriers used to confine contaminants from the public and the environment. The knowledge gained and the capabilities built will help verify the adequacy of past remedial decisions and enable improved solutions for future cleanup decisions. The research is planned to (a) improve the knowledge of degradation mechanisms (weathering, biological, geological, chemical, radiological, and catastrophic) on time scales shorter than the service life, (b) improve modeling of barrier degradation dynamics, (c) develop sensor systems to identify degradation prior to failure, and (d) provide a better basis for developing and testing new barrier systems to increase reliability and reduce the risk of failure. Our project combines selected exploratory studies (benchtop and field scale), coupled-effects accelerated aging testing at the meso-scale, testing of new monitoring concepts, and modeling of dynamic systems. The performance of evapotranspiration, capillary, and grout-based barriers will be examined.

  14. Multisensor satellite data integration for sea surface wind speed and direction determination

    Science.gov (United States)

    Glackin, D. L.; Pihos, G. G.; Wheelock, S. L.

    1984-01-01

    Techniques to integrate meteorological data from various satellite sensors to yield a global measure of sea surface wind speed and direction, for input to the Navy's operational weather forecast models, were investigated. The sensors, either already launched or planned for launch, are the GOES visible and infrared imaging sensor, the Nimbus-7 SMMR, and the DMSP SSM/I instrument. An algorithm was developed for extrapolating to the sea surface the wind directions derived from successive GOES cloud images. This wind veering algorithm is relatively simple, accounts for the major physical variables, and seems to represent the best solution that can be found with existing data. An algorithm for interpolating the scattered observed data to a common geographical grid was implemented. The algorithm is based on a combination of inverse distance weighting and trend surface fitting, and is suited to combining wind data from disparate sources.
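    The inverse distance weighting component of such an interpolation scheme can be sketched as follows (a generic illustration with invented station data; the paper combines IDW with trend surface fitting, which is omitted here):

```python
import numpy as np

def idw_interpolate(xy_obs, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered observations
    onto a set of grid points."""
    out = np.empty(len(xy_grid))
    for i, p in enumerate(xy_grid):
        d = np.linalg.norm(xy_obs - p, axis=1)
        if d.min() < eps:               # grid point coincides with a station
            out[i] = values[d.argmin()]
            continue
        w = 1.0 / d ** power
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# Invented example: wind speeds at four stations, interpolated to two points.
obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
winds = np.array([5.0, 7.0, 5.0, 7.0])
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
speeds = idw_interpolate(obs, winds, grid)
```

Because all four stations are equidistant from the grid center, the interpolated value there is simply their mean, while a grid point coinciding with a station reproduces that station's value exactly.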

  15. INTEGRATION OF HETEROGENOUS DIGITAL SURFACE MODELS

    Directory of Open Access Journals (Sweden)

    R. Boesch

    2012-08-01

    distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100 m²) has been implemented using the Open Source project CGAL. The area of each triangle is a measure of the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1 m. If the local analysis of the FOM-map within the 100 m² area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as a NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM-map and the local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).

  16. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    Science.gov (United States)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
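As a hedged illustration of a randomized approach to a second-kind Fredholm equation u = f + λKu on [0,1], the truncated Neumann series can be estimated term by term by Monte Carlo sampling of the iterated integrals; the kernel, source term and parameters below are toy choices, not from the paper:

```python
import random

def fredholm_mc(f, k, lam, x, n_terms=20, n_samples=2000, seed=1):
    """Estimate u(x) for u = f + lam*K u on [0, 1] via a randomized
    truncated Neumann series: each iterated integral (K^n f)(x) is
    estimated by sampling a chain y1, ..., yn uniformly on [0, 1]."""
    rng = random.Random(seed)
    total = f(x)                      # n = 0 term of the series
    for n in range(1, n_terms):
        acc = 0.0
        for _ in range(n_samples):
            w, y_prev = 1.0, x
            for _ in range(n):        # one random chain of length n
                y = rng.random()
                w *= k(y_prev, y)
                y_prev = y
            acc += w * f(y_prev)
        total += (lam ** n) * acc / n_samples
    return total

# Constant kernel k = 1, f = 1, lam = 0.5: the exact solution is u(x) = 2.
u = fredholm_mc(lambda x: 1.0, lambda x, y: 1.0, 0.5, 0.3)
```

Note that this estimator only evaluates the kernel at random points, which is exactly why randomized schemes sidestep the fixed-point kernel evaluations that the record flags as problematic for singular kernels.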

  17. Leakage detection algorithm integrating water distribution networks hydraulic model

    CSIR Research Space (South Africa)

    Adedeji, K

    2017-06-01

    …and estimation is vital for effective water service. For effective detection of background leakages, a hydraulic analysis of flow characteristics in water piping networks is indispensable for appraising this type of leakage. A leakage detection algorithm...

  18. High speed numerical integration algorithm using FPGA | Razak ...

    African Journals Online (AJOL)

    RRS), Middle Riemann Sum (MRS) and Trapezoidal Sum (TS) algorithms. The system performance is evaluated based on target chip Altera Cyclone IV FPGA in the metrics of resources utilization, clock latency, execution time, power consumption ...
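A plain CPU sketch of the middle Riemann sum and trapezoidal sum named in this record (the FPGA mapping itself is not reproduced, and the function and panel count are illustrative):

```python
def middle_riemann(f, a, b, n):
    """Midpoint (middle Riemann) sum of f over [a, b] with n panels."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoidal(f, a, b, n):
    """Composite trapezoidal sum of f over [a, b] with n panels."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * f(a) + inner + 0.5 * f(b))

# Both rules approach the exact value 1/3 for f(x) = x^2 on [0, 1].
mrs = middle_riemann(lambda x: x * x, 0.0, 1.0, 100)
ts = trapezoidal(lambda x: x * x, 0.0, 1.0, 100)
```

Both composite rules are second-order accurate, which is why a hardware comparison focuses on resource use and latency rather than asymptotic error.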

  19. Surface roughness optimization in machining of AZ31 magnesium alloy using ABC algorithm

    Directory of Open Access Journals (Sweden)

    Abhijith

    2018-01-01

    Magnesium alloys serve as excellent substitutes for materials traditionally used for engine block heads in automobiles and gear housings in aircraft industries. AZ31 is a magnesium alloy that finds applications in orthopedic implants and cardiovascular stents. Surface roughness is an important parameter in the present manufacturing sector. In this work, optimization techniques based on swarm intelligence, namely the firefly algorithm (FA), particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm, have been implemented to optimize the machining parameters, namely cutting speed, feed rate and depth of cut, in order to achieve minimum surface roughness. The parameter Ra has been considered for evaluating the surface roughness. Comparing the performance of the ABC algorithm with FA and PSO, which are widely used optimization algorithms in machining studies, the results show that ABC produces better optimization than FA and PSO for the surface roughness of AZ31.
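A minimal PSO sketch of the kind of swarm-intelligence optimization this study compares; the surrogate Ra(speed, feed, depth) model, its coefficients and its minimum are invented for illustration and are not the paper's regression model:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iters=200, seed=0):
    """Minimal particle swarm optimization returning (best_pos, best_val).
    `bounds` is a list of (lo, hi) limits, one per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the particle back into the feasible machining range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def ra(p):
    """Hypothetical smooth surrogate for Ra(speed, feed, depth);
    its true minimum is 0.8 at (120, 0.1, 0.5) by construction."""
    return (((p[0] - 120) / 60) ** 2 + ((p[1] - 0.1) / 0.1) ** 2
            + ((p[2] - 0.5) / 0.5) ** 2 + 0.8)

best, best_ra = pso_minimize(ra, [(60, 180), (0.05, 0.3), (0.25, 1.0)])
```

ABC and FA differ in how candidate moves are generated, but they plug into the same loop shape: evaluate a roughness model, keep the best candidate, and perturb around it.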

  20. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with e.g. BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation...

  1. Microfabricated Microwave-Integrated Surface Ion Trap

    Science.gov (United States)

    Revelle, Melissa C.; Blain, Matthew G.; Haltli, Raymond A.; Hollowell, Andrew E.; Nordquist, Christopher D.; Maunz, Peter

    2017-04-01

    Quantum information processing holds the key to solving computational problems that are intractable with classical computers. Trapped ions are a physical realization of a quantum information system in which qubits are encoded in hyperfine energy states. Coupling the qubit states to ion motion, as needed for two-qubit gates, is typically accomplished using Raman laser beams. Alternatively, this coupling can be achieved with strong microwave gradient fields. While microwave radiation is easier to control than a laser, it is challenging to precisely engineer the radiated microwave field. Taking advantage of Sandia's microfabrication techniques, we created a surface ion trap with integrated microwave electrodes with sub-wavelength dimensions. This multi-layered device permits co-location of the microwave antennae and the ion trap electrodes to create localized microwave gradient fields and necessary trapping fields. Here, we characterize the trap design and present simulated microwave performance with progress towards experimental results. This research was funded, in part, by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA).

  2. Application of the tuning algorithm with the least squares approximation to the suboptimal control algorithm for integrating objects

    Science.gov (United States)

    Kuzishchin, V. F.; Merzlikina, E. I.; Van Va, Hoang

    2017-11-01

    The problem of tuning PID and PI algorithms by approximating, via the least squares method, the frequency response of a linear algorithm to that of the suboptimal algorithm is considered. The advantage of the method is that the parameter values are obtained in one cycle of calculation. Recommendations on how to choose the parameters of the least squares method, taking the plant dynamics into consideration, are given. The parameters in question are the time constant of the filter, the approximation frequency range and the correction coefficient for the time delay parameter. The problem is considered for integrating plants in some practical cases (the level control system in a boiler drum). The transfer function of the suboptimal algorithm is determined with respect to a disturbance acting at the point of the control input, as is typical for thermal plants. The recommendations also take into account that the overshoot of the transient process when the setpoint is changed is limited. For comparison, the systems under consideration are also tuned by the classical method with a limited frequency oscillation index. The results given in the paper can be used by specialists tuning systems with integrating plants.

  3. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest of improving human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which depict relationships with subjects denoted by nodes and relationships denoted by edges, and kernel-based algorithms, which can generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated on hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms; graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  4. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    Science.gov (United States)

    2016-02-01

    Contents: SHOC; Related Works; Backtrack Branch and Bound; Autotuning Background; Method; Results; Conclusions; Future... The report applies autotuning methodologies to compare optimal performance of various architectures, and builds upon this by adding the Backtrack ...only OpenCL. The Backtrack Branch and Bound (BBB) algorithm is a way to search for a solution to a problem among a variety of potential solutions
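The backtracking core of a BBB-style search can be illustrated by a minimal sequential N-queens solution counter (no bounding heuristics or OpenCL, unlike the benchmark in the report):

```python
def solve_nqueens(n):
    """Count N-queens solutions by backtracking: place one queen per
    row, pruning columns and diagonals already under attack."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1          # every row holds a non-attacking queen
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue        # pruned: this square is attacked
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return count
```

The pruning sets are what make the search a branch-and-bound-style traversal rather than brute-force enumeration of all n^n placements.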

  5. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  6. Mapping Surface Broadband Albedo from Satellite Observations: A Review of Literatures on Algorithms and Products

    Directory of Open Access Journals (Sweden)

    Ying Qu

    2015-01-01

    Surface albedo is one of the key controlling geophysical parameters in surface energy budget studies, and its temporal and spatial variation is closely related to global climate change and regional weather systems due to the albedo feedback mechanism. As an efficient tool for monitoring the surfaces of the Earth, remote sensing has been widely used for deriving long-term surface broadband albedo with various geostationary and polar-orbit satellite platforms in recent decades. Moreover, the algorithms for estimating surface broadband albedo from satellite observations, including narrow-to-broadband conversions, bidirectional reflectance distribution function (BRDF) angular modeling, direct-estimation algorithms and the algorithms for estimating albedo from geostationary satellite data, have been developed and improved. In this paper, we present a comprehensive literature review of algorithms and products for mapping surface broadband albedo with satellite observations and provide a discussion of different algorithms and products in a historical perspective based on citation analysis of the published literature. This paper shows that the observation technologies and accuracy requirements of applications are important, and that long-term, globally complete (covering land, ocean, and sea-ice surfaces), gap-free surface broadband albedo products with higher spatial and temporal resolution are required for climate change, surface energy budget, and hydrological studies.

  7. Assessment of available integration algorithms for initial value ordinary differential equations

    International Nuclear Information System (INIS)

    Carver, M.B.; Stewart, D.G.

    1979-11-01

    There exists an extremely large number of algorithms designed for the ordinary differential equation initial value problem. The integration is normally done by a finite sum at time intervals which are chosen dynamically to satisfy an imposed error tolerance. This report describes the basic logistics of the integration process, identifies common areas of difficulty, and establishes a comprehensive test profile for integration algorithms. A number of algorithms are described, and selected published subroutines are evaluated using the test profile. The report concludes that an effective library for general use need have only two such routines. The two selected are versions of the well-known Gear and Runge-Kutta-Fehlberg algorithms. Full documentation and listings are included. (auth)
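The dynamically chosen time intervals described above can be sketched with step-doubling error control wrapped around classical RK4; this is a simplified stand-in for the Gear and Runge-Kutta-Fehlberg routines the report selects, with illustrative tolerances:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, tol=1e-8, h=0.1):
    """Advance y' = f(t, y) from t0 to t_end, choosing the step size
    dynamically by step doubling: compare one step of size h against
    two steps of size h/2, and shrink or grow h to meet `tol`."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_big) / 15.0   # RK4 local error estimate
        if err <= tol:
            t, y = t + h, y_half           # accept the finer result
            if err < tol / 32:
                h *= 2.0                   # step was overly cautious
        else:
            h /= 2.0                       # reject and retry smaller
    return y

y1 = integrate_adaptive(lambda t, y: y, 0.0, 1.0, 1.0)  # exact answer: e
```

The /15 factor comes from Richardson extrapolation for a fourth-order method; production codes such as RKF45 instead obtain the error estimate from an embedded lower-order formula at no extra function evaluations.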

  8. AntStar: Enhancing Optimization Problems by Integrating an Ant System and A⁎ Algorithm

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-01-01

    Recently, nature-inspired techniques have become valuable to many intelligent systems in different fields of technology and science. Among these techniques, Ant Systems (AS) have become a valuable technique for intelligent systems in different fields. AS is a computational system inspired by the foraging behavior of ants and intended to solve practical optimization problems. In this paper, we introduce the AntStar algorithm, which is based on swarm intelligence. AntStar enhances the optimization and performance of an AS by integrating the AS and A⁎ algorithms. The AntStar algorithm has been applied to the single-source shortest-path problem to verify its efficiency. The experimental results illustrate the robustness and accuracy of the AntStar algorithm.
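The A⁎ component of AntStar can be illustrated with a standard grid shortest-path search; the ant-system coupling is not reproduced here, and the grid below is a toy example:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid of 0 (free) / 1 (blocked)
    cells, with Manhattan distance as the admissible heuristic.
    Returns the path length in steps, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]      # (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                        # stale heap entry, skip
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
steps = astar(grid, (0, 0), (2, 0))   # must detour around the blocked row
```

In an AS + A⁎ hybrid, the heuristic term is typically augmented or biased by pheromone intensities accumulated by the ant colony.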

  9. Comparison of a Local Linearization Algorithm with Standard Numerical Integration Methods for Real-Time Simulation

    DEFF Research Database (Denmark)

    Cook, Gerald; Lin, Ching-Fang

    1980-01-01

    The local linearization algorithm is presented as a possible numerical integration scheme to be used in real-time simulation. A second-order nonlinear example problem is solved using different methods. The local linearization approach is shown to require less computing time and give significant improvement in accuracy over the classical second-order integration methods.

  10. A differential algebraic integration algorithm for symplectic mappings in systems with three-dimensional magnetic field

    International Nuclear Information System (INIS)

    Chang, P.; Lee, S.Y.; Yan, Y.T.

    2006-01-01

    A differential algebraic integration algorithm is developed for symplectic mapping through a three-dimensional (3-D) magnetic field. The self-consistent reference orbit in phase space is obtained by making a canonical transformation to eliminate the linear part of the Hamiltonian. Transfer maps from the entrance to the exit of any 3-D magnetic field are then obtained through slice-by-slice symplectic integration. The particle phase-space coordinates are advanced by using the integrable polynomial procedure. This algorithm is a powerful tool to attain nonlinear maps for insertion devices in synchrotron light source or complicated magnetic field in the interaction region in high energy colliders
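Slice-by-slice symplectic integration can be illustrated, for a separable Hamiltonian rather than the paper's 3-D magnetic field maps, by the leapfrog (kick-drift-kick) scheme, whose hallmark is bounded energy error over long runs:

```python
def leapfrog(q, p, dv_dq, steps, dt):
    """Slice-by-slice symplectic (leapfrog/Verlet) integration of a
    separable Hamiltonian H = p^2/2 + V(q); `dv_dq` returns dV/dq."""
    p -= 0.5 * dt * dv_dq(q)         # initial half kick
    for _ in range(steps - 1):
        q += dt * p                  # drift
        p -= dt * dv_dq(q)           # full kick
    q += dt * p                      # final drift
    p -= 0.5 * dt * dv_dq(q)         # final half kick
    return q, p

# Harmonic oscillator V = q^2/2: energy stays bounded over 10,000 slices.
q, p = leapfrog(1.0, 0.0, lambda q: q, steps=10000, dt=0.05)
energy = 0.5 * p * p + 0.5 * q * q   # initial energy was exactly 0.5
```

A non-symplectic method of the same order (e.g. RK2) would show secular energy drift over this many steps; the symplectic property is what makes slice-by-slice composition through many magnet slices viable.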

  11. A Differential Algebraic Integration Algorithm for Symplectic Mappings in Systems with Three-Dimensional Magnetic Field

    International Nuclear Information System (INIS)

    Chang, P

    2004-01-01

    A differential algebraic integration algorithm is developed for symplectic mapping through a three-dimensional (3-D) magnetic field. The self-consistent reference orbit in phase space is obtained by making a canonical transformation to eliminate the linear part of the Hamiltonian. Transfer maps from the entrance to the exit of any 3-D magnetic field are then obtained through slice-by-slice symplectic integration. The particle phase-space coordinates are advanced by using the integrable polynomial procedure. This algorithm is a powerful tool to attain nonlinear maps for insertion devices in synchrotron light source or complicated magnetic field in the interaction region in high energy colliders

  12. An integral Riemann-Roch theorem for surface bundles

    DEFF Research Database (Denmark)

    Madsen, Ib Henning

    2010-01-01

    This paper is a response to a conjecture by T. Akita about an integral Riemann–Roch theorem for surface bundles.

  13. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
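A toy version of the simulated-time-horizon model can be written down directly, assuming the basic conservative update rule (a site advances its local virtual time by a Poisson increment only when it is a local minimum of the horizon); the utilization it returns is the density of local minima discussed above:

```python
import random

def simulate_horizon(n_sites=1000, n_sweeps=200, seed=7):
    """Basic conservative parallel-discrete-event-simulation model on a
    ring: a site may advance its local virtual time (by an exponential
    increment, i.e. a Poisson arrival) only if it is a local minimum of
    the time horizon. Returns the utilization = density of local minima."""
    rng = random.Random(seed)
    t = [0.0] * n_sites
    for _ in range(n_sweeps):
        snapshot = t[:]                       # synchronous parallel update
        for i in range(n_sites):
            left, right = snapshot[i - 1], snapshot[(i + 1) % n_sites]
            if snapshot[i] <= left and snapshot[i] <= right:
                t[i] += rng.expovariate(1.0)
    minima = sum(1 for i in range(n_sites)
                 if t[i] <= t[i - 1] and t[i] <= t[(i + 1) % n_sites])
    return minima / n_sites

util = simulate_horizon()
```

The fraction of sites that are local minima is exactly the fraction of processing elements doing useful work per step, which is why a nonvanishing steady-state density implies the algorithm is scalable.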

  14. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mărăscu, V.; Dinescu, G. [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania); Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Chiţescu, I. [Faculty of Mathematics and Computer Science, University of Bucharest, 14 Academiei Street, Bucharest (Romania); Barna, V. [Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B., E-mail: mitub@infim.ro [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania)

    2016-03-25

    In this paper we propose a statistical approach for describing the self-assembly of sub-micron polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. An adaptive thresholding method is then applied to obtain binary images. The next step consists in automatic identification of the polystyrene bead dimensions, using the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) was applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with features regularly distributed on the surface upon SEM examination.

  15. Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

    Science.gov (United States)

    Kitazono, Jun; Kanai, Ryota; Oizumi, Masafumi

    2018-03-01

    The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ($\\Phi$) in the brain is related to the level of consciousness. IIT proposes that to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that if a measure of $\\Phi$ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of $\\Phi$ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of $\\Phi$ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure $\\Phi$ in large systems within a practical amount of time.
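The exhaustive MIP search whose exponential cost motivates the paper can be sketched directly; the toy integration measure below is a simple cut-weight function, not one of IIT's $\Phi$ measures:

```python
from itertools import combinations

def minimum_information_partition(nodes, phi):
    """Exhaustively search all bipartitions of `nodes` and return
    (value, part_a, part_b) minimizing `phi(part_a, part_b)`. The cost
    grows as 2^(n-1), which is what motivates the submodularity-based
    polynomial-time search discussed in the record."""
    nodes = list(nodes)
    best = None
    first = nodes[0]                 # fix one node to avoid mirror duplicates
    rest = nodes[1:]
    for k in range(len(rest) + 1):
        for combo in combinations(rest, k):
            a = [first, *combo]
            b = [x for x in rest if x not in combo]
            if not b:
                continue             # skip the trivial (full, empty) split
            val = phi(a, b)
            if best is None or val < best[0]:
                best = (val, a, b)
    return best

# Toy phi: total weight of connections severed by the partition.
w = {(0, 1): 5.0, (1, 2): 5.0, (2, 3): 0.1, (0, 3): 0.1}
cut = lambda a, b: sum(v for (i, j), v in w.items() if (i in a) != (j in a))
val, part_a, part_b = minimum_information_partition([0, 1, 2, 3], cut)
```

For this toy system the weakest link is node 3, so the MIP isolates it; a submodular measure lets Queyranne-style algorithms find the same answer without enumerating all 2^(n-1) bipartitions.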

  16. Braking distance algorithm for autonomous cars using road surface recognition

    Science.gov (United States)

    Kavitha, C.; Ashok, B.; Nanthagopal, K.; Desai, Rohan; Rastogi, Nisha; Shetty, Siddhanth

    2017-11-01

    India is yet to accept semi- or fully-autonomous cars, and one of the reasons is loss of control on bad roads. Better handling on these roads requires advanced braking, which can be achieved by adding electronics to the conventional braking system. In recent years, automation in the braking system has led to various benefits such as the traction control system and the anti-lock braking system. This research work describes and experimentally evaluates a method for recognizing the road surface profile and calculating braking distance. An ultrasonic surface recognition sensor mounted underneath the car sends a high-frequency wave onto the road surface; a receiver within the sensor measures the time taken for the wave to rebound and thus calculates the distance from the point where the sensor is mounted. A displacement graph is plotted based on the sensor output. A relationship can be derived between the displacement plot and a roughness index, from which the friction coefficient is derived in Matlab continuously over the distance travelled. Since it is a non-contact type of profiling, it is non-destructive. The friction coefficient values received in real time are used to calculate the optimum braking distance. This system, when installed on ordinary cars, can also be used to create a database of road surfaces, especially in cities, which can be shared with other cars. This will help in navigation as well as making the cars more efficient.
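The final kinematic step, from an estimated friction coefficient to a braking distance, can be sketched as follows; the μ values are illustrative, and the paper's ultrasonic roughness-to-μ mapping is not reproduced:

```python
def braking_distance(speed_mps, mu, g=9.81):
    """Idealized stopping distance d = v^2 / (2 * mu * g) from a given
    friction coefficient; ignores driver reaction time and brake-system
    lag, which a real controller would add on top."""
    return speed_mps ** 2 / (2.0 * mu * g)

# 72 km/h (20 m/s) on dry asphalt (mu ~ 0.7) vs. a wet road (mu ~ 0.4).
d_dry = braking_distance(20.0, 0.7)
d_wet = braking_distance(20.0, 0.4)
```

The inverse dependence on μ is the whole point of the sensor: halving the measured friction roughly doubles the distance the controller must budget for.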

  17. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... (Figure 2 shows the symbols used in the flowchart language to represent Assignment, Read and Print, e.g. x := sin(theta), Read A,B,C, Print x,y,z.)

  18. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  19. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    Energy Technology Data Exchange (ETDEWEB)

    Grout, Ray; Kolla, Hemanth; Minion, Michael; Bell, John

    2017-01-01

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  20. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    Energy Technology Data Exchange (ETDEWEB)

    Grout, Ray; Kolla, Hemanth; Minion, Michael; Bell, John

    2017-01-01

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  1. An integrated DEA-COLS-SFA algorithm for optimization and policy making of electricity distribution units

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.; Eivazy, H.

    2009-01-01

    This paper presents an integrated data envelopment analysis (DEA)-corrected ordinary least squares (COLS)-stochastic frontier analysis (SFA)-principal component analysis (PCA)-numerical taxonomy (NT) algorithm for performance assessment, optimization and policy making of electricity distribution units. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study proposes an integrated flexible approach to rank the units and choose the best version of the DEA method for optimization and policy making purposes. It covers both static and dynamic aspects of the information environment through the involvement of SFA, which is finally compared with the best DEA model through the Spearman correlation technique. The integrated approach yields improved ranking and optimization of electricity distribution systems. To illustrate the usability and reliability of the proposed algorithm, 38 electricity distribution units in Iran have been considered, ranked and optimized by the proposed algorithm.
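The Spearman correlation used to compare the SFA ranking against the best DEA model can be sketched in a few lines; the efficiency scores below are invented for illustration:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                                # extend the tie group
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Two methods whose efficiency scores rank the four units identically.
rho = spearman([0.7, 0.9, 0.5, 1.0], [0.60, 0.85, 0.40, 0.99])
```

A rho close to 1 indicates the parametric (SFA) and non-parametric (DEA) frontiers order the units consistently, which is the validation the integrated approach relies on.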

  2. An API for Integrating Spatial Context Models with Spatial Reasoning Algorithms

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun

    2006-01-01

    The integration of context-aware applications with spatial context models is often done using a common query language. However, algorithms that estimate and reason about spatial context information can benefit from a tighter integration. An object-oriented API makes such integration possible and can help reduce the complexity of algorithms, making them easier to maintain and develop. This paper proposes an object-oriented API for context models of the physical environment, and extensions to a location modeling approach called geometric space trees so that it provides adequate support for location modeling. The utility of the API is evaluated in several real-world cases from an indoor location system, spanning several types of spatial reasoning algorithms.

  3. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code, and thereby assessment of the algorithms tested. Essential to the testing of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper the main components of the SSIE are described and examples of different processing steps are given.

  4. [An operational remote sensing algorithm of land surface evapotranspiration based on NOAA PAL dataset].

    Science.gov (United States)

    Hou, Ying-Yu; He, Yan-Bo; Wang, Jian-Lin; Tian, Guo-Liang

    2009-10-01

    Based on the time-series 10-day composite NOAA Pathfinder AVHRR Land (PAL) dataset (8 km x 8 km), and by using the land surface energy balance equation and the "VI-Ts" (vegetation index-land surface temperature) method, a new algorithm for land surface evapotranspiration (ET) was constructed. The new algorithm does not need support from meteorological observation data; all of its parameters and variables are inverted or derived directly from remote sensing data. A widely accepted remote sensing ET model, the SEBS model, was chosen to validate the new algorithm. The validation test showed that the ET and its seasonal variation trend estimated by the SEBS model and by the new algorithm agreed well, suggesting that the ET estimated by the new algorithm is reliable and able to reflect the actual land surface ET. The new remote sensing ET algorithm is practical and operational, offering a new approach to studying the spatiotemporal variation of ET at continental and global scales based on long-term time-series satellite remote sensing images.

  5. Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface

    International Nuclear Information System (INIS)

    Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho

    2005-01-01

    While the SIMPLE algorithm is most widely used for simulating flow phenomena in industrial equipment and manufacturing processes, it is less often adopted for simulations of free surface flow. Though the SIMPLE algorithm is free from the time-step limitation, the free surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme must solve simultaneous equations within its procedure. If the computation time of the SIMPLE algorithm can be reduced for unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm (MoSIMPLE) is presented for free surface flow. The broken water column problem is adopted for validation of the modified algorithm and for comparison with the conventional SIMPLE algorithm.

  6. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application.

    Science.gov (United States)

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Wu, Mouyan; Fan, Xiaoliang

    2018-01-26

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme.
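
The adaptive robust correction idea can be sketched in a few lines. This is a minimal, hypothetical single-epoch update (chi-square gating with covariance inflation), not the paper's full ARKF or its grid-frame mechanization:

```python
import numpy as np

def arkf_update(x, P, z, H, R, chi2_gate=3.84):
    """One measurement update of a (simplified) adaptive robust Kalman filter.

    The innovation is gated: when its normalized squared magnitude exceeds
    chi2_gate, the measurement covariance R is inflated so outlying DVL
    measurements are down-weighted instead of blindly trusted.
    """
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    d2 = float(y.T @ np.linalg.inv(S) @ y)     # normalized innovation squared
    if d2 > chi2_gate:                         # robust inflation of R
        R = R * (d2 / chi2_gate)
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

When the normalized innovation exceeds the gate, the measurement covariance is inflated, so an outlier shifts the state estimate by less than the nominal Kalman gain would allow.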

  7. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    Science.gov (United States)

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

    Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Unlike previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to take full advantage of their existing data structures. Our analysis shows that the algorithm achieves very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at a grid spacing of 1/2 Å. In addition, a systematic correction analysis can be used to improve the accuracy of the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  8. Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA

    Science.gov (United States)

    Meyer, Christoph

    2018-01-01

    The integration of differential equations of Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automate the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.

  9. Integrated Surface Power Strategy for Mars

    Science.gov (United States)

    Rucker, Michelle

    2015-01-01

    A National Aeronautics and Space Administration (NASA) study team evaluated surface power needs for a conceptual crewed 500-day Mars mission. This study had four goals: 1. Determine estimated surface power needed to support the reference mission; 2. Explore alternatives to minimize landed power system mass; 3. Explore alternatives to minimize Mars Lander power self-sufficiency burden; and 4. Explore alternatives to minimize power system handling and surface transportation mass. The study team concluded that Mars Ascent Vehicle (MAV) oxygen propellant production drives the overall surface power needed for the reference mission. Switching to multiple, small Kilopower fission systems can potentially save four to eight metric tons of landed mass, as compared to a single, large Fission Surface Power (FSP) concept. Breaking the power system up into modular packages creates new operational opportunities, with benefits ranging from reduced lander self-sufficiency for power, to extending the exploration distance from a single landing site. Although a large FSP trades well for operational complexity, a modular approach potentially allows Program Managers more flexibility to absorb late mission changes with less schedule or mass risk, better supports small precursor missions, and allows a program to slowly build up mission capability over time. A number of Kilopower disadvantages, and mitigation strategies for them, were also explored.

  10. Integrated Surface/subsurface flow modeling in PFLOTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Painter, Scott L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Understanding soil water, groundwater, and shallow surface water dynamics as an integrated hydrological system is critical for understanding the Earth’s critical zone, the thin outer layer at our planet’s surface where vegetation, soil, rock, and gases interact to regulate the environment. Computational tools that take this view of soil moisture and shallow surface flows as a single integrated system are typically referred to as integrated surface/subsurface hydrology models. We extend the open-source, highly parallel, subsurface flow and reactive transport simulator PFLOTRAN to accommodate surface flows. In contrast to most previous implementations, we do not represent a distinct surface system. Instead, the vertical gradient in hydraulic head at the land surface is neglected, which allows the surface flow system to be eliminated and incorporated directly into the subsurface system. This tight coupling approach leads to a robust capability and also greatly simplifies implementation in existing subsurface simulators such as PFLOTRAN. Successful comparisons to independent numerical solutions build confidence in the approximation and implementation. Example simulations of the Walker Branch and East Fork Poplar Creek watersheds near Oak Ridge, Tennessee demonstrate the robustness of the approach in geometrically complex applications. The lack of a robust integrated surface/subsurface hydrology capability had been a barrier to PFLOTRAN’s use in critical zone studies. This work addresses that capability gap, thus enabling PFLOTRAN as a community platform for building integrated models of the critical zone.

  11. Evolutionary algorithms approach for integrated bioenergy supply chains optimization

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji

    2009-01-01

    In this paper, we propose an optimization model and solution approach for designing and evaluating an integrated system of bioenergy production supply chains (SC) at the local level. Designing an SC that simultaneously utilizes a set of bio-resources is the complicated task considered here. The complication arises from the different natures and sources of the bio-resources used in bioenergy production (wet or dry; agricultural, industrial, etc.). Moreover, the different concerns that decision makers should take into account to overcome the trade-off concerns of society and investors (i.e., social, environmental and economic factors) were addressed through multi-criteria optimization options. The first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to and emphasis on gBEDS are given, the optimization model is presented, and a case study follows on designing a supply chain of nine bio-resources at Iida city in central Japan.

  12. Research on the target coverage algorithms for 3D curved surface

    International Nuclear Information System (INIS)

    Sun, Shunyuan; Sun, Li; Chen, Shu

    2016-01-01

    To solve target coverage problems in three-dimensional space, a deployment strategy for the target points is put forward, and the differential evolution (DE) algorithm is used to optimize the location coordinates of the sensor nodes so that all target points on a 3-D surface are covered with a minimal number of sensor nodes. First, the three-dimensional perception model of the sensor nodes is built, and the blind area that arises when sensor nodes sense target points on a 3-D surface is identified. The feasibility of solving target coverage problems on a 3-D surface with the DE algorithm is then proved theoretically, and the fault tolerance of the algorithm is demonstrated.
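
For reference, the optimizer named above can be sketched as a minimal DE/rand/1/bin loop; the population size, mutation factor F and crossover rate CR are illustrative defaults, and the objective passed in is a stand-in for the paper's coverage cost:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer over box-constrained real vectors."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))       # initial population
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # pick three distinct members other than i for the mutant
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(lo)) < CR           # binomial crossover
            cross[rng.integers(len(lo))] = True        # force one gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                           # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()
```

In a coverage setting, the decision vector would concatenate the candidate sensor coordinates and `f` would penalize uncovered target points.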

  13. On integral and differential representations of Jordan chains and the confluent supersymmetry algorithm

    International Nuclear Information System (INIS)

    Contreras-Astorga, Alonso; Schulze-Halberg, Axel

    2015-01-01

    We construct a relationship between the integral and differential representations of second-order Jordan chains. Conditions to obtain regular potentials through the confluent supersymmetry algorithm when working with the differential representation are obtained using this relationship. Furthermore, it is used to find normalization constants of wave functions of quantum systems that feature energy-dependent potentials. Additionally, this relationship is used to express certain integrals involving functions that are solutions of Schrödinger equations through derivatives. (paper)

  14. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and particularly shallow nearshore waters at the high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variations in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
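
To illustrate the training stage, here is a toy single-hidden-layer network fitted to synthetic brightness-temperature data; the data-generating rule, network size and learning rate are all invented for illustration and are not the paper's matchup data or architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for matchup data: brightness temperatures (K) of the two
# thermal bands and an "in situ" SST generated by a split-window-like rule.
bt31 = rng.uniform(280.0, 300.0, 500)
bt32 = bt31 - rng.uniform(0.2, 1.5, 500)        # band 32 slightly cooler
sst = bt31 + 2.0 * (bt31 - bt32) - 1.0          # hypothetical ground truth

# Band 31 and the band difference serve as inputs (split-window practice),
# which also keeps the regression well conditioned.
X = np.column_stack([bt31, bt31 - bt32])
Xn = (X - X.mean(0)) / X.std(0)                 # normalized inputs
yn = (sst - sst.mean()) / sst.std()             # normalized target

# One hidden layer of 8 tanh units trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    H = np.tanh(Xn @ W1 + b1)
    err = (H @ W2 + b2).ravel() - yn
    gW2 = H.T @ err[:, None] / len(yn)
    dH = err[:, None] @ W2.T * (1.0 - H ** 2)   # backprop through tanh
    W1 -= lr * (Xn.T @ dH / len(yn)); b1 -= lr * dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * err.mean(keepdims=True)

r2 = 1.0 - np.sum(err ** 2) / np.sum((yn - yn.mean()) ** 2)
```

On this synthetic set the network recovers the split-window-like relation well; real matchup training would of course use observed reflectances and in situ SST.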

  15. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    Science.gov (United States)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods, which rely on high-priced devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments has been performed to extract the feature of interest: the correlation between the amplitude change in the relevant vibration frequency band(s) and the surface quality. The graphs of the experimental results demonstrate that the change of amplitude in the selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed, maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.
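
The band-energy feature can be illustrated with a plain FFT band power in place of the paper's adaptive B-spline wavelet decomposition (a deliberate simplification):

```python
import numpy as np

def band_rms(signal, fs, f_lo, f_hi):
    """Root-sum-square energy of a vibration signal in one frequency band.

    The paper uses adaptive B-spline wavelets; a plain FFT band power is
    used here as a simpler stand-in for the same band-energy feature.
    """
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.sqrt(np.sum(np.abs(spec[mask]) ** 2)) / len(signal)
```

Tracking such a band energy over time gives the kind of amplitude-change indicator the abstract correlates with surface finish.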

  16. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
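
The recursive equations mentioned above are the standard summed-area-table recurrences; a plain serial sketch (not the row-parallel hardware decomposition) looks like this:

```python
def integral_image(img):
    """Summed-area table via the recurrence
    I(x, y) = img(x, y) + I(x-1, y) + I(x, y-1) - I(x-1, y-1)."""
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            I[y][x] = (img[y][x]
                       + (I[y - 1][x] if y else 0)
                       + (I[y][x - 1] if x else 0)
                       - (I[y - 1][x - 1] if x and y else 0))
    return I

def rect_sum(I, x0, y0, x1, y1):
    """Sum of img over the inclusive rectangle [x0..x1] x [y0..y1]
    in four lookups, independent of rectangle size."""
    a = I[y0 - 1][x0 - 1] if x0 and y0 else 0
    b = I[y0 - 1][x1] if y0 else 0
    c = I[y1][x0 - 1] if x0 else 0
    return I[y1][x1] - b - c + a
```

The constant-time `rect_sum` lookup is what gives SURF-style box filters their speed; the paper's contribution is decomposing the serial recurrence so several values per row can be computed in parallel hardware.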

  17. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  18. Algorithms for singularities and real structures of weak Del Pezzo surfaces

    KAUST Repository

    Lubbes, Niels

    2014-08-01

    In this paper, we consider the classification of singularities [P. Du Val, On isolated singularities of surfaces which do not affect the conditions of adjunction. I, II, III, Proc. Camb. Philos. Soc. 30 (1934) 453-491] and real structures [C. T. C. Wall, Real forms of smooth del Pezzo surfaces, J. Reine Angew. Math. 1987(375/376) (1987) 47-66, ISSN 0075-4102] of weak Del Pezzo surfaces from an algorithmic point of view. It is well-known that the singularities of weak Del Pezzo surfaces correspond to root subsystems. We present an algorithm which computes the classification of these root subsystems. We represent equivalence classes of root subsystems by unique labels. These labels allow us to construct examples of weak Del Pezzo surfaces with the corresponding singularity configuration. Equivalence classes of real structures of weak Del Pezzo surfaces are also represented by root subsystems. We present an algorithm which computes the classification of real structures. This leads to an alternative proof of the known classification for Del Pezzo surfaces and extends this classification to singular weak Del Pezzo surfaces. As an application we classify families of real conics on cyclides. © World Scientific Publishing Company.

  19. Energetic integration of discontinuous processes by means of genetic algorithms, GABSOBHIN; Integration energetique de procedes discontinus a l'aide d'algorithmes genetiques, GABSOBHIN

    Energy Technology Data Exchange (ETDEWEB)

    Krummenacher, P.; Renaud, B.; Marechal, F.; Favrat, D.

    2001-07-01

    This report presents a new methodological approach for the optimal design of energy-integrated batch processes. The main emphasis is put on indirect and, to some extent, on direct heat exchange networks with the possibility of introducing closed or open storage systems. The study demonstrates the feasibility of optimising with genetic algorithms while highlighting the pros and cons of this type of approach. The study shows that the resolution of such problems should preferably be done in several steps to better target the expected solutions. It is demonstrated that, in spite of relatively large computation times (on PCs), genetic algorithms allow the consideration of both continuous decision variables (size, operational rating of equipment, etc.) and integer variables (related to the structure at design and during operation). Two optimisation strategies are compared, with a preference for a two-step optimisation scheme. One of the strengths of genetic algorithms is the capacity to accommodate heuristic rules, which can be introduced into the model. However, a rigorous modelling strategy is advocated to improve robustness and adequate coding of the decision variables. The practical aspects of the research work are converted into a software package developed with MATLAB to solve the energy integration of batch processes with a reasonable number of closed or open stores. This software includes the model of superstructures, including the heat exchangers and the storage alternatives, as well as the link to the Struggle algorithm developed at MIT via a dedicated new interface. The package also includes user-friendly pre-processing using EXCEL to facilitate application to other similar industrial problems. These software developments have been validated on both academic and industrial problems. (author)

  20. Comparison and Integration of Target Prediction Algorithms for microRNA Studies

    Directory of Open Access Journals (Sweden)

    Zhang Yanju

    2010-12-01

    Full Text Available microRNAs are short RNA fragments with the capacity to regulate the expression of hundreds of target genes. Currently, due to the lack of high-throughput experimental methods for miRNA target identification, a collection of computational target prediction approaches has been developed. However, these approaches deal with different features, or weight the same factors differently, resulting in a diverse range of predictions, and the prediction accuracy remains uncertain. In this paper, three commonly used target prediction algorithms are evaluated and further integrated using algorithm combination, ranking aggregation and Bayesian Network classification. Our results revealed that each individual prediction algorithm displays its own advantages, as was shown on different test data sets. Among the integration strategies, applying a Bayesian Network classifier to the features calculated from multiple prediction methods significantly improved target prediction accuracy.

  1. Determination of the Main Influencing Factors on Road Fatalities Using an Integrated Neuro-Fuzzy Algorithm

    Directory of Open Access Journals (Sweden)

    Amir Masoud Rahimi

    Full Text Available Abstract This paper proposes an integrated algorithm of neuro-fuzzy techniques to examine the complex impact of socio-technical influencing factors on road fatalities. The proposed algorithm can handle complexity, non-linearity and fuzziness in the modeling environment due to its mechanism. The neuro-fuzzy algorithm for determining the potential influencing factors on road fatalities consists of two phases. In the first phase, intelligent techniques are compared for their accuracy in predicting the fatality rate with respect to several socio-technical influencing factors. In the second phase, sensitivity analysis is performed to calculate the pure effect of the potential influencing factors on the fatality rate. The applicability and usefulness of the proposed algorithm is illustrated using data from Iran's provincial road transportation systems for the period 2012-2014. Results show that road design improvement, number of trips, and number of passengers are the most influential factors on the provincial road fatality rate.

  2. Development of Liquid Capacity Measuring Algorithm Based on Data Integration from Multiple Sensors

    Directory of Open Access Journals (Sweden)

    Kiwoong Park

    2016-01-01

    Full Text Available This research proposes an algorithm that integrates data from multiple sensors to measure the liquid capacity of a tank in real time, regardless of the tank's orientation. The capacity-measuring algorithm combines a complementary filter with a Kalman filter to correct the level sensor data and IMU sensor data. The measuring precision of the proposed algorithm was assessed through repeated experiments, varying the liquid capacity and the rotation angle of the liquid tank. The measurements of the capacity within the liquid tank were precise even when the tank was rotated. Using the proposed algorithm, one can obtain highly precise measurements at low cost, since an existing level sensor is used.
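
A minimal sketch of the fusion step, assuming a prismatic tank and a single tilt axis; the tank geometry, filter weight and sensor model here are illustrative, not the paper's:

```python
import math

def complementary_angle(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the gyro (integrated, drifts slowly) with the accelerometer
    (noisy but drift-free) to estimate the tank's tilt angle."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

def capacity_from_level(level_reading, tilt_rad, area):
    """Toy geometry correction for a prismatic tank: the level probe measures
    along the tank axis, so the true vertical height is level * cos(tilt)."""
    return area * level_reading * math.cos(tilt_rad)
```

The estimated tilt corrects the raw level reading, which is how a rotated tank can still yield the true contained volume.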

  3. A conservative quaternion-based time integration algorithm for rigid body rotations with implicit constraints

    DEFF Research Database (Denmark)

    Nielsen, Martin Bjerre; Krenk, Steen

    2012-01-01

    A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternions components and the four conjugate momentum variables via Hamilton’s equations. The introduction of an extended mass matrix leads to a symmetric set of eight...

  4. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    Science.gov (United States)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on the Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of an Archimedes helicoid surface and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile-error data of complex surfaces obtained by three-coordinate measuring machines (CMMs).
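
A toy 2-D analogue shows the idea: a small GA searching line parameters that minimize the peak deviation, which is the minimum-zone criterion; the operators and rates below are illustrative, not the paper's:

```python
import random

def ga_min_zone(points, gens=300, pop_size=40, seed=2):
    """Toy GA for a minimum-zone straightness evaluation: find line
    parameters (a, b) minimizing max|y - (a*x + b)|, a 2-D analogue of
    the helicoid profile-error objective."""
    rng = random.Random(seed)

    def cost(ind):
        a, b = ind
        return max(abs(y - (a * x + b)) for x, y in points)

    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            w = rng.random()                      # arithmetic crossover
            child = tuple(w * u + (1 - w) * v for u, v in zip(p1, p2))
            child = tuple(g + rng.gauss(0, 0.05) for g in child)  # mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)
```

For the helicoid case the chromosome would hold the surface parameters and the cost would be the peak deviation of the CMM points from the model surface.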

  5. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm. M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the. "code" block. Thus, we can now avoid the repetition of the ...

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic- circuit rules that are used to .... For the purpose of carrying ou t ari thmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  7. Optimization of surface integrity in dry hard turning using RSM

    Indian Academy of Sciences (India)

    Abstract. This paper investigates the effect of different cutting parameters (cutting speed, feed rate, and depth of cut) on surface integrity defined in terms of surface roughness and microhardness in dry hard turning process. The workpiece material used was hardened alloy steel AISI 52100 and it was machined on a CNC ...

  8. Optimization of surface integrity in dry hard turning using RSM

    Indian Academy of Sciences (India)

    This paper investigates the effect of different cutting parameters (cutting speed, feed rate, and depth of cut) on surface integrity defined in terms of surface roughness and microhardness in dry hard turning process. The workpiece material used was hardened alloy steel AISI 52100 and it was machined on a CNC lathe with ...

  9. Surface free energy for systems with integrable boundary conditions

    International Nuclear Information System (INIS)

    Goehmann, Frank; Bortz, Michael; Frahm, Holger

    2005-01-01

    The surface free energy is the difference between the free energies for a system with open boundary conditions and the same system with periodic boundary conditions. We use the quantum transfer matrix formalism to express the surface free energy in the thermodynamic limit of systems with integrable boundary conditions as a matrix element of certain projection operators. Specializing to the XXZ spin-1/2 chain we introduce a novel 'finite temperature boundary operator' which characterizes the thermodynamical properties of surfaces related to integrable boundary conditions

  10. Scattering of surface waves modelled by the integral equation method

    Science.gov (United States)

    Lu, Laiyu; Maupin, Valerie; Zeng, Rongsheng; Ding, Zhifeng

    2008-09-01

    The integral equation method is used to model the propagation of surface waves in 3-D structures. The wavefield is represented by the Fredholm integral equation, and the scattered surface waves are calculated by solving the integral equation numerically. The integration of the Green's function elements is given analytically by treating the singularity of the Hankel function at R = 0, based on the proper expression of the Green's function and the addition theorem of the Hankel function. No far-field or Born approximation is made. We investigate the scattering of surface waves propagating in layered reference models embedding a heterogeneity with density as well as Lamé constant contrasts, both in the frequency and time domains, for incident plane waves and point sources.

  11. Adaptive Integration of the Compressed Algorithm of CS and NPC for the ECG Signal Compressed Algorithm in VLSI Implementation

    Directory of Open Access Journals (Sweden)

    Yun-Hua Tseng

    2017-10-01

    Full Text Available Compressed sensing (CS) is a promising approach to the compression and reconstruction of electrocardiogram (ECG) signals. It has been shown that, following reconstruction, most of the changes between the original and reconstructed signals are distributed in the Q, R, and S waves (QRS) region. Furthermore, any increase in the compression ratio tends to increase the magnitude of the change. This paper presents a novel approach integrating the near-precise compressed (NPC) and CS algorithms. The simulation results showed notable improvements in signal-to-noise ratio (SNR) and compression ratio (CR). The efficacy of this approach was verified by fabricating a highly efficient low-cost chip using the Taiwan Semiconductor Manufacturing Company's (TSMC) 0.18-μm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The proposed core has an operating frequency of 60 MHz and a gate count of 2.69 K.
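
The CS reconstruction stage can be sketched with orthogonal matching pursuit, a common sparse-recovery routine; the paper's NPC/CS pipeline and its hardware mapping are more elaborate:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary column most
    correlated with the residual, then re-fit by least squares, k times."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

For ECG, `y` would be the compressed measurements of a signal that is sparse in some transform dictionary `A`.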

  12. Surface deformation recovery algorithm for reflector antennas based on geometric optics.

    Science.gov (United States)

    Huang, Jianhui; Jin, Huiliang; Ye, Qian; Meng, Guoxiang

    2017-10-02

    Surface deformations of large reflector antennas depend strongly on the elevation angle. This paper adopts a scheme able to conduct measurements at any elevation angle: carrying an emission source, an unmanned aerial vehicle (UAV) scans the antenna on a near-field plane while the antenna stays stationary. Near-field amplitude is measured in this scheme. To recover the deformation from the measured amplitude, this paper proposes a novel algorithm derived from the deformation-amplitude equation, which reveals the relation between the surface deformation and the near-field amplitude. With this algorithm, a precise deformation recovery can be achieved at low frequency (<1 GHz) from a single near-field amplitude map. Simulation results showed the high accuracy and adaptability of the algorithm.

  13. Algorithm for Automated Mapping of Land Surface Temperature Using LANDSAT 8 Satellite Data

    OpenAIRE

    Ugur Avdan; Gordana Jovanovska

    2016-01-01

    Land surface temperature is an important factor in many areas, such as global climate change, hydrological, geo-/biophysical, and urban land use/land cover. As the latest launched satellite from the LANDSAT family, LANDSAT 8 has opened new possibilities for understanding the events on the Earth with remote sensing. This study presents an algorithm for the automatic mapping of land surface temperature from LANDSAT 8 data. The tool was developed using the LANDSAT 8 thermal infrared sensor Band ...

  14. A comparison of two open source LiDAR surface classification algorithms

    Science.gov (United States)

    With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs, a few are op...

  15. A comparison of two open source LiDAR surface classification algorithms

    Science.gov (United States)

    Wade T. Tinkham; Hongyu Huang; Alistair M.S. Smith; Rupesh Shrestha; Michael J. Falkowski; Andrew T. Hudak; Timothy E. Link; Nancy F. Glenn; Danny G. Marks

    2011-01-01

    With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs, a few are openly available, free to use, and supported by published results....

  16. Algorithm for Automated Mapping of Land Surface Temperature Using LANDSAT 8 Satellite Data

    Directory of Open Access Journals (Sweden)

    Ugur Avdan

    2016-01-01

    Full Text Available Land surface temperature is an important factor in many areas, such as global climate change, hydrological and geo-/biophysical studies, and urban land use/land cover. As the latest launched satellite from the LANDSAT family, LANDSAT 8 has opened new possibilities for understanding events on the Earth with remote sensing. This study presents an algorithm for the automatic mapping of land surface temperature from LANDSAT 8 data. The tool was developed using the LANDSAT 8 thermal infrared sensor Band 10 data. Different methods and formulas were combined in an algorithm that successfully retrieves the land surface temperature and helps in studying the thermal environment of the ground surface. To verify the algorithm, the land surface temperature and the near-air temperature were compared. The results showed that, for the first case, the standard deviation was 2.4°C, and, for the second case, 2.7°C. For future studies, the tool should be refined with in situ measurements of land surface temperature.
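
As a hedged illustration of the single-band retrieval chain this abstract describes (digital number → top-of-atmosphere radiance → brightness temperature → emissivity-corrected land surface temperature), here is a minimal Python sketch. The calibration constants are typical Band 10 values from a LANDSAT 8 metadata (MTL) file and the emissivity is a placeholder; a real implementation must read scene-specific constants from the MTL file, not hard-code them.

```python
import math

# Typical Band 10 calibration constants from a LANDSAT 8 MTL file
# (illustrative assumptions; values are scene-specific).
ML, AL = 3.342e-4, 0.1          # radiance rescaling gain / offset
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants

def brightness_temperature(dn):
    """Digital number -> TOA spectral radiance -> at-sensor brightness temperature (K)."""
    radiance = ML * dn + AL
    return K2 / math.log(K1 / radiance + 1.0)

def land_surface_temperature(dn, emissivity, wavelength_um=10.895):
    """Emissivity-corrected LST (K) via the common single-band correction."""
    bt = brightness_temperature(dn)
    rho = 1.438e-2                      # h*c/k_B in m*K
    lam = wavelength_um * 1e-6          # band-center wavelength in metres
    return bt / (1.0 + (lam * bt / rho) * math.log(emissivity))

lst_c = land_surface_temperature(dn=25000, emissivity=0.97) - 273.15
```

Because the emissivity logarithm is negative, the corrected LST always sits slightly above the at-sensor brightness temperature, as expected for emissivities below one.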

  17. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting is proposed. First, noisy points are removed using a statistical histogram method. Second, a grid index is established by grid segmentation, and the surface equation is fitted through the lowest points among the neighborhood grids. The true height and the fitted height are calculated, and the difference between the elevations is compared against the threshold. Finally, to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34%, respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well adapted and yields highly accurate filtering results.
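
The grid-based step at the core of such filters can be sketched in a much-simplified form. This is not the authors' hierarchical algorithm: it keeps only the per-cell lowest-point comparison (no surface fitting, no histogram denoising, no hierarchy), and `cell` and `threshold` are illustrative parameters.

```python
import math
from collections import defaultdict

def grid_min_filter(points, cell=2.0, threshold=0.3):
    """Classify (x, y, z) points as ground if they lie within `threshold`
    metres of the lowest point in their grid cell -- a single-level
    simplification of the moving-surface idea, for illustration only."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append((x, y, z))
    ground = []
    for pts in cells.values():
        zmin = min(p[2] for p in pts)                    # local minimum elevation
        ground.extend(p for p in pts if p[2] - zmin <= threshold)
    return ground
```

The hierarchical refinement described in the abstract would wrap this in a loop that shrinks `cell` and re-derives `threshold` until the classification stabilizes.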

  18. Inversion of Land Surface Temperature (LST) Using Terra ASTER Data: A Comparison of Three Algorithms

    Directory of Open Access Journals (Sweden)

    Milton Isaya Ndossi

    2016-12-01

    Full Text Available Land Surface Temperature (LST) is an important measurement in studies of the Earth surface's processes. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study compares LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA) and the Planck function. The study used National Oceanic and Atmospheric Administration (NOAA) data to model and compare the results from the three algorithms. The data from the sensor were processed with the Python programming language in a free and open source software package (QGIS) to enable users to make use of the algorithms. The study revealed that all three algorithms are suitable for LST inversion: the Planck function showed the highest accuracy, the SWA a moderate accuracy, and the SCA the least. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K and 2.88 K for the Planck function, the SCA and the SWA, respectively.
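
Of the three algorithms compared, the Planck-function inversion is the simplest to illustrate: given an atmospherically corrected spectral radiance, Planck's law can be inverted in closed form for temperature. A sketch with the standard radiation constants, in the units noted in the comments (the paper's full processing chain, including emissivity and atmospheric correction, is not reproduced here):

```python
import math

C1 = 1.191042e8   # 2*h*c^2 in W * um^4 * m^-2 * sr^-1
C2 = 1.4388e4     # h*c/k_B in um * K

def planck_inverse(radiance, wavelength_um):
    """Invert Planck's law: spectral radiance (W m^-2 sr^-1 um^-1) at a
    given wavelength (um) -> equivalent blackbody temperature (K)."""
    lam = wavelength_um
    return C2 / (lam * math.log(C1 / (lam**5 * radiance) + 1.0))
```

For a surface-leaving radiance measured by an ASTER TIR band near 11 µm, this returns the kinetic temperature of an ideal blackbody emitting that radiance.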

  19. Artificial immune algorithm implementation for optimized multi-axis sculptured surface CNC machining

    Science.gov (United States)

    Fountas, N. A.; Kechagias, J. D.; Vaxevanidis, N. M.

    2016-11-01

    This paper presents the results obtained by implementing an artificial immune algorithm to optimize standard multi-axis tool-paths applied to machining free-form surfaces. Applicability was investigated through a full factorial experimental design with the two additional axes for tool inclination as independent variables, whilst a multi-objective response was formulated from surface deviation and tool-path time, both assessed directly in the computer-aided manufacturing environment. A standard sculptured part was developed from scratch according to its benchmark specifications, and a cutting-edge surface machining tool-path was applied to study the effects of the pattern formed when dynamically inclining a toroidal end-mill and guiding it along the feed direction under fixed lead and tilt inclination angles. The results obtained from the series of experiments were used to create the fitness function that the algorithm sequentially evaluates. It was found that the artificial immune algorithm can attain optimal values for the inclination angles, easing the complexity of this manufacturing process and offering full potential in multi-axis machining modelling for producing enhanced CNC manufacturing programs. Results suggested that the proposed algorithm implementation may reduce the mean experimental objective value to 51.5%.

  20. Integration of genetic algorithm, computer simulation and design of experiments for forecasting electrical energy consumption

    International Nuclear Information System (INIS)

    Azadeh, A.; Tarverdian, S.

    2007-01-01

    This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on a genetic algorithm (GA), computer simulation and design of experiments using stochastic procedures. First, a time-series model is developed as a benchmark for the GA and simulation. Computer simulation is developed to generate random variables for monthly electricity consumption, in order to foresee the effects of probabilistic distributions on monthly electricity consumption. The GA and simulation-based GA models are then developed from the selected time-series model. There are therefore four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, GA and simulation-based GA. Furthermore, ANOVA is used to test the null hypothesis that the above four alternatives are equal. If the null hypothesis is accepted, then the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise, the Duncan Multiple Range Test (DMRT) method of paired comparison is used to select the optimum model, which could be time series, GA or simulation-based GA. In case of ties, the lowest MAPE value is used as the tie-breaker. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies consider the best-fit GA model based only on MAPE or relative error results. Second, the proposed algorithm may identify a conventional time series as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assume that GA always provides the best solutions and estimates. To show the applicability and superiority of the proposed algorithm, monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is applied to the proposed algorithm
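
The MAPE criterion used above for model selection is straightforward to state in code. A minimal sketch (the `select_best` helper is illustrative and omits the ANOVA/DMRT stages, so it corresponds only to the lowest-MAPE branch of the paper's procedure):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

def select_best(actual, candidates):
    """Pick the candidate model (name -> forecast series) with the
    lowest MAPE against the actual series."""
    return min(candidates, key=lambda name: mape(actual, candidates[name]))
```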

  1. An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2016-01-01

    Full Text Available The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Meanwhile, owing to its advantage in representing actual conditions, the triangular fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is added to strengthen the local search capability of the basic genetic algorithm process. In verification experiments, the EGA achieved satisfactory results in a very short period of time, demonstrating its powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).

  2. Energy Management through Heat Integration: a Simple Algorithmic Approach for Introducing Pinch Analysis

    Directory of Open Access Journals (Sweden)

    Nasser A. Al-Azri

    2015-12-01

    Full Text Available Pinch analysis is a methodology used for minimizing energy and material consumption in engineering processes. It features the identification of the pinch point and the minimum external resources. Two established approaches are commonly used to identify these features: the graphical approach and the algebraic method, both of which are time-consuming and susceptible to human and calculation errors when used for a large number of process streams. This paper presents an algorithmic procedure for heat integration based on the algebraic approach. The procedure is explained in a didactical manner to introduce pinch analysis to students and novice researchers in the field. Matlab code is presented, which is also intended as the basis for a Matlab toolbox for process integration.
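
The algebraic approach the paper builds on is commonly implemented as the problem-table (cascade) algorithm. A compact Python sketch under the usual conventions (each stream given as supply temperature, target temperature and heat-capacity flow rate CP; hot streams shifted down and cold streams up by ΔTmin/2), offered as an illustration rather than the paper's Matlab code:

```python
def problem_table(streams, dt_min=10.0):
    """Problem-table (cascade) algorithm. Each stream is (supply_T,
    target_T, CP); returns (minimum hot utility, minimum cold utility)."""
    shifted = []
    for ts, tt, cp in streams:
        shift = -dt_min / 2 if ts > tt else dt_min / 2   # hot down, cold up
        shifted.append((ts + shift, tt + shift, cp))
    temps = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(temps, temps[1:]):
        # Net heat surplus (+) or deficit (-) in the interval [lo, hi].
        net = sum(cp if ts > tt else -cp
                  for ts, tt, cp in shifted
                  if min(ts, tt) <= lo and max(ts, tt) >= hi)
        heat += net * (hi - lo)
        cascade.append(heat)
    qh_min = -min(min(cascade), 0.0)        # lift cascade to non-negative
    qc_min = cascade[-1] + qh_min
    return qh_min, qc_min
```

The most negative entry in the cascade fixes the minimum hot utility, and the shifted temperature where it occurs is the pinch.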

  3. An Adaptive Unscented Kalman Filtering Algorithm for MEMS/GPS Integrated Navigation Systems

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2014-01-01

    Full Text Available MEMS/GPS integrated navigation systems have been widely used for land-vehicle navigation. Such a system exhibits large errors because of its nonlinear model and uncertain noise statistics. Based on the principles of adaptive Kalman filtering (AKF) and unscented Kalman filtering (UKF), an adaptive unscented Kalman filtering (AUKF) algorithm is proposed. By using a noise statistic estimator, the uncertain noise characteristics can be estimated online to adaptively compensate for the time-varying noise. By employing the adaptive filtering principle in the UKF, the nonlinearity of the system can be accommodated. Simulations are conducted for a MEMS/GPS integrated navigation system. The results show that the estimation performance is improved by the AUKF approach compared with both the conventional AKF and the UKF.
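
One common form of such a noise statistic estimator adapts the measurement-noise covariance from a window of recent filter innovations. A hedged single-step sketch (the paper's estimator may differ; `window`, `H` and `P` here are assumed inputs, and hydrometeor-free linear-measurement notation is used for brevity):

```python
import numpy as np

def adapt_R(innovations, H, P, window=20):
    """Innovation-based measurement-noise estimate: R_hat = C_v - H P H^T,
    where C_v is the sample covariance of the last `window` innovations.
    A standard adaptive-KF noise statistic estimator, shown for the
    zero-mean innovation case."""
    v = np.asarray(innovations[-window:])
    C_v = v.T @ v / len(v)          # sample innovation covariance
    return C_v - H @ P @ H.T        # subtract the predicted part
```

In an AUKF the same idea is applied with the innovation covariance predicted from the sigma points rather than from H P Hᵀ.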

  4. Surfaces immersed in Lie algebras associated with elliptic integrals

    International Nuclear Information System (INIS)

    Grundland, A M; Post, S

    2012-01-01

    The objective of this work is to adapt the Fokas–Gel’fand immersion formula to ordinary differential equations written in the Lax representation. The formalism of generalized vector fields and their prolongation structure is employed to establish necessary and sufficient conditions for the existence and integration of immersion functions for surfaces in Lie algebras. As an example, a class of second-order, integrable, ordinary differential equations is considered and the most general solutions for the wavefunctions of the linear spectral problem are found. Several explicit examples of surfaces associated with Jacobian and P-Weierstrass elliptic functions are presented. (paper)

  5. An efficient algorithm for integrated task sequencing and path planning for robotic remote laser welding

    Science.gov (United States)

    Gorbenko, Anna; Popov, Vladimir

    2017-07-01

    Different planning problems for robotic remote laser welding are of considerable interest. In this paper, we consider the problem of integrated task sequencing and path planning for robotic remote laser welding. We propose an efficient approach to solve the problem. In particular, we consider an explicit reduction from the decision version of the problem to the satisfiability problem. We present the results of computational experiments for different satisfiability algorithms.

  6. Analysis of Hand-Held Phones Using the Finite Integration Algorithm

    Directory of Open Access Journals (Sweden)

    R. Dlouhy

    1996-12-01

    Full Text Available Two different hand-held phones operating at 1.8 GHz are numerically analyzed using the field calculation program MAFIA (MAxwell's equations using the Finite Integration Algorithm). One phone contains a Back-Mounted Microstrip Double Patch Antenna (BMMDPA); the other, for comparison, a conventional monopole. Realistic models of the handset, the head and the hand are used to gain a detailed understanding of the antenna properties, as well as of the antenna-tissue interaction.

  7. Integrable systems twistors, loop groups, and Riemann surfaces

    CERN Document Server

    Hitchin, NJ; Ward, RS

    2013-01-01

    This textbook is designed to give graduate students an understanding of integrable systems via the study of Riemann surfaces, loop groups, and twistors. The book has its origins in a series of lecture courses given by the authors, all of whom are internationally known mathematicians and renowned expositors. It is written in an accessible and informal style, and fills a gap in the existing literature. The introduction by Nigel Hitchin addresses the meaning of integrability: how do we recognize an integrable system? His own contribution then develops connections with algebraic geometry, and inclu

  8. Fast centroid algorithm for determining the surface plasmon resonance angle using the fixed-boundary method

    International Nuclear Information System (INIS)

    Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling

    2011-01-01

    To simplify the algorithm for determining the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method for determining the SPR angle, called the fixed-boundary centroid algorithm, is proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method is narrower, the other performance indices are all better than those of the other two centroid methods. The method offers outstanding performance: high speed, good conformity, low error, and a high SNR and resolution. It thus has the potential to be widely adopted
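
A fixed-boundary centroid, in the generic sense, weights each pixel inside a fixed window by how far the reflectance dips below a baseline; the published algorithm's details may differ, but the core computation can be sketched as:

```python
def fixed_boundary_centroid(intensity, left, right, baseline):
    """Centroid of an SPR reflectance dip over the fixed pixel window
    [left, right), weighting each pixel by its dip below `baseline`.
    Returns the sub-pixel dip position, or None if nothing dips."""
    num = den = 0.0
    for i in range(left, right):
        w = max(baseline - intensity[i], 0.0)   # dip depth at pixel i
        num += i * w
        den += w
    return num / den if den else None
```

Because the window boundaries are fixed rather than tracked, no per-frame dip search is needed, which is what makes the method fast.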

  9. A general rough-surface inversion algorithm: Theory and application to SAR data

    Science.gov (United States)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which is discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over bare soil and agricultural fields. Results are shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to the inversion of SAR data is that it can easily be modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to the inversion of rough surfaces and can be applied to any parameterized scattering process.
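
The Newton-type least-squares iteration at the heart of such an inversion can be sketched generically. This is a plain Gauss-Newton loop fitting a toy forward model, not the paper's stochastic formulation (which additionally incorporates priors on the parameters and data):

```python
import numpy as np

def gauss_newton(f, jac, x0, y, iters=20):
    """Plain Gauss-Newton: refine parameters x so the forward model f(x)
    matches the data y in the least-squares sense."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - f(x)                                      # residual vector
        J = jac(x)                                        # Jacobian of f at x
        x = x + np.linalg.lstsq(J, r, rcond=None)[0]      # solve J dx ~= r
    return x

# Toy forward model standing in for the SPM backscatter model:
# y = a * exp(b * t), parameters x = (a, b).
t = np.linspace(0.0, 1.0, 6)
f = lambda x: x[0] * np.exp(x[1] * t)
jac = lambda x: np.column_stack([np.exp(x[1] * t),
                                 x[0] * t * np.exp(x[1] * t)])
y = 2.0 * np.exp(0.3 * t)
x_hat = gauss_newton(f, jac, [1.0, 0.1], y)
```

Swapping in a different scattering model only changes `f` and `jac`, which is exactly the modularity the abstract highlights.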

  10. All-Weather Sounding of Moisture and Temperature From Microwave Sensors Using a Coupled Surface/Atmosphere Inversion Algorithm

    Science.gov (United States)

    Boukabara, S. A.; Garrett, K.

    2014-12-01

    A one-dimensional variational retrieval system has been developed, capable of producing temperature and water vapor profiles in clear, cloudy and precipitating conditions. The algorithm, known as the Microwave Integrated Retrieval System (MiRS), is currently running operationally at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), and is applied to a variety of data from the AMSU-A/MHS sensors on board the NOAA-18, NOAA-19, and MetOp-A/B polar satellite platforms, as well as SSMI/S on board both DMSP F-16 and F-18, and the NPP ATMS sensor. MiRS inverts microwave brightness temperatures into atmospheric temperature and water vapor profiles, along with hydrometeors and surface parameters, simultaneously. This coupled atmosphere/surface inversion allows for more accurate retrievals in the lower tropospheric layers by accounting for the impact of surface emissivity on the measurements. It also allows the inversion of soundings in all-weather conditions thanks to the incorporation of the hydrometeor parameters in the inverted state vector, as well as the inclusion of the emissivity in the same state vector, which is accounted for dynamically under the highly variable surface conditions found beneath precipitating atmospheres. The inversion is constrained in precipitating conditions by the inclusion of covariances for hydrometeors, to take advantage of the natural correlations that exist between temperature and water vapor and liquid cloud, ice cloud and rain water. In this study, we present a full assessment of temperature and water vapor retrieval performance in all-weather conditions and over all surface types (ocean, sea ice, land, and snow) using matchups with radiosondes as well as Numerical Weather Prediction and other satellite retrieval algorithms as references. An emphasis is placed on retrievals in cloudy and precipitating atmospheres, including extreme weather events

  11. SARDA: An Integrated Concept for Airport Surface Operations Management

    Science.gov (United States)

    Gupta, Gautam; Hoang, Ty; Jung, Yoon Chul

    2013-01-01

    The Spot and Runway Departure Advisor (SARDA) is an integrated decision support tool for airlines and the air traffic control tower, enabling surface collaborative decision making (CDM) and departure metering in order to enhance the efficiency of surface operations at congested airports. The presentation describes the concept and architecture of SARDA as a CDM tool, and the results from a human-in-the-loop simulation of the tool conducted in 2012 at FutureFlight Central, the tower simulation facility. Also presented are the current activities and future plans for SARDA development. The presentation was given at a meeting with the FAA senior advisor of the Surface Operations Office.

  12. Integrated Optical Components Utilizing Long-Range Surface Plasmon Polaritons

    DEFF Research Database (Denmark)

    Boltasseva, Alexandra; Nikolajsen, Thomas; Leosson, Kristjan

    2005-01-01

    New optical waveguide technology for integrated optics, based on propagation of long-range surface plasmon polaritons (LR-SPPs) along metal stripes embedded in dielectric, is presented. Guiding and routing of electromagnetic radiation along nanometer-thin and micrometer-wide gold stripes embedded...

  13. Process optimization of rolling for zincked sheet technology using response surface methodology and genetic algorithm

    Science.gov (United States)

    Ji, Liang-Bo; Chen, Fang

    2017-07-01

    Numerical simulation and intelligent optimization techniques were adopted for the rolling and extrusion of zincked sheet. Using response surface methodology (RSM), a genetic algorithm (GA) and data processing technology, an efficient optimization of the process parameters for rolling zincked sheet was investigated. The influence of roller gap, rolling speed and friction factor on the reduction rate and plate shortening rate was analyzed first. A predictive response surface model for the comprehensive quality index of the part was then created using RSM, and simulated and predicted values were compared. Through the genetic algorithm, the optimal process parameters for rolling were solved and then verified, yielding the optimum rolling process parameters. The approach is feasible and effective.

  14. Real-time intelligent pattern recognition algorithm for surface EMG signals

    Directory of Open Access Journals (Sweden)

    Jahed Mehran

    2007-12-01

    Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of the electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (surface electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural-based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. Using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean squares (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of EMG pattern recognition. We propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). In order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. Features chosen for the EMG signal

  15. Review of singular potential integrals for method of moments solutions of surface integral equations

    Directory of Open Access Journals (Sweden)

    A. Tzoulis

    2004-01-01

    Full Text Available Accurate evaluation of singular potential integrals is essential for successful method of moments (MoM) solutions of surface integral equations. In mixed potential formulations for metallic and dielectric scatterers, kernels with 1/R and ∇(1/R) singularities must be considered. Several techniques for the treatment of these singularities are reviewed. The most common approach solves the MoM source integrals analytically for specific observation points, thus regularizing the integral. However, in the case of ∇(1/R) a logarithmic singularity remains, for which numerical evaluation of the testing integral is still difficult. A remedy to this issue recently proposed by Ylä-Oijala and Taskinen is discussed and evaluated within a hybrid finite element-boundary integral technique. Convergence results for the MoM coupling integrals are presented, where higher-order singularity extraction is also considered.

  16. Analysis of decision fusion algorithms in handling uncertainties for integrated health monitoring systems

    Science.gov (United States)

    Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza

    2012-06-01

    It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. It is arguably true, nonetheless, that decision-level fusion is equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. Thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step for successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods; their theoretical basis and characteristics are explained and their performances analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications are extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods and their performance is tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, i.e. the cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide for a fair performance comparison of the selected decision-fusion algorithms. For verification purposes

  17. CPU, GPU and FPGA Implementations of MALD: Ceramic Tile Surface Defects Detection Algorithm

    OpenAIRE

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko

    2014-01-01

    This paper addresses adjustments, implementation and performance comparison of the Moving Average with Local Difference (MALD) method for ceramic tile surface defects detection. The ceramic tile production process is completely autonomous, except for the final stage, where the human eye is required for defect detection. Recent computational platform developments and advances in machine vision provide several options for MALD algorithm implementation. In order to exploit the shortest execution tim...

  18. Nonlinear Filtering with IMM Algorithm for Ultra-Tight GPS/INS Integration

    Directory of Open Access Journals (Sweden)

    Dah-Jing Jwo

    2013-05-01

    Full Text Available Abstract This paper conducts a performance evaluation of the ultra-tight integration of a Global Positioning System (GPS) and an inertial navigation system (INS), using nonlinear filtering approaches with an interacting multiple model (IMM) algorithm. An ultra-tight GPS/INS architecture involves the integration of in-phase and quadrature components from the correlator of a GPS receiver with INS data. An unscented Kalman filter (UKF), which employs a set of sigma points chosen by deterministic sampling, avoids the error caused by linearization in an extended Kalman filter (EKF). Based on the structural adaptation of the filter for describing various dynamic behaviours, IMM nonlinear filtering provides an alternative for designing the adaptive filter in the ultra-tight GPS/INS integration. The use of IMM enables tuning of an appropriate value for the process noise covariance so as to maintain good estimation accuracy and tracking capability. Two examples are provided to illustrate the effectiveness of the design and demonstrate effective improvement in navigation estimation accuracy. A performance comparison among various filtering methods for ultra-tight integration of GPS and INS is also presented. The IMM-based nonlinear filtering approach demonstrates the effectiveness of the algorithm for improved positioning performance.
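
The interaction (mixing) step that gives the IMM algorithm its adaptivity can be sketched compactly: given the current model probabilities and the Markov transition matrix between models, compute the predicted model probabilities and the weights used to blend the individual filters' estimates. A minimal sketch with assumed notation (the per-model UKF updates themselves are not shown):

```python
import numpy as np

def imm_mix(mu, Pi):
    """IMM interaction step.

    mu : (n,) current model probabilities.
    Pi : (n, n) Markov transition matrix, Pi[i, j] = P(model j | model i).
    Returns the predicted model probabilities c and the mixing weights
    w[i, j] = P(model i | model j), whose columns sum to one."""
    c = Pi.T @ mu                          # predicted model probabilities
    w = (Pi * mu[:, None]) / c[None, :]    # normalized mixing weights
    return c, w
```

Each filter j then starts its next cycle from the weighted combination of all filters' states using column j of `w`, which is how the bank adapts between dynamic behaviours.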

  19. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    Science.gov (United States)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability motivates strategies that mitigate the stability problem by making larger (at least ∼10 kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control, respectively, the vertical and the horizontal slope-dependent penalization terms, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and we also conclude that, for longer time steps, the optimal

  20. Surface integrity analysis when milling ultrafine-grained steels

    Directory of Open Access Journals (Sweden)

    Alessandro Roger Rodrigues

    2012-02-01

Full Text Available This paper quantifies the effects of milling conditions on the surface integrity of ultrafine-grained steels. Cutting speed, feed rate and depth of cut were related to the microhardness and microstructure of the workpiece beneath the machined surface. Low-carbon alloyed steel with 10.8 µm (as-received) and 1.7 µm (ultrafine) grain sizes was end milled using down-milling under dry conditions in a CNC machining center. The results show that the ultrafine-grained workpiece preserves its surface integrity against the cutting parameters better than the as-received material. Cutting speed increases the microhardness, while depth of cut deepens the hardened layer of the as-received material. Also, deformations of the microstructure following the feed rate direction were observed in the workpiece subsurface.

  1. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
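A minimal sketch of the Lavrentiev idea (our own toy discretization, not the authors' multiscale method): a first-kind equation with a smoothing Gaussian kernel, which is severely ill-posed, is shifted to the well-posed second-kind form (αI + A)x = b and solved directly.

```python
import math

def gauss_solve(A, b):
    # dense Gaussian elimination with partial pivoting (on copies)
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# first-kind blurring operator (A x)(s) = integral of k(s,t) x(t) dt on [0, 1]
n, w = 40, 0.1
h = 1.0 / n
s = [(i + 0.5) * h for i in range(n)]
A = [[math.exp(-(si - tj) ** 2 / (2 * w * w)) * h for tj in s] for si in s]

x_true = [math.sin(math.pi * t) for t in s]                          # smooth "image"
b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(n)]   # blurred data

# Lavrentiev regularization: replace A x = b by the second-kind system
alpha = 1e-4
Areg = [[A[i][j] + (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]
x_reg = gauss_solve(Areg, b)
```

Because the shifted operator is bounded away from zero, the solve is stable, and for a smooth true solution the regularization bias stays small.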

  2. An Efficient Surface Algorithm for Random-Particle Simulation of Vorticity and Heat Transport

    Science.gov (United States)

    Smith, P. A.; Stansby, P. K.

    1989-04-01

    A new surface algorithm has been incorporated into the random-vortex method for the simulation of 2-dimensional laminar flow, in which vortex particles are deleted rather than reflected as they cross a solid surface. This involves a modification to the strength and random walk of newly created vortex particles. Computations of the early stages of symmetric, impulsively started flow around a circular cylinder for a wide range of Reynolds numbers demonstrate that the number of vortices required for convergence is substantially reduced. The method has been further extended to accommodate forced convective heat transfer where temperature particles are created at a surface to satisfy the condition of constant surface temperature. Vortex and temperature particles are handled together throughout each time step. For long runs, in which a steady state is reached, comparison is made with some time-averaged experimental heat transfer data for Reynolds numbers up to a few hundred. A Karman vortex street occurs at the higher Reynolds numbers.

  3. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Full Text Available Abstract Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  4. Parareal algorithms with local time-integrators for time fractional differential equations

    Science.gov (United States)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse-grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
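The classical parareal skeleton the paper builds on can be sketched for the scalar test equation y' = -y (an illustrative reduction; the paper's contribution, the treatment of the fractional operator and its auxiliary variables, is not reproduced here).

```python
import math

def fine(y, t0, t1, m=100):
    # fine propagator: m steps of classical RK4 for y' = -y
    dt = (t1 - t0) / m
    for _ in range(m):
        k1 = -y
        k2 = -(y + 0.5 * dt * k1)
        k3 = -(y + 0.5 * dt * k2)
        k4 = -(y + dt * k3)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

def coarse(y, t0, t1):
    # coarse propagator: a single forward Euler step
    return y * (1.0 - (t1 - t0))

N, T = 10, 1.0
ts = [T * n / N for n in range(N + 1)]

# initial coarse sweep
U = [1.0]
for n in range(N):
    U.append(coarse(U[n], ts[n], ts[n + 1]))

# parareal corrections: U_{n+1} <- G(U_n^new) + F(U_n^old) - G(U_n^old)
for _ in range(N):
    Fvals = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]  # parallel in principle
    Unew = [1.0]
    for n in range(N):
        Unew.append(coarse(Unew[n], ts[n], ts[n + 1])
                    + Fvals[n] - coarse(U[n], ts[n], ts[n + 1]))
    U = Unew

# sequential fine reference solution
y_fine = 1.0
for n in range(N):
    y_fine = fine(y_fine, ts[n], ts[n + 1])
```

After N corrections the parareal iterate reproduces the sequential fine solution exactly; the payoff in practice is that the fine propagations within one iteration run concurrently.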

  5. A parallel algorithm for solving the integral form of the discrete ordinates equations

    International Nuclear Information System (INIS)

    Zerr, R. J.; Azmy, Y. Y.

    2009-01-01

The integral form of the discrete ordinates equations involves a system of equations that has a large, dense coefficient matrix. The serial construction methodology is presented and properties that affect the execution times to construct and solve the system are evaluated. Two approaches for massively parallel implementation of the solution algorithm are proposed and the current results of one of these are presented. The system of equations may be solved using two parallel solvers: block Jacobi and conjugate gradient. Results indicate that both methods can reduce the overall wall-clock execution time. The conjugate gradient solver exhibits better performance and can compete with the traditional source iteration technique in terms of execution time and scalability. The parallel conjugate gradient method is synchronous, hence it does not increase the number of iterations required for convergence compared to serial execution, and the efficiency of the algorithm shows an apparent asymptotic decline. (authors)

  6. Processing Sliding Mosaic Mode Data with Modified Full-Aperture Imaging Algorithm Integrating Scalloping Correction

    Directory of Open Access Journals (Sweden)

    Zhao Tuan

    2016-10-01

Full Text Available In this study, we present a modified full-aperture imaging algorithm that includes scalloping correction and spike suppression for sliding-mosaic-mode Synthetic Aperture Radar (SAR). A novel feature is the correction, during the deramping preprocessing operation, of the azimuth beam-pattern weighting altered by radar antenna rotation in azimuth. The main idea of spike suppression is to substitute the zeros between bursts with linearly predicted data extrapolated from adjacent bursts, suppressing the spikes caused by multiburst processing. We also integrate scalloping correction for the sliding mode into this algorithm. Finally, experiments are performed using a C-band airborne SAR system with a maximum bandwidth of 200 MHz to validate the effectiveness of this approach.

  7. Antibacterial and bioactive nanostructured titanium surfaces for bone integration

    Science.gov (United States)

    Ferraris, S.; Venturello, A.; Miola, M.; Cochis, A.; Rimondini, L.; Spriano, S.

    2014-08-01

An effective and physiological bone integration and the absence of bacterial infection are essential for a successful orthopaedic or dental implant. A titanium surface able to actively promote bone bonding and avoid microbial colonization represents an extremely interesting challenge for these purposes. An innovative and patented surface treatment focused on these issues is described in the present paper. It is based on acid etching and subsequent controlled oxidation in hydrogen peroxide, enriched with silver ions. It has been applied to commercially pure titanium (Ti-cp) and the alloy Ti6Al4V. The chemistry and morphology of the surfaces are modified by the treatment on a nanoscale: they show a thin oxide layer with nanoscale porosity and silver particles (a few nanometers in diameter) embedded in it. These features are effective in obtaining antibacterial and bioactive titanium surfaces.

  8. Energetics of oscillating lifting surfaces using integral conservation laws

    Science.gov (United States)

    Ahmadi, Ali R.; Widnall, Sheila E.

    1987-01-01

    The energetics of oscillating flexible lifting surfaces in two and three dimensions is calculated by the use of integral conservation laws in inviscid incompressible flow for general and harmonic transverse oscillations. Total thrust is calculated from the momentum theorem and energy loss rate due to vortex shedding in the wake from the principle of conservation of mechanical energy. Total power required to maintain the oscillations and hydrodynamic efficiency are also determined. In two dimensions, the results are obtained in closed form. In three dimensions, the distribution of vorticity on the lifting surface is also required as input to the calculations. Thus, unsteady lifting-surface theory must be used as well. The analysis is applicable to oscillating lifting surfaces of arbitrary planform, aspect ratio, and reduced frequency and does not require calculation of the leading-edge thrust.

  9. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-01-01

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431

  10. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Namju Jeon

    2016-12-01

    Full Text Available An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.

  11. Integrated Fault Diagnosis Algorithm for Motor Sensors of In-Wheel Independent Drive Electric Vehicles.

    Science.gov (United States)

    Jeon, Namju; Lee, Hyeongcheol

    2016-12-12

    An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed.
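The residual-thresholding idea behind the parity-equation diagnosis can be reduced to a sketch like the following (hypothetical signals and thresholds; the paper derives its residuals from the motor state equations and vehicle dynamics).

```python
def detect_fault(measured, predicted, threshold=0.5, persist=3):
    # flag a sensor fault when the residual |measured - predicted| exceeds
    # the threshold for `persist` consecutive samples (debounces noise spikes)
    count = 0
    flags = []
    for m, p in zip(measured, predicted):
        count = count + 1 if abs(m - p) > threshold else 0
        flags.append(count >= persist)
    return flags

# synthetic current-sensor trace: healthy noise, then a bias fault at sample 5
predicted = [0.0] * 20
measured = [0.02, -0.03, 0.01, 0.0, -0.02] + [0.9] * 15
flags = detect_fault(measured, predicted, threshold=0.5, persist=3)
```

The persistence counter is what turns a raw residual into a usable fault-detection flag: isolated spikes are ignored, and the flag latches only once the residual has stayed large.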

  12. A Robust Inversion Algorithm for Surface Leaf and Soil Temperatures Using the Vegetation Clumping Index

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2017-07-01

Full Text Available The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduced a simple inversion procedure that uses gap frequency with a clumping index (GCI) for leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicated that the GCI algorithm performed well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. In all evaluations, we recommend the GCI algorithm as a foundation for future satellite-based applications due to its straightforward form and robust performance for both crop and forest canopies.
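The inversion can be sketched as a two-unknown least-squares problem (an illustrative simplification; the gap-frequency form and all parameter values below are assumptions, not the paper's exact formulation): each view angle mixes soil and leaf emission in proportion to the directional gap fraction, which depends on the clumping index.

```python
import math

def gap_fraction(theta_deg, lai, omega):
    # directional gap frequency with clumping index omega
    # (assumes a spherical leaf angle distribution, G = 0.5)
    theta = math.radians(theta_deg)
    return math.exp(-0.5 * omega * lai / math.cos(theta))

def invert_component_temps(angles, t_obs, lai, omega):
    # least-squares solve of T_obs^4 = gap*Ts^4 + (1-gap)*Tl^4 over view angles
    Saa = Sab = Sbb = Say = Sby = 0.0
    for th, t in zip(angles, t_obs):
        a = gap_fraction(th, lai, omega)
        b = 1.0 - a
        y = t ** 4
        Saa += a * a; Sab += a * b; Sbb += b * b
        Say += a * y; Sby += b * y
    det = Saa * Sbb - Sab * Sab          # 2x2 normal equations
    ts4 = (Sbb * Say - Sab * Sby) / det
    tl4 = (Saa * Sby - Sab * Say) / det
    return ts4 ** 0.25, tl4 ** 0.25

# synthetic forward run: soil at 320 K, leaves at 300 K
angles = [0, 15, 30, 45, 60]
Ts, Tl, lai, omega = 320.0, 300.0, 2.0, 0.8
t_obs = [(gap_fraction(th, lai, omega) * Ts ** 4
          + (1 - gap_fraction(th, lai, omega)) * Tl ** 4) ** 0.25 for th in angles]
ts, tl = invert_component_temps(angles, t_obs, lai, omega)
```

The inversion is well-posed only because the gap fraction varies with view angle; at a single angle the two component temperatures are not separable.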

  13. Using subdivision surfaces and adaptive surface simplification algorithms for modeling chemical heterogeneities in geophysical flows

    Science.gov (United States)

Schmalzl, Jörg; Loddoch, Alexander

    2003-09-01

    We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the length of triangle edge and the sum of the angles of the triangle formed with neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady state flow in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time we propose the use of a subdivision scheme with diffusive properties in order to remove the small scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance we also present a calculation with a strongly temperature-dependent viscosity.
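The quadric error metric used for the surface simplification step can be sketched as follows (the standard Garland-Heckbert construction; the paper's edge-length and angle-based refinement weight is not reproduced here).

```python
def plane_quadric(a, b, c, d):
    # fundamental quadric K = q q^T for the plane ax + by + cz + d = 0
    # (coefficients assumed normalized so that (a, b, c) is a unit normal)
    q = (a, b, c, d)
    return [[q[i] * q[j] for j in range(4)] for i in range(4)]

def add_quadrics(Q1, Q2):
    # a vertex quadric is the sum of the quadrics of its adjacent triangles
    return [[Q1[i][j] + Q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(Q, x, y, z):
    # p^T Q p: sum of squared distances to the planes accumulated in Q
    p = (x, y, z, 1.0)
    return sum(p[i] * Q[i][j] * p[j] for i in range(4) for j in range(4))

# vertex at the origin shared by the planes z = 0 and x = 0
Q = add_quadrics(plane_quadric(0, 0, 1, 0), plane_quadric(1, 0, 0, 0))
print(quadric_error(Q, 0, 0, 0))  # zero: the point lies on both planes
print(quadric_error(Q, 1, 0, 2))  # distance^2 sum: 1 + 4 = 5
```

During simplification, the edge whose contraction minimizes this accumulated error is collapsed first, which removes excess triangles while preserving the shape of the tracked front.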

  14. MAPPING OF PLANETARY SURFACE AGE BASED ON CRATER STATISTICS OBTAINED BY AN AUTOMATIC DETECTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. L. Salih

    2016-06-01

Full Text Available The analysis of the impact crater size-frequency distribution (CSFD) is a well-established approach to the determination of the age of planetary surfaces. Classically, estimation of the CSFD is achieved by manual crater counting and size determination in spacecraft images, which, however, becomes very time-consuming for large surface areas and/or high image resolution. With increasing availability of high-resolution (nearly global) image mosaics of planetary surfaces, a variety of automated methods for the detection of craters based on image data and/or topographic data have been developed. In this contribution a template-based crater detection algorithm is used which analyses image data acquired under known illumination conditions. Its results are used to establish the CSFD for the examined area, which is then used to estimate the absolute model age of the surface. The detection threshold of the automatic crater detection algorithm is calibrated based on a region with available manually determined CSFD such that the age inferred from the manual crater counts corresponds to the age inferred from the automatic crater detection results. With this detection threshold, the automatic crater detection algorithm can be applied to a much larger surface region around the calibration area. The proposed age estimation method is demonstrated for a Kaguya Terrain Camera image mosaic of 7.4 m per pixel resolution of the floor region of the lunar crater Tsiolkovsky, which consists of dark and flat mare basalt and has an area of nearly 10,000 km2. The region used for calibration, for which manual crater counts are available, has an area of 100 km2. In order to obtain a spatially resolved age map, CSFDs and surface ages are computed for overlapping quadratic regions of about 4.4 x 4.4 km2 size offset by a step width of 74 m.
Our constructed surface age map of the floor of Tsiolkovsky shows age values of typically 3.2-3.3 Ga, while for small regions lower (down to
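The chain from crater counts to an absolute model age can be sketched with the standard lunar chronology function (Neukum's 1983 form, with calibration constants from the literature, not from this paper): count craters above a reference diameter, normalize by area, then invert the chronology function for the age.

```python
import math

def cumulative_density(diameters_km, area_km2, d_min=1.0):
    # N(>= d_min): crater count above the reference diameter per unit area
    return sum(1 for d in diameters_km if d >= d_min) / area_km2

def neukum_n1(t_ga):
    # Neukum (1983) lunar chronology function: cumulative density of craters
    # with D >= 1 km (per km^2) as a function of surface age in Ga
    return 5.44e-14 * (math.exp(6.93 * t_ga) - 1.0) + 8.38e-4 * t_ga

def age_from_n1(n1, lo=0.0, hi=4.5, iters=200):
    # invert the (monotonically increasing) chronology function by bisection
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if neukum_n1(mid) < n1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The exponential term makes the function extremely steep beyond ~3.5 Ga, which is why ages in that range (like the 3.2-3.3 Ga values mapped here) require accurate crater densities.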

  15. A Simple and Universal Aerosol Retrieval Algorithm for Landsat Series Images Over Complex Surfaces

    Science.gov (United States)

    Wei, Jing; Huang, Bo; Sun, Lin; Zhang, Zhaoyang; Wang, Lunche; Bilal, Muhammad

    2017-12-01

Operational aerosol optical depth (AOD) products are available at coarse spatial resolutions from several to tens of kilometers. These resolutions limit the application of these products for monitoring atmospheric pollutants at the city level. Therefore, a simple, universal, and high-resolution (30 m) Landsat aerosol retrieval algorithm over complex urban surfaces is developed. The surface reflectance is estimated from a combination of top of atmosphere reflectance at short-wave infrared (2.22 μm) and Landsat 4-7 surface reflectance climate data records over densely vegetated areas and bright areas. The aerosol type is determined using the historical aerosol optical properties derived from the local urban Aerosol Robotic Network (AERONET) site (Beijing). AERONET ground-based sun photometer AOD measurements from five sites located in urban and rural areas are obtained to validate the AOD retrievals. Terra Moderate Resolution Imaging Spectroradiometer (MODIS) Collection (C) 6 AOD products (MOD04) including the dark target (DT), the deep blue (DB), and the combined DT and DB (DT&DB) retrievals at 10 km spatial resolution are obtained for comparison purposes. Validation results show that the Landsat AOD retrievals at a 30 m resolution are well correlated with the AERONET AOD measurements (R2 = 0.932) and that approximately 77.46% of the retrievals fall within the expected error with a low mean absolute error of 0.090 and a root-mean-square error of 0.126. Comparison results show that Landsat AOD retrievals are overall better and less biased than MOD04 AOD products, indicating that the new algorithm is robust and performs well in AOD retrieval over complex surfaces. The new algorithm can provide continuous and detailed spatial distributions of AOD during both low and high aerosol loadings.
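The validation statistics quoted above (fraction within the expected error, mean absolute error, RMSE) can be computed as follows, assuming the MODIS-style over-land expected-error envelope of ±(0.05 + 0.15·AOD) around the sun-photometer value.

```python
def aod_validation(retrieved, reference):
    # fraction of retrievals within +/-(0.05 + 0.15*AOD) of the reference,
    # plus mean absolute error and root-mean-square error
    n = len(reference)
    within = sum(1 for r, t in zip(retrieved, reference)
                 if abs(r - t) <= 0.05 + 0.15 * t)
    mae = sum(abs(r - t) for r, t in zip(retrieved, reference)) / n
    rmse = (sum((r - t) ** 2 for r, t in zip(retrieved, reference)) / n) ** 0.5
    return within / n, mae, rmse

# toy example: three satellite retrievals against AERONET values
frac, mae, rmse = aod_validation([0.1, 0.5, 0.4], [0.12, 0.3, 0.41])
```

The envelope widens with aerosol loading, so a fixed absolute-error criterion would unfairly penalize hazy scenes; the "77.46% within expected error" figure above is this `frac` statistic.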

  16. An algorithm for detecting Trichodesmium surface blooms in the South Western Tropical Pacific

    Directory of Open Access Journals (Sweden)

    Y. Dandonneau

    2011-12-01

Full Text Available Trichodesmium, a major colonial cyanobacterial nitrogen fixer, forms large blooms in NO3-depleted tropical oceans and enhances CO2 sequestration by the ocean due to its ability to fix dissolved dinitrogen. Thus, its importance in C and N cycles requires better estimates of its distribution at basin to global scales. However, existing algorithms to detect them from satellite have not yet been successful in the South Western Tropical Pacific (SP). Here, a novel algorithm (TRICHOSAT, for TRICHOdesmium SATellite), based on radiance anomaly spectra (RAS) observed in SeaWiFS imagery, is used to detect Trichodesmium during the austral summertime in the SP (5° S–25° S, 160° E–170° W). Selected pixels are characterized by a restricted range of parameters quantifying RAS spectra (e.g. slope, intercept, curvature). The fraction of valid (non-cloudy) pixels identified as Trichodesmium surface blooms in the region is low (between 0.01 and 0.2 %), but is about 100 times higher than deduced from previous algorithms. At daily scales in the SP, this fraction represents a total ocean surface area varying from 16 to 48 km2 in Winter and from 200 to 1000 km2 in Summer (and at monthly scale, from 500 to 1000 km2 in Winter and from 3100 to 10 890 km2 in Summer, with a maximum of 26 432 km2 in January 1999). The daily distribution of Trichodesmium surface accumulations in the SP detected by TRICHOSAT is presented for the period 1998–2010, which demonstrates that the number of selected pixels peaks in November–February each year, consistent with field observations. This approach was validated with in situ observations of Trichodesmium surface accumulations in the Melanesian archipelago around New Caledonia, Vanuatu and the Fiji Islands for the same period.

  17. Certain integrable system on a space associated with a quantum search algorithm

    International Nuclear Information System (INIS)

    Uwano, Y.; Hino, H.; Ishiwatari, Y.

    2007-01-01

In devising a Grover-type quantum search algorithm for an ordered tuple of multiqubit states, a gradient system associated with the negative von Neumann entropy is studied on the space of regular relative configurations of multiqubit states (SR2CMQ). The SR2CMQ emerges, through a geometric procedure, from the space of ordered tuples of multiqubit states for the quantum search. The aim of this paper is to give a brief report on the integrability of the gradient dynamical system, together with the quantum information geometry of the underlying space, SR2CMQ, of that system

  18. A Low-disturbance Diagnostic Function Integrated in the PV Arrays’ MPPT Algorithm

    DEFF Research Database (Denmark)

    Sera, Dezso; Mathe, Laszlo; Kerekes, Tamas

    2011-01-01

This paper focuses on the estimation of series resistance changes in flat silicon PV panels or arrays during operation, without moving the operating point far away from the maximum power point. The method is based on the measurement of the slope of the IV curve at a current level that differs from the short-circuit current by a fixed value, regardless of irradiance conditions. The method causes less power loss and power ripple due to measurements, and it is easily integrated in a hill-climbing MPPT algorithm.
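A hill-climbing (perturb-and-observe) MPPT loop of the kind the diagnostic integrates with can be sketched as follows (a toy PV power curve, not the paper's panel model): perturb the operating voltage, observe the power change, and keep stepping in whichever direction increases power.

```python
def pv_power(v):
    # toy PV curve (illustrative, not a physical diode model):
    # current stays near Isc and falls off sharply approaching Voc
    isc, voc = 8.0, 40.0
    if v < 0 or v > voc:
        return 0.0
    i = isc * (1.0 - (v / voc) ** 12)
    return v * i

def perturb_and_observe(v0=20.0, step=0.2, iters=500):
    # hill-climbing MPPT: reverse direction whenever power drops
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction  # overshot the peak: reverse
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
```

At steady state the operating point oscillates within a step or two of the maximum power point, which is why diagnostic measurements that stay near (rather than far from) the MPP cost so little power.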

  19. Inevitable surface dependence of some operator products and integrability

    International Nuclear Information System (INIS)

    Shigemoto, Kazuyasu; Tanaka, Azuma; Taguchi, Yukio; Yamamoto, Kunio.

    1976-01-01

In general, even in a local theory, operator products at the same space-time point must be considered as a limit of non-local products. It is natural to confine the non-locality to a space-like surface. In this case some operator products with three or more constituents possess an inevitable and purely quantum-mechanical surface dependence. Taking the pion-nucleon system as an example, we explicitly calculate, to order g2, this kind of surface dependence of the interaction Hamiltonian. In order to obtain a consistent theory, this surface is required to be identified with the space-like surface in the Tomonaga-Schwinger equation. Then the interaction Hamiltonian needs an additional, non-canonical and surface-dependent term, which can be derived uniquely from the canonical Hamiltonian. The integrability of the Tomonaga-Schwinger equation is proved by taking account of this surface dependence together with the gradient term in the equal-time commutator. (auth.)

  20. Application of genetic algorithms to integrated optimization of safety system availability

    International Nuclear Information System (INIS)

    Damaso, Vinicius Correa; Pereira, Claudio M.N.A.; Melo, Paulo F.F. Frutuoso e

    2005-01-01

Experience gained in systems design, operation, and maintenance, together with the increasing degree of complexity and the growth of computational processing capacity, makes it possible to develop integrated optimization techniques that take into account the interaction of all phases involved in system operation. However, such a broad approach, which describes the involved factors in an integrated way, from the conception of the structure to maintenance policies, makes the problem more complex. Generally, the original problem is cast into a simpler one by imposing some degree of linearity, which allows a more conventional treatment. The effect of this linear approach is a sensible increase in the number of variables and constraints to be treated. Another shortcoming is that, as long as the description of some parameters is allowed to undergo modifications, the resulting model becomes less realistic. This paper presents an integrated optimization method for system performance based on genetic algorithms. The aim is to maximize the benefits from operating a simplified system, where different features, like the structure itself, its design, reliability, redundancy allocation, test and maintenance action scheduling, and costs, are simultaneously taken into account in an integrated way. The availability model treats the original problem without performing any transformation. The set of solutions generated allows decision-making support, for which budgetary and safety constraints must be considered. The results show that the integrated approach used for optimizing the system is much more convenient and should be applied to more complex systems. (author)
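A stripped-down version of the genetic-algorithm approach might look like the following (hypothetical availability and cost figures; the paper's model also covers structure, design, and test and maintenance scheduling): redundancy levels for a series system are evolved to maximize availability under a budget constraint, with infeasible designs penalized to zero fitness.

```python
import random

def availability(redundancy, avail):
    # series system of parallel blocks: product over subsystems of 1-(1-a)^n
    prod = 1.0
    for n, a in zip(redundancy, avail):
        prod *= 1.0 - (1.0 - a) ** n
    return prod

def cost(redundancy, unit_cost):
    return sum(n * c for n, c in zip(redundancy, unit_cost))

def fitness(ind, avail, unit_cost, budget):
    # budget constraint handled by penalizing infeasible designs to zero
    return availability(ind, avail) if cost(ind, unit_cost) <= budget else 0.0

def ga_optimize(avail, unit_cost, budget, pop_size=24, gens=40, seed=1):
    rng = random.Random(seed)
    k = len(avail)
    pop = [[rng.randint(1, 4) for _ in range(k)] for _ in range(pop_size)]
    fit = lambda ind: fitness(ind, avail, unit_cost, budget)
    for _ in range(gens):
        pop.sort(key=fit, reverse=True)
        nxt = pop[:2]                          # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:12], 2)   # truncation selection
            cut = rng.randint(1, k - 1)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < 0.2:             # single-gene mutation
                child[rng.randrange(k)] = rng.randint(1, 4)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fit)

avail_data, cost_data, budget = [0.9, 0.95, 0.85], [2, 3, 1], 12
best_design = ga_optimize(avail_data, cost_data, budget)
```

Treating the constraint as a fitness penalty, rather than linearizing the model, is what lets the availability model be used without the transformations criticized above.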

  1. Investigation of ALEGRA shock hydrocode algorithms using an exact free surface jet flow solution.

    Energy Technology Data Exchange (ETDEWEB)

Hanks, Bradley Wright; Robinson, Allen C.

    2014-01-01

    Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.

  2. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series

    International Nuclear Information System (INIS)

    Quirk, Thomas J. IV

    2004-01-01

The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
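For contrast with the bound-electron corrections discussed above, the free-electron Klein-Nishina baseline can be sampled by simple rejection (an illustrative sampler, not the ITS routine or the Brusa et al. parameterization): the differential cross section in the cosine of the scattering angle is bounded by its forward-direction value, so uniform candidates can be thinned against it.

```python
import random

def sample_compton_cos_theta(alpha, rng):
    # rejection sampling of the free-electron Klein-Nishina angular
    # distribution; alpha = photon energy / electron rest energy (E / 511 keV)
    while True:
        c = rng.uniform(-1.0, 1.0)                 # candidate cos(theta)
        eps = 1.0 / (1.0 + alpha * (1.0 - c))      # E'/E for this angle
        # unnormalized KN shape: eps^2 * (eps + 1/eps - sin^2(theta))
        f = eps * eps * (eps + 1.0 / eps - (1.0 - c * c))
        if rng.uniform(0.0, 2.0) <= f:             # f <= 2, with equality at c = 1
            return c, eps

rng = random.Random(42)
samples = [sample_compton_cos_theta(2.0, rng) for _ in range(5000)]
```

The energy ratio E'/E follows from the sampled angle by two-body kinematics; the impulse-approximation treatment then broadens this sharp angle-energy relation with the bound-electron momentum distribution.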

  3. Novel meta-surface design synthesis via nature-inspired optimization algorithms

    Science.gov (United States)

    Bayraktar, Zikri

    Heuristic numerical optimization algorithms have been gaining interest over the years as the computational power of digital computers increases at an unprecedented level every year. While mature techniques such as the Genetic Algorithm increase their application areas, researchers also try to come up with new algorithms by observing the highly tuned processes provided by nature. In this dissertation, the well-known Genetic Algorithm (GA) will be utilized to tackle various novel electromagnetic optimization problems, along with a parallel implementation of the Clonal Selection Algorithm (CLONALG) and the newly introduced Wind Driven Optimization (WDO) technique. The utility of the CLONALG parallelization and the efficiency of the WDO will be illustrated by applying them to multi-dimensional and multi-modal electromagnetics problems such as antenna design and metamaterial surface synthesis. One of the metamaterial application areas is the design synthesis of 90-degree rotationally symmetric ultra-small unit cell artificial magnetic conducting (AMC) surfaces. AMCs are composite metallo-dielectric structures designed to behave as perfect magnetic conductors (PMC) over a certain frequency range, exhibiting a reflection coefficient magnitude of unity with a phase angle of zero degrees at the center of the band. The proposed designs consist of ultra-small frequency selective surface (FSS) unit cells that are tightly packed and highly intertwined, yet achieve remarkable AMC band performance and field of view when compared to current state-of-the-art AMCs. In addition, planar double-sided AMC (DSAMC) structures are introduced and optimized as AMC ground planes for low-profile antennas in composite platforms and as separator slabs for vertical antenna applications. The proposed designs do not possess complete metallic ground planes, which makes them ideal for composite and multi-antenna systems. The versatility of the DSAMC slabs is also illustrated

  4. A Fair Resource Allocation Algorithm for Data and Energy Integrated Communication Networks

    Directory of Open Access Journals (Sweden)

    Qin Yu

    2016-01-01

    Full Text Available With the rapid advancement of wireless network technologies and the rapid increase in the number of mobile devices, mobile users (MUs) have an increasingly high demand to access the Internet with guaranteed quality-of-service (QoS). Data and energy integrated communication networks (DEINs) are emerging as a new type of wireless network with the potential to simultaneously transfer wireless energy and information via the same base station (BS). This means that a physical BS is virtualized into two parts: one transferring energy and the other transferring information. The former is called the virtual energy base station (eBS) and the latter the data base station (dBS). One important issue in such a setting is dynamic resource allocation, where the resources concerned include both power and time. In this paper, we propose a fair data-and-energy resource allocation algorithm for DEINs by jointly designing the downlink energy beamforming and a power-and-time allocation scheme, taking into consideration the finite-capacity batteries at MUs and the power sensitivity of radio frequency (RF) to direct current (DC) conversion circuits. Simulation results demonstrate that our proposed algorithm outperforms existing algorithms in terms of fairness, beamforming design, sensitivity, and average throughput.
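
    The abstract reports gains "in terms of fairness" without naming a metric; a common choice for quantifying throughput fairness (an assumption here, not taken from the paper) is Jain's index:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: 1.0 when all users receive equal throughput,
    falling toward 1/n as one user monopolises the resource."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```

    For example, jain_fairness([1, 1, 1, 1]) is 1.0, while jain_fairness([1, 0, 0, 0]) is 0.25, the worst case for four users.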

  5. Propagation of waves from an arbitrary shaped surface - A generalization of the Fresnel diffraction integral

    Science.gov (United States)

    Feshchenko, R. M.; Vinogradov, A. V.; Artyukov, I. A.

    2018-04-01

    Using the Laplace transform method, the field amplitude in the paraxial approximation is found in two-dimensional free space from initial values of the amplitude specified on an arbitrarily shaped monotonic curve. The obtained amplitude depends on one a priori unknown function, which can be found from a Volterra integral equation of the first kind. In the special case of a field amplitude specified on a concave parabolic curve, an exact solution is derived. Both solutions can be used to study light propagation from arbitrary surfaces, including grazing-incidence X-ray mirrors. They can find applications in the analysis of coherent imaging problems in X-ray optics, in phase retrieval algorithms, as well as in inverse problems in cases when the initial field amplitude is sought on a curved surface.

  6. Modelling the Influence of Ground Surface Relief on Electric Sounding Curves Using the Integral Equations Method

    Directory of Open Access Journals (Sweden)

    Balgaisha Mukanova

    2017-01-01

    Full Text Available The problem of electrical sounding of a medium with ground surface relief is modelled using the integral equations method. This numerical method is based on the triangulation of the computational domain, which is adapted to the shape of the relief and the measuring line. The numerical algorithm is tested by comparing the results with the known solution for horizontally layered media with two layers. Calculations are also performed to verify the fulfilment of the “reciprocity principle” for the 4-electrode installations in our numerical model. Simulations are then performed for a two-layered medium with a surface relief. The quantitative influences of the relief, the resistivity ratios of the contacting media, and the depth of the second layer on the apparent resistivity curves are established.
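
    The reciprocity check described above can be reproduced for a homogeneous half-space with the standard geometric-factor relation for a surface four-electrode array (a textbook formula, not the paper's integral-equation solver):

```python
import math

def geometric_factor(a, b, m, n):
    """Geometric factor K for a collinear four-electrode array on a flat
    half-space: current electrodes at x=a and x=b, potential electrodes
    at x=m and x=n (all positions along the survey line)."""
    inv = (1.0 / abs(m - a) - 1.0 / abs(m - b)
           - 1.0 / abs(n - a) + 1.0 / abs(n - b))
    return 2.0 * math.pi / inv

def apparent_resistivity(k, delta_v, current):
    """Apparent resistivity rho_a = K * dV / I."""
    return k * delta_v / current
```

    For a Wenner array with spacing a the factor reduces to 2*pi*a, and swapping the current pair (a, b) with the potential pair (m, n) leaves K, and hence rho_a, unchanged - the reciprocity principle the authors verify numerically.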

  7. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    Full Text Available The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is picked up and a methodology and algorithms for automated design of intelligent integrated and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid-prototyping of such systems under realization constraints and, additionally, includes features of system instance specific self-correction for sustained operation in large volumes and in a dynamically changing environment. The extension of these concepts to the reconfigurable hardware platform renders so-called self-x sensor systems, which stands, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing systems. Selected experimental results prove the applicability and effectiveness of our proposed methodology and emerging tool. By our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.

  8. An algorithm for three-dimensional Monte-Carlo simulation of charge distribution at biofunctionalized surfaces

    KAUST Repository

    Bulyha, Alena

    2011-01-01

    In this work, a Monte-Carlo algorithm in the constant-voltage ensemble for the calculation of 3D charge concentrations at charged surfaces functionalized with biomolecules is presented. The motivation for this work is the theoretical understanding of biofunctionalized surfaces in nanowire field-effect biosensors (BioFETs). This work provides the simulation capability for the boundary layer that is crucial in the detection mechanism of these sensors; slight changes in the charge concentration in the boundary layer upon binding of analyte molecules modulate the conductance of nanowire transducers. The simulation of biofunctionalized surfaces poses special requirements on the Monte-Carlo simulations and these are addressed by the algorithm. The constant-voltage ensemble enables us to include the right boundary conditions; the DNA strands can be rotated with respect to the surface; and several molecules can be placed in a single simulation box to achieve good statistics in the case of the low ionic concentrations relevant in experiments. Simulation results are presented for the leading example of surfaces functionalized with PNA and with single- and double-stranded DNA in a sodium chloride electrolyte. These quantitative results make it possible to quantify the screening of the biomolecule charge due to the counter-ions around the biomolecules and the electrical double layer. The resulting concentration profiles show a three-layer structure and non-trivial interactions between the electric double layer and the counter-ions. The numerical results are also important as a reference for the development of simpler screening models. © 2011 The Royal Society of Chemistry.
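
    The constant-voltage ensemble machinery is elaborate, but the underlying Metropolis sampling can be caricatured with a 1D counter-ion profile above an attracting charged plane. Everything below (the screened potential, all parameter values, the function name) is a didactic stand-in, not the paper's model.

```python
import math
import random

def metropolis_profile(n_ions=50, n_steps=30000, bins=10,
                       beta_q_phi0=2.0, debye=1.0, box=5.0, rng=None):
    """Metropolis sampling of counter-ion heights z in [0, box] above a
    charged plane with screened potential energy u(z) = -A*exp(-z/L)
    (in kT units, attractive for counter-ions). Returns a normalised
    histogram of z, i.e. a crude concentration profile."""
    rng = rng or random.Random(7)
    u = lambda z: -beta_q_phi0 * math.exp(-z / debye)
    zs = [rng.uniform(0.0, box) for _ in range(n_ions)]
    counts = [0] * bins
    for step in range(n_steps):
        i = rng.randrange(n_ions)
        z_new = zs[i] + rng.gauss(0.0, 0.3)            # trial displacement
        if 0.0 <= z_new <= box:
            if rng.random() < math.exp(min(0.0, u(zs[i]) - u(z_new))):
                zs[i] = z_new                          # Metropolis acceptance
        if step > n_steps // 2:                        # sample after burn-in
            for z in zs:
                counts[min(int(z / box * bins), bins - 1)] += 1
    total = sum(counts)
    return [c / total for c in counts]
```

    The resulting histogram shows counter-ion enhancement at the wall decaying toward the bulk - the screening behaviour the full 3D constant-voltage simulations quantify.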

  9. Homogenisation algorithm skill testing with synthetic global benchmarks for the International Surface Temperature Initiative

    Science.gov (United States)

    Willet, Katherine; Venema, Victor; Williams, Claude; Aguilar, Enric; joliffe, Ian; Alexander, Lisa; Vincent, Lucie; Lund, Robert; Menne, Matt; Thorne, Peter; Auchmann, Renate; Warren, Rachel; Bronniman, Stefan; Thorarinsdotir, Thordis; Easterbrook, Steve; Gallagher, Colin; Lopardo, Giuseppina; Hausfather, Zeke; Berry, David

    2015-04-01

    Our surface temperature data are good enough to give us confidence that the world has warmed since 1880. However, they are not perfect - we cannot be precise in the amount of warming for the globe and especially for small regions or specific locations. Inhomogeneity (non-climate changes to the station record) is a major problem. While progress in detection of, and adjustment for, inhomogeneities is continually advancing, monitoring effectiveness on large networks and gauging respective improvements in climate data quality is non-trivial. There is currently no internationally recognised means of robustly assessing the effectiveness of homogenisation methods on real data - and thus, the inhomogeneity uncertainty in those data. Here I present the work of the International Surface Temperature Initiative (ISTI; www.surfacetemperatures.org) Benchmarking working group. The aim is to quantify homogenisation algorithm skill on the global scale against realistic benchmarks. This involves the creation of synthetic worlds of surface temperature data, deliberate contamination of these with known errors and then assessment of the ability of homogenisation algorithms to detect and remove these errors. The ultimate aim is threefold: quantifying uncertainties in surface temperature data; enabling more meaningful product intercomparison; and improving homogenisation methods. There are five components of this work: 1. Create 30000 synthetic benchmark stations that look and feel like the real global temperature network but do not contain any inhomogeneities: analog clean-worlds. 2. Design a set of error models that mimic the main types of inhomogeneities found in practice, and combine them with the analog clean-worlds to give analog error-worlds. 3. Engage with dataset creators to run their homogenisation algorithms blind on the analog error-world stations, as they have done with the real data. 4. Design an assessment framework to gauge the degree to which analog error-worlds are returned to
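
    Steps 1, 2 and 4 can be caricatured in a few lines: contaminate a clean synthetic series with a known step inhomogeneity, then score a detector by how close its reported breakpoint is to the truth. The mean-shift detector below is a crude stand-in for operational tests such as SNHT, not any algorithm used by ISTI.

```python
import random

def inject_break(series, when, size):
    """Contaminate an 'analog clean-world' series with a single step
    inhomogeneity of the given size from index `when` onwards."""
    return [x + (size if i >= when else 0.0) for i, x in enumerate(series)]

def detect_break(series):
    """Return (index, shift) maximising the between-segment mean shift;
    a crude stand-in for changepoint tests such as SNHT."""
    n = len(series)
    best_i, best_shift = None, -1.0
    for i in range(5, n - 5):                  # keep both segments non-trivial
        left = sum(series[:i]) / i
        right = sum(series[i:]) / (n - i)
        if abs(right - left) > best_shift:
            best_i, best_shift = i, abs(right - left)
    return best_i, best_shift
```

    Benchmarking a homogenisation algorithm then amounts to comparing its detected breakpoints and adjustment sizes against the error-world's known contamination.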

  10. Online identification algorithms for integrated dielectric electroactive polymer sensors and self-sensing concepts

    International Nuclear Information System (INIS)

    Hoffstadt, Thorben; Griese, Martin; Maas, Jürgen

    2014-01-01

    Transducers based on dielectric electroactive polymers (DEAP) use electrostatic pressure to convert electric energy into strain energy or vice versa. Besides this, they are also designed for sensor applications that monitor the actual stretch state on the basis of the deformation-dependent capacitive-resistive behavior of the DEAP. In order to enable efficient and proper closed-loop control of these transducers, e.g. in positioning or energy harvesting applications, on the one hand, sensors based on DEAP material can be integrated into the transducers and evaluated externally, and on the other hand, the transducer itself can be used as a sensor, in terms of self-sensing. For this purpose the characteristic electrical behavior of the transducer has to be evaluated in order to determine the mechanical state. Also, adequate online identification algorithms with sufficient accuracy and dynamics are required, independent of the sensor concept utilized, in order to determine the electrical DEAP parameters in real time. Therefore, in this contribution, algorithms are developed in the frequency domain for identification of the capacitance as well as the electrode and polymer resistance of a DEAP, and are validated by measurements. These algorithms are designed for self-sensing applications, especially if the power electronics utilized is operated at a constant switching frequency and parasitic harmonic oscillations are induced besides the desired DC value. These oscillations can be used for the online identification, so an additional superimposed excitation is no longer necessary. For this purpose a dual active bridge (DAB) is introduced to drive the DEAP transducer. The capabilities of the real-time identification algorithm in combination with the DAB are finally presented in detail and discussed.
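
    The idea of exploiting the switching-frequency ripple can be sketched with a single-bin DFT: extract the voltage and current phasors at the switching frequency and fit an equivalent circuit. The series-RC model below is a deliberate simplification of the paper's electrode/polymer network, and the function name is illustrative.

```python
import cmath
import math

def identify_series_rc(v_samples, i_samples, fs, f_sw):
    """Estimate the series resistance and capacitance of an RC model from
    the voltage/current harmonic at the switching frequency f_sw, using a
    single-bin discrete Fourier transform (fs = sampling rate in Hz)."""
    n = len(v_samples)
    k = round(f_sw * n / fs)                 # DFT bin of the switching ripple
    w = -2j * math.pi * k / n
    V = sum(v * cmath.exp(w * t) for t, v in enumerate(v_samples))
    I = sum(i * cmath.exp(w * t) for t, i in enumerate(i_samples))
    Z = V / I                                # complex impedance at f_sw
    omega = 2.0 * math.pi * f_sw
    r = Z.real                               # series resistance
    c = -1.0 / (omega * Z.imag)              # Z = R + 1/(j*omega*C)
    return r, c
```

    With a whole number of ripple periods in the record, the bin is leakage-free and the estimate is exact for the assumed model; in practice windowing and model-order choices matter.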

  11. Time integration algorithms for the two-dimensional Euler equations on unstructured meshes

    Science.gov (United States)

    Slack, David C.; Whitaker, D. L.; Walters, Robert W.

    1994-01-01

    Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher-order accurate solutions using several mesh sizes; higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
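
    The reverse Cuthill-McKee renumbering mentioned above is easy to sketch: breadth-first search from a low-degree vertex, visiting neighbours in order of increasing degree, then reverse the ordering. This clusters nonzeros near the diagonal, which is what reduces fill in the Jacobian factorization. A minimal pure-Python version (not the paper's implementation):

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering for an undirected graph given as
    {node: set_of_neighbours}."""
    visited, order = set(), []
    for start in sorted(adj, key=lambda v: len(adj[v])):   # low degree first
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for u in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(u)
                queue.append(u)
    return order[::-1]                                     # the "reverse" step

def bandwidth(adj, order):
    """Matrix bandwidth induced by a vertex ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return max((abs(pos[u] - pos[v]) for v in adj for u in adj[v]), default=0)
```

    On a path graph with scrambled labels, RCM recovers the consecutive ordering (bandwidth 1); scipy exposes an equivalent routine as scipy.sparse.csgraph.reverse_cuthill_mckee.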

  12. Locating Critical Circular and Unconstrained Failure Surface in Slope Stability Analysis with Tailored Genetic Algorithm

    Science.gov (United States)

    Pasik, Tomasz; van der Meij, Raymond

    2017-12-01

    This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are as yet uncommon in engineering practice, and few publications regarding truly free slip planes exist. The proposed method presents an effective procedure resulting from the right combination of initial population type, selection, crossover and mutation methods. The procedure needs little computational effort to find the optimal, unconstrained slip plane. The methodology described in this paper is implemented using Mathematica. The implementation, along with further explanations, is fully presented so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed result interpretation. Two cases are compared with analyses described in earlier publications. The remaining two are practical cases of slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing the possibilities and limitations of using the genetic algorithm in the context of the slope stability problem.
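
    A tailored GA of this kind can be sketched with a real-coded chromosome (centre x, centre y, radius) whose fitness is the factor of safety of the corresponding slip circle. The factor-of-safety function below is a hypothetical smooth stand-in; a real analysis would evaluate a rigid-equilibrium method such as Bishop's on each candidate circle.

```python
import random

def ga_minimize(fitness, bounds, pop_size=40, gens=80, rng=None):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation, one elite survivor.
    `bounds` is a list of (lo, hi) per gene."""
    rng = rng or random.Random(0)
    def clip(g):
        return [min(max(x, lo), hi) for x, (lo, hi) in zip(g, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                                 # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)   # tournament of 3
            p2 = min(rng.sample(pop, 3), key=fitness)
            a = rng.random()                            # arithmetic crossover
            child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.3:                      # Gaussian mutation
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0.0, 0.1 * (bounds[i][1] - bounds[i][0]))
            nxt.append(clip(child))
        pop = nxt
        best = min(pop, key=fitness)
    return best, fitness(best)

def toy_fos(genes):
    """Hypothetical stand-in for a factor-of-safety evaluation of a slip
    circle (centre_x, centre_y, radius); minimum 1.2 at (2.0, 3.0, 1.5)."""
    x, y, r = genes
    return 1.2 + (x - 2.0)**2 + (y - 3.0)**2 + (r - 1.5)**2
```

    The critical surface is the chromosome with the lowest factor of safety after the final generation; constraining or freeing the circle parameters switches between circular and unconstrained searches.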

  13. Moderate Resolution Imaging Spectroradiometer (MODIS) MOD21 Land Surface Temperature and Emissivity Algorithm Theoretical Basis Document

    Science.gov (United States)

    Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.

    2016-01-01

    This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands 29, 31, and 32, and will be generated for data from the NASA-EOS AM and PM platforms. This is version 1.0 of the ATBD, and the goal is to maintain a 'living' version of this document, with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32; and a physics-based day/night algorithm (Wan and Li 1997), which produces a 5-km (C4) and 6-km (C5) LST product and emissivity for seven MODIS bands: 20, 22, 23, 29, 31-33.

  14. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Full Text Available Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) method, which realizes pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  15. Use of surface electromyography in phonation studies: an integrative review

    Science.gov (United States)

    Balata, Patricia Maria Mendes; Silva, Hilton Justino da; Moraes, Kyvia Juliana Rocha de; Pernambuco, Leandro de Araújo; Moraes, Sílvia Regina Arruda de

    2013-01-01

    Summary Introduction: Surface electromyography has been used to assess the extrinsic laryngeal muscles during chewing and swallowing, but there have been few studies assessing these muscles during phonation. Objective: To investigate the current state of knowledge regarding the use of surface electromyography for evaluation of the electrical activity of the extrinsic muscles of the larynx during phonation by means of an integrative review. Method: We searched for articles and other papers in the PubMed, Medline/Bireme, and Scielo databases that were published between 1980 and 2012, by using the following descriptors: surface electromyography and voice, surface electromyography and phonation, and surface electromyography and dysphonia. The articles were selected on the basis of inclusion and exclusion criteria. Data Synthesis: This was carried out with a critical cross-matrix. We selected 27 papers, i.e., 24 articles and 3 theses. The studies differed methodologically with regard to sample size and investigation techniques, making it difficult to compare them, but showed differences in electrical activity between the studied groups (dysphonic subjects, non-dysphonic subjects, singers, and others). Conclusion: Electromyography has clinical applicability when technical precautions with respect to application and analysis are obeyed. However, it is necessary to adopt a universal system of assessment tasks and related measurement techniques to allow comparisons between studies. PMID:25992030

  16. Derivation of Land Surface Temperature for Landsat-8 TIRS Using a Split Window Algorithm

    Directory of Open Access Journals (Sweden)

    Offer Rozenstein

    2014-03-01

    Full Text Available Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimating the input parameters is performed. The accuracy of the current development was assessed using simulated MODTRAN data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development is leading to progress in the determination of LST by Landsat-8 TIRS.
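
    A generic split-window form of the kind adjusted in the paper combines the two brightness temperatures with the mean emissivity, the emissivity difference, and the column water vapour. The coefficients below are illustrative placeholders drawn from the split-window literature, not the validated TIRS set derived in the paper:

```python
def split_window_lst(t31, t32, e31, e32, w,
                     c=(-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40)):
    """Generic split-window retrieval:
        LST = T31 + c1*dT + c2*dT**2 + c0
              + (c3 + c4*w)*(1 - e) + (c5 + c6*w)*de
    with dT = T31 - T32 (brightness temperatures in K), mean emissivity e,
    emissivity difference de, and column water vapour w (g/cm^2).
    Default coefficients are placeholders for illustration."""
    e = 0.5 * (e31 + e32)
    de = e31 - e32
    dt = t31 - t32
    c0, c1, c2, c3, c4, c5, c6 = c
    return (t31 + c1 * dt + c2 * dt**2 + c0
            + (c3 + c4 * w) * (1.0 - e) + (c5 + c6 * w) * de)
```

    A sensitivity analysis like the paper's then amounts to perturbing e31, e32 or w and recording the change in the retrieved LST.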

  17. Continuity Evaluation of Surface Retrieval Algorithms for ICESat-2/ATLAS Photon-Counting Laser Altimetry Data Products

    Science.gov (United States)

    Leigh, H. W.; Magruder, L. A.

    2016-12-01

    The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission team is developing algorithms to produce along-track and gridded science data products for a variety of surface types, including land ice, sea ice, land, and ocean. The ATL03 data product will contain geolocated photons for each of the six beams, and will incorporate geophysical corrections as well as preliminary photon classifications (signal vs. background). Higher level along-track and gridded data products for various surface types are processed using the ATL03 geolocated photons. The data processing schemes developed for each of the surface types rely on independent surface finding algorithms optimized to identify ground, canopy, water, or ice surfaces, as appropriate for a particular geographical location. In such cases where multiple surface types are present in close proximity to one another (e.g. land-ocean interface), multiple surface finding algorithms may be employed to extract surfaces along the same segment of a lidar profile. This study examines the effects on continuity of the various surface finding algorithms, specifically for littoral/coastal areas. These areas are important to the cryospheric, hydrologic, and biospheric communities in that continuity between the respective surface elevation products is required to fully utilize the information provided by ICESat-2 and its Advanced Topographic Laser Altimeter System (ATLAS) instrument.

  18. Surface integrity of solvent-challenged ormocer-matrix composite.

    Science.gov (United States)

    Cavalcante, Larissa Maria; Schneider, Luis Felipe J; Silikas, Nick; Watts, David C

    2011-02-01

    To investigate the surface integrity of solvent-challenged ormocer-matrix composites, photoactivated by different light exposure modes, through surface-hardness measurements at different periods of time; and to compare such behavior with dimethacrylate-based materials. A one hundred percent ormocer-based matrix (experimental ormocer (ORM)), a commercial mixed dimethacrylate-ormocer-based matrix (Admira (ADR)) and two commercial dimethacrylate-based matrix composites (experimental controls: Grandio (GRD) and Premise (PRE)) were evaluated. Disk specimens (4 mm × 2 mm) were prepared from each material and light-activated using either a standard (S) or soft-start (SS) light exposure protocol with an LED-curing unit. Knoop hardness (KHN) of the top, irradiated surface was measured within the following experimental groups (n=5): Group 1: immediately after exposure; Group 2: after dry and dark storage; Group 3: after storage in distilled water; and Group 4: after immersion in absolute ethanol. Hardness of Groups 2-4 was measured after 7 days of storage. Immediate hardness values were submitted to Student's t-tests separately for each material. Hardness values after treatments were submitted to two-way ANOVA and Tukey's post hoc test to compare values among different storage media and light exposure mode protocols. Comparisons among materials were described using percentage of hardness change. Statistical testing was performed at a pre-set alpha of 0.05. Immediate hardness values were not affected by the light exposure mode, regardless of the material. In general, exposure mode did not significantly affect hardness after 7 days storage, regardless of storage media or material. After 7 days dry storage, hardness values increased for all materials relative to immediate testing, and decreased after water and ethanol storage, with ethanol showing the greatest effect. The experimental ormocer-based material had the lowest percentage hardness change and thus proved more resistant to solvent

  19. Advancing of Land Surface Temperature Retrieval Using Extreme Learning Machine and Spatio-Temporal Adaptive Data Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2015-04-01

    Full Text Available As a critical variable for characterizing biophysical processes in the ecological environment, and as a key indicator in the surface energy balance, evapotranspiration, and urban heat islands, Land Surface Temperature (LST) retrieved from Thermal Infra-Red (TIR) images at both high temporal and spatial resolution is urgently needed. However, due to the limitations of existing satellite sensors, no Earth observation can obtain TIR data at fine spatial and temporal resolution simultaneously. Thus, several image fusion approaches that blend TIR data from a high-temporal-resolution sensor with data from a high-spatial-resolution sensor have been studied. This paper presents a novel data fusion method, integrating image fusion and spatio-temporal fusion techniques, for deriving LST datasets at 30 m spatial resolution from daily MODIS images and Landsat ETM+ images. The Landsat ETM+ TIR data were first enhanced from 60 m to 30 m resolution based on the extreme learning machine (ELM) algorithm using a neural network regression model. Then, the MODIS LST and enhanced Landsat ETM+ TIR data were fused by the Spatio-temporal Adaptive Data Fusion Algorithm for Temperature mapping (SADFAT) in order to derive high resolution synthetic data. The synthetic images were evaluated against both real and simulated satellite images. The average difference (AD) and absolute average difference (AAD) are smaller than 1.7 K, while the correlation coefficient (CC) and root-mean-square error (RMSE) are 0.755 and 1.824, respectively, showing that the proposed method enhances the spatial resolution of the predicted LST images while preserving the spectral information.
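
    The ELM regression step used to sharpen the 60 m TIR band has a very small core: a random, fixed hidden layer and a linear least-squares solve for the output weights. The sketch below shows that core on a toy 1D regression; it is a generic ELM, not the paper's SADFAT pipeline, and the ridge term and layer size are assumptions.

```python
import math
import random

def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, n_hidden=20, ridge=1e-6, rng=None):
    """Extreme learning machine regression: hidden-layer weights are random
    and fixed; only the linear output weights are fitted, via the
    ridge-regularised normal equations. Returns a predictor function."""
    rng = rng or random.Random(1)
    d = len(X[0])
    W = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1.0, 1.0) for _ in range(n_hidden)]
    def hidden(x):
        return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
                for w, bi in zip(W, b)]
    H = [hidden(x) for x in X]
    m = n_hidden
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H)))
          + (ridge if i == j else 0.0) for j in range(m)] for i in range(m)]
    rhs = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(m)]
    beta = _solve(A, rhs)
    return lambda x: sum(bi * hi for bi, hi in zip(beta, hidden(x)))
```

    Because only the output layer is fitted, training reduces to one linear solve, which is what makes ELM attractive for large image-enhancement problems.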

  20. Single-source surface energy balance algorithms to estimate evapotranspiration from satellite-based remotely sensed data

    Science.gov (United States)

    Bhattarai, Nishan

    The flow of water and energy fluxes at the Earth's surface and within the climate system is difficult to quantify. Recent advances in remote sensing technologies have provided scientists with useful means to improve the characterization of these complex processes. However, many challenges remain that limit our ability to optimize remote sensing data in determining evapotranspiration (ET) and energy fluxes. For example, periodic cloud cover limits the operational use of remotely sensed data from passive sensors in monitoring seasonal fluxes. Additionally, there are many remote sensing-based single-source surface energy balance (SEB) models, but no clear guidance on which one to use in a particular application. Two widely used models---surface energy balance algorithm for land (SEBAL) and mapping ET at high resolution with internalized calibration (METRIC)---require substantial human intervention, which limits their applicability in broad-scale studies. This dissertation addressed some of these challenges by proposing novel ways to optimize available resources within the SEB-based ET modeling framework. A simple regression-based Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) fusion model was developed to integrate Landsat spatial and MODIS temporal characteristics in calculating ET. The fusion model produced reliable estimates of seasonal ET at moderate spatial resolution while mitigating the impact that cloud cover can have on image availability. The dissertation also evaluated five commonly used remote sensing-based single-source SEB models and found that the surface energy balance system (SEBS) may be the best overall model for use in humid subtropical climates. The study also determined that model accuracy varies with land cover type; for example, all models worked well for wet marsh conditions, but the SEBAL and simplified surface energy balance index (S-SEBI) models worked better than the alternatives for grass cover. A new automated approach based on

  1. Accurate fluid force measurement based on control surface integration

    Science.gov (United States)

    Lentink, David

    2018-01-01

    Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques address this: 3D particle image velocimetry (PIV) and a newer technique, the aerodynamic force platform (AFP). Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor, provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, ρ_f/[(1 − ρ_f/(ρ_b + ρ_f))(ρ_b + ρ_f)], depends only on the fluid density, ρ_f, and the body density, ρ_b. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non

  2. Surface plasmon resonance biosensor based on integrated optical waveguide

    Czech Academy of Sciences Publication Activity Database

    Dostálek, Jakub; Čtyroký, Jiří; Homola, Jiří; Brynda, Eduard; Skalský, Miroslav; Nekvindová, P.; Špirková, J.; Škvor, J.; Schröfel, J.

    2001-01-01

    Roč. 76, 1/3 (2001), s. 8-12 ISSN 0925-4005. [International Meeting on Chemical Sensors IMCS /8./. Basel, 02.07.2000-05.07.2000] R&D Projects: GA ČR GA102/99/M057; GA ČR GA102/99/0549; GA ČR GA102/00/1536 Institutional research plan: CEZ:AV0Z2067918 Keywords : surface plasmon resonance * optical sensors * integrated optics * biosensors * optical waveguides Subject RIV: JB - Sensors, Measurment, Regulation Impact factor: 1.440, year: 2001

  3. Dose calculation algorithms for radiation therapy with an MRI-Integrated radiation device

    International Nuclear Information System (INIS)

    Pfaffenberger, Asja

    2013-01-01

    Image-guided adaptive radiation therapy (IGART) aims at improving therapy outcome on the basis of more precise knowledge of the anatomical and physiological situation during treatment. By integration of magnetic resonance imaging (MRI), better differentiation is possible between the target volume to be irradiated and healthy surrounding tissues. In addition, changes that occur either between or during treatment fractions can be taken into account. On the basis of this information, a better conformation of radiation dose to the target volume may be achieved, which may in turn improve prognosis and reduce radiation side effects. This requires a precise calculation of radiation dose in a magnetic field that is present in these integrated irradiation devices. Real-time adaptation of the treatment plan is aimed at for which fast dose calculation is needed. Kernel-based methods are good candidates to achieve short calculation times; however, they presently only exist for radiation therapy in the absence of magnetic fields. This work suggests and investigates two approaches towards kernel-based dose calculation algorithms. One of them is integrated into treatment plan optimisation and applied to four clinical cases.

  4. Integrating R and Java for Enhancing Interactivity of Algorithmic Data Analysis Software Solutions

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNĂ

    2016-06-01

    Full Text Available Conceiving software solutions for statistical processing and algorithmic data analysis involves handling diverse data, fetched from various sources and in different formats, and presenting the results in a suggestive, tailorable manner. Our ongoing research aims to design programming techniques for integrating the R development environment with the Java programming language for interoperability at a source code level. The goal is to combine the intensive data processing capabilities of the R programming language, along with its multitude of statistical function libraries, with the flexibility offered by the Java programming language and platform in terms of graphical user interfaces and mathematical function libraries. Both development environments are multiplatform oriented and can complement each other through interoperability. R is a comprehensive and concise programming language, benefiting from a continuously expanding and evolving set of packages for statistical analysis, developed by the open source community. While it is a very efficient environment for statistical data processing, the R platform lacks support for developing user friendly, interactive, graphical user interfaces (GUIs). Java, on the other hand, is a high level object oriented programming language which supports designing and developing performant and interactive frameworks for general purpose software solutions, through the Java Foundation Classes, JavaFX and various graphical libraries. In this paper we treat both aspects of integration and interoperability: integrating Java code into R applications, and bringing R processing sequences into Java driven software solutions. Our research has been conducted focusing on case studies concerning pattern recognition and cluster analysis.

  5. Surface charge algebra in gauge theories and thermodynamic integrability

    International Nuclear Information System (INIS)

    Barnich, Glenn; Compere, Geoffrey

    2008-01-01

    Surface charges and their algebra in interacting Lagrangian gauge field theories are constructed out of the underlying linearized theory using techniques from the variational calculus. In the case of exact solutions and symmetries, the surface charges are interpreted as a Pfaff system. Integrability is governed by Frobenius' theorem and the charges associated with the derived symmetry algebra are shown to vanish. In the asymptotic context, we provide a generalized covariant derivation of the result that the representation of the asymptotic symmetry algebra through charges may be centrally extended. Comparison with Hamiltonian and covariant phase space methods is made. All approaches are shown to agree for exact solutions and symmetries while there are differences in the asymptotic context

  6. An Assessment of Surface Water Detection Algorithms for the Tahoua Region, Niger

    Science.gov (United States)

    Herndon, K. E.; Muench, R.; Cherrington, E. A.; Griffin, R.

    2017-12-01

    The recent release of several global surface water datasets derived from remotely sensed data has allowed for unprecedented analysis of the earth's hydrologic processes at a global scale. However, some of these datasets fail to identify important sources of surface water, especially small ponds, in the Sahel, an arid region of Africa that forms a border zone between the Sahara Desert to the north, and the savannah to the south. These ponds may seem insignificant in the context of wider, global-scale hydrologic processes, but smaller sources of water are important for local and regional assessments. Particularly, these smaller water bodies are significant sources of hydration and irrigation for nomadic pastoralists and smallholder farmers throughout the Sahel. For this study, several methods of identifying surface water from Landsat 8 OLI and Sentinel 1 SAR data were compared to determine the most effective means of delineating these features in the Tahoua Region of Niger. The Modified Normalized Difference Water Index (MNDWI) had the best performance when validated against very high resolution World View 3 imagery, with an overall accuracy of 99.48%. This study reiterates the importance of region-specific algorithms and suggests that the MNDWI method may be the best for delineating surface water in the Sahelian ecozone, likely due to the nature of the exposed geology and lack of dense green vegetation.
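    The MNDWI credited above with the best performance has a standard closed form (Xu, 2006): the normalized difference of the green and shortwave-infrared (SWIR) reflectances, with pixels mapped as water when the index exceeds a threshold, commonly near zero. A minimal per-pixel sketch; the reflectance values and the 0.0 threshold below are illustrative assumptions, not taken from the study:

```python
def mndwi(green: float, swir: float) -> float:
    """Modified Normalized Difference Water Index for one pixel."""
    denom = green + swir
    if denom == 0:
        return 0.0  # undefined ratio; treat as non-water
    return (green - swir) / denom

def is_water(green: float, swir: float, threshold: float = 0.0) -> bool:
    """Classify a pixel as surface water when MNDWI exceeds the threshold."""
    return mndwi(green, swir) > threshold

# Toy reflectances: water absorbs strongly in the SWIR, dry soil does not.
print(is_water(green=0.25, swir=0.05))  # water-like pixel -> True
print(is_water(green=0.20, swir=0.30))  # dry-soil-like pixel -> False
```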

  7. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    Energy Technology Data Exchange (ETDEWEB)

    Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart).
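    Overlap metrics like those reported above can be computed directly from binary segmentations. A small sketch on masks represented as sets of voxel indices; the VOR here is taken as intersection-over-union and the toy masks are illustrative, since the paper's exact metric definitions may differ in detail:

```python
def volumetric_overlap_ratio(auto: set, manual: set) -> float:
    """Overlap of two binary segmentations given as sets of voxel indices.
    Taken here as intersection-over-union; the paper's exact definition may differ."""
    union = auto | manual
    if not union:
        return 1.0  # two empty masks overlap trivially
    return len(auto & manual) / len(union)

def false_positive_ratio(auto: set, manual: set) -> float:
    """Fraction of the union consisting of voxels labeled only by the algorithm."""
    return len(auto - manual) / len(auto | manual)

# Toy 4-voxel automatic result vs. a 3-voxel manual reference.
a = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
m = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
print(volumetric_overlap_ratio(a, m))  # 0.75
print(false_positive_ratio(a, m))      # 0.25
```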

  8. Assessment of diverse algorithms applied on MODIS Aqua and Terra data over land surfaces in Europe

    Science.gov (United States)

    Glantz, P.; Tesche, M.

    2012-04-01

    Beside an increase of greenhouse gases (e.g., carbon dioxide, methane and nitrous oxide), human activities (for instance fossil fuel and biomass burning) have led to a perturbation of the atmospheric content of aerosol particles. Aerosols exhibit high spatial and temporal variability in the atmosphere. Therefore, aerosol investigation for climate research and environmental control requires the identification of source regions, their strength and aerosol type, which can be retrieved based on space-borne observations. The aim of the present study is to validate and evaluate AOT (aerosol optical thickness) and the Ångström exponent, obtained with the SAER (Satellite AErosol Retrieval) algorithm for MODIS (MODerate resolution Imaging Spectroradiometer) Aqua and Terra calibrated level 1 data (1 km horizontal resolution at ground), against AERONET (AErosol RObotic NETwork) observations and MODIS Collection 5 (c005) standard product retrievals (10 km), respectively, over land surfaces in Europe for three seasons: early spring (period 1), mid spring (period 2) and summer (period 3). For several of the cases analyzed here, the Aqua and Terra satellites passed the investigation area twice during a day. Thus, beside a variation in the sun elevation, the satellite aerosol retrievals have also been performed on a daily basis with a significant variation in the satellite-viewing geometry. An inter-comparison of the two algorithms has also been performed. The validation with AERONET shows that the MODIS c005 retrieved AOT is, for the wavelengths 0.469 and 0.500 µm, on the whole within the expected uncertainty for one standard deviation of the MODIS retrievals over Europe (Δτ = ±0.05 ± 0.15τ). The SAER estimated AOT for the wavelength 0.443 µm also agrees reasonably well with AERONET. Thus, the majority of the SAER AOT values are within the MODIS expected uncertainty range, although somewhat larger RMSD (root mean square deviation) occurs compared to the results obtained with the
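    The MODIS over-land expected-uncertainty envelope quoted above, Δτ = ±(0.05 + 0.15τ), can be checked per matchup against AERONET. A small sketch; the matchup values are illustrative, not data from the study:

```python
def within_envelope(tau_retrieved: float, tau_aeronet: float) -> bool:
    """True when a satellite AOT retrieval lies inside the MODIS over-land
    expected uncertainty band of +/-(0.05 + 0.15*tau) around the AERONET AOT."""
    return abs(tau_retrieved - tau_aeronet) <= 0.05 + 0.15 * tau_aeronet

def fraction_within(pairs) -> float:
    """Fraction of (retrieved, reference) matchups falling inside the envelope."""
    return sum(within_envelope(r, a) for r, a in pairs) / len(pairs)

# Illustrative matchups: (retrieved AOT, collocated AERONET AOT).
matchups = [(0.10, 0.12), (0.45, 0.40), (0.80, 0.50)]
print(fraction_within(matchups))  # 2 of 3 matchups inside the envelope
```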

  9. Algorithm for the synthesis of linear antenna arrays with desired radiation pattern and integral amplitude coefficients

    Directory of Open Access Journals (Sweden)

    Sadchenko A. V.

    2015-06-01

    Full Text Available The problem with the technical implementation of phased array antennas (PAR) with a required radiation pattern (RP) is the complexity of the beamforming device, which consists of a set of controlled attenuators and phase shifters. It is possible to simplify the technical implementation of a PAR if, at the synthesis stage, the complex representation of the coefficients of the amplitude-phase distribution of the field along the lattice is approximated by real values. It is known that the amplitude distribution of the field in the aperture of the antenna array and the radiation pattern are related by the Fourier transform. Thus, the amplitude and phase coefficients are first calculated using the Fourier transform and then processed according to the selected type of circuit realization of the attenuators and phase shifters. Calculating the inverse Fourier transform of the modified coefficients yields the synthesized orientation function. This study aims to develop a search algorithm for amplitude and phase coefficients, taking into account the fact that integer-valued amplitudes and phases are technically easier to implement than real ones. The synthesis algorithm for an equidistant linear array with a half-wavelength (λ/2) irradiator pitch is as follows. From a given directivity function, the discrete Fourier transform (DFT) is found in the form of an array of complex numbers; the resulting array is then transformed into a set of attenuations for the attenuators and phase shifts for the phase shifters, while the amplitude coefficients are rounded off to integers and the phases are binarized (0, π). The practical value of this algorithm is particularly high when the controlled phase shifters and attenuators are implemented in integrated form.
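    The pipeline described above (DFT of the desired pattern, quantization of the excitation coefficients to integer amplitudes and binary {0, π} phases, inverse transform to obtain the synthesized pattern) can be sketched with a plain DFT. The sample pattern and the quantization step are illustrative assumptions, not the paper's data:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(x):
    """Naive inverse discrete Fourier transform."""
    n = len(x)
    return [sum(x[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def quantize(c, scale=4):
    """Round the amplitude to an integer step and binarize the phase to {0, pi}
    (i.e., keep only the sign of the real part)."""
    amp = round(abs(c) * scale)
    sign = 1 if c.real >= 0 else -1
    return amp * sign / scale

# Desired pattern sampled at 8 points (illustrative, even-symmetric -> real coefficients).
desired = [1.0, 0.7, 0.2, 0.0, 0.0, 0.0, 0.2, 0.7]
excitations = idft(desired)               # amplitude-phase distribution along the array
quantized = [quantize(c) for c in excitations]
synthesized = dft(quantized)              # pattern produced by the quantized feed
print([round(abs(s), 2) for s in synthesized])
```

Comparing `synthesized` with `desired` shows the pattern degradation introduced by the integer-amplitude, binary-phase approximation.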

  10. An Algorithm for Retrieving Land Surface Temperatures Using VIIRS Data in Combination with Multi-Sensors

    Science.gov (United States)

    Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao

    2014-01-01

    A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imager Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, transmittance, is generally computed from water vapor content, while a water vapor channel is absent from VIIRS data. In order to overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. The analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows that the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. The comparison with ground station LST data indicates a retrieval mean accuracy of −0.395 K and a standard deviation of 1.490 K in regions with vegetation and water cover. Besides, the retrieval results of the test data have also been compared with the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products, and the results indicate that 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% of the difference values are within the range of ±1 to ±2 K. In conclusion, with the advantages of multiple sensors fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919

  11. Therapeutic eyelids hygiene in the algorithms of prevention and treatment of ocular surface diseases

    Directory of Open Access Journals (Sweden)

    V. N. Trubilin

    2016-01-01

    Full Text Available After acute inflammation of the anterior eye segment has been stopped, ophthalmologists often face a situation in which signs of acute inflammation are absent, yet patients still complain of persistent discomfort, which causes dissatisfaction with the treatment. Such complaints are typically caused by disturbed tear production. It is no accident that a new group of diseases has been allocated: diseases of the ocular surface. The ocular surface is a complex biological system, including the epithelium of the conjunctiva, cornea and limbus, as well as the eyelid margin area and the meibomian gland ducts. Pathological processes in the conjunctiva, cornea and eyelids are linked with tear production. Ophthalmologists prescribe tear substitutes, which provide short-term relief to patients. However, given that the lipid component of the tear film plays the key role in preserving its stability, eyelid hygiene is the basis for the treatment of dry eye associated with ocular surface diseases. Eyelid hygiene ensures normal functioning of the glands, restores metabolic processes in the skin and supports the formation of a complete tear film. Protection of the eyelids, especially the marginal edge, from aggressive environmental agents, infections and parasites is the basis for the prevention and treatment of blepharitis and dry eye syndrome. The most common clinical situations and the algorithms of their treatment and prevention are discussed in the article: dysfunction of the meibomian glands; demodectic blepharitis; seborrheic blepharitis; staphylococcal blepharitis; allergic blepharitis; hordeolum (stye) and chalazion. The prevention of keratoconjunctival xerosis (in the pre- and postoperative period, caused by contact lenses, computer vision syndrome, or in remission after acute inflammation of the conjunctiva and cornea) is also presented. The first part of the article presents the treatment and prevention algorithms for dysfunction of the meibomian glands, as well as

  12. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    Science.gov (United States)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
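    The GA mechanics described above (fitness-based survival across generations, with low-fitness designs replaced by combinations or mutations of fitter ones) can be sketched generically. A toy bit-string "onemax" fitness stands in for NEVOT's mission-fitness evaluation; all parameters here are illustrative, not those of the tool:

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=40, seed=1):
    """Minimal generational GA with elitism: tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]  # elitism: the fittest design always survives
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            if rng.random() < 0.2:                      # occasional mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best

ones = lambda bits: sum(bits)   # toy fitness: maximize the number of 1s
print(sum(evolve(ones)))        # close to the maximum of 16 after 40 generations
```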

  13. Evaluation of the application of BIM technology based on PCA - Q Clustering Algorithm and Choquet Integral

    Directory of Open Access Journals (Sweden)

    Wei Xiaozhao

    2016-03-01

    Full Text Available With the construction industry's era of big data approaching, BIM (building information modeling) has been widely adopted as a building information system to meet the industry's practical needs. Different software products, however, apply the technology with different levels of maturity. In this paper, the maturity indices of BIM technology application are scored using the expert scoring method, and an evaluation index system is established. A PCA-Q clustering algorithm is used to classify the evaluation index system, and the Choquet integral is then applied to the classified index system for comprehensive evaluation, achieving a reasonable assessment of the maturity of BIM technology application. This lays a foundation for the future development of BIM technology in various fields of construction and, at the same time, provides direction for its comprehensive application.
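    The Choquet integral used above aggregates criterion scores with respect to a fuzzy measure, which, unlike a simple weighted average, can model interaction between criteria. A minimal discrete sketch; the two criteria and all measure values are hypothetical illustrations, not the paper's index system:

```python
def choquet(scores: dict, mu: dict) -> float:
    """Discrete Choquet integral of criterion scores with respect to a monotone
    set function mu (mu[frozenset()] = 0, mu[all criteria] = 1)."""
    order = sorted(scores, key=scores.get)       # criteria in ascending score order
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])         # criteria scoring >= scores[c]
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

# Two BIM-maturity criteria with an interaction effect (illustrative numbers).
scores = {"modeling": 0.8, "collaboration": 0.4}
mu = {frozenset(): 0.0,
      frozenset({"modeling"}): 0.6,
      frozenset({"collaboration"}): 0.3,
      frozenset({"modeling", "collaboration"}): 1.0}
print(choquet(scores, mu))  # 0.4*1.0 + (0.8-0.4)*0.6 = approx. 0.64
```

With an additive measure (weights that sum over disjoint sets), the Choquet integral reduces to the ordinary weighted mean, which is a convenient sanity check.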

  14. Innovative method by design-around concepts with integrating the algorithm for inventive problem solving

    International Nuclear Information System (INIS)

    Chen, Wang Chih; Chen, Jahau Lewis

    2014-01-01

    The work proposes a new design tool that integrates design-around concepts with the algorithm for inventive problem solving (Russian acronym: ARIZ). ARIZ includes a complete procedure for analyzing problems and related resources, resolving conflicts and generating solutions. Combining ARIZ with design-around concepts and an understanding of the identified principles that govern patent infringement can prevent infringement whenever designers innovate, greatly reducing the cost and time associated with the product design stage. The presented tool is developed from an engineering perspective rather than a legal perspective, and so can help designers easily prevent patent infringements and succeed in innovating by designing around. An example is used to demonstrate the proposed method.

  16. Optimal multigrid algorithms for the massive Gaussian model and path integrals

    International Nuclear Information System (INIS)

    Brandt, A.; Galun, M.

    1996-01-01

    Multigrid algorithms are presented which, in addition to eliminating the critical slowing down, can also eliminate the "volume factor". The elimination of the volume factor removes the need to produce many independent fine-grid configurations for averaging out their statistical deviations, by averaging over the many samples produced on coarse grids during the multigrid cycle. Thermodynamic limits of observables can be calculated to relative accuracy ε_r in just O(ε_r^−2) computer operations, where ε_r is the error relative to the standard deviation of the observable. In this paper, we describe in detail the calculation of the susceptibility in the one-dimensional massive Gaussian model, which is also a simple example of path integrals. Numerical experiments show that the susceptibility can be calculated to relative accuracy ε_r in about 8 ε_r^−2 random number generations, independent of the mass size.

  17. A load shedding scheme for DG integrated islanded power system utilizing backtracking search algorithm

    Directory of Open Access Journals (Sweden)

    Aziah Khamis

    2018-03-01

    Full Text Available In a dispersed generation (DG) integrated distribution system, several technical issues should be resolved if the grid disconnects and forms an islanded system. The most critical challenge in such a situation is to maintain the stability of the islanded system. The common practice is to reject several loads through a load shedding scheme. This study introduces the development of an optimal load shedding scheme based on the backtracking search algorithm (BSA). To handle this optimization problem, a constrained multiobjective function that considers the linear static voltage stability margin (VSM) and the amount of load curtailment is formulated. It also handles the load priority and various operating conditions of the DGs. The performance of the proposed load shedding scheme was evaluated through an extensive test conducted on the IEEE 33-bus radial distribution system with four DG units, considering several scenarios such as load shedding under various operating points and at various islands, using the MATLAB® software. Moreover, the effectiveness of the proposed scheme was validated by comparing its results with those obtained using the genetic algorithm (GA). The optimization results indicate that the proposed BSA technique is more effective in determining the optimal amount of load to be shed in any islanded system compared with the GA.
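    The optimization problem described above (shed as little load as possible, weighted by priority, while keeping the island feasible) can be illustrated on a toy island small enough for exhaustive search; a metaheuristic such as BSA explores the same search space when enumeration is intractable. All loads, priorities, and the simple supply-demand feasibility constraint below are hypothetical stand-ins for the paper's VSM-based formulation:

```python
from itertools import combinations

# (name, demand in MW, priority weight) -- higher priority = costlier to shed.
loads = [("hospital", 2.0, 10.0), ("factory", 3.0, 3.0),
         ("mall", 1.5, 2.0), ("housing", 2.5, 5.0)]
generation = 6.0  # MW available from the DG units after islanding

def best_shedding(loads, generation):
    """Cheapest (priority-weighted) set of loads to shed so that the
    remaining demand does not exceed the islanded generation."""
    best, best_cost = None, float("inf")
    total_demand = sum(l[1] for l in loads)
    for r in range(len(loads) + 1):
        for combo in combinations(loads, r):
            remaining = total_demand - sum(l[1] for l in combo)
            if remaining <= generation:              # feasibility constraint
                cost = sum(l[1] * l[2] for l in combo)
                if cost < best_cost:
                    best, best_cost = combo, cost
    return [l[0] for l in best], best_cost

print(best_shedding(loads, generation))  # (['factory'], 9.0)
```

Shedding the low-priority factory (3.0 MW at weight 3.0) restores the supply-demand balance at the lowest weighted cost, which is exactly the trade-off a load-priority-aware scheme is meant to capture.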

  18. Exploration mode affects visuohaptic integration of surface orientation.

    Science.gov (United States)

    Plaisier, Myrthe A; van Dam, Loes C J; Glowania, Catharina; Ernst, Marc O

    2014-11-20

    We experience the world mostly in a multisensory fashion using a combination of all of our senses. Depending on the modality we can select different exploration strategies for extracting perceptual information. For instance, using touch we can enclose an object in our hand to explore parts of the object in parallel. Alternatively, we can trace the object with a single finger to explore its parts in a serial fashion. In this study we investigated whether the exploration mode (parallel vs. serial) affects the way sensory signals are combined. To this end, participants visually and haptically explored surfaces that varied in roll angle and indicated which side of the surface was perceived as higher. In Experiment 1, the exploration mode was the same for both modalities (i.e., both parallel or both serial). In Experiment 2, we introduced a difference in exploration mode between the two modalities (visual exploration was parallel while haptic exploration was serial or vice versa). The results showed that visual and haptic signals were combined in a statistically optimal fashion only when the exploration modes were the same. In case of an asymmetry in the exploration modes across modalities, integration was suboptimal. This indicates that spatial-temporal discrepancies in the acquisition of information in the two senses (i.e., haptic and visual) can lead to the breakdown of sensory integration. © 2014 ARVO.
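    "Statistically optimal" combination in this literature usually means the maximum-likelihood rule: each cue is weighted inversely to its variance, and the fused estimate has lower variance than either cue alone. A numeric sketch with assumed noise levels (the variances and slant estimates below are illustrative, not data from the experiments):

```python
def mle_integrate(est_v: float, var_v: float, est_h: float, var_h: float):
    """Maximum-likelihood (reliability-weighted) fusion of a visual and a
    haptic estimate; returns the combined estimate and its variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)   # weight grows with reliability
    w_h = 1 - w_v
    combined = w_v * est_v + w_h * est_h
    combined_var = (var_v * var_h) / (var_v + var_h)  # always below both inputs
    return combined, combined_var

# Vision twice as reliable as touch (illustrative variances, deg^2).
est, var = mle_integrate(est_v=10.0, var_v=1.0, est_h=13.0, var_h=2.0)
print(est, var)  # approx. 11.0 and 0.667: pulled toward the more reliable cue
```

The breakdown of integration reported above corresponds to the fused variance failing to reach this predicted minimum when exploration modes differ across modalities.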

  19. Integrated algorithms for RFID-based multi-sensor indoor/outdoor positioning solutions

    Science.gov (United States)

    Zhu, Mi.; Retscher, G.; Zhang, K.

    2011-12-01

    Position information is very important, as people need it almost everywhere, all the time. However, it is a challenging task to provide precise positions indoors and outdoors seamlessly. Outdoor positioning has been widely studied, and accurate positions can usually be achieved by well developed GPS techniques, but these techniques are difficult to use indoors since GPS signal reception is limited. The alternative techniques that can be used for indoor positioning include, to name a few, Wireless Local Area Network (WLAN), Bluetooth and Ultra Wideband (UWB); however, all of these have limitations. The main objectives of this paper are to investigate and develop algorithms for a low-cost and portable indoor personal positioning system using Radio Frequency Identification (RFID) and its integration with other positioning systems. An RFID system consists of three components, namely a control unit, an interrogator and a transponder that transmits data and communicates with the reader. An RFID tag can be incorporated into a product, animal or person for the purpose of identification and tracking using radio waves. In general, for RFID positioning in urban and indoor environments three different methods can be used: cellular positioning, trilateration and location fingerprinting. In addition, the integration of RFID with other technologies is also discussed in this paper. A typical combination is to integrate RFID with relative positioning technologies such as MEMS INS to bridge the gaps between RFID tags for continuous positioning applications. Experiments are shown to demonstrate the improvements of integrating multiple sensors with RFID, which can be employed successfully for personal positioning.
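    Of the three methods named above, trilateration has a compact closed form in 2-D: subtracting one range equation from the others removes the quadratic terms, leaving a small linear system. A sketch with three reference tags; the coordinates and ranges are illustrative:

```python
def trilaterate(anchors, ranges):
    """2-D position from three reference tags at known positions and measured
    ranges, by linearizing the circle equations and applying Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Subtracting the first circle equation cancels the x^2 + y^2 terms.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # nonzero when the tags are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
ranges = [5**0.5, 5**0.5, 8**0.5]     # exact ranges from the point (2, 1)
print(trilaterate(anchors, ranges))   # approx. (2.0, 1.0)
```

With noisy ranges and more than three tags, the same linearization is typically solved in a least-squares sense instead.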

  20. Adaptive Multiview Nonnegative Matrix Factorization Algorithm for Integration of Multimodal Biomedical Data

    Directory of Open Access Journals (Sweden)

    Bisakha Ray

    2017-08-01

    Full Text Available The amounts and types of available multimodal tumor data are rapidly increasing, and their integration is critical for fully understanding the underlying cancer biology and personalizing treatment. However, the development of methods for effectively integrating multimodal data in a principled manner is lagging behind our ability to generate the data. In this article, we introduce an extension to a multiview nonnegative matrix factorization algorithm (NNMF) for dimensionality reduction and integration of heterogeneous data types and compare the predictive modeling performance of the method on unimodal and multimodal data. We also present a comparative evaluation of our novel multiview approach and current data integration methods. Our work provides an efficient method to extend an existing dimensionality reduction method. We report rigorous evaluation of the method on large-scale quantitative protein and phosphoprotein tumor data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC) acquired using state-of-the-art liquid chromatography mass spectrometry. Exome sequencing and RNA-Seq data were also available from The Cancer Genome Atlas for the same tumors. For unimodal data, in case of breast cancer, transcript levels were most predictive of estrogen and progesterone receptor status and copy number variation of human epidermal growth factor receptor 2 status. For ovarian and colon cancers, phosphoprotein and protein levels were most predictive of tumor grade and stage and residual tumor, respectively. When multiview NNMF was applied to multimodal data to predict outcomes, the improvement in performance is not overall statistically significant beyond unimodal data, suggesting that proteomics data may contain more predictive information regarding tumor phenotypes than transcript levels, probably due to the fact that proteins are the functional gene products and therefore a more direct measurement of the functional state of the tumor.
Here, we
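    A common multiview NMF formulation factors each view X_v as W H_v with a basis W shared across modalities, fitted by multiplicative updates summed over views. The sketch below illustrates that idea on random data; the shared-W formulation, dimensions, and update rules are a standard construction and an assumption here, not necessarily the authors' exact algorithm:

```python
import numpy as np

def multiview_nmf(views, k, iters=200, seed=0, eps=1e-9):
    """Joint NMF: each view X_v (n x d_v) is approximated as W @ H_v with a
    shared sample factor W, via Lee-Seung-style multiplicative updates."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    W = rng.random((n, k)) + eps
    Hs = [rng.random((k, X.shape[1])) + eps for X in views]
    for _ in range(iters):
        num = sum(X @ H.T for X, H in zip(views, Hs))
        den = W @ sum(H @ H.T for H in Hs) + eps
        W *= num / den                                # shared-factor update
        Hs = [H * (W.T @ X) / (W.T @ W @ H + eps)     # per-view updates
              for X, H in zip(views, Hs)]
    return W, Hs

rng = np.random.default_rng(1)
proteomics = rng.random((20, 12))    # e.g. protein levels (hypothetical data)
transcripts = rng.random((20, 30))   # e.g. scaled RNA-Seq values (hypothetical)
W, (Hp, Ht) = multiview_nmf([proteomics, transcripts], k=4)
err = sum(np.linalg.norm(X - W @ H) for X, H in [(proteomics, Hp), (transcripts, Ht)])
print(round(err, 3))  # joint reconstruction error after fitting
```

The shared W gives one low-dimensional representation per tumor sample that downstream predictive models can use in place of the concatenated raw modalities.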

  1. Exergoeconomic analysis and optimization of an Integrated Solar Combined Cycle System (ISCCS) using genetic algorithm

    International Nuclear Information System (INIS)

    Baghernejad, A.; Yaghoubi, M.

    2011-01-01

    Research highlights: → We applied the thermoeconomic concept for optimization of an Integrated Solar Combined Cycle System (ISCCS) using a genetic algorithm. → The optimization process improves the total performance of the system in a way that the objective function is decreased by 10.98%, the exergetic efficiency of the system is increased from about 43.79% to 46.8% and the rate of fuel cost is decreased by 7.23%. → The costs of electricity produced by the steam turbine and gas turbine in the optimum design condition of the ISCCS are about 7.1% and 1.17% lower, respectively, with respect to the base case. → Increasing solar field operation from 1000 to 2000 hours per year reduces the unit cost of electricity produced by the steam turbine by about 14%. → The unit cost of electricity increases linearly and markedly with fuel cost. Also, by increasing the system construction period from 3 to 6 years, the unit cost of electricity produced by the steam turbine increases by about 13%. -- Abstract: In this study, the thermoeconomic concept is applied using a genetic algorithm for optimization of an Integrated Solar Combined Cycle System (ISCCS) that produces 400 MW of electricity. An attempt is made to minimize an objective function including the investment cost of equipment and the cost of exergy destruction. The optimization process is carried out using exergoeconomic principles and a genetic algorithm. The developed code was first validated with a thermal system, and good agreement is observed. Then the analysis is made for the ISCCS, and it shows that the objective function for the optimum operation is reduced by about 11%. Also, the costs of electricity produced by the steam turbine and gas turbine in the optimum design of the ISCCS are about 7.1% and 1.17% lower, respectively, with respect to the base case. These objectives are achieved with a 13.3% increase in capital investment. Finally, a sensitivity analysis is carried out to study the effect of changes in the unit cost of electricity for the system's important parameters such as interest rate, plant

  2. Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm

    Directory of Open Access Journals (Sweden)

    Daeho Jang

    2015-09-01

    Full Text Available The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.
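
The abstract does not reproduce the sigmoid-asymmetric equation itself, so the sketch below uses an assumed functional form, a sigmoid step at the critical angle combined with an asymmetric dip at the resonance angle, purely to illustrate how a single fitted expression can yield both angles. All parameter values are invented.

```python
import math

def sigmoid_asymmetric(theta, a0, a1, theta_c, w_c, d, theta_r, w_r, alpha):
    """Illustrative sigmoid-asymmetric SPR reflectance model (an assumed
    form, not the paper's exact equation): a sigmoid step at the critical
    angle theta_c plus an asymmetric dip at the resonance angle theta_r,
    whose width varies from side to side via alpha."""
    step = a0 + a1 / (1.0 + math.exp(-(theta - theta_c) / w_c))
    w = w_r * (1.0 + alpha * math.tanh((theta - theta_r) / w_r))
    dip = d / (1.0 + ((theta - theta_r) / w) ** 2)
    return step - dip

# Evaluate the model on an angle grid and read off both angles at once.
thetas = [40.0 + 0.01 * i for i in range(3001)]  # 40-70 degrees
refl = [sigmoid_asymmetric(t, 0.2, 0.7, 45.0, 0.25, 0.85, 62.0, 1.5, 0.4)
        for t in thetas]

# Resonance angle: the reflectance minimum of the dip.
theta_res = thetas[refl.index(min(refl))]
# Critical angle: the steepest rising point of the sigmoid step.
slopes = [refl[i + 1] - refl[i] for i in range(len(refl) - 1)]
theta_crit = thetas[slopes.index(max(slopes))]
```

Because one fitted expression covers the whole curve, the critical angle can be subtracted from the resonance angle in real time, as the abstract describes.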

  3. Characterization and noninvasive diagnosis of bladder cancer with serum surface enhanced Raman spectroscopy and genetic algorithms

    Science.gov (United States)

    Li, Shaoxin; Li, Linfang; Zeng, Qiuyao; Zhang, Yanjiao; Guo, Zhouyi; Liu, Zhiming; Jin, Mei; Su, Chengkang; Lin, Lin; Xu, Junfa; Liu, Songhao

    2015-05-01

    This study aims to characterize and classify serum surface-enhanced Raman spectroscopy (SERS) spectra of bladder cancer patients and normal volunteers by genetic algorithms (GAs) combined with linear discriminant analysis (LDA). Serum SERS spectra excited with nanoparticles are collected from two groups: healthy volunteers (n = 36) and bladder cancer patients (n = 55). Six diagnostic Raman bands in the regions of 481-486, 682-687, 1018-1034, 1313-1323, 1450-1459 and 1582-1587 cm-1, related to proteins, nucleic acids and lipids, are picked out with the GAs and LDA. With the diagnostic models built on the identified six Raman bands, an improved diagnostic sensitivity of 90.9% and specificity of 100% are acquired for distinguishing bladder cancer patients from normal subjects based on serum SERS spectra. These results are superior to the sensitivity of 74.6% and specificity of 97.2% obtained with principal component analysis on the same serum SERS spectra dataset. Receiver operating characteristic (ROC) curves further confirm the efficiency of the diagnostic algorithm based on the GA-LDA technique. This exploratory work demonstrates that serum SERS combined with the GA-LDA technique has enormous potential to characterize and non-invasively detect bladder cancer through peripheral blood.
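
A minimal sketch of the LDA step, with two invented "diagnostic band" intensities per serum sample standing in for real SERS spectra (the GA band-selection stage is omitted). The Fisher discriminant is written out by hand for the two-feature case; all data and thresholds are assumptions.

```python
import random

random.seed(0)

# Synthetic intensities at two diagnostic bands for two groups
# (illustrative stand-ins for the study's spectra, not real data).
normal = [(random.gauss(1.0, 0.2), random.gauss(2.0, 0.2)) for _ in range(40)]
cancer = [(random.gauss(1.6, 0.2), random.gauss(1.4, 0.2)) for _ in range(40)]

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(2)]

def scatter(vs, m):
    """Within-class scatter matrix contribution of one class."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vs:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

m1, m2 = mean(normal), mean(cancer)
s1, s2 = scatter(normal, m1), scatter(cancer, m2)
sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]

# Fisher direction w = Sw^-1 (m1 - m2), with the 2x2 inverse written out.
det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
dm = [m1[0] - m2[0], m1[1] - m2[1]]
w = [(sw[1][1] * dm[0] - sw[0][1] * dm[1]) / det,
     (-sw[1][0] * dm[0] + sw[0][0] * dm[1]) / det]

def proj(v):
    return w[0] * v[0] + w[1] * v[1]

# Threshold halfway between the projected class means.
thr = (proj(m1) + proj(m2)) / 2.0
correct = sum(proj(v) > thr for v in normal) + sum(proj(v) <= thr for v in cancer)
accuracy = correct / 80.0
```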

  4. NASA/GEWEX Surface Radiation Budget: First Results From The Release 4 GEWEX Integrated Data Products

    Science.gov (United States)

    Stackhouse, Paul; Cox, Stephen; Gupta, Shashi; Mikovitz, J. Colleen; zhang, taiping

    2016-04-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top of atmosphere radiative fluxes for the period from 1983 to the near present. Spatial resolution is 1 degree. The current release 3 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number should help improve the RMS of the existing products and allow for a future higher-resolution SRB gridded product (e.g., 0.5 degree). In addition to the input data improvements, several important algorithm improvements have been made. Most notable has been the adaptation of Angular Distribution Models (ADMs) from CERES to improve the initial calculation of shortwave TOA fluxes, from which the surface flux calculations follow. Other key input improvements include a detailed aerosol history using the Max Planck Institut Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data, the various other improved input data sets and the incorporation of many additional internal SRB model improvements. As of the time of abstract submission, results from 2007 have been produced, with ISCCP H availability the limiting factor. More SRB data will be produced as ISCCP reprocessing continues. The SRB data produced will be released as part of the Release 4.0 Integrated Product, recognizing the interdependence of the radiative fluxes with other GEWEX products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).

  5. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    Science.gov (United States)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  6. Application of response surface methodology (RSM) and genetic algorithm in minimizing warpage on side arm

    Science.gov (United States)

    Raimee, N. A.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    The plastic injection moulding process produces large numbers of high-quality parts quickly and with great accuracy, and is widely used for the production of plastic parts with various shapes and geometries. The side arm is one product manufactured by injection moulding. However, there are difficulties in adjusting the process parameters, namely mould temperature, melt temperature, packing pressure, packing time and cooling time, as warpage occurs at the tip of the side arm. Therefore, the work reported herein minimizes warpage of the side arm product by optimizing the process parameters using Response Surface Methodology (RSM) together with an artificial intelligence (AI) method, the Genetic Algorithm (GA).
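
A sketch of the optimization stage under stated assumptions: the quadratic response-surface model of warpage, its coefficients, and the parameter bounds below are all invented (the study fits its own RSM from moulding simulations), and only two of the five process parameters are shown. The GA is a minimal real-coded variant with elitist selection, blend crossover and Gaussian mutation.

```python
import random

random.seed(1)

def warpage(melt_temp, pack_press):
    """Hypothetical quadratic response-surface model of warpage (mm);
    coefficients are invented for illustration only."""
    x = (melt_temp - 230.0) / 20.0
    y = (pack_press - 80.0) / 15.0
    return 0.25 + 0.40 * x * x + 0.30 * y * y + 0.10 * x * y

BOUNDS = [(200.0, 260.0), (50.0, 110.0)]  # melt temp (C), packing pressure (MPa)

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

# Minimal real-coded GA: keep the 10 fittest, breed 20 children per
# generation by averaging two elites and adding Gaussian mutation.
pop = [[random.uniform(*b) for b in BOUNDS] for _ in range(30)]
for gen in range(60):
    scored = sorted(pop, key=lambda p: warpage(*p))
    elite = scored[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        child = [clamp((ai + bi) / 2 + random.gauss(0, 2.0), lo, hi)
                 for ai, bi, (lo, hi) in zip(a, b, BOUNDS)]
        children.append(child)
    pop = elite + children

best = min(pop, key=lambda p: warpage(*p))
best_warpage = warpage(*best)
```

With this model the minimum warpage of 0.25 mm lies at 230 C and 80 MPa, and the elitist GA should land close to it.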

  7. Simulation of Gravity Wave Propagation in Free Surface Flows by an Incompressible SPH Algorithm

    International Nuclear Information System (INIS)

    Amanifard, N.; Mahnama, S. M.; Neshaei, S. A. L.; Mehrdad, M. A.; Farahani, M. H.

    2012-01-01

    This paper presents an incompressible smoothed particle hydrodynamics model to simulate wave propagation in a free surface flow. The Navier-Stokes equations are solved in a Lagrangian framework using a three-step fractional method. In the first step, a temporary velocity field is provided according to the relevant body forces. This velocity field is renewed in the second step to include the viscosity effects. A Poisson equation is employed in the third step as an alternative to the equation of state in order to evaluate pressure. This Poisson equation considers a trade-off between density and pressure, which is utilized in the third step to impose the incompressibility effect. The computations are compared with experimental as well as numerical data and good agreement is observed. In order to validate the proposed algorithm, a dam-break problem is solved as a benchmark and the computational results are compared with previous numerical ones.
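
The projection structure of the third step can be illustrated on a small periodic grid (a Eulerian stand-in for this sketch; the paper applies the idea to Lagrangian SPH particles): after the body-force and viscosity steps produce an intermediate velocity, a pressure Poisson equation is solved and the velocities are corrected so the field becomes divergence-free. The grid size, solver and velocity field here are all assumptions.

```python
import random

random.seed(2)
N = 16  # periodic N x N grid, unit spacing

def divergence(u, v):
    """Forward-difference divergence on the periodic grid."""
    return [[u[(i + 1) % N][j] - u[i][j] + v[i][(j + 1) % N] - v[i][j]
             for j in range(N)] for i in range(N)]

# Steps 1-2 stand-in: an intermediate velocity field after body forces
# and viscosity have been applied (random values for illustration).
u = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
v = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

div = divergence(u, v)
max_div_before = max(abs(d) for row in div for d in row)

# Step 3: solve the pressure Poisson equation lap(p) = div with
# Gauss-Seidel sweeps, then subtract the pressure gradient so the
# corrected field is (nearly) divergence-free.
p = [[0.0] * N for _ in range(N)]
for _ in range(400):
    for i in range(N):
        for j in range(N):
            p[i][j] = (p[(i + 1) % N][j] + p[(i - 1) % N][j] +
                       p[i][(j + 1) % N] + p[i][(j - 1) % N] - div[i][j]) / 4.0

for i in range(N):
    for j in range(N):
        u[i][j] -= p[i][j] - p[(i - 1) % N][j]
        v[i][j] -= p[i][j] - p[i][(j - 1) % N]

max_div_after = max(abs(d) for row in divergence(u, v) for d in row)
```

The backward-difference pressure gradient is chosen to pair with the forward-difference divergence, so the correction removes exactly the 5-point discrete Laplacian of p from the divergence.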

  8. An improved algorithm for the polycrystal viscoplastic self-consistent model and its integration with implicit finite element schemes

    International Nuclear Information System (INIS)

    Galán, J; Verleysen, P; Lebensohn, R A

    2014-01-01

    A new algorithm for the solution of the deformation of a polycrystalline material using a self-consistent scheme, and its integration as part of the finite element software Abaqus/Standard are presented. The method is based on the original VPSC formulation by Lebensohn and Tomé and its integration with Abaqus/Standard by Segurado et al. The new algorithm has been implemented as a set of Fortran 90 modules, to be used either from a standalone program or from Abaqus subroutines. The new implementation yields the same results as VPSC7, but with a significantly better performance, especially when used in multicore computers. (paper)

  9. Surface profile measurement by using the integrated Linnik WLSI and confocal microscope system

    Science.gov (United States)

    Wang, Wei-Chung; Shen, Ming-Hsing; Hwang, Chi-Hung; Yu, Yun-Ting; Wang, Tzu-Fong

    2017-06-01

    The white-light scanning interferometer (WLSI) and confocal microscope (CM) are the two major optical inspection systems for measuring three-dimensional (3D) surface profiles (SP) of micro specimens. Nevertheless, in practical applications, WLSI is more suitable for measuring smooth and low-slope surfaces, while CM is more suitable for measuring uneven-reflective and low-reflective surfaces. The characteristics of WLSI and CM also differ with respect to the surface profiles to be measured: WLSI is generally used in the semiconductor industry, while CM is more popular in the printed circuit board industry. In this paper, a self-assembled multi-function optical system was integrated to perform Linnik white-light scanning interferometry (Linnik WLSI) and CM. A connecting part composed of tubes, lenses and an interferometer was used to connect the finite and infinite optical systems for Linnik WLSI and CM in the self-assembled optical system. By adopting the flexibility of tubes and lenses, switching between the two different optical measurements can be easily achieved. Furthermore, based on the shape-from-focus method with an energy of Laplacian filter, the CM was developed to enhance the in-focus information of each pixel so that the CM can provide an all-in-focus image for performing 3D SP measurement and analysis simultaneously. As for Linnik WLSI, an eleven-step phase-shifting algorithm was used to analyze the vertical scanning signals and determine the 3D SP.
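
The shape-from-focus step can be sketched as follows, assuming a simple 4-neighbour Laplacian as the focus measure and a synthetic 3-slice stack; the actual CM implementation operates on real image stacks and typically on windowed focus measures.

```python
def laplacian_energy(img, x, y):
    """Energy of Laplacian focus measure at one pixel (4-neighbour stencil)."""
    lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
           - img[y][x - 1] - img[y][x + 1])
    return lap * lap

def depth_from_focus(stack):
    """Per-pixel depth index: the z-slice with the highest Laplacian
    energy, i.e. the slice where that pixel is sharpest."""
    h, w = len(stack[0]), len(stack[0][0])
    depth = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            energies = [laplacian_energy(img, x, y) for img in stack]
            depth[y][x] = energies.index(max(energies))
    return depth

# Synthetic 3-slice stack: a checkerboard texture whose contrast (and
# hence sharpness) peaks in slice 1 and is weaker elsewhere.
def slice_with_contrast(c):
    return [[c * ((x + y) % 2) for x in range(8)] for y in range(8)]

stack = [slice_with_contrast(0.1), slice_with_contrast(1.0),
         slice_with_contrast(0.2)]
depth = depth_from_focus(stack)
```

Every interior pixel should select slice 1, the slice with the highest contrast; reading the selected index per pixel yields the all-in-focus depth map.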

  10. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    Science.gov (United States)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures in real-world operations.

  11. Technical note: Efficient online source identification algorithm for integration within a contamination event management system

    Science.gov (United States)

    Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai

    2017-07-01

    Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.

  12. A STUDY OF DEAD-RECKONING ALGORITHM FOR MECANUM WHEEL BASED MOBILE ROBOT UNDER VARIOUS TYPES OF ROAD SURFACE

    OpenAIRE

    上町, 亮介; KAMMACHI, Ryosuke

    2015-01-01

    In this paper, we describe a study of a dead-reckoning algorithm for a mecanum wheel based mobile robot under various types of road surface. A mecanum wheel based mobile robot can move omnidirectionally by utilizing tire-road surface friction; therefore, depending on the road surface condition, it is difficult to estimate an accurate self-position by applying the conventional dead-reckoning method. In order to overcome the inaccuracy of the conventional dead-reckoning method for mecanum wheel based mobil...

  13. High-resolution random mesh algorithms for creating a probabilistic 3D surface atlas of the human brain.

    Science.gov (United States)

    Thompson, P M; Schwartz, C; Toga, A W

    1996-02-01

    Striking variations exist, across individuals, in the internal and external geometry of the brain. Such normal variations in the size, orientation, topology, and geometric complexity of cortical and subcortical structures have complicated the problem of quantifying deviations from normal anatomy and of developing standardized neuroanatomical atlases. This paper describes the design, implementation, and results of a technique for creating a three-dimensional (3D) probabilistic surface atlas of the human brain. We have developed, implemented, and tested a new 3D statistical method for assessing structural variations in a database of anatomic images. The algorithm enables the internal surface anatomy of new subjects to be analyzed at an extremely local level. The goal was to quantify subtle and distributed patterns of deviation from normal anatomy by automatically generating detailed probability maps of the anatomy of new subjects. Connected systems of parametric meshes were used to model the internal course of the following structures in both hemispheres: the parieto-occipital sulcus, the anterior and posterior rami of the calcarine sulcus, the cingulate and marginal sulci, and the supracallosal sulcus. These sulci penetrate sufficiently deeply into the brain to introduce an obvious topological decomposition of its volume architecture. A family of surface maps was constructed, encoding statistical properties of local anatomical variation within individual sulci. A probability space of random transformations, based on the theory of Gaussian random fields, was developed to reflect the observed variability in stereotaxic space of the connected system of anatomic surfaces. A complete system of probability density functions was computed, yielding confidence limits on surface variation. The ultimate goal of brain mapping is to provide a framework for integrating functional and anatomical data across many subjects and modalities. This task requires precise quantitative

  14. Adaptive integral dynamic surface control of a hypersonic flight vehicle

    Science.gov (United States)

    Aslam Butt, Waseem; Yan, Lin; Amezquita S., Kendrick

    2015-07-01

    In this article, non-linear adaptive dynamic surface air speed and flight path angle control designs are presented for the longitudinal dynamics of a flexible hypersonic flight vehicle. The tracking performance of the control design is enhanced by introducing a novel integral term that caters to avoiding a large initial control signal. To ensure feasibility, the design scheme incorporates magnitude and rate constraints on the actuator commands. The uncertain non-linear functions are approximated by an efficient use of the neural networks to reduce the computational load. A detailed stability analysis shows that all closed-loop signals are uniformly ultimately bounded and the ? tracking performance is guaranteed. The robustness of the design scheme is verified through numerical simulations of the flexible flight vehicle model.

  15. A physics-based algorithm for retrieving land-surface emissivity and temperature from EOS/MODIS data

    International Nuclear Information System (INIS)

    Wan, Z.; Li, Z.L.

    1997-01-01

    The authors have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 µm IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K

  16. Thermal weapon sights with integrated fire control computers: algorithms and experiences

    Science.gov (United States)

    Rothe, Hendrik; Graswald, Markus; Breiter, Rainer

    2008-04-01

    The HuntIR long range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as a part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with an integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40mm GMG requires most accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to faint effects like levelling or firing up/downhill. This weapon was therefore selected to validate the quality of the FCU hard- and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically, and the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typically in the range of 0.1-0.5 seconds. RangIR provided outstanding hit accuracy including ABM fuze timing in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
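
A much-reduced illustration of the computation described above: a plain point-mass trajectory with quadratic drag (all constants invented; STANAG 4355's modified point-mass model adds terms such as spin, lift and Coriolis that are omitted here), with the two-point boundary value problem solved iteratively by bisection on the elevation angle.

```python
import math

G = 9.81

def trajectory_range(angle, v0=240.0, k=0.0003, dt=0.002):
    """Flat-ground range of a point mass with quadratic drag, integrated
    with explicit Euler steps. v0 and k are assumed values, not real
    40mm GMG ballistic data."""
    x, y = 0.0, 0.0
    vx = v0 * math.cos(angle)
    vy = v0 * math.sin(angle)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        x += vx * dt
        y += vy * dt
        vx -= k * v * vx * dt
        vy += (-G - k * v * vy) * dt
    return x

def elevation_for_range(target, lo=0.01, hi=math.radians(44.0)):
    """Iteratively solve the two-point boundary value problem: bisect on
    elevation along the low-angle branch, where range grows with angle."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if trajectory_range(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

angle = elevation_for_range(1500.0)  # elevation (rad) to reach 1500 m
achieved = trajectory_range(angle)
```

A production FCU would additionally solve for fuze time along the converged trajectory, which is what enables air burst ammunition timing.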

  17. A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Xin Li

    2015-09-01

    Full Text Available This paper proposes two schemes for indoor positioning by fusing Bluetooth beacons and a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. As to the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. According to pedestrians’ different walking patterns such as walking or running, this paper makes a comparative analysis of multiple step length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, which is based on a Kalman filter with map geometry information. The corrected heading can inhibit the positioning error accumulation and improve the positioning accuracy of PDR. Moreover, this paper has implemented two positioning approaches integrated with Bluetooth and PDR. One is the PDR-based positioning method based on map matching and position correction through Bluetooth. There will not be too much calculation work or too high maintenance costs using this method. The other method is a fusion calculation method based on the pedestrians’ moving status (direct movement or making a turn) to determine adaptively the noise parameters in an Extended Kalman Filter (EKF) system. This method has worked very well in the elimination of various phenomena, including the “go and back” phenomenon caused by the instability of the Bluetooth-based positioning system and the “cross-wall” phenomenon due to the accumulative errors caused by the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building in the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve a 2-meter precision.

  18. A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System.

    Science.gov (United States)

    Li, Xin; Wang, Jian; Liu, Chunyan

    2015-09-25

    This paper proposes two schemes for indoor positioning by fusing Bluetooth beacons and a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. As to the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. According to pedestrians' different walking patterns such as walking or running, this paper makes a comparative analysis of multiple step length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, which is based on a Kalman filter with map geometry information. The corrected heading can inhibit the positioning error accumulation and improve the positioning accuracy of PDR. Moreover, this paper has implemented two positioning approaches integrated with Bluetooth and PDR. One is the PDR-based positioning method based on map matching and position correction through Bluetooth. There will not be too much calculation work or too high maintenance costs using this method. The other method is a fusion calculation method based on the pedestrians' moving status (direct movement or making a turn) to determine adaptively the noise parameters in an Extended Kalman Filter (EKF) system. This method has worked very well in the elimination of various phenomena, including the "go and back" phenomenon caused by the instability of the Bluetooth-based positioning system and the "cross-wall" phenomenon due to the accumulative errors caused by the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building in the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve a 2-meter precision.
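
The multi-threshold step detection can be sketched as follows; the thresholds, sample rate and synthetic accelerometer signal are all assumptions for illustration, not values from the paper.

```python
import math

def detect_steps(acc_mag, high=11.5, low=10.0, min_gap=20):
    """Multi-threshold step detector (thresholds in m/s^2 are assumed):
    a step is counted when the acceleration magnitude rises above `high`
    and later falls back below `low`, with at least `min_gap` samples
    between successive step onsets to reject double triggers."""
    steps, last_peak, armed = [], -min_gap, False
    for i, a in enumerate(acc_mag):
        if not armed and a > high and i - last_peak >= min_gap:
            armed, last_peak = True, i
        elif armed and a < low:
            steps.append(last_peak)
            armed = False
    return steps

# Synthetic accelerometer magnitude: gravity baseline plus six positive
# half-sine lobes at a 2 Hz walking cadence (sampled at 50 Hz for 3 s).
fs = 50
sig = [9.81 + 3.0 * max(0.0, math.sin(2 * math.pi * 2.0 * t / fs))
       for t in range(fs * 3)]
steps = detect_steps(sig)
```

Each detected step index would then feed the step-length model and heading estimate to advance the PDR position.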

  19. Hybrid of Natural Element Method (NEM) with Genetic Algorithm (GA) to find critical slip surface

    Directory of Open Access Journals (Sweden)

    Shahriar Shahrokhabadi

    2014-06-01

    Full Text Available One of the most important issues in geotechnical engineering is slope stability analysis for determining the factor of safety and the probable slip surface. The Finite Element Method (FEM) is well suited for the numerical study of advanced geotechnical problems. However, the mesh requirements of FEM create some difficulties for solution processing in certain problems. Recently, motivated by these limitations, several new meshfree methods such as the Natural Element Method (NEM) have been used to analyze engineering problems. This paper presents the advantages of using NEM in 2D slope stability analysis and Genetic Algorithm (GA) optimization to determine the probable slip surface and the related factor of safety. The stress field is produced under plane strain conditions using the natural element formulation to simulate material behavior, and is utilized in conjunction with a conventional limit equilibrium method. In order to justify the preciseness and convergence of the proposed method, two kinds of examples, homogeneous and non-homogeneous, are analyzed and the results are compared with FEM and conventional limit equilibrium methods. The results show the robustness of the NEM in slope stability analysis.

  20. A full waveform tomography algorithm for teleseismic body and surface waves in 2.5 dimensions

    Science.gov (United States)

    Baker, B.; Roecker, S.

    2014-09-01

    We describe a 2.5-D, frequency domain, viscoelastic waveform tomography algorithm for imaging with seismograms of teleseismic body and surface waves recorded by quasi-linear arrays. The equations of motion are discretized with p-adaptive finite elements that allow for geometric flexibility and accurate solutions as a function of wavelength. Artificial forces are introduced into the media by specifying a known wavefield along the model edges and solving for the corresponding scattered field. Because of the relatively low frequency content of teleseismic data, regional scale tectonic settings can be parametrized with a modest number of variables and perturbations can be determined directly from a regularized Gauss-Newton system of equations. Waveforms generated by the forward problem compare well with analytic solutions for simple 1-D and 2-D media. Tests of different approaches to the inverse problem show that the use of an approximate Hessian serves to properly focus the scattered field. We also find that while full waveform inversion can provide significantly better resolution than standard techniques for both body and surface wave tomography modelled individually, joint inversion both enhances resolution and mitigates potential artefacts.
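
The regularized Gauss-Newton update at the heart of the inversion can be shown on a toy nonlinear least-squares problem (an invented exponential model, not seismic waveforms): each iteration solves damped normal equations, as the tomography does at vastly larger scale.

```python
import math

def gauss_newton(xs, ys, a, b, lam=0.05, iters=60):
    """Damped Gauss-Newton for the toy model y = a*exp(b*x): each
    iteration solves (J^T J + lam*I) dp = -J^T r for the parameter
    update, the same regularized normal-equation structure used in the
    tomographic Gauss-Newton system."""
    for _ in range(iters):
        r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]  # residuals
        ja = [math.exp(b * x) for x in xs]          # dr/da
        jb = [a * x * math.exp(b * x) for x in xs]  # dr/db
        g00 = sum(j * j for j in ja) + lam
        g11 = sum(j * j for j in jb) + lam
        g01 = sum(p * q for p, q in zip(ja, jb))
        ga = sum(j * ri for j, ri in zip(ja, r))
        gb = sum(j * ri for j, ri in zip(jb, r))
        det = g00 * g11 - g01 * g01  # 2x2 solve written out
        a -= (g11 * ga - g01 * gb) / det
        b -= (-g01 * ga + g00 * gb) / det
    return a, b

# Synthetic noise-free observations from known parameters (2.0, -1.5).
xs = [0.1 * i for i in range(10)]
ys = [2.0 * math.exp(-1.5 * x) for x in xs]
a_fit, b_fit = gauss_newton(xs, ys, a=1.5, b=-1.2)
```

In the full problem the "parameters" are the viscoelastic model perturbations and the Jacobian comes from the finite-element forward solves, but the update equation has the same form.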

  1. Identification of novel adhesins of M. tuberculosis H37Rv using integrated approach of multiple computational algorithms and experimental analysis.

    Directory of Open Access Journals (Sweden)

    Sanjiv Kumar

    Full Text Available Pathogenic bacteria interacting with a eukaryotic host express adhesins on their surface. These adhesins aid in bacterial attachment to host cell receptors during colonization. A few adhesins such as Heparin binding hemagglutinin adhesin (HBHA), Apa and Malate Synthase of M. tuberculosis have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach using SPAAN for predicting adhesins, PSORTb, SubLoc and LocTree for extracellular localization, and BLAST for verifying non-similarity to human proteins. These are among the first steps of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309 and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.
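
The sequential screening described above can be sketched as a simple filter pipeline; the candidate records, scores and thresholds below are invented stand-ins for the outputs of SPAAN (adhesin probability), the localization predictors, BLAST against human proteins, and the molecular-weight criterion.

```python
# Hypothetical candidate records; all identifiers and values are
# invented for illustration, not real M. tuberculosis predictions.
candidates = [
    {"id": "RvA", "p_adhesin": 0.82, "loc": "extracellular", "human_hit": False, "mw_kda": 24.0},
    {"id": "RvB", "p_adhesin": 0.91, "loc": "cytoplasmic",   "human_hit": False, "mw_kda": 30.0},
    {"id": "RvC", "p_adhesin": 0.77, "loc": "membrane",      "human_hit": True,  "mw_kda": 22.0},
    {"id": "RvD", "p_adhesin": 0.64, "loc": "extracellular", "human_hit": False, "mw_kda": 95.0},
    {"id": "RvE", "p_adhesin": 0.88, "loc": "membrane",      "human_hit": False, "mw_kda": 19.5},
]

def shortlist(cands, p_min=0.7, mw_max=40.0):
    """Apply the screening filters in sequence: adhesin score, surface
    localization, no similarity to human proteins, low molecular weight."""
    keep = []
    for c in cands:
        if (c["p_adhesin"] >= p_min
                and c["loc"] in ("extracellular", "membrane")
                and not c["human_hit"]
                and c["mw_kda"] <= mw_max):
            keep.append(c["id"])
    return keep

selected = shortlist(candidates)
```

The study additionally reconciles conflicting predictions through an argumentative framework before experimental (ELISA) confirmation, which a plain conjunction of filters like this does not capture.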

  2. Adaptive Fuzzy Integral Sliding-Mode Regulator for Induction Motor Using Nonlinear Sliding Surface

    OpenAIRE

    Yong-Kun Lu

    2015-01-01

    An adaptive fuzzy integral sliding-mode controller using nonlinear sliding surface is designed for the speed regulator of a field-oriented induction motor drive in this paper. Combining the conventional integral sliding surface with fractional-order integral, a nonlinear sliding surface is proposed for the integral sliding-mode speed control, which can overcome the windup problem and the convergence speed problem. An adaptive fuzzy control term is utilized to approximate the uncertainty. The ...

  3. Integrating optical finger motion tracking with surface touch events

    Science.gov (United States)

    MacRitchie, Jennifer; McPherson, Andrew P.

    2015-01-01

    This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attached to the key surfaces, which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterization of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction. PMID:26082732
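
The temporal-alignment step of the fusion can be sketched as linear-interpolation resampling of one stream onto the other's timestamps; the frame rates and marker values below are assumed for illustration.

```python
def resample(ts_src, vals_src, ts_dst):
    """Linearly interpolate one sensor stream onto another's timestamps,
    the temporal-alignment step needed before fusing camera-marker and
    touch-sensor data. Both timestamp lists must be sorted ascending."""
    out, k = [], 0
    for t in ts_dst:
        while k + 1 < len(ts_src) and ts_src[k + 1] < t:
            k += 1
        j = min(k + 1, len(ts_src) - 1)
        t0, t1 = ts_src[k], ts_src[j]
        v0, v1 = vals_src[k], vals_src[j]
        if t1 == t0:
            out.append(v0)
        else:
            w = (t - t0) / (t1 - t0)
            out.append(v0 + w * (v1 - v0))
    return out

# Camera stream at 500 fps vs touch frames at 125 Hz (assumed rates).
cam_t = [i / 500.0 for i in range(11)]  # 0 .. 0.02 s
cam_x = [2.0 * t for t in cam_t]        # marker position, linear motion
touch_t = [0.0, 0.008, 0.016]
aligned = resample(cam_t, cam_x, touch_t)
```

Once both streams share a time base, events such as finger-key contact can be compared sample-for-sample against marker trajectories.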

  4. SURFACE INTEGRITY EVALUATION OF TURNING WITH AUTO-ROTATING TOOL

    Directory of Open Access Journals (Sweden)

    Jozef Struharnansky

    2016-09-01

    Full Text Available Technical practice places ever higher demands on the productivity, speed and quality of machining processes for a wide range of materials. Hard-to-machine materials are no exception; their machining led to the development of turning with a rotating cutting edge. The machining process with an auto-rotating tool is more complicated than conventional turning, particularly in how the cut layer is reshaped into chips. A significant load arises in the system, which may affect the life of the cutting edge, the system as a whole, and ultimately the qualitative parameters of the workpiece (product). The article presents knowledge and findings from measurements taken while machining 100Cr6 material with an auto-rotating tool. The measurements were conducted to evaluate the surface integrity (roughness) of the workpiece in response to the cutting conditions, in particular the feed and the cutting edge inclination. It also analyzes the presence (size, character and action) of residual stresses concentrated in the surface layers of the workpiece as the cutting conditions change.

  5. Integrating optical finger motion tracking with surface touch events

    Directory of Open Access Journals (Sweden)

    Jennifer eMacRitchie

    2015-06-01

    Full Text Available This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attached to the key surfaces which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterisation of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction.

  6. Programmer's guide to the fuzzy logic ramp metering algorithm : software design, integration, testing, and evaluation

    Science.gov (United States)

    2000-02-01

    A Fuzzy Logic Ramp Metering Algorithm was implemented on 126 ramps in the greater Seattle area. This report documents the implementation of the Fuzzy Logic Ramp Metering Algorithm at the Northwest District of the Washington State Department of Transp...

  7. About the Use of the HdHr Algorithm Group in Integrating the Movement Equation with Nonlinear Terms

    Directory of Open Access Journals (Sweden)

    Heitor Miranda Bottura

    2009-01-01

    Full Text Available This work summarizes the HdHr group of Hermitian integration algorithms for dynamic structural analysis applications. It proposes a procedure for their use when nonlinear terms are present in the equilibrium equation. The simple pendulum problem is solved as a first example and the numerical results are discussed. Directions to be pursued in future research are also mentioned.
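
    The HdHr schemes themselves are not given in the abstract. As a generic illustration of the underlying task, the simple pendulum problem cited above can be integrated with a standard scheme; classical RK4 is used here purely as a hedged stand-in, not as the Hermitian algorithm.

```python
import math

# Integrate the nonlinear pendulum theta'' = -(g/L) * sin(theta) as a
# first-order system with classical RK4 (a generic stand-in, not HdHr).

def pendulum_rhs(state, g=9.81, L=1.0):
    theta, omega = state
    return (omega, -(g / L) * math.sin(theta))

def rk4_step(state, h):
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = pendulum_rhs(state)
    k2 = pendulum_rhs(add(state, k1, h / 2))
    k3 = pendulum_rhs(add(state, k2, h / 2))
    k4 = pendulum_rhs(add(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def simulate(theta0, t_end=2.0, h=0.001):
    state = (theta0, 0.0)
    for _ in range(int(t_end / h)):
        state = rk4_step(state, h)
    return state

if __name__ == "__main__":
    theta, omega = simulate(math.radians(30))
    # Total energy per unit mass (L = 1) should be nearly conserved.
    E0 = 9.81 * (1 - math.cos(math.radians(30)))
    E = 0.5 * omega ** 2 + 9.81 * (1 - math.cos(theta))
    print(abs(E - E0) < 1e-6)
```

Energy conservation over the run is a convenient sanity check when comparing integration schemes on this benchmark.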

  8. Control of an Autonomous Radio-Controlled Helicopter in a Modified Simulation Environment Using Proportional Integral Derivative Algorithms

    National Research Council Canada - National Science Library

    Brown, Ainsmar X; Garcia, Richard D

    2008-01-01

    .... A proportional integral derivative control algorithm was modeled in MathWorks Simulink and communicates to a flight simulator modeling a physical radio-controlled helicopter. Waypoint navigation and flight-envelope testing were then systematically evaluated to the final goal of a feasible autopilot design.

  9. An advanced algorithm for construction of Integral Transport Matrix Method operators using accumulation of single cell coupling factors

    International Nuclear Information System (INIS)

    Powell, B. P.; Azmy, Y. Y.

    2013-01-01

    The Integral Transport Matrix Method (ITMM) has been shown to be an effective method for solving the neutron transport equation in large domains on massively parallel architectures. In the limit of a very large number of processors, the speed of the algorithm, and its suitability for unstructured meshes, i.e. other than an ordered Cartesian grid, is limited by the construction of four matrix operators required for obtaining the solution in each sub-domain. The existing algorithm used for construction of these matrix operators, termed the differential mesh sweep, is computationally expensive and was developed for a structured grid. This work proposes a new algorithm for construction of these operators based on a single, fundamental matrix representing the transport of a particle along every possible path throughout the sub-domain mesh. Each of the operators is constructed by multiplying an element of this fundamental matrix by two factors dependent only upon the operator being constructed and on properties of the emitting and incident cells. The ITMM matrix operator construction time for the new algorithm is demonstrated to be shorter than for the existing algorithm in all tested cases, with both isotropic and anisotropic scattering considered. While also being more efficient on a structured Cartesian grid, the new algorithm is promising in its geometric robustness and potential for being applied to an unstructured mesh, with the ultimate goal of application to an unstructured tetrahedral mesh on a massively parallel architecture. (authors)

  10. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    Science.gov (United States)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at
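
    The mechanics of a non-intrusive chaos expansion can be sketched in a few lines. A cheap toy model stands in for an expensive simulator such as HydroGeoSphere; the Hermite basis and least-squares fit below are standard textbook ingredients, not the authors' code.

```python
import math
import numpy as np

# Hedged sketch of non-intrusive polynomial chaos: represent a model output
# y(xi), with xi ~ N(0,1), as a series in probabilists' Hermite polynomials,
# fit the coefficients by least squares on model evaluations, then read the
# mean and variance directly off the coefficients.

def model(xi):
    return np.exp(0.1 * xi)  # toy stand-in for an expensive simulator

def fit_pce(degree=4, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)
    V = np.polynomial.hermite_e.hermevander(xi, degree)  # He_0 .. He_degree
    coeffs, *_ = np.linalg.lstsq(V, model(xi), rcond=None)
    return coeffs

def pce_mean_var(coeffs):
    # Under N(0,1): E[He_k] = 0 for k > 0 and E[He_j He_k] = k! * delta_jk,
    # so the mean is c_0 and the variance is sum_k>0 c_k^2 * k!.
    mean = coeffs[0]
    var = sum(c ** 2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
    return mean, var

if __name__ == "__main__":
    mean, var = pce_mean_var(fit_pce())
    print(round(float(mean), 4))  # exact mean is exp(0.005) ≈ 1.005
```

Once fitted, statistics come from the coefficients alone, with no further model runs, which is what makes the approach attractive for long-running models.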

  11. THERAPEUTIC EYELIDS HYGIENE IN THE ALGORITHMS OF PREVENTION AND TREATMENT OF OCULAR SURFACE DISEASES. PART II

    Directory of Open Access Journals (Sweden)

    V. N. Trubilin

    2016-01-01

    problem of modern ophthalmology. Part 1 — Trubilin VN, Poluninа EG, Kurenkov VV, Kapkova SG, Markova EY, Therapeutic eyelids hygiene in the algorithms of prevention and treatment of ocular surface diseases. Ophthalmology in Russia. 2016;13(2):122–127. doi: 10.18008/1816–5095–2016–2–122–127

  12. Analysis of Leaky Modes in Photonic Crystal Fibers Using the Surface Integral Equation Method

    Directory of Open Access Journals (Sweden)

    Jung-Sheng Chiang

    2018-04-01

    Full Text Available A fully vectorial algorithm based on the surface integral equation method for the modelling of leaky modes in photonic crystal fibers (PCFs, by solely solving the complex propagation constants of the characteristic equations, is presented. It can be used for calculations of the complex effective index and confinement losses of photonic crystal fibers. As complex root examination is the key technique in the solution, the new algorithm, which possesses this technique, can be used to solve the leaky modes of photonic crystal fibers. The leaky modes of solid-core PCFs with a hexagonal lattice of circular air-holes are reported and discussed. The simulation results indicate how the confinement loss, given by the imaginary part of the effective index, changes with air-hole size, the number of rings of air-holes, and wavelength. Confinement loss reductions can be realized by increasing the air-hole size and the number of air-holes. The results show that the confinement loss rises with wavelength, implying that the light leaks more easily at longer wavelengths; meanwhile, the losses are decreased significantly as the air-hole size d/Λ is increased.
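
    The key numerical ingredient described above is locating complex roots of a characteristic (dispersion) equation f(n_eff) = 0, where the imaginary part of n_eff gives the confinement loss. The SIE kernels are not reproduced here; as a hedged illustration, a secant iteration finds a complex root of a toy analytic function with a loss-like imaginary part.

```python
# Complex secant iteration: a simple root finder that works for analytic
# functions of a complex variable, such as a fiber characteristic equation.

def complex_secant(f, z0, z1, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        f0, f1 = f(z0), f(z1)
        if f1 == f0:
            break
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z2 - z1) < tol:
            return z2
        z0, z1 = z1, z2
    return z1

if __name__ == "__main__":
    # Toy "characteristic equation" with a known root at 1.45 - 1e-6j,
    # mimicking an effective index with a small loss term.
    root = 1.45 - 1e-6j
    f = lambda z: (z - root) * (z + 2.0)
    n_eff = complex_secant(f, 1.4 + 0j, 1.5 + 0j)
    print(abs(n_eff - root) < 1e-9)  # → True
```

In practice the confinement loss in dB/m is then proportional to Im(n_eff) times 20·π/(λ·ln 10), with the bracketing start points chosen from the guided-mode index range.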

  13. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the microseismic events location from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for 2D and 3D usual scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
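
    A minimal sketch of locating an event by minimizing a travel-time misfit with Particle Swarm Optimization follows. The station geometry, homogeneous velocity, and misfit below are illustrative assumptions, not the paper's actual setup.

```python
import random

# PSO search for the 2D source position that best explains observed P-wave
# arrival travel times at four hypothetical stations (homogeneous medium).

STATIONS = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
V = 3000.0  # m/s, assumed homogeneous P velocity

def travel_time(src, sta):
    return ((src[0] - sta[0]) ** 2 + (src[1] - sta[1]) ** 2) ** 0.5 / V

def misfit(src, observed):
    return sum((travel_time(src, s) - t) ** 2 for s, t in zip(STATIONS, observed))

def pso_locate(observed, n_particles=30, n_iter=200, seed=1):
    rng = random.Random(seed)
    lo, hi = 0.0, 1000.0
    pos = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [misfit(p, observed) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = misfit(pos[i], observed)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest

if __name__ == "__main__":
    true_src = (400.0, 650.0)
    obs = [travel_time(true_src, s) for s in STATIONS]
    x, y = pso_locate(obs)
    print(round(x), round(y))
```

Restricting the box bounds with backazimuth information, as the paper does, shrinks the search space and speeds up convergence further.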

  14. Integrated decision making model for urban disaster management: A multi-objective genetic algorithm approach

    Directory of Open Access Journals (Sweden)

    V. Esmaeili

    2014-01-01

    Full Text Available In recent decades, there has been extensive improvement in technology and knowledge; hence, human societies have started to fortify their urban environments against natural disasters in order to diminish their vulnerability. Local administrators as well as government officials are considering new options for disaster management programs within their territories. Planning to set up local disaster management facilities and stock pre-positioning of relief items can keep an urban area prepared for a natural disaster. In this paper, based on a real-world case study for a municipal district in Tehran, a multi-objective mathematical model is developed for the location-distribution problem. The proposed model considers the role of demand in an urban area, which might be affected by neighboring wards. Integrating the decision-making process for a disaster helps to achieve a better relief operation during the response phase of the disaster management cycle. In the proposed approach, a proactive damage estimation method is used to estimate demands for the district based on a worst-case earthquake scenario in Tehran. Since such a model is designed for an entire urban district, it is considered a large-scale mixed integer problem and hence a genetic algorithm is developed to solve the model.

  15. Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics

    Science.gov (United States)

    Farhat, Charbel

    1997-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.

  16. Documenting the NASA Armstrong Flight Research Center Oblate Earth Simulation Equations of Motion and Integration Algorithm

    Science.gov (United States)

    Clarke, R.; Lintereur, L.; Bahm, C.

    2016-01-01

    A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes are much of the backbone of this work, however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.

  17. Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method

    Directory of Open Access Journals (Sweden)

    Muneki Yasuda

    2018-04-01

    Full Text Available The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and a Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author’s previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.

  18. Design of Optimal Proportional Integral Derivative Based Power System Stabilizer Using Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Dhanesh K. Sambariya

    2016-01-01

    Full Text Available The design of a proportional-integral-derivative (PID) based power system stabilizer (PSS) is carried out using the bat algorithm (BA). The design of the proposed PID controller is considered with an objective function based on square error minimization to enhance the small signal stability of a nonlinear power system for a wide range of operating conditions. Three benchmark power system models, the single-machine infinite-bus (SMIB) power system, the two-area four-machine ten-bus power system, and the IEEE New England ten-machine thirty-nine-bus power system, are considered to examine the effectiveness of the designed controller. The BA optimized PID based PSS (BA-PID-PSS) controller is applied to these benchmark systems, and the performance is compared with controllers reported in the literature. The robustness is tested by considering eight plant conditions of each system, representing a wide range of operating conditions. These include varied loading conditions and system configurations, establishing the superior performance of the BA-PID-PSS over the other controllers.
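
    The tuning loop described above can be sketched generically: simulate a plant under a candidate PID controller, score it with a squared-error cost, and let a metaheuristic search the gain space. A plain random search stands in for the bat algorithm here, and the first-order plant is a toy, not the benchmark power systems of the paper.

```python
import random

# Hedged sketch: choose PID gains by minimizing the integral-squared-error
# (ISE) of a unit-step response of a toy first-order plant dy/dt = (u - y)/tau.

def ise(gains, dt=0.01, t_end=2.0, tau=0.2):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                      # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau            # explicit Euler plant update
        prev_err = err
        cost += err * err * dt
    return cost

def tune_pid(n_trials=500, seed=3):
    """Random search over gain space (a stand-in for the bat algorithm)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        g = (rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 0.05))
        c = ise(g)
        if c < best_cost:
            best, best_cost = g, c
    return best, best_cost

if __name__ == "__main__":
    gains, cost = tune_pid()
    print(cost < ise((1.0, 0.0, 0.0)))  # tuned gains beat a naive P controller
```

Swapping the cost evaluation for a linearized multi-machine model, and the random search for BA's frequency/loudness update rules, recovers the structure of the paper's approach.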

  19. The Novel Artificial Intelligence Based Sub-Surface Inclusion Detection Device and Algorithm

    Directory of Open Access Journals (Sweden)

    Jong-Ha LEE

    2017-05-01

    Full Text Available We design, implement, and test a novel tactile elasticity imaging sensor to detect the elastic modulus of a contacted object. Emulating a human finger, a multi-layer polydimethylsiloxane waveguide has been fabricated as the sensing probe. The light is illuminated under the critical angle to totally reflect within the flexible and transparent waveguide. When a waveguide is compressed by an object, the contact area of the waveguide deforms and causes the light to scatter. The scattered light is captured by a high resolution camera. Multiple images are taken from slightly different loading values. The distributed forces have been estimated using the integrated pixel values of diffused lights. The displacements of the contacted object deformation have been estimated by matching the series of tactile images. For this purpose, a novel pattern matching algorithm is developed. The salient feature of this sensor is that it is capable of measuring the absolute elastic modulus value of soft materials without additional measurement units. The measurements were validated by comparing the measured elasticity of the commercial rubber samples with the known elasticity. The evaluation results showed that this type of sensor can measure elasticity within ±5.38 %.

  20. Numerical thermal analysis and optimization of multi-chip LED module using response surface methodology and genetic algorithm

    NARCIS (Netherlands)

    Tang, Hong Yu; Ye, Huai Yu; Chen, Xian Ping; Qian, Cheng; Fan, Xue Jun; Zhang, G.Q.

    2017-01-01

    In this paper, the heat transfer performance of the multi-chip (MC) LED module is investigated numerically by using a general analytical solution. The configuration of the module is optimized with genetic algorithm (GA) combined with a response surface methodology. The space between chips, the

  1. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    Science.gov (United States)

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

    This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded in the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In the threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.

  2. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    In part 1 architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. Performance of algorithms when they are mapped on one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results. It is observed that the analysis is very accurate. Performance analysis of parallel algorithms when mapped across clusters is presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.

  3. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    Science.gov (United States)

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that it is very promising.

  4. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm.

    Science.gov (United States)

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd

    2015-09-15

    This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) alternately in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP) based scan matching method. The algorithm can work in and transition between two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy over the whole trajectory.
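
    The ICP half of such a hybrid scan matcher can be sketched as follows: iterate nearest-neighbour matching and a closed-form 2D rigid fit until the new scan lands on the reference scan. Real scans, outlier gating, and the feature-based mode are omitted; the L-shaped "wall" is a hypothetical example.

```python
import math

# Minimal 2D Iterative Closest Point: align `scan` to `ref` by alternating
# nearest-neighbour correspondence and a closed-form rotation/translation fit.

def icp_2d(ref, scan, n_iter=20):
    """Returns the scan points transformed into the reference frame."""
    cur = [list(p) for p in scan]
    for _ in range(n_iter):
        # 1. brute-force nearest-neighbour correspondences
        pairs = [(p, min(ref, key=lambda r: (r[0]-p[0])**2 + (r[1]-p[1])**2))
                 for p in cur]
        n = len(pairs)
        # 2. closed-form 2D rigid fit (centroid + atan2 of cross/dot sums)
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n
        qy = sum(q[1] for _, q in pairs) / n
        sxx = sum((p[0]-mx)*(q[0]-qx) + (p[1]-my)*(q[1]-qy) for p, q in pairs)
        sxy = sum((p[0]-mx)*(q[1]-qy) - (p[1]-my)*(q[0]-qx) for p, q in pairs)
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        tx, ty = qx - (c*mx - s*my), qy - (s*mx + c*my)
        cur = [[c*x - s*y + tx, s*x + c*y + ty] for x, y in cur]
    return cur

if __name__ == "__main__":
    # Reference scan: points along a hypothetical L-shaped wall.
    ref = [(0.25*i, 0.0) for i in range(9)] + [(0.0, 0.25*i) for i in range(1, 9)]
    th, tx, ty = 0.05, 0.1, -0.05    # small pose offset of the new scan
    c, s = math.cos(th), math.sin(th)
    scan = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in ref]
    aligned = icp_2d(ref, scan)
    rms = (sum((a[0]-r[0])**2 + (a[1]-r[1])**2
               for a, r in zip(aligned, ref)) / len(ref)) ** 0.5
    print(rms < 1e-6)
```

The hybrid scheme in the paper would switch to this mode only when too few line features match across the two scans.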

  5. An improved data integration algorithm to constrain the 3D displacement field induced by fast deformation phenomena tested on the Napa Valley earthquake

    Science.gov (United States)

    Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore

    2017-12-01

    In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite Systems (GNSS) and Multiple Aperture Interferometry (MAI) to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, as the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and thereby improve the GNSS data interpolation. This approach avoids dependence on other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied the method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
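
    The data-integration core reduces to a linear inverse problem: each measurement (InSAR line-of-sight, MAI along-track, GNSS component) observes a known projection of the 3D displacement vector. A hedged sketch, with illustrative geometry rather than the paper's actual look vectors or Bayesian weighting, is:

```python
import numpy as np

# Recover d = (east, north, up) from heterogeneous observations by weighted
# least squares: each row of A is the unit projection vector of one datum.

def solve_3d(projections, obs, sigmas):
    A = np.asarray(projections, float)
    b = np.asarray(obs, float)
    w = 1.0 / np.asarray(sigmas, float)            # weight = 1 / sigma
    d, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return d

if __name__ == "__main__":
    true_d = np.array([0.30, -0.10, 0.05])         # metres (E, N, U)
    proj = [
        [-0.61, -0.11, 0.78],   # ascending InSAR LOS unit vector (assumed)
        [ 0.61, -0.11, 0.78],   # descending InSAR LOS (assumed)
        [ 0.10,  0.99, 0.00],   # MAI along-track, mostly north (assumed)
        [ 1.00,  0.00, 0.00],   # GNSS east component
    ]
    obs = [float(np.dot(p, true_d)) for p in proj]
    d = solve_3d(proj, obs, sigmas=[0.01, 0.01, 0.05, 0.005])
    print(np.allclose(d, true_d, atol=1e-9))
```

With noisy data the same weighted system is solved per pixel, and the Bayesian formulation of the paper additionally regularizes the solution spatially.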

  6. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    Science.gov (United States)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen the outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP is to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant for urbanization studies but also desired for global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classification. Here we develop the global scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2), and then aggregate the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone, there are 174 training sites. The size of the sites ranges from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the number of training samples is over six million. Therefore, we develop this automated statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling within each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. Then the screening process escalates to the scene level. A similar screening process, but with a looser threshold, is applied at the scene level, accounting for the possible variance due to site differences. We do not perform the screening process across the scenes because the scenes might vary due to
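
    The site-level screening step can be sketched as follows. This is a hedged simplification: samples are binned by impervious percentage into 10% groups and univariate outliers are flagged with a simple z-score rule; the multivariate stage and the scene-level pass of the actual algorithm are omitted, and the data are hypothetical.

```python
import statistics

# Bin training samples by impervious-surface percentage, then drop samples
# whose attribute value lies more than z_max standard deviations from the
# group mean (a stand-in for the paper's univariate screening stage).

def screen(samples, z_max=3.0):
    """samples: list of (impervious_pct, attribute). Returns kept samples."""
    groups = {}
    for s in samples:
        groups.setdefault(min(int(s[0] // 10), 9), []).append(s)
    kept = []
    for members in groups.values():
        vals = [m[1] for m in members]
        if len(vals) < 3:              # too few samples to judge outliers
            kept.extend(members)
            continue
        mu, sd = statistics.mean(vals), statistics.stdev(vals)
        kept.extend(m for m in members
                    if sd == 0 or abs(m[1] - mu) / sd <= z_max)
    return kept

if __name__ == "__main__":
    good = [(25.0, 0.20 + 0.001 * i) for i in range(20)]  # tight cluster
    bad = [(26.0, 0.90)]          # attribute far outside its 20-30% group
    kept = screen(good + bad)
    print(len(kept))  # → 20: the outlier is dropped
```

At scale, the same pass runs per site and is then repeated per scene with a looser threshold.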

  7. Integrated CLOS and PN Guidance for Increased Effectiveness of Surface to Air Missiles

    Directory of Open Access Journals (Sweden)

    Binte Fatima Tuz ZAHRA

    2017-06-01

    Full Text Available In this paper, a novel approach has been presented to integrate command to line-of-sight (CLOS) guidance and proportional navigation (PN) guidance in order to reduce miss distance and to increase the effectiveness of surface to air missiles. Initially a comparison of command to line-of-sight guidance and proportional navigation has been presented. Miss distance, variation of angle-of-attack, normal and lateral accelerations and error of the missile flight path from the direct line-of-sight have been used as noteworthy criteria for comparison of the two guidance laws. Following this comparison a new approach has been proposed for determining the most suitable guidance gains in order to minimize miss distance and improve accuracy of the missile in delivering the warhead, while using CLOS guidance. This proposed technique is based on constrained nonlinear minimization to optimize the guidance gains. CLOS guidance has a further limitation of significant increase in normal and lateral acceleration demands during the terminal phase of missile flight. Furthermore, at large elevation angles, the required angle-of-attack during the terminal phase increases beyond design specifications. Subsequently, a missile with optical sensors only, following just the CLOS guidance, is less likely to hit high-speed targets beyond 45° in the elevation plane. A novel approach has thus been proposed to overcome such limitations of CLOS-only guidance for surface to air missiles. In this approach, an integrated guidance algorithm has been proposed whereby the initial guidance law during the rocket motor burnout phase remains CLOS, whereas immediately after this phase, the guidance law is automatically switched to PN guidance. This integrated approach has not only resulted in a slight increase in range of the missile but has also significantly improved its likelihood to hit targets beyond 30 degrees in the elevation plane, thus successfully overcoming various limitations of CLOS

  8. Transient analysis of electromagnetic wave interactions on plasmonic nanostructures using a surface integral equation solver

    KAUST Repository

    Uysal, Ismail Enes

    2016-08-09

    Transient electromagnetic interactions on plasmonic nanostructures are analyzed by solving the Poggio-Miller-Chang-Harrington-Wu-Tsai (PMCHWT) surface integral equation (SIE). Equivalent (unknown) electric and magnetic current densities, which are introduced on the surfaces of the nanostructures, are expanded using Rao-Wilton-Glisson and polynomial basis functions in space and time, respectively. Inserting this expansion into the PMCHWT-SIE and Galerkin testing the resulting equation at discrete times yield a system of equations that is solved for the current expansion coefficients by a marching-on-in-time (MOT) scheme. The resulting MOT-PMCHWT-SIE solver calls for the computation of additional convolutions between the temporal basis function and the plasmonic medium's permittivity and Green function. This computation is carried out at almost no additional cost and without changing the computational complexity of the solver. Time-domain samples of the permittivity and the Green function required by these convolutions are obtained from their frequency-domain samples using a fast relaxed vector fitting algorithm. Numerical results demonstrate the accuracy and applicability of the proposed MOT-PMCHWT solver. © 2016 Optical Society of America.

  9. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.
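The annealing core behind such placers can be sketched generically. This toy version uses a fixed geometric cooling schedule rather than the dynamic schedule the record describes, and the cost and neighbor functions are placeholders supplied by the caller.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.95, steps=500, seed=1):
    """Generic simulated-annealing loop with geometric cooling.
    Accepts a worsening move with probability exp(-delta/T) (Metropolis rule)."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cost(cur)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= cooling          # fixed geometric schedule (simplification)
    return best
```

A placement cost would measure wirelength over a row partition; here any cost/neighbor pair works, e.g. minimizing x² over integers with ±1 moves.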

  10. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, global setting and optimization of relay protection has become an extremely difficult task. This paper applies an improved quantum particle swarm optimization algorithm to the global optimization of relay protection settings, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed of operation, and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of the whole power system.
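For reference, the inverse-time overcurrent characteristic being tuned is commonly the IEC standard-inverse curve below; the optimizer would search over the time-multiplier setting (TMS) and pickup current subject to coordination constraints. The constants k and alpha are the IEC standard-inverse values, used here purely for illustration.

```python
def inverse_time_trip(current, pickup, tms, k=0.14, alpha=0.02):
    """IEC standard-inverse overcurrent operating time:
    t = TMS * k / ((I / I_pickup)^alpha - 1).  Below pickup the relay never trips."""
    if current <= pickup:
        return float("inf")
    return tms * k / ((current / pickup) ** alpha - 1.0)
```

Higher fault currents trip faster, which is the property selectivity constraints between upstream and downstream relays rely on.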

  11. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, Andrea [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Sen, Ramazan Sonat [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach couples system simulator codes with stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable given the limited computational resources currently available. In addition, the recent tendency to develop newer tools characterized by higher accuracy and larger computational demands (compared with the presently used legacy codes, developed decades ago) has made this issue even more compelling. In order to overcome these limitations, the strategy for exploring the uncertain/parametric space needs to make the best use of the computational resources by focusing the computational effort on those regions of the uncertain/parametric space that are "interesting" (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system.
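In one dimension the limit-surface idea reduces to locating the parameter value where the simulation outcome flips from success to failure. The bisection sketch below illustrates that transition search only; RAVEN's actual acceleration schemes work in many dimensions with surrogate models.

```python
def limit_surface_1d(fails, lo, hi, tol=1e-6):
    """Bisection search for the success/failure transition point along one
    uncertain parameter.  `fails(x)` returns True in the failure region.
    Assumes lo is in the success region and hi in the failure region."""
    assert not fails(lo) and fails(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fails(mid):
            hi = mid            # transition lies below mid
        else:
            lo = mid            # transition lies above mid
    return 0.5 * (lo + hi)
```

Each evaluation of `fails` stands in for one expensive simulator run, which is why focusing runs near the transition pays off.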

  12. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm
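The "evolving mean shape" idea can be illustrated with a translation-only toy: each cloud is shifted toward the centroid of the current mean shape, which is then recomputed. The real algorithm handles much richer deformations and unmatched point sets; this sketch assumes equal-sized clouds with known correspondence.

```python
import numpy as np

def groupwise_register(clouds, iters=5):
    """Toy groupwise registration: translate every point cloud so its centroid
    matches the centroid of an evolving mean shape (translation-only sketch)."""
    clouds = [np.asarray(c, dtype=float) for c in clouds]
    for _ in range(iters):
        mean_shape = np.mean(clouds, axis=0)          # evolving mean shape
        target = mean_shape.mean(axis=0)              # its centroid
        clouds = [c - (c.mean(axis=0) - target) for c in clouds]
    return clouds, mean_shape
```

Because every cloud is pulled toward the common mean rather than toward one fixed reference, no single example biases the result, which is the "unbiased" property the record emphasizes.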

  13. Substrate integrated ferrite phase shifters and active frequency selective surfaces

    International Nuclear Information System (INIS)

    Cahill, B.M.

    2002-01-01

    There are two distinct parts to this thesis; the first investigates the use of ferrite tiles in the construction of printed phase-shifting transmission lines, culminating in the design of two compact electromagnetically controlled beam-steered patch and slot antenna arrays. The second part investigates the use of active frequency selective surfaces (AFSS), which are later used to cover a uPVC-constructed enclosure. Field intensity measurements are taken from within the enclosure to determine the dynamic screening effectiveness. Trans Tech G-350 ferrite is investigated to determine its application in printed microstrip and stripline phase-shifting transmission lines. 50-Ohm transmission lines are constructed using the ferrite tile and interfaced to Rogers RT Duroid 5870 substrate. Scattering parameter measurements are made under the application of variable magnetic fields to the ferrite. Later, two types of planar microwave beam-steering antennas are constructed. The first uses the ferrites integrated into the Duroid as microstrip lines with 3 patch antennas as the radiating elements. The second uses stripline transmission lines, with slot antennas as the radiating sources etched into the ground plane of the triplate. Beam steering is achieved by the application of an external electromagnet. An AFSS is constructed by the interposition of PIN diodes into a dipole FSS array. Transmission response measurements are then made for various angles of electromagnetic wave incidence. Two states of operation exist: when a current is passed through the diodes and when the diodes are switched off. These two states form a high-pass and a band-stop space filter, respectively. An enclosure covered with the AFSS is constructed and externally illuminated in the range 2.0-2.8 GHz. A probe antenna inside the enclosure, positioned at various locations throughout the volume, is used to establish the effective screening action of the AFSS in three-dimensional space. (author)

  14. A Case Study on Maximizing Aqua Feed Pellet Properties Using Response Surface Methodology and Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya

    2013-01-10

    Aims: The present case study is on maximizing aqua feed pellet properties using response surface methodology and a genetic algorithm. Study Design: The effects of extrusion process variables such as screw speed, L/D ratio, barrel temperature, and feed moisture content were analyzed to maximize the aqua feed properties of water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable-length single-screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for the physical properties water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design. The experimental data were further analyzed using response surface methodology (RSM) and a genetic algorithm (GA) to maximize the feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated that WS, ER, and TD were maximized at an L/D ratio of 12-13, a screw speed of 60-80 rpm, a feed moisture content of 30-40%, and a barrel temperature of <= 80 degrees C for ER and TD and > 90 degrees C for WS. Based on the GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, an L/D ratio of 13.67, a barrel temperature of 96.26 degrees C, and a feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%, respectively. 
Conclusion: The present data analysis indicated
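The RSM-plus-GA coupling amounts to maximizing a fitted regression model with a genetic search. A minimal real-coded GA over a made-up two-variable surface (not the paper's fitted model) can be sketched as:

```python
import random

def ga_maximize(f, bounds, pop=30, gens=60, seed=2):
    """Minimal real-coded GA: tournament selection, midpoint (blend) crossover,
    Gaussian mutation, elitist survival.  Maximizes f over box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = max(rng.sample(P, 3), key=f)             # tournament parent 1
            b = max(rng.sample(P, 3), key=f)             # tournament parent 2
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            child = [min(max(x + rng.gauss(0, 0.1), bounds[d][0]), bounds[d][1])
                     for d, x in enumerate(child)]       # mutate and clip to box
            Q.append(child)
        P = sorted(P + Q, key=f, reverse=True)[:pop]     # elitist survival
    return P[0]
```

In the study's setting, `f` would be the fitted response-surface equation for WS (or ER, TD) and `bounds` the experimental ranges of the four process variables.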

  15. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    Science.gov (United States)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    Receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve this system: the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), whereby the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped onto the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the tendency of the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on the likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
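The "chaos" ingredient in such hybrids is typically a logistic-map sequence whose iterates are mapped onto the search interval. The initial value and mapping below are illustrative, not the paper's parameters.

```python
def logistic_map(x0=0.63, n=50, r=4.0):
    """Chaotic logistic-map sequence x_{k+1} = r * x_k * (1 - x_k).
    For r = 4 the iterates wander over (0, 1) without settling."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def map_to_interval(x, lo, hi):
    """Map a chaotic variable in [0, 1] onto the optimization interval [lo, hi]."""
    return lo + x * (hi - lo)
```

Perturbing stagnating particles with `map_to_interval(x_k, lo, hi)` re-seeds them across the search space, which is what lets the swarm escape local optima.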

  16. Dynamic Water Surface Detection Algorithm Applied on PROBA-V Multispectral Data

    Directory of Open Access Journals (Sweden)

    Luc Bertels

    2016-12-01

    Water body detection worldwide using spaceborne remote sensing is a challenging task, and a global-scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global syntheses with spatial resolutions of 1 km and 333 m. The Red, Near-InfRared (NIR) and Short Wave InfRared (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision-tree classification for water body detection. To minimize commission errors, four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), Water Body Potential Mask (WBPM), Permanent Glacier Mask (PGM) and Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Besides the water bodies layer, a quality layer, based on the water body occurrences, is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained, while a mean Omission Error (OE) of 15.4% was obtained for a minimum Water Surface Ratio (WSR) of 0.5, dropping to 9.8% for a minimum WSR of 0.6. Here, WSR is defined as the fraction of the PROBA-V pixel covered by water as derived from high spatial resolution images, e.g., Landsat 8. Both the CE of 1.5% and the OE of 9.8% (WSR = 0.6) fall within the user requirement of 15%. The WBDA is fully operational in the Copernicus Global Land Service and products are freely available.
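The HSV step can be illustrated with the standard library: treat the (SWIR, NIR, Red) reflectances as an (R, G, B) triple, transform to HSV, and threshold hue and value. The rectangular thresholds below are invented for illustration; the operational WBDA uses a parabolic threshold function in the hue-value plane plus the NDVI and mask layers.

```python
import colorsys

def water_candidate(red, nir, swir, hue_max=0.7, val_max=0.35):
    """Flag a pixel as a water candidate from its Red/NIR/SWIR reflectances.
    The (SWIR, NIR, Red) triple is fed to the HSV transform as (R, G, B);
    water pixels are dark (low value) with a characteristic hue."""
    hue, sat, val = colorsys.rgb_to_hsv(swir, nir, red)
    return hue <= hue_max and val <= val_max
```

A dark pixel with reflectance decreasing from red to SWIR passes the test, while a bright vegetation pixel (high NIR) fails on the value threshold.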

  17. Application of quantum-inspired binary gravitational search algorithm for thermal unit commitment with wind power integration

    International Nuclear Information System (INIS)

    Ji, Bin; Yuan, Xiaohui; Li, Xianshan; Huang, Yuehua; Li, Wenwu

    2014-01-01

    Highlights: • Chance constrained programming is used to build UC with wind power model (TUCPW). • Quantum-inspired gravitational search algorithm (QBGSA) is proposed to solve TUCPW. • QBGSA based on priority list is adopted to optimize on/off status of units. • Heuristic search strategy is applied to handle the constraints of TUCPW. • Local mutation adjustment strategy is proposed to improve the performance of QBGSA. - Abstract: As the application of wind power energy is rapidly developing, it is very important to analyze the effects of wind power fluctuation on power system operation. In this paper, a model of the thermal unit commitment problem with wind power integration is established, and chance constrained programming is applied to simulate the effects of wind power fluctuation. Meanwhile, a combination of a quantum-inspired binary gravitational search algorithm and chance constrained programming is proposed to solve the thermal unit commitment problem with wind power integration. In order to reduce the searching time and avoid premature convergence, a priority list of thermal units and a local mutation adjustment strategy are utilized during the optimization process. The priority list of thermal units is based on the weight between average full-load cost and maximal power output. Then, a stochastic simulation technique is used to deal with the probabilistic constraints. In addition, heuristic search strategies are used to handle the deterministic constraints of the thermal units. Furthermore, the impacts of different confidence levels and different prediction errors of wind fluctuation on system operation are analyzed respectively. The feasibility and effectiveness of the proposed method are verified by the test system with wind power integration, and the results are compared with those using the binary gravitational search algorithm and binary particle swarm optimization. The simulation results demonstrate the effectiveness of the proposed quantum-inspired binary gravitational search algorithm.

  18. Swarm intelligence algorithms for integrated optimization of piezoelectric actuator and sensor placement and feedback gains

    International Nuclear Information System (INIS)

    Dutta, Rajdeep; Ganguli, Ranjan; Mani, V

    2011-01-01

    Swarm intelligence algorithms are applied for optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that this system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. The optimal locations of actuators/sensors and feedback gain represent a constrained non-linear optimization problem. This problem is converted to an unconstrained optimization problem by using penalty functions. Two swarm intelligence algorithms, namely, Artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one and two collocated actuator(s)/sensor(s) was considered and the numerical results were obtained by using genetic algorithm and gradient based optimization methods. We consider the same problem and present the results obtained by using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem with five collocated actuators/sensors is considered and the numerical results obtained by using the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors and gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures

  19. Smartphone-Based Indoor Integrated WiFi/MEMS Positioning Algorithm in a Multi-Floor Environment

    Directory of Open Access Journals (Sweden)

    Zengshan Tian

    2015-03-01

    Indoor positioning in a multi-floor environment using a smartphone is considered in this paper. The positioning accuracy and robustness of WiFi fingerprinting-based positioning are limited by the unexpected variation of WiFi measurements between floors. On this basis, we propose a novel smartphone-based integrated WiFi/MEMS positioning algorithm based on the robust extended Kalman filter (EKF). The proposed algorithm first relies on a gait detection approach and a quaternion algorithm to estimate the velocity and heading angles of the target. Second, the velocity and heading angles, together with the results of WiFi fingerprinting-based positioning, are used as the input of the robust EKF to conduct two-dimensional (2D) positioning. Third, the proposed algorithm calculates the height of the target by using real-time recorded barometer and geographic data. Finally, the experimental results show that the proposed algorithm achieves positioning accuracy with root mean square errors (RMSEs) of less than 1 m in an actual multi-floor environment.
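The height step can be illustrated with the international barometric formula, which converts the smartphone's pressure reading into altitude relative to a reference pressure. The constants are the standard-atmosphere values; mapping the altitude to a floor number is an additional, building-specific step not shown here.

```python
def barometric_altitude(p_hpa, p0_hpa=1013.25):
    """International barometric formula: altitude in metres from pressure in hPa,
    h = 44330 * (1 - (p/p0)^(1/5.255)), relative to the reference pressure p0."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

In practice p0 is calibrated from a known floor, since absolute sea-level pressure drifts with the weather.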

  20. An Improved Mono-Window Algorithm for Land Surface Temperature Retrieval from Landsat 8 Thermal Infrared Sensor Data

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2015-04-01

    The successful launch of the Landsat 8 satellite with two thermal infrared bands on February 11, 2013, for continuous Earth observation provided another opportunity for remote sensing of land surface temperature (LST). However, calibration notices issued by the United States Geological Survey (USGS) indicated that data from the Landsat 8 Thermal Infrared Sensor (TIRS) Band 11 have large uncertainty, and suggested using TIRS Band 10 data as a single spectral band for LST estimation. In this study, we present an improved mono-window (IMW) algorithm for LST retrieval from Landsat 8 TIRS Band 10 data. Three essential parameters (ground emissivity, atmospheric transmittance and effective mean atmospheric temperature) are required by the IMW algorithm to retrieve LST. A new method is proposed to estimate the effective mean atmospheric temperature from local meteorological data; the other two essential parameters can both be estimated through the so-called land cover approach. Sensitivity analysis conducted for the IMW algorithm revealed that the possible error in estimating the required atmospheric water vapor content has the most significant impact on the probable LST estimation error. Under moderate errors in both water vapor content and ground emissivity, the algorithm had an accuracy of ~1.4 K for LST retrieval. Validation of the IMW algorithm using simulated datasets for various situations indicated that the LST difference between the retrieved and the simulated values was 0.67 K on average, with an RMSE of 0.43 K. Comparison of our IMW algorithm with the single-channel (SC) algorithm for three main atmosphere profiles indicated that the average error and RMSE of the IMW algorithm were −0.05 K and 0.84 K, respectively, less than the −2.86 K and 1.05 K of the SC algorithm. Application of the IMW algorithm to Nanjing and its vicinity in east China resulted in a reasonable LST estimation for the region.
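The mono-window retrieval itself is a closed-form expression. The sketch below follows the Qin et al. formulation on which the IMW algorithm builds; the linearization coefficients a and b are representative Band 10 values quoted in the literature and should be treated as assumptions, not the paper's calibrated values.

```python
def mono_window_lst(t_sensor, emissivity, transmittance, t_atm_eff,
                    a=-62.7182, b=0.4339):
    """Mono-window land surface temperature (K) from a single TIR band:
        LST = [a(1-C-D) + (b(1-C-D) + C + D) * T_sensor - D * T_a] / C
    with C = eps * tau and D = (1 - tau) * (1 + (1 - eps) * tau).
    t_sensor: at-sensor brightness temperature (K); t_atm_eff: effective
    mean atmospheric temperature (K).  a, b are illustrative coefficients."""
    C = emissivity * transmittance
    D = (1.0 - transmittance) * (1.0 + (1.0 - emissivity) * transmittance)
    return (a * (1 - C - D) + (b * (1 - C - D) + C + D) * t_sensor
            - D * t_atm_eff) / C
```

As a sanity check, with perfect emissivity and transmittance (C = 1, D = 0) the retrieved LST equals the brightness temperature.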

  1. Testing and tuning symplectic integrators for the hybrid Monte Carlo algorithm in lattice QCD

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya; Forcrand, Philippe de

    2006-01-01

    We examine a new second-order integrator recently found by Omelyan et al. The integration error of the new integrator, measured by the root mean square of the energy difference, ⟨ΔH²⟩^(1/2), is about 10 times smaller than that of the standard second-order leapfrog (2LF) integrator. As a result, the step size of the new integrator can be made about three times larger. Taking into account a factor-of-2 increase in cost, the new integrator is about 50% more efficient than the 2LF integrator. Integrating over positions first, then momenta, is slightly more advantageous than the reverse. Further parameter tuning is possible. We find that the optimal parameter for the new integrator is slightly different from the value obtained by Omelyan et al., and depends on the simulation parameters. This integrator could also be advantageous for the Trotter-Suzuki decomposition in quantum Monte Carlo.
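The second-order Omelyan integrator is a five-stage drift-kick composition. The unit-mass, position-first form below is a sketch; λ ≈ 0.1931833 is the value published by Omelyan et al., which this record reports tuning further.

```python
def omelyan_step(q, p, force, dt, lam=0.1931833):
    """One step of the second-order Omelyan (position-first) integrator:
    drift(lam*dt), kick(dt/2), drift((1-2*lam)*dt), kick(dt/2), drift(lam*dt).
    Unit mass assumed; lam = 0.5 would NOT recover leapfrog (that needs lam=0)."""
    q = q + lam * dt * p                 # drift
    p = p + 0.5 * dt * force(q)          # kick
    q = q + (1.0 - 2.0 * lam) * dt * p   # drift
    p = p + 0.5 * dt * force(q)          # kick
    q = q + lam * dt * p                 # drift
    return q, p
```

Being symplectic, the scheme keeps the energy error of a harmonic oscillator bounded over long trajectories instead of drifting, which is what the ⟨ΔH²⟩^(1/2) comparison measures.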

  2. Localization of accessory pathway in patients with Wolff-Parkinson-White syndrome from surface ECG using Arruda algorithm

    International Nuclear Information System (INIS)

    Saidullah, S.; Shah, B.

    2016-01-01

    Background: To ablate an accessory pathway successfully and conveniently, accurate localization of the pathway is needed. Electrophysiologists use different algorithms before taking patients to the electrophysiology (EP) laboratory in order to plan the intervention accordingly. In this study, we used the Arruda algorithm to locate the accessory pathway. The objective of the study was to determine the accuracy of the Arruda algorithm for locating the pathway on surface ECG. Methods: This was a cross-sectional observational study conducted from January 2014 to January 2016 in the electrophysiology department of Hayatabad Medical Complex, Peshawar, Pakistan. A total of fifty-nine (n=59) consecutive patients of both genders, aged 14-60 years, presenting with WPW syndrome (symptomatic tachycardia with a delta wave on surface ECG) were included in the study. Each patient's electrocardiogram (ECG) was analysed with the Arruda algorithm before the patient was taken to the laboratory. A standard four-wire protocol was used for the EP study before ablation. Once the findings were confirmed, the pathway was ablated as per standard guidelines. Results: A total of fifty-nine (n=59) patients aged 14-60 years were included in the study. The cumulative mean age was 31.5 years ± 12.5 SD. There were 56.4% (n=31) males with a mean age of 28.2 years ± 10.2 SD and 43.6% (n=24) females with a mean age of 35.9 years ± 14.0 SD. The Arruda algorithm was found to be accurate in predicting the exact accessory pathway (AP) in 83.6% (n=46) of cases. Among all inaccurate predictions (n=9), the Arruda algorithm mislocalized two-thirds (n=6; 66.7%) of pathways to the right side (right posteroseptal, right posterolateral and right anterolateral). Conclusion: The Arruda algorithm was found to be highly accurate in predicting the accessory pathway before ablation. (author)

  3. Algorithms and their Impact on Integrated Vehicle Health Management - Chapter 7

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter discussed some of the algorithmic choices one encounters when designing an IVHM system. While it would be generally desirable to be able to pick a...

  4. Using neural networks and Dyna algorithm for integrated planning, reacting and learning in systems

    Science.gov (United States)

    Lima, Pedro; Beard, Randal

    1992-01-01

    The traditional AI answer to the decision-making problem for a robot is planning. However, planning is usually CPU-time consuming, depending on the availability and accuracy of a world model. The Dyna system, generally described in earlier work, uses trial and error to learn a world model which is simultaneously used to plan reactions resulting in optimal action sequences. It is an attempt to integrate planning, reacting, and learning. The architecture of Dyna is presented and its blocks are described. There are three main components of the system. The first is the world model used by the robot for internal world representation; its input is the current state and the action taken in that state, and its output is the corresponding reward and resulting state. The second module is the policy. The policy observes the current state and outputs the action to be executed by the robot. At the beginning of program execution the policy is stochastic, and through learning it progressively becomes deterministic. The policy decides upon an action according to the output of an evaluation function, which is the third module of the system. The evaluation function takes as input the current state of the system, the action taken in that state, the resulting state, and a reward generated by the world which is proportional to the current distance from the goal state. Originally, the work proposed was as follows: (1) to implement a simple 2-D world where a 'robot' navigates around obstacles to learn the path to a goal, using lookup tables; (2) to substitute neural networks for the world model and the evaluation function Q; and (3) to apply the algorithm to a more complex world where the use of a neural network would be fully justified. In this paper, the system design and achieved results are described. First, we implement the world model with a neural network and leave Q implemented as a lookup table. 
Next, we use a
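The lookup-table starting point of this work corresponds to tabular Dyna-Q: every real transition updates Q and the learned world model, then a handful of simulated transitions replayed from the model update Q again. The environment interface and hyperparameters below are invented for illustration; the paper's contribution is replacing these tables with neural networks.

```python
import random

def dyna_q(step_fn, n_states, n_actions, episodes=200, planning=10,
           alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Dyna-Q.  step_fn(s, a) -> (reward, next_state, done).
    Each real step performs one Q-learning update, records the transition in
    the model, then replays `planning` model transitions (the planning phase)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                    # (s, a) -> (r, s', done)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            r, s2, done = step_fn(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
            model[(s, a)] = (r, s2, done)
            for _ in range(planning):             # planning from the learned model
                ps, pa = rng.choice(list(model))
                pr, ps2, pd = model[(ps, pa)]
                Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) * (not pd) - Q[ps][pa])
            s = s2
    return Q
```

On a small chain world where only "move right" reaches the rewarded goal state, the learned policy prefers moving right from every state.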

  5. Classifying terrestrial surface water systems using integrated residence time

    Science.gov (United States)

    Jones, Allan; Hodges, Ben; McClelland, James; Hardison, Amber; Moffett, Kevan

    2017-04-01

    Linkages between ecology and hydrology in terrestrial surface water often invoke a discussion of lentic (reservoir) vs. lotic (riverine) system behaviors. However, the literature shows a wide range of thresholds separating lentic/lotic regimes and little agreement on a quantitative, repeatable classification metric that can be broadly and reliably applied across a range of systems hosting various flow regimes and suspended/benthic taxa. We propose an integrated Residence Time (iTR) metric as part of a new Freshwater Continuum Classification (FCC) to address this issue. The iTR is computed as the transit time of a water parcel across a system given observed temporal variations in discharge and volume, which creates a temporally-varying metric applicable across a defined system length. This approach avoids problems associated with instantaneous residence times or average residence times that can lead to misleading characterizations in seasonally- or episodically-dynamic systems. The iTR can be directly related to critical flow thresholds and timescales of ecology (e.g., zooplankton growth). The FCC approach considers lentic and lotic to be opposing end-members of a classification continuum and also defines intermediate regimes that blur the line between the two ends of the spectrum due to more complex hydrological system dynamics. We also discover the potential for "oscillic" behavior, where a system switches between lentic and lotic classifications either episodically or regularly (e.g., seasonally). Oscillic behavior is difficult to diagnose with prior lentic/lotic classification schemes, but can be readily identified using iTR. The FCC approach was used to analyze 15 tidally-influenced river segments along the Texas (USA) coast of the Gulf of Mexico. The results agreed with lentic/lotic designations using prior approaches, but also identified more nuanced intermediate and oscillic regimes. Within this set of systems, the oscillic nature of some of the river
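A discrete version of the iTR computation can be sketched as follows: starting from a given time step, integrate discharge forward until the cumulative outflow equals the system volume at the start time. Variable names and the fixed time step are illustrative simplifications of the metric described above.

```python
def integrated_residence_time(volumes, discharges, start, dt=1.0):
    """Integrated residence time (iTR), discrete sketch: the time needed for
    the cumulative discharged volume, integrated forward from `start`, to
    equal the system volume at `start`.  Unlike the instantaneous V/Q ratio,
    this respects how discharge varies over the transit itself."""
    target = volumes[start]
    moved = 0.0
    for k in range(start, len(discharges)):
        moved += discharges[k] * dt
        if moved >= target:
            return (k - start + 1) * dt
    return float("inf")   # water does not transit within the record
```

Evaluating iTR at every start time yields the time-varying series whose crossings of an ecological threshold (e.g., zooplankton doubling time) mark lentic, lotic, or oscillic behavior.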

  6. [Quantitative analysis of thiram by surface-enhanced Raman spectroscopy combined with feature extraction algorithms].

    Science.gov (United States)

    Zhang, Bao-hua; Jiang, Yong-cheng; Sha, Wen; Zhang, Xian-yi; Cui, Zhi-feng

    2015-02-01

    Three feature extraction algorithms, namely principal component analysis (PCA), the discrete cosine transform (DCT) and non-negative matrix factorization (NMF), were used to extract the main information from the spectral data in order to weaken the influence of spectral fluctuation on the subsequent quantitative analysis, based on the SERS spectra of the pesticide thiram. The extracted components were then combined with a linear regression algorithm, partial least squares regression (PLSR), and a non-linear regression algorithm, support vector machine regression (SVR), to develop the quantitative analysis models. Finally, the effect of the different feature extraction algorithms on the different regression algorithms was evaluated using 5-fold cross-validation. The experiments demonstrate that the analysis results of SVR are better than those of PLSR, owing to the non-linear relationship between the intensity of the SERS spectrum and the concentration of the analyte. Further, the feature extraction algorithms significantly improve the analysis results regardless of the regression algorithm, mainly because they extract the main information of the source spectral data and eliminate the fluctuation. Additionally, PCA performs best with the linear regression model and NMF is best with the non-linear model, and the predictive error can be reduced by nearly a factor of three in the best case. The root mean square error of cross-validation of the best regression model (NMF+SVR) is 0.0455 micromol x L(-1) (10(-6) mol x L(-1)), which attains the national detection limit of thiram, so this study provides a novel method for the fast detection of thiram. In conclusion, the study provides experimental references for selecting feature extraction algorithms in the analysis of SERS spectra, and some common findings on feature extraction may also help in processing other kinds of spectroscopic data.
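The extract-then-regress pipeline can be illustrated with a numpy-only sketch: project the spectra onto the leading principal components, then regress concentration on the scores. Ordinary least squares stands in for the PLSR/SVR step, and the synthetic spectra are invented for the example.

```python
import numpy as np

def pca_scores(X, k):
    """Project spectra (rows of X) onto the top-k principal components.
    This is the feature-extraction step that suppresses spectrum-to-spectrum
    fluctuation before regression."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fit_predict(scores, y):
    """Least-squares regression (with intercept) of concentration on the
    PCA scores; returns in-sample predictions."""
    A = np.hstack([scores, np.ones((len(scores), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef
```

For spectra that scale linearly with concentration, a single principal component already carries the concentration information, so the regression recovers it exactly.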

  7. Time-domain Helmholtz-Kirchhoff integral for surface scattering in a refractive medium.

    Science.gov (United States)

    Choo, Youngmin; Song, H C; Seong, Woojae

    2017-03-01

    The time-domain Helmholtz-Kirchhoff (H-K) integral for surface scattering is derived for a refractive medium, which can handle shadowing effects. The starting point is the H-K integral in the frequency domain. In the high-frequency limit, the Green's function can be calculated by ray theory, while the normal derivative of the incident pressure from a point source is formulated using the ray geometry and ray-based Green's function. For a corrugated pressure-release surface, a stationary phase approximation can be applied to the H-K integral, reducing the surface integral to a line integral. Finally, a computationally-efficient, time-domain H-K integral is derived using an inverse Fourier transform. A broadband signal scattered from a sinusoidal surface in an upwardly refracting medium is evaluated with and without geometric shadow corrections, and compared to the result from a conventional ray model.

  8. Effect of different machining processes on the tool surface integrity and fatigue life

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Chuan Liang [College of Mechanical and Electrical Engineering, Nanchang University, Nanchang (China); Zhang, Xianglin [School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan (China)

    2016-08-15

Ultra-precision grinding, wire-cut electro-discharge machining and lapping are often used to machine tools in the fine blanking industry, and the surface integrity produced by these machining processes is a major concern in the research field. To study the effect of machined surface integrity on fine blanking tool life, the surface integrity of different tool materials under different processing conditions, and its influence on fatigue life, were thoroughly analyzed in the present study. The results show that, under the same processing conditions, the surface integrity of the different materials differed considerably. For a given tool material, the surface integrity also varied markedly with processing conditions and strongly influenced the fatigue life.

  9. A Novel Control Algorithm for Integration of Active and Passive Vehicle Safety Systems in Frontal Collisions

    Directory of Open Access Journals (Sweden)

    Daniel Wallner

    2010-10-01

The present paper investigates an approach to integrate the active and passive safety systems of passenger cars. Worldwide, the introduction of Integrated Safety Systems and Advanced Driver Assistance Systems (ADAS) is considered to continue the today

  10. Algorithms and Fortran programs to calculate quantum collision integrals for realistic intermolecular potentials

    International Nuclear Information System (INIS)

    Taylor, W.L.

    1986-01-01

In this report quantum mechanical expressions and computer codes are given to calculate the transport collision integrals. The approach is numerical integration of the Schroedinger equation to obtain quantum phase shifts, with subsequent summation of the phase shifts to obtain quantum cross sections. Once the quantum cross sections are evaluated, the transport collision integrals, and from them the viscosity, diffusion coefficient, thermal conductivity, and thermal diffusion factor, may be calculated by successive numerical integrations. 8 refs

  11. Integration Over Curves and Surfaces Defined by the Closest Point Mapping

    Science.gov (United States)

    2015-04-01

Integration over curves and surfaces defined by the closest point mapping. Catherine Kublik and Richard Tsai. Abstract (fragment): We propose a new formulation ... The numerical simulations investigate the convergence of the numerical integration using simple Riemann sums over uniform Cartesian grids; in such settings the problem can be considered as integration of functions defined on suitable hypercubes, periodically extended, to which simple Riemann sums on Cartesian grids apply.
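
A minimal sketch of the closest-point idea for the simplest case, the length of a circle: grid values in a narrow band are pulled back to the curve through the closest point mapping P(x) = Rx/|x|, weighted by the Jacobian R/|x| and averaged across the band width. The parameters are invented, and the actual Kublik-Tsai formulation is more general than this special case.

```python
import math

# Riemann sum over the uniform Cartesian grid points in the band
# |dist(x)| < eps around a circle of radius R.  Integrating f = 1
# over the curve should recover its length 2*pi*R.
R, eps, h = 1.0, 0.1, 0.005
n = int((R + 2 * eps) / h) + 1
total = 0.0
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        x, y = i * h, j * h
        r = math.hypot(x, y)
        if r > 0.0 and abs(r - R) < eps:
            # pull back to the curve via P(x) = R x/|x|; Jacobian R/r,
            # band-averaging factor 1/(2 eps)
            total += (R / r) * h * h / (2.0 * eps)
print(total, 2.0 * math.pi * R)   # Riemann sum vs. exact circumference
```

Refining h (and eps with it) shows the convergence behavior the simulations in the paper study.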

  12. Assessment of Wind Turbine Structural Integrity using Response Surface Methodology

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Svenningsen, Lasse; Moser, Wolfgang

    2016-01-01

    Highlights •A new approach to assessment of site specific wind turbine loads is proposed. •The approach can be applied in both fatigue and ultimate limit state. •Two different response surface methodologies have been investigated. •The model uncertainty introduced by the response surfaces is dete...

  13. The fuzzy TOPSIS and generalized Choquet fuzzy integral algorithm for nuclear power plant site selection - a case study from Turkey

    International Nuclear Information System (INIS)

    Kurt, Ünal

    2014-01-01

The location selection for a nuclear power plant (NPP) is a strategic decision, which has significant impact on the economic operation of the plant and the sustainable development of the region. This paper proposes fuzzy TOPSIS and a generalized Choquet fuzzy integral algorithm for the evaluation and selection of optimal locations for an NPP in Turkey. Many sub-criteria, such as geological conditions, social and touristic factors, transportation, cooling water capacity and proximity to consumption markets, are taken into account. According to the generalized Choquet fuzzy integral method, İnceburun–Sinop was selected as the study site owing to its highest performance and its meeting most of the investigated criteria, whereas fuzzy TOPSIS ranked Iğneada–Kırklareli first. Mersin–Akkuyu was not selected by either method. (author)

  14. Generation of synthetic surface electromyography signals under fatigue conditions for varying force inputs using feedback control algorithm.

    Science.gov (United States)

    Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S

    2017-11-01

Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. Several techniques exist to extract information about the force exerted by muscles during an activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current distribution and volume conductor relations, and a feedback control algorithm for rate coding and generation of the firing pattern. The results show that synthetic surface electromyography signals are highly complex in both non-fatigue and fatigue conditions. Furthermore, surface electromyography signals have higher amplitude and lower frequency under fatigue conditions. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.

  15. Testing and tuning new symplectic integrators for Hybrid Monte Carlo algorithm in lattice QCD

    CERN Document Server

    Takaishi, T; Takaishi, Tetsuya; Forcrand, Philippe de

    2006-01-01

We examine a new 2nd order integrator recently found by Omelyan et al. The integration error of the new integrator measured in the root mean square of the energy difference, $\langle\Delta H^2\rangle^{1/2}$, is about 10 times smaller than that of the standard 2nd order leapfrog (2LF) integrator. As a result, the step size of the new integrator can be made about three times larger. Taking into account a factor 2 increase in cost, the new integrator is about 50% more efficient than the 2LF integrator. Integrating over positions first, then momenta, is slightly more advantageous than the reverse. Further parameter tuning is possible. We find that the optimal parameter for the new integrator is slightly different from the value obtained by Omelyan et al., and depends on the simulation parameters. This integrator, together with a new 4th order integrator, could also be advantageous for the Trotter-Suzuki decomposition in Quantum Monte Carlo.
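
The leapfrog-versus-Omelyan comparison can be reproduced in miniature on a harmonic oscillator. Only the minimum-norm parameter lambda is taken from Omelyan et al.; the toy Hamiltonian, step size and trajectory counts are invented for illustration, and the factor-of-two cost difference (two force evaluations per step) is not shown.

```python
import math

# Leapfrog (kick-drift-kick) vs. the Omelyan et al. 2nd-order minimum-norm
# integrator on H = p^2/2 + q^2/2 (force F(q) = -q).
LAM = 0.1931833275037836   # published minimum-norm parameter

def leapfrog(q, p, dt, steps):
    for _ in range(steps):
        p += 0.5 * dt * (-q)
        q += dt * p
        p += 0.5 * dt * (-q)
    return q, p

def omelyan(q, p, dt, steps):
    # position version: drift(lam) kick(1/2) drift(1-2*lam) kick(1/2) drift(lam)
    for _ in range(steps):
        q += LAM * dt * p
        p += 0.5 * dt * (-q)
        q += (1.0 - 2.0 * LAM) * dt * p
        p += 0.5 * dt * (-q)
        q += LAM * dt * p
    return q, p

def H(q, p):
    return 0.5 * (p * p + q * q)

def rms_dH(integrator, dt=0.2, steps=20, n_traj=50):
    """Root-mean-square energy error over a spread of starting phases."""
    acc = 0.0
    for i in range(n_traj):
        q0, p0 = math.cos(0.37 * i), math.sin(0.37 * i)
        q, p = integrator(q0, p0, dt, steps)
        acc += (H(q, p) - H(q0, p0)) ** 2
    return math.sqrt(acc / n_traj)

e_lf, e_om = rms_dH(leapfrog), rms_dH(omelyan)
print(e_lf, e_om, e_lf / e_om)   # the new integrator's energy error is much smaller
```

The measured ratio on this toy problem is consistent with the roughly tenfold reduction in $\langle\Delta H^2\rangle^{1/2}$ reported above.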

  16. Performance measurement, modeling, and evaluation of integrated concurrency control and recovery algorithms in distributed data base systems

    Energy Technology Data Exchange (ETDEWEB)

    Jenq, B.C.

    1986-01-01

The performance of integrated concurrency-control and recovery mechanisms for distributed database systems is studied using a distributed testbed system. In addition, a queueing network model was developed to analyze the two-phase locking scheme in the distributed testbed system. The combination of testbed measurement and analytical modeling provides an effective tool for understanding the performance of integrated concurrency control and recovery algorithms in distributed database systems. The design and implementation of the distributed testbed system, CARAT, are presented. The concurrency control and recovery algorithms implemented in CARAT include: a two-phase locking scheme with distributed deadlock detection, a distributed version of the optimistic approach, before-image and after-image journaling mechanisms for transaction recovery, and a two-phase commit protocol. Many performance measurements were conducted using a variety of workloads. A queueing network model is developed to analyze the performance of the CARAT system using the two-phase locking scheme with before-image journaling. The combination of testbed measurements and analytical modeling provides significant improvements in understanding the performance impacts of the concurrency control and recovery algorithms in distributed database systems.

  17. A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2015-03-01

This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.

  18. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    Science.gov (United States)

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-03-24

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
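
The fading-factor idea at the heart of the EKF above can be shown in one dimension: noisy step lengths play the role of P-O measurements and noisy absolute fixes play the role of WiFi positions. All noise levels, the corridor setup and the factor value are invented; the paper's filter is a full EKF that also fuses heading angles.

```python
import random

random.seed(1)
q_odo, r_wifi, fading = 0.02, 1.0, 1.05
x_true, x_est, P = 0.0, 0.0, 1.0
errors = []
for _ in range(200):
    step = 0.6                                   # commanded step length
    x_true += step + random.gauss(0.0, q_odo ** 0.5)
    # predict with the odometry; the fading factor inflates the predicted
    # covariance so stale information is gradually forgotten and the
    # filter stays responsive to new fixes
    x_est += step
    P = fading * P + q_odo
    # update with the WiFi fix
    z = x_true + random.gauss(0.0, r_wifi ** 0.5)
    K = P / (P + r_wifi)
    x_est += K * (z - x_est)
    P *= (1.0 - K)
    errors.append(abs(x_est - x_true))
mean_err = sum(errors[50:]) / len(errors[50:])
print(f"steady-state mean error: {mean_err:.2f} (a raw WiFi fix averages ~0.8)")
```

The fused estimate tracks the walker far more tightly than either sensor alone, which is the effect the field experiment demonstrates at building scale.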

  19. Integration of Architectural and Cytologic Driven Image Algorithms for Prostate Adenocarcinoma Identification

    Science.gov (United States)

    Hipp, Jason; Monaco, James; Kunju, L. Priya; Cheng, Jerome; Yagi, Yukako; Rodriguez-Canales, Jaime; Emmert-Buck, Michael R.; Hewitt, Stephen; Feldman, Michael D.; Tomaszewski, John E.; Toner, Mehmet; Tompkins, Ronald G.; Flotte, Thomas; Lucas, David; Gilbertson, John R.; Madabhushi, Anant; Balis, Ulysses

    2012-01-01

Introduction: The advent of digital slides offers new opportunities within the practice of pathology, such as the use of image analysis techniques to facilitate computer-aided diagnosis (CAD) solutions. Use of CAD holds promise to enable new levels of decision support and allow for additional layers of quality assurance and consistency in rendered diagnoses. However, the development and testing of prostate cancer CAD solutions requires a ground truth map of the cancer to enable the generation of receiver operating characteristic (ROC) curves. This requires a pathologist to annotate, or paint, each of the malignant glands in prostate cancer with image editor software, a time-consuming and exhaustive process. Recently, two CAD algorithms have been described: probabilistic pairwise Markov models (PPMM) and spatially-invariant vector quantization (SIVQ). Briefly, SIVQ operates as a highly sensitive and specific pattern matching algorithm, making it optimal for the identification of any epithelial morphology, whereas PPMM operates as a highly sensitive detector of malignant perturbations in glandular lumenal architecture. Methods: By recapitulating algorithmically how a pathologist reviews prostate tissue sections, we created an algorithmic cascade of the PPMM and SIVQ algorithms, as previously described by Doyle et al. [1], where PPMM identifies the glands with abnormal lumenal architecture and this area is then screened by SIVQ to identify the epithelium. Results: The performance of this algorithm cascade was assessed qualitatively (with the use of heatmaps) and quantitatively (with the use of ROC curves) and demonstrates greater performance in the identification of malignant prostatic epithelium. Conclusion: This ability to semi-autonomously paint nearly all the malignant epithelium of prostate cancer has immediate applications to future prostate cancer CAD development as a validated ground truth generator. In addition, such an approach has potential applications as a

  20. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  1. A Hierarchical Algorithm for Integrated Scheduling and Control With Applications to Power Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dinesen, Peter Juhler; Jørgensen, John Bagterp

    2016-01-01

    in the optimal control problem (OCP). The scheduling decisions are made on a slow time scale compared with the system dynamics. This gives rise to a temporal separation of the scheduling and control variables in the OCP. Accordingly, the proposed hierarchical algorithm consists of two optimization levels...... portfolio case study show that the hierarchical algorithm reduces the computation to solve the OCP by several orders of magnitude. The improvement in computation time is achieved without a significant increase in the overall cost of operation....

  2. Integrability of Liouville system on high genus Riemann surface: Pt. 1

    International Nuclear Information System (INIS)

    Chen Yixin; Gao Hongbo

    1992-01-01

By using the theory of uniformization of Riemann surfaces, we study properties of the Liouville equation and its general solution on a Riemann surface of genus g>1. After obtaining the Hamiltonian formalism in terms of free fields and calculating the classical exchange matrices, we prove the classical integrability of the Liouville system on a high-genus Riemann surface.

  3. The application of quadtree algorithm for information integration in the high-level radioactive waste geological disposal

    International Nuclear Information System (INIS)

    Gao Min; Zhong Xia; Huang Shutao

    2008-01-01

A multi-source database for high-level radioactive waste geological disposal aims to promote the informatization of HLW geological disposal. Addressing multi-dimensional, multi-source data and the integration of information and applications, which also involves computer software and hardware, the paper presents a preliminary analysis of the data resources of the Beishan area, Gansu Province. It introduces GIS-based theory and methods and the application of the open-source GDAL library, and discusses technical approaches for applying the quadtree algorithm to information resource management, full sharing, rapid retrieval and related tasks. A more detailed description of the characteristics of the existing data resources, the theory of spatial data retrieval algorithms, and the program design and implementation ideas are given in the paper. (authors)
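
The quadtree structure discussed above can be sketched as a minimal point-region quadtree for indexing survey points and answering rectangular range queries. The coordinates and node capacity are invented for illustration; a real deployment would sit on top of GIS layers and GDAL-managed data.

```python
class Quadtree:
    """Point-region quadtree: each node holds up to `cap` points, then splits."""

    def __init__(self, x0, y0, x1, y1, cap=4):
        self.b = (x0, y0, x1, y1)          # node bounds (half-open)
        self.cap, self.pts, self.kids = cap, [], None

    def insert(self, x, y):
        x0, y0, x1, y1 = self.b
        if not (x0 <= x < x1 and y0 <= y < y1):
            return False                   # point outside this node
        if self.kids is None:
            if len(self.pts) < self.cap:
                self.pts.append((x, y))
                return True
            self._split()
        return any(k.insert(x, y) for k in self.kids)

    def _split(self):
        x0, y0, x1, y1 = self.b
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.kids = [Quadtree(x0, y0, mx, my, self.cap),
                     Quadtree(mx, y0, x1, my, self.cap),
                     Quadtree(x0, my, mx, y1, self.cap),
                     Quadtree(mx, my, x1, y1, self.cap)]
        for p in self.pts:                 # push existing points down a level
            self.insert(*p)
        self.pts = []

    def query(self, qx0, qy0, qx1, qy1):
        """Return all stored points inside the query rectangle."""
        x0, y0, x1, y1 = self.b
        if qx1 <= x0 or qx0 >= x1 or qy1 <= y0 or qy0 >= y1:
            return []                      # no overlap: prune this subtree
        out = [p for p in self.pts if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1]
        if self.kids:
            for k in self.kids:
                out += k.query(qx0, qy0, qx1, qy1)
        return out

qt = Quadtree(0, 0, 100, 100)
for x, y in [(10, 10), (12, 14), (80, 80), (55, 40), (11, 12), (13, 11)]:
    qt.insert(x, y)
print(sorted(qt.query(5, 5, 20, 20)))     # only the cluster near the origin
```

The pruning in `query` is what gives the rapid retrieval the abstract refers to: whole subtrees outside the query window are skipped.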

  4. An integrated approach to friction surfacing process optimisation

    OpenAIRE

    Voutchkov, I.I.; Jaworski, B.; Vitanov, V.I.; Bedford, G.M.

    2001-01-01

    This paper discusses the procedures for data collection, management and optimisation of the friction surfacing process. Experimental set-up and characteristics of measuring equipment are found to match the requirements for accurate and unbiased data signals. The main friction surfacing parameters are identified and the first stage of the optimisation process is achieved by visually assessing the coatings and introducing the substrate speed vs. force map. The optimum values from this first sta...

  5. Realisation of complex precast concrete structures through the integration of algorithmic design and novel fabrication techniques

    DEFF Research Database (Denmark)

    Larsen, Niels Martin; Egholm Pedersen, Ole; Pigram, Dave

    2012-01-01

    . This involves consideration of the relations between geometry and technique, as well as the use of form-finding and simulation algorithms for shaping and optimising the shape of the structure. Custom-made scripts embedded in 3D-modeling tools were used for producing the information necessary for realising...

  6. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  7. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    Science.gov (United States)

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-12-08

In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is the need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.
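
A toy label-based matcher conveys the flavor of aligning an agent's ontology with the smart city's. All class names below are hypothetical, and OntoPhil's actual initial bindings are far more sophisticated than plain string similarity; this sketch only shows the match-above-threshold pattern.

```python
from difflib import SequenceMatcher

agent_onto = ["TemperatureSensor", "HumiditySensor", "StreetLamp", "BusStop"]
city_onto = ["Temperature_Sensor", "Humidity", "Streetlight", "Bus_Stop", "ParkingSpot"]

def norm(label):
    """Normalize labels before comparison (case and separators)."""
    return label.replace("_", "").lower()

def similarity(a, b):
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def match(src, dst, threshold=0.8):
    """Keep, for each source class, its best-scoring target above the threshold."""
    pairs = []
    for s in src:
        best = max(dst, key=lambda d: similarity(s, d))
        score = similarity(s, best)
        if score >= threshold:
            pairs.append((s, best, round(score, 2)))
    return pairs

pairs = match(agent_onto, city_onto)
for s, d, sc in pairs:
    print(f"{s} <-> {d}  ({sc})")
```

Classes whose best candidate falls below the threshold (here StreetLamp and HumiditySensor) are left unmatched rather than mismapped, which is the safe behavior for automatic exchange.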

  8. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    Science.gov (United States)

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-loss transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  9. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

    Science.gov (United States)

    Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...

  10. Some algorithms for numerical quadrature using the derivatives of the integrand in the integration interval

    CERN Document Server

    Håvie, T.

    1970-01-01

Some quadrature formulae using the derivatives of the integrand are discussed. Generalizations of both the ordinary and the modified Romberg algorithms are obtained as special cases. In all cases the error terms are expressed in terms of Bernoulli polynomials and functions.
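
The simplest member of this family is the derivative-corrected (Euler-Maclaurin) trapezoidal rule, whose correction coefficients 1/12 and 1/720 stem from the Bernoulli numbers B_2 and B_4, exactly the origin of the Bernoulli-polynomial error terms mentioned above. A minimal sketch:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def corrected(f, df, d3f, a, b, n):
    """Trapezoidal rule with endpoint-derivative (Euler-Maclaurin) corrections."""
    h = (b - a) / n
    t = trapezoid(f, a, b, n)
    t -= h ** 2 / 12.0 * (df(b) - df(a))       # cancels the O(h^2) error term
    t += h ** 4 / 720.0 * (d3f(b) - d3f(a))    # cancels the O(h^4) error term
    return t

exact = math.e - 1.0                           # integral of exp over [0, 1]
e_trap = abs(trapezoid(math.exp, 0.0, 1.0, 4) - exact)
e_corr = abs(corrected(math.exp, math.exp, math.exp, 0.0, 1.0, 4) - exact)
print(e_trap, e_corr)   # the corrected rule is many orders of magnitude better
```

With only four panels, the corrected rule drops the error from roughly 1e-2 to roughly 1e-8, illustrating why derivative information at the endpoints is so effective.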

  11. TRFBA: an algorithm to integrate genome-scale metabolic and transcriptional regulatory networks with incorporation of expression data.

    Science.gov (United States)

    Motamedian, Ehsan; Mohammadi, Maryam; Shojaosadati, Seyed Abbas; Heydari, Mona

    2017-04-01

Integration of different biological networks and data types has been a major challenge in systems biology. The present study introduces the transcriptional regulated flux balance analysis (TRFBA) algorithm, which integrates transcriptional regulatory and metabolic models using a set of expression data for various perturbations. TRFBA considers the expression levels of genes as a new continuous variable and introduces two new sets of linear constraints. The first set limits the rate of the reaction(s) supported by a metabolic gene using a constant parameter (C) that converts expression levels into upper bounds on the reaction rates. Following the concept of constraint-based modeling, the second set correlates the expression level of each target gene with those of its regulating genes. A set of constraints and binary variables was also added to prevent the second set of constraints from overlapping. TRFBA was implemented on Escherichia coli and Saccharomyces cerevisiae models to estimate growth rates under various environmental and genetic perturbations. The sensitivity of the error to the algorithm's parameter was evaluated to find the best value of C. The results indicate a significant improvement in the quantitative prediction of growth in comparison with previously presented algorithms. The robustness of the algorithm to changes in the expression data and the regulatory network was tested to evaluate the effect of noisy and incomplete data. Furthermore, the use of the added constraints for perturbations without a gene expression profile demonstrates that these constraints can be applied to improve the growth prediction of FBA. TRFBA is implemented in Matlab software and requires the COBRA toolbox. Source code is freely available at http://sbme.modares.ac.ir. Contact: motamedian@modares.ac.ir. Supplementary data are available at Bioinformatics online.
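
The expression-to-bound idea behind TRFBA's first constraint set can be illustrated with a toy linear program. The two-reaction network, the expression values and the value of C are all invented, and SciPy's `linprog` stands in for the LP solver; TRFBA itself operates on genome-scale models in Matlab with the COBRA toolbox.

```python
import numpy as np
from scipy.optimize import linprog

# Steady-state balances S v = 0, growth maximized, and each gene-associated
# reaction bounded above by C times its gene's expression level.
C = 0.5                                      # TRFBA-style conversion parameter
expression = {"g1": 8.0, "g2": 3.0}          # expression levels (arbitrary units)

# Reactions: uptake -> A; R1 (gene g1): A -> B; R2 (gene g2): A -> B; biomass: B ->
# Flux vector v = [v_up, v1, v2, v_bio]
S = np.array([[1.0, -1.0, -1.0, 0.0],        # metabolite A balance
              [0.0, 1.0, 1.0, -1.0]])        # metabolite B balance
bounds = [(0.0, 10.0),
          (0.0, C * expression["g1"]),       # expression-derived upper bound
          (0.0, C * expression["g2"]),
          (0.0, None)]
res = linprog(c=[0.0, 0.0, 0.0, -1.0],       # maximize v_bio (minimize -v_bio)
              A_eq=S, b_eq=[0.0, 0.0], bounds=bounds, method="highs")
print(res.x)   # optimal fluxes; v_bio = min(10, 4.0 + 1.5) = 5.5
```

Lowering a gene's expression tightens its reaction's bound and, when that bound becomes limiting, directly reduces the predicted growth rate, which is how expression data shapes the FBA solution.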

  12. NASA/GEWEX Surface Radiation Budget: Integrated Data Product With Reprocessed Radiance, Cloud, and Meteorology Inputs, and New Surface Albedo Treatment

    Science.gov (United States)

    Cox, Stephen J.; Stackhouse, Paul W., Jr.; Gupta, Shashi K.; Mikovitz, J. Colleen; Zhang, Taiping

    2016-01-01

The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the 1983-near present time period. Spatial resolution is 1 degree. The current release 3.0 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel-level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number will allow SRB to produce a higher-resolution gridded product (e.g., 0.5 degree), as well as pixel-level fluxes. In addition to the input data improvements, several important algorithm improvements have been made. Most notable has been the adaptation of Angular Distribution Models (ADMs) from CERES to improve the initial calculation of shortwave TOA fluxes, from which the surface flux calculations follow. Other key input improvements include a detailed aerosol history using the Max Planck Institut Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data, the various other improved input data sets and the incorporation of many additional internal SRB model improvements. As of the time of abstract submission, results from 2007 have been produced, with ISCCP H availability being the limiting factor. More SRB data will be produced as ISCCP reprocessing continues. The SRB data produced will be released as part of the Release 4.0 Integrated Product, recognizing the interdependence of the radiative fluxes with other GEWEX products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).

  13. A diagnostic assessment of evolutionary algorithms for multi-objective surface water reservoir control

    Science.gov (United States)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Herman, Jonathan D.; Giuliani, Matteo; Castelletti, Andrea

    2016-06-01

Globally, the pressures of expanding populations, climate change, and increased energy demands are motivating significant investments in re-operationalizing existing reservoirs or designing operating policies for new ones. These challenges require an understanding of the tradeoffs that emerge across the complex suite of multi-sector demands in river basin systems. This study benchmarks our current capabilities to use Evolutionary Multi-Objective Direct Policy Search (EMODPS), a decision analytic framework in which reservoirs' candidate operating policies are represented using parameterized global approximators (e.g., radial basis functions) and those parameterized functions are then optimized using multi-objective evolutionary algorithms to discover the Pareto-approximate operating policies. We contribute a comprehensive diagnostic assessment of modern MOEAs' abilities to support EMODPS using the Conowingo reservoir in the Lower Susquehanna River Basin, Pennsylvania, USA. Our diagnostic results highlight that EMODPS can be very challenging for some modern MOEAs and that epsilon dominance, time-continuation, and auto-adaptive search are helpful for attaining high levels of performance. The ɛ-MOEA, the auto-adaptive Borg MOEA, and ɛ-NSGAII all yielded superior results for the six-objective Lower Susquehanna benchmarking test case. The top algorithms show low sensitivity to different MOEA parameterization choices and high algorithmic reliability in attaining consistent results for different random MOEA trials. Overall, EMODPS poses a promising method for discovering key reservoir management tradeoffs; however, algorithmic choice remains a key concern for problems of increasing complexity.
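
The epsilon-dominance mechanism credited above with improving reliability can be sketched as a small archiving routine: solutions are mapped to epsilon-boxes and at most one representative per non-dominated box is kept. The objective vectors below are invented for illustration, and minimization is assumed on both objectives.

```python
def eps_box(obj, eps):
    """Map an objective vector to its epsilon-box index."""
    return tuple(int(o // e) for o, e in zip(obj, eps))

def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_archive(solutions, eps):
    archive = {}                                   # box -> representative solution
    for obj in solutions:
        box = eps_box(obj, eps)
        if any(dominates(b, box) for b in archive):
            continue                               # box-dominated: reject newcomer
        # drop archive members whose boxes the newcomer's box dominates
        archive = {b: o for b, o in archive.items() if not dominates(box, b)}
        if box not in archive or dominates(obj, archive[box]):
            archive[box] = obj                     # keep the better one per box
    return sorted(archive.values())

pts = [(1.0, 9.0), (1.1, 8.9), (5.0, 5.0), (9.0, 1.0), (4.9, 5.2), (6.0, 6.0)]
front = eps_archive(pts, eps=(0.5, 0.5))
print(front)
```

The box granularity `eps` bounds how densely the approximate front is sampled, which is what keeps epsilon-dominance archives compact and search pressure steady as the number of objectives grows.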

  14. Extending the inverse scattering series free-surface-multiple-elimination algorithm by accommodating the source property on data with interfering or proximal seismic events

    Science.gov (United States)

    Zhao, Lei; Yang, Jinlong; Weglein, Arthur B.

    2017-12-01

    The inverse scattering series free-surface-multiple-elimination (FSME) algorithm is modified and extended to accommodate the source property, namely the source radiation pattern. That accommodation can provide additional value for the fidelity of the free-surface multiple predictions. The new extended FSME algorithm retains all the merits of the original algorithm, i.e., it is fully data-driven and requires no subsurface information. It is tested on a one-dimensional acoustic model with proximal and interfering seismic events, such as interfering primaries and multiples. The results indicate the new extended FSME algorithm can predict more accurate free-surface multiples than methods without the accommodation of the source property if the source has a radiation pattern. This increased effectiveness in prediction contributes to removing free-surface multiples without damaging primaries. It is important in such cases to increase predictive effectiveness because other prediction methods, such as the surface-related-multiple-elimination algorithm, have difficulties with prediction accuracy, and those issues affect efforts to remove multiples through adaptive subtraction. Therefore, accommodation of the source property can not only improve the effectiveness of the FSME algorithm, but also extend the method beyond the current algorithm (e.g., improving the internal multiple attenuation algorithm).

  15. Inversion of gravity and gravity gradiometry data for density contrast surfaces using Cauchy-type integrals

    DEFF Research Database (Denmark)

    Zhdanov, Michael; Cai, Hongzhu

    2014-01-01

    We introduce a new method of modeling and inversion of potential field data generated by a density contrast surface. Our method is based on 3D Cauchy-type integral representation of the potential fields. Traditionally, potential fields are calculated using volume integrals of the domains occupied...... by anomalous masses subdivided into prismatic cells. This discretization is computationally expensive, especially in a 3D case. The Cauchy-type integral technique makes it possible to represent the gravity field and its gradients as surface integrals. This is especially significant in the solution of problems...

  16. A satellite digital controller or 'play that PID tune again, Sam'. [Position, Integral, Derivative feedback control algorithm for design strategy

    Science.gov (United States)

    Seltzer, S. M.

    1976-01-01

    The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
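The sampled-data position-integral-derivative law described in this abstract can be illustrated with a minimal discrete-time simulation. This is a hedged sketch, not the paper's design: the plant is assumed to be a unit-inertia rigid body integrated with Euler steps, and the gains `kp`, `ki`, `kd` and sample period `T` are illustrative choices.

```python
def simulate_pid(kp, ki, kd, T=0.1, steps=2000, theta0=1.0):
    """Regulate the attitude of a unit-inertia planar rigid body to zero
    with a sampled position-integral-derivative (PID) feedback law."""
    theta, omega = theta0, 0.0      # attitude error and angular rate
    integral = 0.0                  # accumulated (integral) error
    err_prev = -theta0              # previous error, for the difference quotient
    for _ in range(steps):
        err = -theta                              # position feedback error
        integral += err * T                       # integral term
        deriv = (err - err_prev) / T              # backward-difference derivative
        u = kp * err + ki * integral + kd * deriv  # control torque
        err_prev = err
        omega += u * T                            # Euler step of the rigid-body plant
        theta += omega * T
    return theta

# with stabilizing gains the attitude error is driven toward zero
final = simulate_pid(kp=2.0, ki=0.1, kd=3.0)
```

A parameter-plane style study, as in the abstract, would repeat such a simulation over a grid of gains and sample rates to map out the stable region.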

  17. A Scalable Neuro-inspired Robot Controller Integrating a Machine Learning Algorithm and a Spiking Cerebellar-like Network

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Lund, Henrik Hautop

    2017-01-01

    Combining Fable robot, a modular robot, with a neuroinspired controller, we present the proof of principle of a system that can scale to several neurally controlled compliant modules. The motor control and learning of a robot module are carried out by a Unit Learning Machine (ULM) that embeds...... the Locally Weighted Projection Regression algorithm (LWPR) and a spiking cerebellar-like microcircuit. The LWPR guarantees both an optimized representation of the input space and the learning of the dynamic internal model (IM) of the robot. However, the cerebellar-like sub-circuit integrates LWPR input...

  18. A Torque Error Compensation Algorithm for Surface Mounted Permanent Magnet Synchronous Machines with Respect to Magnet Temperature Variations

    Directory of Open Access Journals (Sweden)

    Chang-Seok Park

    2017-09-01

    Full Text Available This paper presents a torque error compensation algorithm for a surface mounted permanent magnet synchronous machine (SPMSM) through real-time permanent magnet (PM) flux linkage estimation at various temperature conditions from medium to rated speed. As is known, the PM flux linkage in SPMSMs varies with thermal conditions. Since a maximum torque per ampere look-up table, a control method used for copper loss minimization, is developed based on the estimated PM flux linkage, variation of the PM flux linkage results in undesired torque development in SPMSM drives. Here, the PM flux linkage is estimated through a stator flux linkage observer, and the torque error is compensated in real time using the estimated PM flux linkage. The proposed torque error compensation algorithm is verified in simulation and experiment.

  19. Solution of wind integrated thermal generation system for environmental optimal power flow using hybrid algorithm

    Directory of Open Access Journals (Sweden)

    Ambarish Panda

    2016-09-01

    Full Text Available A new evolutionary hybrid algorithm (HA) has been proposed in this work for the environmental optimal power flow (EOPF) problem. The EOPF problem has been formulated in a nonlinear constrained multi-objective optimization framework. Considering the intermittency of available wind power, a cost model of the wind and thermal generation system is developed. A suitably formed objective function considering the operational cost, cost of emission, real power loss, and cost of installation of FACTS devices for maintaining a stable voltage in the system has been optimized with the HA and compared with a particle swarm optimization algorithm (PSOA) to prove its effectiveness. All the simulations are carried out in the MATLAB/SIMULINK environment taking the IEEE 30-bus system as the test system.

  20. Edge Detection in UAV Remote Sensing Images Using the Method Integrating Zernike Moments with Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    Liang Huang

    2017-01-01

    Full Text Available Because unmanned aerial vehicle remote sensing images (UAVRSI) contain rich texture details of ground objects and exhibit the phenomenon of the same objects having different spectra, it is difficult to effectively acquire edge information using traditional edge detection operators. To solve this problem, an edge detection method for UAVRSI that combines Zernike moments with clustering algorithms is proposed in this study. To begin with, two typical clustering algorithms, namely, fuzzy c-means (FCM) and K-means, are used to cluster the original remote sensing images so as to form homogeneous regions in ground objects. Then, Zernike moments are applied to carry out edge detection on the clustered remote sensing images. Finally, visual comparison and sensitivity methods are adopted to evaluate the accuracy of the detected edge information. Two groups of experimental data are selected to verify the proposed method. Results show that the proposed method effectively improves the accuracy of edge information extracted from remote sensing images.
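The clustering-then-edge pipeline can be sketched in miniature. For brevity the sketch below uses a deterministic 1D k-means on pixel intensities and then simply marks label transitions as edges; the paper's actual edge step uses Zernike moments, which are not reproduced here, and the tiny synthetic image is illustrative.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar values with k-means; centers start on a
    deterministic min-to-max grid."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# synthetic image: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = kmeans_1d(img.ravel()).reshape(img.shape)
# mark edges where the cluster label changes between horizontal neighbours
edges = labels[:, 1:] != labels[:, :-1]
```

Clustering first turns noisy intensities into homogeneous regions, so the subsequent edge operator only has to respond to region boundaries.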

  1. Using Surface Integrals for Checking Archimedes' Law of Buoyancy

    Science.gov (United States)

    Lima, F. M. S.

    2012-01-01

    A mathematical derivation of the force exerted by an "inhomogeneous" (i.e. compressible) fluid on the surface of an "arbitrarily shaped" body immersed in it is not found in the literature, which may be attributed to our trust in Archimedes' law of buoyancy. However, this law, also known as Archimedes' principle (AP), does not yield the force…
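The force computation the abstract refers to can be checked numerically: integrate the hydrostatic pressure over the surface of a submerged body and compare with the weight of displaced fluid. The sketch below does this for a sphere with illustrative values of radius, depth, and fluid density (assumed here, not taken from the paper), using a constant-density fluid so Archimedes' principle applies exactly.

```python
import numpy as np

def buoyancy_surface_integral(R=0.5, depth_c=2.0, rho=1000.0, g=9.81, n=400):
    """Upward force on a submerged sphere from the surface integral of
    hydrostatic pressure, using a midpoint rule in spherical angles."""
    dth, dph = np.pi / n, 2 * np.pi / n
    th = (np.arange(n) + 0.5) * dth           # polar angle, 0 at the top
    ph = (np.arange(n) + 0.5) * dph           # azimuth
    TH = np.meshgrid(th, ph, indexing="ij")[0]
    depth = depth_c - R * np.cos(TH)          # depth of each surface element
    p = rho * g * depth                       # hydrostatic pressure
    dA = R**2 * np.sin(TH) * dth * dph        # surface area element
    # pressure acts along the inward normal; its upward component is -cos(theta)
    return float(-(p * np.cos(TH) * dA).sum())

F = buoyancy_surface_integral()
archimedes = 1000.0 * 9.81 * (4.0 / 3.0) * np.pi * 0.5**3   # rho * g * V
```

The numerical surface integral agrees with rho·g·V to well under a tenth of a percent, which is the check the abstract's derivation formalizes for arbitrary shapes and compressible fluids.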

  2. Integrating Surface Modeling into the Engineering Design Graphics Curriculum

    Science.gov (United States)

    Hartman, Nathan W.

    2006-01-01

    It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…

  3. A Linear-time Algorithm for Integral Multiterminal Flows in Trees

    OpenAIRE

    Xiao, Mingyu; Nagamochi, Hiroshi

    2016-01-01

    In this paper, we study the problem of finding an integral multiflow which maximizes the sum of flow values between every two terminals in an undirected tree with a nonnegative integer edge capacity and a set of terminals. In general, it is known that the flow value of an integral multiflow is bounded by the cut value of a cut-system which consists of disjoint subsets each of which contains exactly one terminal or has an odd cut value, and there exists a pair of an integral multiflow and a cu...

  4. The Enhanced Locating Performance of an Integrated Cross-Correlation and Genetic Algorithm for Radio Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Yao-Tang Chang

    2014-04-01

    Full Text Available The rapid development of wireless broadband communication technology has affected the location accuracy of worldwide radio monitoring stations that employ time-difference-of-arrival (TDOA) location technology. In this study, TDOA-based location technology was implemented in Taiwan for the first time according to International Telecommunications Union Radiocommunication (ITU-R) recommendations regarding monitoring and location applications. To improve location accuracy, various scenarios, such as a three-dimensional environment (considering an unequal locating antenna configuration), were investigated. Subsequently, the proposed integrated cross-correlation and genetic algorithm was evaluated in the metropolitan area of Tainan. The results indicated that the location accuracy at a circular error probability of 50% was less than 60 m when a multipath effect was present in the area. Moreover, compared with hyperbolic algorithms that have been applied in conventional TDOA-based location systems, the proposed algorithm yielded 17-fold and 19-fold improvements in the mean difference when the location position of the interference station was favorable and unfavorable, respectively. Hence, various forms of radio interference, such as low transmission power, burst and weak signals, and metropolitan interference, were shown to be easily identified, located, and removed.
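The cross-correlation step at the core of TDOA location can be sketched generically: the time difference between two receivers is the lag that maximizes the cross-correlation of their signals. This is a textbook estimator on synthetic data, not the monitoring system's implementation.

```python
import numpy as np

def tdoa_lag(x, y):
    """Return the delay of y relative to x, in samples, as the lag that
    maximizes their cross-correlation."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)

# synthetic test: a noise-like emission received with a 37-sample delay
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
x = s
y = np.concatenate([np.zeros(37), s[:-37]])   # delayed copy at receiver 2
lag = tdoa_lag(x, y)
```

Scaling the lag by the sampling interval and the propagation speed gives the range difference that defines one hyperbolic locus per receiver pair; intersecting several such loci yields the transmitter position.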

  5. An Improved PDR/Magnetometer/Floor Map Integration Algorithm for Ubiquitous Positioning Using the Adaptive Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2015-11-01

    Full Text Available In this paper, a scheme is presented for fusing a foot-mounted Inertial Measurement Unit (IMU) and a floor map to provide ubiquitous positioning in a number of settings, such as in a supermarket as a shopping guide, in a fire emergency service for navigation, or with a hospital patient to be tracked. First, several Zero-Velocity Detection (ZDET) algorithms are compared and discussed for use in the static detection of a pedestrian. By introducing information on the zero velocity of the pedestrian, fused with a magnetometer measurement, an improved Pedestrian Dead Reckoning (PDR) model is developed to constrain the accumulating errors associated with PDR positioning. Second, a Correlation Matching Algorithm based on map projection (CMAP) is presented, and a zone division of a floor map is demonstrated for fusion with the PDR algorithm. Finally, in order to use the dynamic characteristics of a pedestrian's trajectory, the Adaptive Unscented Kalman Filter (A-UKF) is applied to tightly integrate the IMU, magnetometers and floor map for ubiquitous positioning. The results of a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirm that the proposed scheme can reliably achieve meter-level positioning.
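A minimal zero-velocity detector of the kind compared in the abstract can be sketched as a variance test on the accelerometer magnitude: during the stance phase of a foot-mounted IMU the local variance is small. The window length, threshold, and synthetic signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def zero_velocity_mask(acc_mag, win=25, var_thresh=0.05):
    """Flag samples whose surrounding window of accelerometer magnitudes
    has variance below a threshold (likely zero-velocity/stance)."""
    mask = np.zeros(len(acc_mag), dtype=bool)
    half = win // 2
    for i in range(half, len(acc_mag) - half):
        mask[i] = acc_mag[i - half:i + half + 1].var() < var_thresh
    return mask

# synthetic magnitude signal at 100 Hz: 1 s standing, then 1 s walking
rng = np.random.default_rng(1)
fs = 100
still = 9.81 + 0.01 * rng.standard_normal(fs)
t = np.arange(fs) / fs
walking = 9.81 + 2.0 * np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(fs)
mask = zero_velocity_mask(np.concatenate([still, walking]))
```

Each detected stance interval lets the PDR filter reset the velocity error to zero, which is what keeps the dead-reckoning drift bounded between steps.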

  6. Fuzzy surfaces in GIS and geographical analysis theory, analytical methods, algorithms and applications

    CERN Document Server

    Lodwick, Weldon

    2007-01-01

    Surfaces are central to geographical analysis. Their generation and manipulation are a key component of geographical information systems (GISs). However, geographical surface data are often not precise. When surfaces are used to model geographical entities, the data inherently contain uncertainty in terms of both position and attribute. Fuzzy Surfaces in GIS and Geographical Analysis sets out a process to identify the uncertainty in geographic entities. It describes how to successfully obtain, model, analyze, and display data, as well as interpret results within the context of GIS. Focusing on uncertainty that arises from transitional boundaries, the book limits its study to three types of uncertainties: intervals, fuzzy sets, and possibility distributions. The book explains that uncertainty in geographical data typically stems from these three, and it is only natural to incorporate them into the analysis and display of surface data. The book defines the mathematics associated with each method for analysis,...

  7. Integrating remotely sensed surface water extent into continental scale hydrology.

    Science.gov (United States)

    Revilla-Romero, Beatriz; Wanders, Niko; Burek, Peter; Salamon, Peter; de Roo, Ad

    2016-12-01

    In hydrological forecasting, data assimilation techniques are employed to improve estimates of initial conditions, updating incorrect model states with observational data. However, the limited availability of continuous and up-to-date ground streamflow data is one of the main constraints for large-scale flood forecasting models. This is the first study that assesses the impact of assimilating daily remotely sensed surface water extent at a 0.1° × 0.1° spatial resolution, derived from the Global Flood Detection System (GFDS), into a global rainfall-runoff model including large ungauged areas at the continental spatial scale in Africa and South America. Surface water extent is observed using a range of passive microwave remote sensors. The methodology uses the brightness temperature, as water bodies have a lower emissivity. In a time series, the satellite signal is expected to vary with changes in water surface, and anomalies can be correlated with flood events. The Ensemble Kalman Filter (EnKF) is a Monte-Carlo implementation of data assimilation and is used here by applying random sampling perturbations to the precipitation inputs to account for uncertainty, obtaining ensemble streamflow simulations from the LISFLOOD model. Results of the updated streamflow simulation are compared to baseline simulations without assimilation of the satellite-derived surface water extent. Validation is done at over 100 in situ river gauges using daily streamflow observations in the African and South American continents over a one-year period. Some of the more commonly used metrics in hydrology were calculated: KGE', NSE, PBIAS%, R2, RMSE, and VE. Results show that, for example, the NSE score improved at 61 out of 101 stations, with significant improvements in both the timing and volume of the flow peaks, whereas validation at gauges located in lowland jungle showed the poorest performance, mainly due to the influence of closed forest on the satellite signal retrieval.
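The EnKF analysis step used in the study can be sketched schematically. The toy example below assumes a one-dimensional state with a direct observation; in the paper the ensemble comes from perturbing precipitation inputs to LISFLOOD, which is not reproduced here, and all numbers are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """Stochastic EnKF analysis: update each member toward a perturbed
    observation using the ensemble-estimated Kalman gain.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)       # state anomalies
    Y = (H @ ensemble.T).T                     # members mapped to obs space
    Yp = Y - Y.mean(axis=0)
    Pxy = X.T @ Yp / (n - 1)                   # state-obs covariance
    Pyy = Yp.T @ Yp / (n - 1) + np.diag(np.full(H.shape[0], obs_err_std**2))
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    rng = np.random.default_rng(2)
    perturbed = obs + obs_err_std * rng.standard_normal((n, H.shape[0]))
    return ensemble + (perturbed - Y) @ K.T

prior = np.random.default_rng(3).normal(5.0, 2.0, size=(200, 1))
posterior = enkf_update(prior, obs=np.array([1.0]), obs_err_std=0.5, H=np.eye(1))
```

The posterior ensemble mean moves toward the observation and the spread shrinks, exactly the behavior exploited when satellite-derived water extent nudges the model's streamflow states.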

  8. Integrated WiFi/PDR/Smartphone Using an Adaptive System Noise Extended Kalman Filter Algorithm for Indoor Localization

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-02-01

    Full Text Available Wireless signal strength is susceptible to interference, jumping, and instability, which often appear in positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform the fusion calculation by adaptively determining the dynamic noise of the filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically imposes quite a high computational burden; to reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of the respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m with the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).

  9. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  10. Integral methods for shallow free-surface flows with separation

    DEFF Research Database (Denmark)

    Watanabe, S.; Putkaradze, V.; Bohr, Tomas

    2003-01-01

    an inclined plane we take a similar approach and derive a simple model in which the velocity profile is not restricted to a parabolic or self-similar form. Two types of solutions with large surface distortions are found: solitary, kink-like propagating fronts, obtained when the flow rate is suddenly changed......, and stationary jumps, obtained, for instance, behind a sluice gate. We then include time dependence in the model to study the stability of these waves. This allows us to distinguish between sub- and supercritical flows by calculating dispersion relations for wavelengths of the order of the width of the layer....

  11. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
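The nested (Sexton-Weingarten) leapfrog that the abstract generalizes can be sketched on a toy Hamiltonian with one slow and one fast force: the outer scale handles the slow force, and the inner scale, an exact multiple smaller, handles the fast one. The harmonic forces and step sizes below are illustrative, not a lattice-QCD setting.

```python
def nested_leapfrog(q, p, F_slow, F_fast, dt, m, n_steps):
    """Two-scale leapfrog: half kicks of the slow force wrapped around an
    inner leapfrog of m substeps of size dt/m for the fast force."""
    h = dt / m
    for _ in range(n_steps):
        p += 0.5 * dt * F_slow(q)        # outer half kick (slow force)
        for _ in range(m):               # inner leapfrog (fast force)
            p += 0.5 * h * F_fast(q)
            q += h * p
            p += 0.5 * h * F_fast(q)
        p += 0.5 * dt * F_slow(q)        # outer half kick (slow force)
    return q, p

# toy system: soft spring (slow) plus stiff spring (fast), unit mass
F_slow = lambda q: -1.0 * q
F_fast = lambda q: -100.0 * q
q0, p0 = 1.0, 0.0
q, p = nested_leapfrog(q0, p0, F_slow, F_fast, dt=0.05, m=10, n_steps=1000)
E0 = 0.5 * p0**2 + 0.5 * 101.0 * q0**2   # total energy, spring constant 1 + 100
E = 0.5 * p**2 + 0.5 * 101.0 * q**2
```

Because the composed map is symplectic, the energy error stays bounded over long trajectories; the generalization in the paper relaxes the constraint that each scale divide the next exactly.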

  12. Unscented Kalman Filter Algorithm for WiFi-PDR Integrated Indoor Positioning

    Directory of Open Access Journals (Sweden)

    CHEN GuoLiang

    2015-12-01

    Full Text Available Indoor positioning still faces many fundamental technical problems although it has been widely applied. A novel indoor positioning technology using a smart phone with the assistance of the widely available and economical WiFi signals is proposed, including its principles and characteristics for indoor positioning. Firstly, the system's accuracy is improved by fusing WiFi fingerprinting positioning and PDR (pedestrian dead reckoning) positioning with a UKF (unscented Kalman filter). Secondly, the real-time performance is improved by clustering the WiFi fingerprints with the k-means clustering algorithm. An investigation test was conducted in an indoor environment to assess its performance on a HUAWEI P6-U06 smart phone. The results show that, compared to the pattern-matching system without clustering, an average reduction of 51% in the time cost can be obtained without degrading the positioning accuracy. When the person is walking, the average positioning error of WiFi is 7.76 m and the average positioning error of PDR is 4.57 m. After UKF fusing, the system's average positioning error is down to 1.24 m. This shows that the algorithm greatly improves the system's real-time performance and positioning accuracy.

  13. Technical assessment of forest road network using Backmund and surface distribution algorithm in a hardwood forest of Hyrcanian zone

    Energy Technology Data Exchange (ETDEWEB)

    Parsakhoo, P.

    2016-07-01

    Aim of study: Corrected Backmund and Surface Distribution Algorithms (SDA) for the analysis of forest road networks are introduced and presented in this study. Research was carried out to compare road network performance between two districts in a hardwood forest. Area of study: Shast Kalateh forests, Iran. Materials and methods: In the uncorrected Backmund algorithm, skidding distance was determined by calculating road density and spacing, and the result was designed as the Potential Area for Skidding Operations (PASO) in ArcGIS software. To correct this procedure, the skidding constraint areas were recorded using GPS and removed from the PASO. In the SDA, the shortest perpendicular distance from the geometrical center of each timber compartment to the road was measured in both districts. Main results: In the corrected Backmund, forest openness in districts I and II was 70.3% and 69.5%, respectively; therefore, there was little difference in forest openness between the districts based on the uncorrected Backmund. In the SDA, the mean distances from the geometrical centers of timber compartments to the roads of districts I and II were 199.45 and 149.31 meters, respectively. Forest road network distribution in district II was better than that of district I according to the SDA. Research highlights: It was concluded that the uncorrected Backmund was not precise enough to assess the forest road network, while the corrected Backmund could exhibit a real PASO by removing skidding constraints. According to the presented algorithms, forest road network performance in district II was better than in district I. (Author)

  14. Technical assessment of forest road network using Backmund and surface distribution algorithm in a hardwood forest of Hyrcanian zone

    Directory of Open Access Journals (Sweden)

    Aidin Parsakhoo

    2016-07-01

    Full Text Available Aim of study: Corrected Backmund and Surface Distribution Algorithms (SDA) for the analysis of forest road networks are introduced and presented in this study. Research was carried out to compare road network performance between two districts in a hardwood forest. Area of study: Shast Kalateh forests, Iran. Materials and methods: In the uncorrected Backmund algorithm, skidding distance was determined by calculating road density and spacing, and the result was designed as the Potential Area for Skidding Operations (PASO) in ArcGIS software. To correct this procedure, the skidding constraint areas were recorded using GPS and removed from the PASO. In the SDA, the shortest perpendicular distance from the geometrical center of each timber compartment to the road was measured in both districts. Main results: In the corrected Backmund, forest openness in districts I and II was 70.3% and 69.5%, respectively; therefore, there was little difference in forest openness between the districts based on the uncorrected Backmund. In the SDA, the mean distances from the geometrical centers of timber compartments to the roads of districts I and II were 199.45 and 149.31 meters, respectively. Forest road network distribution in district II was better than that of district I according to the SDA. Research highlights: It was concluded that the uncorrected Backmund was not precise enough to assess the forest road network, while the corrected Backmund could exhibit a real PASO by removing skidding constraints. According to the presented algorithms, forest road network performance in district II was better than in district I.
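The classical Backmund quantities behind this record can be sketched with simple arithmetic; the two-sided strip model of opened area and all input numbers below are illustrative assumptions, not the study's measurements.

```python
def backmund(road_length_m, area_ha, buffer_m, constraint_ha=0.0):
    """Road density (m/ha), mean road spacing (m), and percent forest
    openness after removing skidding-constraint area, using a simple
    two-sided buffer strip around the roads (strip overlaps ignored)."""
    density = road_length_m / area_ha                      # m per hectare
    spacing = 10_000.0 / density                           # mean road spacing, m
    opened_ha = road_length_m * 2 * buffer_m / 10_000.0    # buffered strip, ha
    opened_ha = min(opened_ha, area_ha) - constraint_ha    # corrected PASO
    openness = 100.0 * opened_ha / area_ha                 # percent of district
    return density, spacing, openness

density, spacing, openness = backmund(road_length_m=20_000, area_ha=1_000,
                                      buffer_m=200, constraint_ha=100)
```

Subtracting the GPS-surveyed constraint area from the buffered strip is the "correction" step the abstract describes; without it the openness figure overstates the skiddable area.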

  15. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis

    Science.gov (United States)

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-01

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT trajectory. This point is called the barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has a zero eigenvector which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate what is the optimal pulling direction and what is the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positive-definite function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.
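The Gauss-Newton iteration the authors build on can be illustrated in its generic form: minimize the squared norm of a residual vector by repeatedly solving the linearized normal equations. The 2D residual function below is an illustrative test, not the paper's σ-function.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimize ||r(x)||^2 with Gauss-Newton steps from the normal equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res, jac = r(x), J(x)
        step = np.linalg.solve(jac.T @ jac, jac.T @ res)  # normal equations
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

# illustrative 2D residuals with an exact zero at (1, 2)
r = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - 2.0), x[0] * x[1] - 2.0])
J = lambda x: np.array([[1.0, 0.0], [0.0, 10.0], [x[1], x[0]]])
x = gauss_newton(r, J, x0=[0.0, 0.0])
```

Near a zero-residual solution the method converges rapidly, which is what makes a well-designed target function (such as the paper's σ-function) attractive for locating BBPs.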

  16. An Improved Response Surface Methodology Algorithm with an Application to Traffic Signal Optimization for Urban Networks

    Science.gov (United States)

    1995-01-01

    Prepared ca. 1995. This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization of urban networks. It also quantifies the gains of using the common random number (CRN) va...

  17. Land Surface Temperature and Emissivity Separation from Cross-Track Infrared Sounder Data with Atmospheric Reanalysis Data and ISSTES Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-Ze Zhang

    2017-01-01

    Full Text Available The Cross-track Infrared Sounder (CrIS) is one of the most advanced hyperspectral instruments and has been used for various atmospheric applications such as atmospheric retrievals and weather forecast modeling. However, because of the specific design purpose of CrIS, little attention has been paid to retrieving land surface parameters from CrIS data. To take full advantage of the rich spectral information in CrIS data to improve land surface retrievals, particularly the acquisition of a continuous Land Surface Emissivity (LSE) spectrum, this paper attempts to simultaneously retrieve a continuous LSE spectrum and the Land Surface Temperature (LST) from CrIS data with atmospheric reanalysis data and the Iterative Spectrally Smooth Temperature and Emissivity Separation (ISSTES) algorithm. The results show that the accuracy of the retrieved LSEs and LST is comparable with current land products. The overall differences of the LST and LSE retrievals are approximately 1.3 K and 1.48%, respectively. However, the LSEs in our study can be provided as a continuous spectrum instead of the single-channel values in traditional products. The retrieved LST and LSEs can now be better used to further analyze surface properties or improve the retrieval of atmospheric parameters.

  18. Continuous measurements of water surface height and width along a 6.5km river reach for discharge algorithm development

    Science.gov (United States)

    Tuozzolo, S.; Durand, M. T.; Pavelsky, T.; Pentecost, J.

    2015-12-01

    The upcoming Surface Water and Ocean Topography (SWOT) satellite will provide measurements of river width and water surface elevation and slope along continuous swaths of world rivers. Understanding water surface slope and width dynamics in river reaches is important for both developing and validating discharge algorithms to be used on future SWOT data. We collected water surface elevation and river width data along a 6.5 km stretch of the Olentangy River in Columbus, Ohio from October to December 2014. Continuous measurements of water surface height were supplemented with periodic river width measurements at twenty sites along the study reach. The water surface slope of the entire reach ranged from 41.58 cm/km at baseflow to 45.31 cm/km after a storm event. The study reach was also broken into sub-reaches roughly 1 km in length to study smaller-scale slope dynamics. The furthest upstream sub-reaches are characterized by free-flowing riffle-pool sequences, while the furthest downstream sub-reaches were directly affected by two low-head dams. In the sub-reaches immediately upstream of each dam, baseflow slope is as low as 2 cm/km, while the furthest upstream free-flowing sub-reach has a baseflow slope of 100 cm/km. During high flow events the backwater effect of the dams was observed to propagate upstream: sub-reaches impounded by the dams had increased water surface slopes, while free-flowing sub-reaches had decreased water surface slopes. During the largest observed flow event, a stage change of 0.40 m affected sub-reach slopes by as much as 30 cm/km. Further analysis will examine height-width relationships within the study reach and relate cross-sectional flow area to river stage. These relationships can be used in conjunction with slope data to estimate discharge using a modified Manning's equation, and are a core component of discharge algorithms being developed for the SWOT mission.
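The Manning-based discharge estimate the abstract points to can be sketched with the standard relation Q = (1/n)·A·R^(2/3)·S^(1/2). The roughness coefficient and channel geometry below are illustrative assumptions; only the slope comes from the abstract's baseflow value, converted to m/m.

```python
def manning_discharge(area_m2, wetted_perim_m, slope, n=0.035):
    """Discharge (m^3/s) from Manning's equation, with hydraulic radius
    R = A/P; n is the (assumed) Manning roughness coefficient."""
    R = area_m2 / wetted_perim_m
    return (1.0 / n) * area_m2 * R ** (2.0 / 3.0) * slope ** 0.5

# baseflow slope from the abstract: 41.58 cm/km = 4.158e-4 m/m
Q = manning_discharge(area_m2=25.0, wetted_perim_m=22.0, slope=4.158e-4)
```

A SWOT-style algorithm would invert this relation, using the satellite's observed height, width, and slope to constrain the unknown cross-sectional area and roughness.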

  19. Surface characteristics of bioactive Ti fabricated by chemical treatment for cartilaginous-integration.

    Science.gov (United States)

    Miyajima, Hiroyuki; Ozer, Fusun; Imazato, Satoshi; Mante, Francis K

    2017-09-01

Artificial hip joints are generally expected to fail due to wear after approximately 15 years and then have to be replaced by revision surgery. If articular cartilage can be integrated onto the articular surfaces of artificial joints in the same way as osseo-integration of titanium dental implants, the wear of joint implants may be reduced or prevented. However, very few studies have focused on the relationship between Ti surface and cartilage. To explore the possibility of cartilaginous-integration, we fabricated chemically treated Ti surfaces with H2O2/HCl, collagen type II and SBF, respectively. Then, we evaluated surface characteristics of the prepared Ti samples and assessed the cartilage formation by culturing chondrocytes on the Ti samples. When oxidized Ti was immersed in SBF for 7 days, apatite was formed on the Ti surface. The surface characteristics of Ti indicated that the wettability was increased by all chemical treatments compared to untreated Ti, and that the H2O2/HCl treated surface had significantly higher roughness compared to the other three groups. Chondrocytes produced significantly more cartilage matrix on all chemically treated Ti surfaces compared to untreated Ti. Thus, to realize cartilaginous-integration and to prevent wear of the implants in joints, application of bioactive Ti formed by chemical treatment would be a promising and effective strategy to improve durability of joint replacement. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Investigation of Selected Surface Integrity Features of Duplex Stainless Steel (DSS) after Turning

    Czech Academy of Sciences Publication Activity Database

    Krolczyk, G.; Nieslony, P.; Legutko, S.; Hloch, Sergej; Samardžić, I.

    2015-01-01

    Roč. 54, č. 1 (2015), s. 91-94 ISSN 0543-5846 Institutional support: RVO:68145535 Keywords : duplex stainless steel * machining * turning * surface integrity * surface roughness Subject RIV: JQ - Machines ; Tools Impact factor: 0.959, year: 2014 http://hrcak.srce.hr/126702

  1. Assessment of Polarization Effect on Efficiency of Levenberg-Marquardt Algorithm in Case of Thin Atmosphere over Black Surface

    Science.gov (United States)

    Korkin, S.; Lyapustin, A.

    2012-12-01

    The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane parallel atmosphere irradiated by collimated infinitely wide monochromatic beam of light. Black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields analytical evaluation of Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of coarse and fine fractions, the atmosphere optical depth, and the single scattering albedo, are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors, makes the algorithm independent from external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7 - 29. [4]. Mishchenko MI, Travis LD
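A minimal Levenberg-Marquardt loop on a toy exponential-fitting problem can illustrate the iteration described above; the paper's five-parameter atmospheric problem, analytic Jacobian, and Cramer's-rule solver are replaced here by a generic two-parameter example and numpy's linear solver:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, n_iter=50):
    """Minimal Levenberg-Marquardt loop for least-squares fitting.

    residual(x) -> r (m,), jacobian(x) -> J (m, n). Each step solves the
    damped normal equations (J^T J + lam*I) dx = -J^T r; lam shrinks when
    a step lowers the cost and grows when it does not.
    """
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(A, -J.T @ r)
        x_new = x + dx
        cost_new = 0.5 * np.sum(residual(x_new) ** 2)
        if cost_new < cost:          # accept step, trust the model more
            x, cost, lam = x_new, cost_new, lam * 0.3
        else:                        # reject step, damp harder
            lam *= 10.0
    return x

# Toy problem: fit y = a * exp(b * t) to noise-free synthetic data.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p_fit = levenberg_marquardt(res, jac, x0=[1.0, 0.0])
```

With an analytic Jacobian, as in the abstract, each iteration costs one residual evaluation plus a small linear solve, which is why a fast fixed-size solver (Cramer's rule for the 5x5 system in the paper) is attractive.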

  2. Generating Alternative Engineering Designs by Integrating Desktop VR with Genetic Algorithms

    Science.gov (United States)

    Chandramouli, Magesh; Bertoline, Gary; Connolly, Patrick

    2009-01-01

This study proposes an innovative solution to the problem of multiobjective engineering design optimization by integrating desktop VR with genetic computing. Although this study considers the case of construction design as an example to illustrate the framework, the method can readily be extended to other engineering design problems as well.…

  3. Correlation functions of integrable models: A description of the ABACUS algorithm

    NARCIS (Netherlands)

    Caux, J.S.

    2009-01-01

Recent developments in the theory of integrable models have provided the means of calculating dynamical correlation functions of some important observables in systems such as Heisenberg spin chains and one-dimensional atomic gases. This article explicitly describes how such calculations are generally implemented in the ABACUS C++ library, emphasizing the universality in treatment of different cases coming as a consequence of unifying features within the Bethe ansatz.

  4. A memetic algorithm for bi-objective integrated forward/reverse logistics network design

    NARCIS (Netherlands)

    Pishvaee, Mir Saman; Farahani, Reza Zanjirani; Dullaert, Wout

    Logistics network design is a major strategic issue due to its impact on the efficiency and responsiveness of the supply chain. This paper proposes a model for integrated logistics network design to avoid the sub-optimality caused by a separate, sequential design of forward and reverse logistics

  5. An unit cost adjusting heuristic algorithm for the integrated planning and scheduling of a two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-10-01

Full Text Available Purpose: The stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship in the current market, and efficient scheduling techniques are important tools in establishing such dynamic supply chain relationships. This paper studies the optimization of the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, whose manufacturers have different production capacities, holding and production cost rates, and transportation costs to retailers, to obtain a minimum supply chain operating cost. Design/methodology/approach: As a complex task allocation and scheduling problem, this paper sets up an INLP model for it and designs a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supplying quantities step by step according to their unit costs. Findings: A contrasting analysis between the UCA heuristic and the Lingo solver on many numerical experiments shows that the INLP model and the UCA algorithm can obtain a near-optimal solution of the two-stage supply chain planning and scheduling problem within very short CPU time. Research limitations/implications: The proposed UCA heuristic can easily help managers to optimize two-stage supply chain scheduling problems that do not include the delivery time and batching of orders. Since two-stage supply chains are the most common form of actual commercial relationships, modifying and studying the UCA heuristic should make it possible to optimize the integrated planning and scheduling problems of supply chains with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for optimizing the integrated planning and scheduling problem of two-stage supply chains with the constraints of suppliers' production capacity and the orders' delivery time, and has a great
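The unit-cost idea behind the heuristic can be illustrated with a greedy sketch that fills demand from the cheapest supplier first, subject to capacity; the published UCA heuristic adjusts quantities iteratively and handles scheduling details not modeled here, and the supplier data are invented:

```python
def allocate_by_unit_cost(demand, suppliers):
    """Greedy allocation in the spirit of a unit-cost adjusting heuristic:
    fill demand from the lowest unit-cost supplier first, respecting each
    supplier's production capacity.

    suppliers: list of (name, unit_cost, capacity) tuples.
    Returns ({name: quantity}, total_cost).
    """
    plan, total_cost, remaining = {}, 0.0, demand
    for name, unit_cost, capacity in sorted(suppliers, key=lambda s: s[1]):
        if remaining <= 0:
            break
        q = min(capacity, remaining)   # take as much as the cheap supplier allows
        plan[name] = q
        total_cost += q * unit_cost
        remaining -= q
    if remaining > 0:
        raise ValueError("total capacity is insufficient for the demand")
    return plan, total_cost

# Three hypothetical manufacturers serving 100 units of retailer demand.
plan, cost = allocate_by_unit_cost(
    demand=100,
    suppliers=[("M1", 4.0, 60), ("M2", 3.0, 50), ("M3", 5.0, 80)],
)
```

The cheapest manufacturer (M2) is exhausted first and the remainder goes to the next-cheapest (M1), leaving the most expensive (M3) unused.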

  6. Integral methods for shallow free-surface flows with separation

    DEFF Research Database (Denmark)

    Watanabe, S.; Putkaradze, V.; Bohr, Tomas

    2003-01-01

    eddy and separated flow. Assuming a variable radial velocity profile as in Karman-Pohlhausen's method, we obtain a system of two ordinary differential equations for stationary states that can smoothly go through the jump. Solutions of the system are in good agreement with experiments. For the flow down...... an inclined plane we take a similar approach and derive a simple model in which the velocity profile is not restricted to a parabolic or self-similar form. Two types of solutions with large surface distortions are found: solitary, kink-like propagating fronts, obtained when the flow rate is suddenly changed......, and stationary jumps, obtained, for instance, behind a sluice gate. We then include time dependence in the model to study the stability of these waves. This allows us to distinguish between sub- and supercritical flows by calculating dispersion relations for wavelengths of the order of the width of the layer....

  7. The retrieval of aerosol over land surface from GF-1 16m camera with Deep Blue algorithm

    Science.gov (United States)

    Zhu, Wende; Zheng, Taihao; Zhang, Luo; Wang, Lei; Cai, Kun

    2017-12-01

As China's new high-resolution Earth-observation satellite, GF-1 poses a key open problem for air pollution monitoring applications of its data. In this paper, based on the Deep Blue algorithm of Hsu et al (2004) and taking account of the characteristics of the GF-1 16 m camera, the contribution of the land surface was removed using the MODIS surface reflectance product, and aerosol optical depth (AOD) was retrieved from the apparent reflectance in the blue band. The Deep Blue algorithm was thus applied successfully to the GF-1 16 m camera. We then collected GF-1 16 m camera data over the Beijing area between August and November 2014 and carried out AOD retrieval experiments. The retrieved images clearly show the spatial distribution of AOD. Finally, the AOD was validated against ground-based AOD data from the AERONET/PHOTONS Beijing site. There is good agreement between GF-1 AOD and AERONET/PHOTONS AOD (R>0.7), but GF-1 AOD is noticeably larger than the ground-based AOD, which may be caused by the difference in filter response functions between MODIS and the GF-1 camera.
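Once the surface contribution is fixed, a Deep Blue-style retrieval reduces to inverting a lookup table of simulated top-of-atmosphere blue-band reflectance against AOD; the sketch below uses an invented monotone table rather than real radiative-transfer output:

```python
import numpy as np

# Toy lookup table: blue-band apparent (TOA) reflectance simulated for a
# grid of AOD values at a fixed geometry and surface reflectance. The
# numbers are illustrative, not from a real radiative-transfer run.
aod_grid = np.array([0.0, 0.2, 0.5, 1.0, 2.0])
toa_reflectance_grid = np.array([0.060, 0.085, 0.120, 0.175, 0.260])

def retrieve_aod(observed_toa_reflectance):
    """Invert the table by 1-D interpolation. Over dark (blue-band)
    surfaces the aerosol signal dominates, so TOA reflectance grows
    monotonically with AOD and the inverse is well defined."""
    return float(np.interp(observed_toa_reflectance,
                           toa_reflectance_grid, aod_grid))

aod = retrieve_aod(0.120)
```

In practice the table is indexed additionally by sun/view geometry and the surface reflectance taken from the MODIS product, but the inversion step is the same one-dimensional interpolation per pixel.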

  8. Grid Integration of PV Power based on PHIL testing using different Interface Algorithms

    DEFF Research Database (Denmark)

    Craciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso

    2013-01-01

    Photovoltaic (PV) power among all renewable energies had the most accelerated growth rate in terms of installed capacity in recent years. Transmission System Operators (TSOs) changed their perspective about PV power and started to include it into their planning and operation, imposing PV systems...... to be more active in grid support. Therefore, a better understanding and detailed analysis of the PV systems interaction with the grid is needed; hence power hardware in the loop (PHIL) testing involving PV power became an interesting subject to look into. To test PV systems for grid code (GC) compliance...... and supply of ancillary services, first the grid has to be simulated using PHIL, but in order to achieve it, different interface algorithms (IA) had to be evaluated in terms of system stability and signal accuracy....

  9. Congestion Control Algorithm in Distribution Feeders: Integration in a Distribution Management System

    Directory of Open Access Journals (Sweden)

    Tine L. Vandoorn

    2015-06-01

Full Text Available The increasing share of distributed energy resources poses a challenge to the distribution network operator (DNO) to maintain the current availability of the system while limiting the investment costs. Related to this, there is a clear trend of DNOs trying to better monitor their grid by installing a distribution management system (DMS). This DMS enables the DNOs to remotely switch their network or better localize and solve faults. Moreover, the DMS can be used to centrally control the grid assets. Therefore, in this paper, a control strategy is discussed that can be implemented in the DMS for solving current congestion problems posed by the increasing share of renewables in the grid. This control strategy controls wind turbines to avoid congestion while mitigating the required investment costs, achieving a globally cost-efficient solution. Next to the application and objective of the control, the parameter tuning of the control algorithm is discussed.
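The congestion-control objective can be caricatured as a curtailment rule applied by the DMS; the proportional scheme and feeder limit below are illustrative assumptions, not the paper's cost-optimal strategy:

```python
def curtail_for_congestion(wind_outputs_mw, feeder_limit_mw):
    """Proportionally curtail wind turbine outputs so the total injection
    stays within the feeder's thermal limit (a toy stand-in for the
    centrally coordinated DMS control described above)."""
    total = sum(wind_outputs_mw)
    if total <= feeder_limit_mw:
        return list(wind_outputs_mw)        # no congestion, no curtailment
    scale = feeder_limit_mw / total         # shed the excess pro rata
    return [p * scale for p in wind_outputs_mw]

# Three turbines producing 10 MW total on a feeder rated for 8 MW.
curtailed = curtail_for_congestion([2.0, 3.0, 5.0], feeder_limit_mw=8.0)
```

A real scheme would weigh curtailment against grid-reinforcement cost; this sketch only shows the constraint being enforced.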

  10. Framework for Integrating Science Data Processing Algorithms Into Process Control Systems

    Science.gov (United States)

    Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.

    2011-01-01

    A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.

  11. On the initial condition problem of the time domain PMCHWT surface integral equation

    KAUST Repository

    Uysal, Ismail Enes

    2017-05-13

Non-physical, linearly increasing and constant current components are induced in the marching on-in-time solution of time domain surface integral equations when initial conditions on the time derivatives of the (unknown) equivalent currents are not enforced properly. This problem can be remedied by solving the time integral of the surface integral equation for auxiliary currents that are defined to be the time derivatives of the equivalent currents. Then the equivalent currents are obtained by numerically differentiating the auxiliary ones. In this work, this approach is applied to the marching on-in-time solution of the time domain Poggio-Miller-Chan-Harrington-Wu-Tsai surface integral equation enforced on dispersive/plasmonic scatterers. Accuracy of the proposed method is demonstrated by a numerical example.
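The final step, recovering the equivalent currents from the auxiliary (time-integrated) unknowns by numerical differentiation, can be sketched with a generic finite-difference routine; the paper's actual marching-on-in-time discretization is not reproduced here:

```python
import numpy as np

def differentiate_auxiliary(aux_samples, dt):
    """Recover equivalent-current samples from the auxiliary unknowns by
    central differences (one-sided at the endpoints); a generic stand-in
    for the numerical differentiation step described above."""
    return np.gradient(aux_samples, dt)

# Sanity check on a known pair: if the auxiliary unknown is sin(t), its
# time derivative (the equivalent current) should be cos(t).
t = np.linspace(0.0, 2.0 * np.pi, 2001)
current = differentiate_auxiliary(np.sin(t), t[1] - t[0])
```

Because differentiation amplifies noise, production codes typically pair it with a sufficiently smooth temporal basis rather than raw finite differences.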

  12. Predicting tooth surface loss using genetic algorithms-optimized artificial neural networks.

    Science.gov (United States)

    Al Haidan, Ali; Abu-Hammad, Osama; Dar-Odeh, Najla

    2014-01-01

Our aim was to predict tooth surface loss in individuals without the need to conduct clinical examinations. Artificial neural networks (ANNs) were used to construct a mathematical model. Input data consisted of age, smoker status, type of tooth brush, brushing, and consumption of pickled food, fizzy drinks, orange, apple, lemon, and dried seeds. Output data were the sum of tooth surface loss scores for selected teeth. The optimized constructed ANN consisted of a 2-layer network with 15 neurons in the first layer and one neuron in the second layer. The data of 46 subjects were used to build the model, while the data of 15 subjects were used to test the model. Accepting an error of ±5 scores for all chosen teeth, the accuracy of the network becomes more than 80%. In conclusion, this study shows that modeling tooth surface loss using ANNs is possible and can be achieved with a high degree of accuracy.
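The described architecture, a two-layer network with 15 hidden neurons and one output neuron, can be sketched in plain numpy; the input features and targets below are synthetic stand-ins for the clinical data, and plain gradient descent stands in for the paper's genetic-algorithm optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 10 input features (age, smoker status, diet
# items, ...) and one output (summed tooth-surface-loss score), 46 subjects.
X = rng.normal(size=(46, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=46)

# Two-layer network as described: 15 hidden neurons, 1 output neuron.
W1, b1 = rng.normal(scale=0.3, size=(10, 15)), np.zeros(15)
W2, b2 = rng.normal(scale=0.3, size=(15, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    return h, (h @ W2 + b2).ravel()    # linear output neuron

losses, lr = [], 0.01
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the two layers.
    g_out = 2 * err[:, None] / len(y)          # dL/dpred
    gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)        # through tanh
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The training loss should fall steadily on this synthetic regression task; the published study instead tuned the weights (and architecture) with a genetic algorithm.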

  13. Predicting Tooth Surface Loss Using Genetic Algorithms-Optimized Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ali Al Haidan

    2014-01-01

Full Text Available Our aim was to predict tooth surface loss in individuals without the need to conduct clinical examinations. Artificial neural networks (ANNs) were used to construct a mathematical model. Input data consisted of age, smoker status, type of tooth brush, brushing, and consumption of pickled food, fizzy drinks, orange, apple, lemon, and dried seeds. Output data were the sum of tooth surface loss scores for selected teeth. The optimized constructed ANN consisted of a 2-layer network with 15 neurons in the first layer and one neuron in the second layer. The data of 46 subjects were used to build the model, while the data of 15 subjects were used to test the model. Accepting an error of ±5 scores for all chosen teeth, the accuracy of the network becomes more than 80%. In conclusion, this study shows that modeling tooth surface loss using ANNs is possible and can be achieved with a high degree of accuracy.

  14. Experimental study of surface integrity and fatigue life in the face milling of Inconel 718

    Science.gov (United States)

    Wang, Xiangyu; Huang, Chuanzhen; Zou, Bin; Liu, Guoliang; Zhu, Hongtao; Wang, Jun

    2017-12-01

    The Inconel 718 alloy is widely used in the aerospace and power industries. The machining-induced surface integrity and fatigue life of this material are important factors for consideration due to high reliability and safety requirements. In this work, the milling of Inconel 718 was conducted at different cutting speeds and feed rates. Surface integrity and fatigue life were measured directly. The effects of cutting speed and feed rate on surface integrity and their further influences on fatigue life were analyzed. Within the chosen parameter range, the cutting speed barely affected the surface roughness, whereas the feed rate increased the surface roughness through the ideal residual height. The surface hardness increased as the cutting speed and feed rate increased. Tensile residual stress was observed on the machined surface, which showed improvement with the increasing feed rate. The cutting speed was not an influencing factor on fatigue life, but the feed rate affected fatigue life through the surface roughness. The high surface roughness resulting from the high feed rate could result in a high stress concentration factor and lead to a low fatigue life.

  15. An Algorithm for Surface Current Retrieval from X-band Marine Radar Images

    Directory of Open Access Journals (Sweden)

    Chengxi Shen

    2015-06-01

Full Text Available In this paper, a novel current inversion algorithm from X-band marine radar images is proposed. The routine, for which deep water is assumed, begins with 3-D FFT of the radar image sequence, followed by the extraction of the dispersion shell from the 3-D image spectrum. Next, the dispersion shell is converted to a polar current shell (PCS) using a polar coordinate transformation. After removing outliers along each radial direction of the PCS, a robust sinusoidal curve fitting is applied to the data points along each circumferential direction of the PCS. The angle corresponding to the maximum of the estimated sinusoid function is determined to be the current direction, and the amplitude of this sinusoidal function is the current speed. For validation, the algorithm is tested against both simulated radar images and field data collected by a vertically-polarized X-band system and ground-truthed with measurements from an acoustic Doppler current profiler (ADCP). From the field data, it is observed that when the current speed is less than 0.5 m/s, the root mean square differences between the radar-derived and the ADCP-measured current speed and direction are 7.3 cm/s and 32.7°, respectively. The results indicate that the proposed procedure, unlike most existing current inversion schemes, is not susceptible to high current speeds and circumvents the need to consider aliasing. Meanwhile, the relatively low computational cost makes it an excellent choice in practical marine applications.
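The core of the retrieval, fitting a sinusoid u·cos(θ − θc) around the polar current shell, is a linear least-squares problem in disguise, since u·cos(θ − θc) = a·cos θ + b·sin θ; this self-contained sketch uses synthetic, noise-free samples:

```python
import numpy as np

def fit_current(theta, radial_speed):
    """Fit u_r(theta) = a*cos(theta) + b*sin(theta) by linear least
    squares. The current speed is hypot(a, b) and the current direction
    is the angle where the sinusoid peaks, atan2(b, a)."""
    A = np.column_stack([np.cos(theta), np.sin(theta)])
    (a, b), *_ = np.linalg.lstsq(A, radial_speed, rcond=None)
    return float(np.hypot(a, b)), float(np.arctan2(b, a))

# Synthetic polar current shell: a 0.4 m/s current toward 60 degrees,
# sampled at 72 circumferential directions.
theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
true_speed, true_dir = 0.4, np.deg2rad(60.0)
samples = true_speed * np.cos(theta - true_dir)
speed, direction = fit_current(theta, samples)
```

Casting the fit as linear least squares is what makes it robust and cheap; outlier rejection along the radial directions (as in the abstract) would precede this step on real data.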

  16. Correlation functions of integrable models: A description of the ABACUS algorithm

    Science.gov (United States)

    Caux, Jean-Sébastien

    2009-09-01

    Recent developments in the theory of integrable models have provided the means of calculating dynamical correlation functions of some important observables in systems such as Heisenberg spin chains and one-dimensional atomic gases. This article explicitly describes how such calculations are generally implemented in the ABACUS C++ library, emphasizing the universality in treatment of different cases coming as a consequence of unifying features within the Bethe ansatz.

  17. Models and algorithms for the empty container repositioning and its integration with routing problems

    OpenAIRE

    Lai, Michela

    2013-01-01

    The introduction of containers has fostered intermodal freight transportation. A definition of intermodality was provided by the European Commission as “a characteristic of a transport system whereby at least two different modes are used in an integrated manner in order to complete a door-to-door transport sequence”. The intermodal container transportation leads to several benefits, such as higher productivity during handling phases and advantages in terms of security, losses and damage...

  18. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    Science.gov (United States)

    Keerthi Sravan, R; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2017. Published by Elsevier Inc.

  19. Remote sensing algorithm for surface evapotranspiration considering landscape and statistical effects on mixed pixels

    Directory of Open Access Journals (Sweden)

    Z. Q. Peng

    2016-11-01

Full Text Available Evapotranspiration (ET) plays an important role in surface–atmosphere interactions and can be monitored using remote sensing data. However, surface heterogeneity, including the inhomogeneity of landscapes and surface variables, significantly affects the accuracy of ET estimated from satellite data. The objective of this study is to assess and reduce the uncertainties resulting from surface heterogeneity in remotely sensed ET using Chinese HJ-1B satellite data, which has 30 m spatial resolution in the VIS/NIR bands and 300 m spatial resolution in the thermal-infrared (TIR) band. A temperature-sharpening and flux aggregation scheme (TSFA) was developed to obtain accurate heat fluxes from the HJ-1B satellite data. The IPUS (input parameter upscaling) and TRFA (temperature resampling and flux aggregation) methods were used for comparison with the TSFA in this study. The three methods represent three typical schemes used to handle mixed pixels, from the simplest to the most complex. IPUS handles all surface variables at the coarse resolution of 300 m in this study, TSFA handles them at 30 m resolution, and TRFA handles them at 30 and 300 m resolution, depending on the actual spatial resolution. Analyzing and comparing the three methods can help us get a better understanding of spatial-scale errors in remote sensing of surface heat fluxes. In situ data collected during HiWATER-MUSOEXE (Multi-Scale Observation Experiment on Evapotranspiration over heterogeneous land surfaces of the Heihe Watershed Allied Telemetry Experimental Research) were used to validate and analyze the methods. ET estimated by TSFA exhibited the best agreement with in situ observations, and the footprint validation results showed that the R2, MBE, and RMSE values of the sensible heat flux (H) were 0.61, 0.90, and 50.99 W m−2, respectively, and those for the latent heat flux (LE) were 0.82, −20.54, and 71.24 W m−2, respectively. IPUS yielded the largest errors
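The reason aggregating inputs first (IPUS-style) differs from computing fluxes at fine resolution and then aggregating (TSFA-style) is the nonlinearity of the flux model, a Jensen-inequality effect that a toy T^4 flux law makes visible; the temperatures and the flux law below are illustrative only, not the study's ET model:

```python
import numpy as np

def flux(temperature_k):
    """Toy nonlinear surface-flux model (proportional to T^4, echoing the
    Stefan-Boltzmann dependence); the real ET model is more involved."""
    return 5.67e-8 * temperature_k ** 4

# A 300 m "mixed pixel" made of 10x10 fine (30 m) pixels covering two
# land-cover temperatures, e.g. irrigated cropland next to bare desert.
fine_t = np.full((10, 10), 295.0)
fine_t[:, 5:] = 320.0

flux_fine_then_agg = flux(fine_t).mean()   # TSFA-style: compute at 30 m, aggregate
flux_agg_then_fine = flux(fine_t.mean())   # IPUS-style: aggregate inputs first
bias = flux_fine_then_agg - flux_agg_then_fine
```

Because the flux law is convex in temperature, the coarse-input scheme systematically underestimates the aggregated flux over heterogeneous pixels, which is the spatial-scale error the TSFA is designed to avoid.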

  20. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    Science.gov (United States)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state of the art in metal viscoplasticity, are considered in the applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both the GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of
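The backbone of the scheme, a backward-Euler step solved by a local Newton-Raphson iteration complemented with a line search, can be sketched on a scalar relaxation ODE; the tensorial GVIPS/NAV models and their consistent tangents are far richer than this toy:

```python
def backward_euler_step(f, dfdx, x_n, dt, tol=1e-10, max_iter=50):
    """One backward-Euler step, x_{n+1} = x_n + dt*f(x_{n+1}), solved by
    Newton-Raphson with a simple backtracking line search (scalar toy
    version of the local iteration described above)."""
    x = x_n                                    # initial guess
    for _ in range(max_iter):
        r = x - x_n - dt * f(x)                # residual of the implicit step
        if abs(r) < tol:
            break
        dx = -r / (1.0 - dt * dfdx(x))         # Newton direction
        step = 1.0
        while abs((x + step * dx) - x_n - dt * f(x + step * dx)) > abs(r):
            step *= 0.5                        # backtrack until the residual drops
            if step < 1e-8:
                break
        x += step * dx
    return x

# Stiff relaxation toward a saturation value: dx/dt = k*(s - x).
k, s = 50.0, 1.0
f = lambda x: k * (s - x)
dfdx = lambda x: -k
x = 0.0
for _ in range(10):                            # large implicit steps stay stable
    x = backward_euler_step(f, dfdx, x, dt=0.1)
```

With k*dt = 5, explicit Euler would oscillate and diverge, while the backward-Euler iteration above marches stably to the saturation value, which is the unconditional-stability property the abstract builds on.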

  1. Integrated Detection and Prediction of Influenza Activity for Real-Time Surveillance: Algorithm Design.

    Science.gov (United States)

    Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas

    2017-06-15

Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits the advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied to data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as the syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). For detection modeling, exponential regression was used based on the assumption that the beginning
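The detection module's idea, fitting exponential growth to recent case counts and alerting when the growth rate is sustained, can be sketched as follows; the window length and growth threshold are invented parameters, not those of the published method:

```python
import math

def detect_onset(daily_counts, window=7, growth_threshold=0.05):
    """Flag the start of the influenza season when an exponential curve
    fitted to the last `window` days shows sustained growth (a simplified
    stand-in for the exponential-regression detection module above)."""
    recent = daily_counts[-window:]
    if any(c <= 0 for c in recent):
        return False                       # log-linear fit needs positive counts
    xs = list(range(window))
    ys = [math.log(c) for c in recent]     # exponential fit == linear fit in log space
    mean_x = sum(xs) / window
    mean_y = sum(ys) / window
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope > growth_threshold        # estimated daily exponential growth rate

flat = [5, 6, 5, 6, 5, 6, 5, 6, 5, 6]
rising = [5, 6, 7, 9, 12, 16, 21, 28, 37, 49]
```

A flat series with day-to-day noise yields a near-zero fitted growth rate and no alert, while a series growing about 30% per day trips the threshold.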

  2. A UWB/Improved PDR Integration Algorithm Applied to Dynamic Indoor Positioning for Pedestrians.

    Science.gov (United States)

    Chen, Pengzhan; Kuang, Ye; Chen, Xiaoyue

    2017-09-08

Inertial sensors are widely used in various applications, such as human motion monitoring and pedestrian positioning. However, inertial sensors cannot accurately characterize the process of human movement, a limitation that causes data drift during body positioning and thus seriously affects positioning accuracy and stability. The traditional pedestrian dead-reckoning algorithm, which is based on a single inertial measurement unit, can suppress this drift but fails to accurately count walking steps and estimate heading, so it cannot meet application requirements. To quickly characterize the human motion process and solve these problems, this study proposes an indoor dynamic positioning method with an error self-correcting function based on the symmetry of human motion. On this basis, an ultra-wideband (UWB) method is introduced: an unscented Kalman filter is applied to fuse the inertial-sensor and UWB data, inertial positioning compensates for UWB's susceptibility to signal obstruction, and UWB positioning overcomes the error accumulation of inertial positioning. This combination improves both the positioning accuracy and the responsiveness of the positioning results. Finally, this study designs an indoor positioning test system to evaluate the static and dynamic performance of the proposed method. Results show that the positioning system has both high accuracy and good real-time performance.
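The fusion step can be illustrated with a one-dimensional linear Kalman filter standing in for the paper's unscented Kalman filter: PDR displacement increments drive the prediction, and UWB fixes (which may be lost behind obstacles) drive the update. All numbers below are synthetic:

```python
import numpy as np

def fuse_pdr_uwb(pdr_steps, uwb_positions, q=0.04, r=0.25):
    """1-D linear Kalman fusion of PDR displacement increments (predict)
    with UWB position fixes (update); a simplified linear stand-in for
    the unscented Kalman filter described above.

    q: PDR drift variance added per step; r: UWB measurement variance.
    A None entry in uwb_positions models a blocked UWB fix.
    Returns the filtered position track."""
    x, p = 0.0, 1.0                     # state estimate and its variance
    track = []
    for step, z in zip(pdr_steps, uwb_positions):
        x, p = x + step, p + q          # predict with the PDR increment
        if z is not None:               # UWB fix available
            k = p / (p + r)             # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
        track.append(x)
    return track

# Walker moves 1 m per step; PDR overestimates each step by 10% (drift);
# UWB is noisy but unbiased, and one fix is lost behind an obstacle.
true_pos = np.arange(1, 11, dtype=float)
pdr = [1.1] * 10
uwb = [p + e for p, e in zip(true_pos, [0.3, -0.2, 0.1, 0.2, -0.3,
                                        0.1, -0.1, 0.2, -0.2, 0.1])]
uwb[4] = None
track = fuse_pdr_uwb(pdr, uwb)
```

Raw PDR would end 1.0 m past the true position after ten steps; the fused track stays close to truth because each UWB fix pulls the drifting prediction back, exactly the complementarity the abstract describes.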

  3. Development of thunderstorm monitoring technologies and algorithms by integration of radar, sensors, and satellite images

    Science.gov (United States)

    Adzhieva, Aida A.; Shapovalov, Vitaliy A.; Boldyreff, Anton S.

    2017-10-01

    Given the rising frequency of natural disasters and catastrophes, humanity has to develop methods and tools to ensure safe living conditions. The effectiveness of preventive measures greatly depends on the quality and lead time of forecasts of disastrous natural phenomena, which in turn rest on the body of knowledge about natural hazards, their causes, manifestations, and impact. To prevent them it is necessary to obtain complete and comprehensive information about the extent and severity of the natural processes that can act within a defined territory. For these purposes the High Mountain Geophysical Institute developed an automated workstation for the mining, analysis, and archiving of radar, satellite, and lightning-sensor information together with terrestrial (automatic weather station) weather data. Combining and aggregating data from these different meteorological sources makes the system more informative. Satellite data show the cloud cover of the region in the visible and infrared ranges, but carry uncertainty about actual weather events and have a large time interval between successive measurements, which complicates their use for very-short-range forecasts of weather phenomena. Radar and lightning-sensor data provide detection of weather phenomena and their localization against the background of the overall cloud pattern in the region, with a short measurement period for atmospheric phenomena (hail, thunderstorms, showers, squalls, tornadoes). The authors have developed improved algorithms for the recognition of dangerous weather phenomena, based on complex analysis of the incoming information using the mathematical apparatus of pattern recognition.

  4. Evaluation of Monte Carlo electron-Transport algorithms in the integrated Tiger series codes for stochastic-media simulations

    International Nuclear Information System (INIS)

    Franke, B.C.; Kensek, R.P.; Prinja, A.K.

    2013-01-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative 'condensed transport' formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations. (authors)

  5. An iterative algorithm for computing aeroacoustic integrals with application to the analysis of free shear flow noise.

    Science.gov (United States)

    Margnat, Florent; Fortuné, Véronique

    2010-10-01

    An iterative algorithm is developed for the computation of aeroacoustic integrals in the time domain. It is specially designed for the generation of acoustic images, thus giving access to the wavefront pattern radiated by an unsteady flow when large size source fields are considered. It is based on an iterative selection of source-observer pairs involved in the radiation process at a given time-step. It is written as an advanced-time approach, allowing easy connection with flow simulation tools. Its efficiency is related to the fraction of an observer grid step that a sound-wave covers during one time step. Test computations were performed, showing the CPU-time to be 30 to 50 times smaller than with a classical non-iterative procedure. The algorithm is applied to compute the sound radiated by a spatially evolving mixing-layer flow: it is used to compute and visualize contributions to the acoustic field from the different terms obtained by a decomposition of the Lighthill source term.
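
    The advanced-time idea — binning each source emission at its future arrival time at the observer, rather than solving for retarded times — can be sketched as follows (a toy free-space monopole model with invented names; the paper's iterative source-observer pair selection and Lighthill source terms are not reproduced):

```python
import numpy as np

def advanced_time_field(src_pos, src_hist, obs_pos, c, dt, n_obs_steps):
    """Advanced-time accumulation: an emission at step k from a source
    at distance r arrives at the observer at step k + r/(c*dt) and is
    linearly interpolated between the two nearest observer time bins."""
    p = np.zeros(n_obs_steps)
    for xs, q_hist in zip(src_pos, src_hist):
        r = np.linalg.norm(obs_pos - xs)
        delay = r / (c * dt)                 # travel time, in time steps
        for k, q in enumerate(q_hist):
            ta = k + delay                   # advanced (arrival) time
            k0 = int(ta)
            w = ta - k0
            amp = q / (4 * np.pi * r)        # free-space monopole decay
            if k0 + 1 < n_obs_steps:
                p[k0] += (1 - w) * amp
                p[k0 + 1] += w * amp
    return p
```

    Because each emission is pushed forward to the observer grid as soon as it is computed, the loop can run concurrently with a flow simulation, which is the connection to flow solvers the abstract mentions.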

  6. Pseudocode Interpreter (Pseudocode Integrated Development Environment with Lexical Analyzer and Syntax Analyzer using Recursive Descent Parsing Algorithm

    Directory of Open Access Journals (Sweden)

    Christian Lester D. Gimeno

    2017-11-01

    Full Text Available – This research study focused on the development of a software tool that helps students design, write, validate, and run their pseudocode in a semi-integrated development environment (IDE) instead of manually writing it on a piece of paper. Specifically, the study aimed to develop a lexical analyzer (lexer), a syntax analyzer (parser) using a recursive descent parsing algorithm, and an interpreter. The lexical analyzer reads the pseudocode source as a sequence of symbols or characters (lexemes). The lexemes are analyzed by the lexer, which matches patterns for valid tokens and passes them to the syntax analyzer or parser. The parser takes those valid tokens and builds meaningful commands, using the recursive descent parsing algorithm, in the form of an abstract syntax tree. The generation of the abstract syntax tree is based on the grammar rules created by the researcher, expressed in Extended Backus-Naur Form. The interpreter takes the generated abstract syntax tree and evaluates it to produce the pseudocode output. The software was evaluated using white-box testing by several ICT professionals and black-box testing by several computer science students, based on the International Organization for Standardization (ISO 9126) software quality standards. The overall results of the evaluation, both white-box and black-box, were described as “Excellent” in terms of functionality, reliability, usability, efficiency, maintainability, and portability.
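
    The lexer → recursive-descent-parser → interpreter pipeline the study describes can be illustrated on a tiny arithmetic grammar (a minimal sketch; this toy grammar is assumed for illustration and is not the study's pseudocode language):

```python
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")  # integers and single-char operators

def lex(src):
    """Lexer: split source text into tokens ('NUM', int) or ('OP', char)."""
    tokens = []
    for num, op in TOKEN.findall(src):
        tokens.append(("NUM", int(num)) if num else ("OP", op))
    return tokens

class Parser:
    """Recursive-descent parser/interpreter for the grammar
       expr   -> term (('+'|'-') term)*
       term   -> factor ('*' factor)*
       factor -> NUM | '(' expr ')'
    Each nonterminal becomes one method that consumes tokens."""
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0
    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else (None, None)
    def eat(self):
        tok = self.toks[self.pos]; self.pos += 1; return tok
    def expr(self):
        val = self.term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            _, op = self.eat()
            val = val + self.term() if op == "+" else val - self.term()
        return val
    def term(self):
        val = self.factor()
        while self.peek() == ("OP", "*"):
            self.eat(); val *= self.factor()
        return val
    def factor(self):
        kind, v = self.eat()
        if kind == "NUM": return v
        assert v == "("            # only remaining legal factor
        val = self.expr(); self.eat()  # consume ')'
        return val

def run(src):
    return Parser(lex(src)).expr()
```

    Here the parser interprets directly as it descends; the study instead builds an explicit abstract syntax tree first, but the one-method-per-grammar-rule structure is the same.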

  7. Process modeling and optimization of industrial ethylene oxide reactor by integrating support vector regression and genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kumar Lahiri, S.; Khalfe, N. [National Inst. of Technology, Durgapur, West Bengal (India). Dept. of Chemical Engineering

    2009-02-15

    Process modeling and optimization strategies that integrate support vector regression (SVR) with a genetic algorithm were used to model and optimize the commercial catalytic process for ethylene oxide (EO). EO is produced commercially in a shell and tube type EO reactor by reacting oxygen and ethylene at high temperature and pressure in the presence of a silver-based catalyst. The oxidation of ethylene involves a main reaction producing EO and an undesirable side reaction producing carbon dioxide. In this study, a process model was developed using an SVR method and genetic algorithms (GAs) to maximize the process performance. The optimized solutions, when verified in an actual industrial plant, resulted in a significant improvement in the EO production rate and catalyst selectivity. In the SVR-GA approach, an SVR model was constructed for correlating process data comprising values of operating and performance variables. Next, model inputs describing process operating variables were optimized using GAs to maximize the process performance. The GA has some unique advantages over the commonly used gradient-based deterministic optimization algorithms. The major advantage of the SVR-GA strategy is that modeling and optimization can be conducted exclusively from the historic process data wherein the detailed knowledge of reaction mechanism or kinetics is not required. 14 refs., 5 tabs., 7 figs.
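
    The SVR-GA loop — fit a data-driven model, then search its input space with a genetic algorithm — can be sketched with a stand-in surrogate (everything here is illustrative: the quadratic surrogate replaces the fitted SVR model, and the GA operators are a minimal textbook choice, not the authors'):

```python
import random

def surrogate(x):
    """Hypothetical process model standing in for the fitted SVR:
    performance peaks at operating point x = 0.7."""
    return -(x - 0.7) ** 2

def ga_maximize(f, lo=0.0, hi=1.0, pop_size=30, gens=60, mut=0.1, seed=1):
    """Floating-point GA: rank selection, arithmetic crossover,
    Gaussian mutation, with the best half carried over (elitism)."""
    random.seed(seed)
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)       # rank by fitness
        elite = pop[: pop_size // 2]        # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = 0.5 * (a + b)           # arithmetic crossover
            child += random.gauss(0, mut)   # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return max(pop, key=f)
```

    In the actual strategy the fitness calls would go to the trained SVR model, so no mechanistic kinetics are needed anywhere in the loop.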

  8. Software for Manipulating and Embedding Data Interrogation Algorithms into Integrated Systems: Special Application to Structural Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Allen, David W. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States)

    2004-12-01

    In this study a software package for easily creating and embedding structural health monitoring (SHM) data interrogation processes in remote hardware is presented. The software described herein is comprised of two pieces. The first is a client to allow graphical construction of data interrogation processes. The second is node software for remote execution of processes on remote sensing and monitoring hardware. The client software is created around a catalog of data interrogation algorithms compiled over several years of research at Los Alamos National Laboratory known as DIAMOND II. This study also includes encapsulating the DIAMOND II algorithms into independent interchangeable functions and expanding the catalog with work in feature extraction and statistical discrimination. The client software also includes methods for interfacing with the node software over an Internet connection. Once connected, the client software can upload a developed process to the integrated sensing and processing node. The node software has the ability to run the processes and return results. This software creates a distributed SHM network without individual nodes relying on each other or a centralized server to monitor a structure.

  9. Robust and rapid algorithms facilitate large-scale whole genome sequencing downstream analysis in an integrative framework.

    Science.gov (United States)

    Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung

    2017-05-19

    Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for a comprehensive downstream analysis. In the present study, we first proposed three novel algorithms, sequence gap-filled gene feature annotation, bit-block encoded genotypes and sectional fast access to text lines, to address three fundamental problems. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SNPEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles, and calculated genotypic correlations over 1000 times faster. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
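
    The bit-block idea — a few bits per genotype, several genotypes per byte — can be sketched for the simplest biallelic case as follows (a generic packing scheme for illustration; KGGSeq's actual on-disk format is richer and is not reproduced here):

```python
import numpy as np

def pack_genotypes(g):
    """Pack genotype codes 0/1/2 (alt-allele count; 3 = missing)
    into 2 bits each, four genotypes per byte."""
    g = np.asarray(g, dtype=np.uint8)
    pad = (-len(g)) % 4                       # pad to a multiple of 4
    g = np.concatenate([g, np.zeros(pad, dtype=np.uint8)])
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    return (g.reshape(-1, 4) << shifts).sum(axis=1).astype(np.uint8)

def unpack_genotypes(packed, n):
    """Recover the first n genotype codes from the packed byte array."""
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    g = (packed[:, None] >> shifts) & 0b11
    return g.reshape(-1)[:n]
```

    Packing yields the 4x space reduction directly, and bitwise operations on whole blocks are what make genotype-correlation arithmetic so much faster than per-sample loops.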

  10. Determination of dissipative Dyakonov surface waves using a finite element method based eigenvalue algorithm.

    Science.gov (United States)

    Shih, Pi-Kuei; Hsiao, Hui-Hsin; Chang, Hung-Chun

    2017-11-27

    A full-vectorial finite element method is developed to analyze surface waves propagating at the interface between two media, either of which may be dissipative. The dissipative wave, possessing a complex-valued propagation constant, can be determined precisely for any given propagation direction, and thus its loss properties can be thoroughly analyzed. Besides, by exploiting a special characteristic of the implicit circular block matrix, we reduce the computational cost of the analysis. Using this method, the Dyakonov surface wave (DSW) at the interface between a dielectric and a metal-dielectric multilayered (MDM) structure, which serves as a hyperbolic medium, is discussed. Its propagation loss is smaller for a larger period of the MDM structure, but its field becomes less confined to the interface.

  11. Potential shallow aquifers characterization through an integrated geophysical method: multivariate approach by means of k-means algorithms

    Directory of Open Access Journals (Sweden)

    Stefano Bernardinetti

    2017-06-01

    Full Text Available The need for a detailed hydrogeological characterization of the subsurface, and its interpretation for groundwater resources management, often requires the application of several complementary geophysical methods. The goal of the approach in this paper is to provide a unified model of the aquifer by synthesizing and optimizing the information provided by several geophysical methods. This approach greatly reduces the degree of uncertainty and the subjectivity of the interpretation by exploiting the different physical and mechanical characteristics of the aquifer. The studied area, in the municipality of Laterina (Arezzo, Italy), is a shallow basin filled by lacustrine and alluvial deposits (Pleistocene and Holocene epochs, Quaternary period) of alternating silt, sand with variable gravel content, and clay, whose bottom is formed by arenaceous-pelitic rocks (Mt. Cervarola Unit, Tuscan Domain, Miocene epoch). This shallow basin constitutes the unconfined superficial aquifer to be exploited in the near future. To improve the geological model obtained from a detailed geological survey, we performed electrical resistivity and P-wave refraction tomographies along the same line in order to obtain different, independent, and integrable data sets. For the seismic data the reflected events were also processed, which contributed substantially to delineating the geologic setting. Through the k-means algorithm, we perform a cluster analysis of the bivariate data set to identify relationships between the two sets of variables. This algorithm identifies clusters so as to minimize the dissimilarity within each cluster and maximize it among different clusters of the bivariate data set. The optimal number of clusters “K”, corresponding to the identified geophysical facies, depends on the distribution of the multivariate data set and is estimated in this work with silhouettes. The result is an integrated tomography that shows a finite
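
    The clustering step can be sketched with a plain NumPy k-means plus a mean silhouette coefficient for choosing K (a minimal sketch under invented test data; the study's bivariate resistivity/velocity variables and any preprocessing are not reproduced):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's k-means: assign points to the nearest center, then
    move each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point,
    where a = mean intra-cluster distance, b = mean distance to the
    nearest other cluster. Higher means better-separated clusters."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean()
        b = min(D[i, labels == j].mean() for j in set(labels) if j != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()
```

    Running k-means for several candidate K and keeping the K with the highest mean silhouette is the model-selection step the abstract refers to.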

  12. On finding bicliques in bipartite graphs: a novel algorithm and its application to the integration of diverse biological data types

    Science.gov (United States)

    2014-01-01

    Background Integrating and analyzing heterogeneous genome-scale data is a huge algorithmic challenge for modern systems biology. Bipartite graphs can be useful for representing relationships across pairs of disparate data types, with the interpretation of these relationships accomplished through an enumeration of maximal bicliques. Most previously-known techniques are generally ill-suited to this foundational task, because they are relatively inefficient and without effective scaling. In this paper, a powerful new algorithm is described that produces all maximal bicliques in a bipartite graph. Unlike most previous approaches, the new method neither places undue restrictions on its input nor inflates the problem size. Efficiency is achieved through an innovative exploitation of bipartite graph structure, and through computational reductions that rapidly eliminate non-maximal candidates from the search space. An iterative selection of vertices for consideration based on non-decreasing common neighborhood sizes boosts efficiency and leads to more balanced recursion trees. Results The new technique is implemented and compared to previously published approaches from graph theory and data mining. Formal time and space bounds are derived. Experiments are performed on both random graphs and graphs constructed from functional genomics data. It is shown that the new method substantially outperforms the best previous alternatives. Conclusions The new method is streamlined, efficient, and particularly well-suited to the study of huge and diverse biological data. A robust implementation has been incorporated into GeneWeaver, an online tool for integrating and analyzing functional genomics experiments, available at http://geneweaver.org. The enormous increase in scalability it provides empowers users to study complex and previously unassailable gene-set associations between genes and their biological functions in a hierarchical fashion and on a genome-wide scale. 
This practical
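
    For very small graphs, maximal bicliques can be enumerated by a brute-force closure over subsets of one vertex class — useful as a correctness baseline, though it is exponential in the left vertex count and nothing like the paper's efficient algorithm:

```python
from itertools import combinations

def maximal_bicliques(adj, left):
    """Brute-force maximal-biclique enumeration for a bipartite graph.
    `adj` maps each left vertex to its set of right-side neighbours.
    For every subset L, the common neighbourhood R of L together with
    the closure Lmax = {u : R is a subset of adj[u]} is a maximal
    biclique, and every maximal biclique arises this way."""
    found = set()
    for r in range(1, len(left) + 1):
        for L in combinations(left, r):
            R = set.intersection(*(adj[u] for u in L))
            if not R:
                continue
            Lmax = frozenset(u for u in left if R <= adj[u])
            found.add((Lmax, frozenset(R)))
    return sorted(found, key=lambda b: (sorted(b[0]), sorted(b[1])))
```

    The paper's contribution is precisely to avoid this exponential sweep via structural reductions and a neighbourhood-ordered vertex selection, but on toy inputs the two must agree.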

  13. A constrained reduced-dimensionality search algorithm to follow chemical reactions on potential energy surfaces

    Science.gov (United States)

    Lankau, Timm; Yu, Chin-Hui

    2013-06-01

    A constrained reduced-dimensionality algorithm can be used to efficiently locate transition states and products in reactions involving conformational changes. The search path (SP) is constructed stepwise from linear combinations of a small set of manually chosen internal coordinates, namely the predictors. The majority of the internal coordinates, the correctors, are optimized at every step of the SP to minimize the total energy of the system so that the path becomes a minimum energy path connecting products and transition states with the reactants. Problems arise when the set of predictors needs to include weak coordinates, for example, dihedral angles, as well as strong ones such as bond distances. Two principal constraining methods for the weak coordinates are proposed to mend this situation: static and dynamic constraints. Dynamic constraints are automatically activated and revoked depending on the state of the weak coordinates among the predictors, while static ones require preset control factors and act permanently. All these methods enable the successful application (4 reactions are presented involving cyclohexane, alanine dipeptide, trimethylsulfonium chloride, and azafulvene) of the reduced dimensionality method to reactions where the reaction path covers large conformational changes in addition to the formation/breaking of chemical bonds. Dynamic constraints are found to be the most efficient method as they require neither additional information about the geometry of the transition state nor fine tuning of control parameters.

  14. Adaptive Hybrid Fuzzy-Proportional Plus Crisp-Integral Current Control Algorithm for Shunt Active Power Filter Operation

    Directory of Open Access Journals (Sweden)

    Nor Farahaida Abdul Rahman

    2016-09-01

    Full Text Available An adaptive hybrid fuzzy-proportional plus crisp-integral current control algorithm (CCA) for regulating the supply current and enhancing the operation of a shunt active power filter (SAPF) is presented. It introduces a unique integration of fuzzy-proportional (Fuzzy-P) and crisp-integral (Crisp-I) current controllers. The Fuzzy-P current controller performs the gain tuning procedure and the proportional control action. This controller has the simplest possible configuration: it is constructed using a single-input single-output fuzzy rule configuration, so executing a few fuzzy rules is sufficient for the controller’s operation. Furthermore, the fuzzy rules are developed using the relationship of currents only, which simplifies controller development. Meanwhile, the Crisp-I current controller performs the integral control action using a controllable gain value, improving the steady-state control mechanism. The gain value is modified and controlled using the Fuzzy-P current controller’s output variable, and is therefore continuously adjusted at every sample period (that is, throughout the SAPF operation). The effectiveness of the proposed CCA in regulating the supply current is validated in both simulation and experimental work. All results have proven that the SAPF using the proposed CCA is capable of regulating the supply current during steady-state and dynamic-state operations. At the same time, the SAPF is able to enhance its operation in compensating harmonic currents and reactive power. Furthermore, the implementation of the proposed CCA has resulted in a more stable dc-link voltage waveform.
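
    The control structure — a fuzzy-tuned proportional gain whose output also modulates a crisp integral term — can be sketched as follows (the gain range, the ramp membership function, and the first-order plant used in testing are invented for illustration, not the authors' design):

```python
def fuzzy_p_gain(error, low=0.5, high=2.0, span=1.0):
    """Single-input single-output fuzzy-style rule: the proportional
    gain ramps from `low` to `high` with the membership degree of
    'error is large'."""
    membership = min(abs(error) / span, 1.0)
    return low + membership * (high - low)

class HybridFuzzyPCrispIController:
    """Fuzzy-P term plus a crisp integral term whose effective gain is
    rescaled every sample by the fuzzy stage's output."""
    def __init__(self, ki=2.0, dt=0.01):
        self.ki, self.dt = ki, dt
        self.integral = 0.0
    def step(self, reference, measured):
        error = reference - measured
        kp = fuzzy_p_gain(error)                 # fuzzy gain tuning
        self.integral += self.ki * kp * error * self.dt
        return kp * error + self.integral        # control output
```

    In a closed loop with a simple first-order plant, the proportional term reacts strongly to large errors while the gain-scheduled integral removes the steady-state error, mirroring the division of labour the abstract describes.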

  15. Data from an integrative approach decipher the surface proteome of Propionibacterium freudenreichii

    Directory of Open Access Journals (Sweden)

    Caroline Le Maréchal

    2014-12-01

    Full Text Available The surface proteins of the probiotic Propionibacterium freudenreichii were inventoried by an integrative approach that combines in silico protein localization prediction, surface protein extraction, shaving, and fluorescent CyDye labeling. Proteins that were extracted and/or shaved and/or labeled were identified by nano-LC–MS/MS following trypsinolysis. This combination of methods confirmed the detection of true surface proteins involved in host/probiotic interactions. The data supplied in this article are related to the research article entitled “Surface proteins of P. freudenreichii are involved in its anti-inflammatory properties” (Le Maréchal et al., 2014 [6]).

  16. Algorithmic and consultative integration of transfusion medicine and coagulation: a personalized medicine approach with reduced blood component utilization.

    Science.gov (United States)

    Brown, Robert E; Dorion, R Patrick; Trowbridge, Cody; Stammers, Alfred H; Fitt, Walter; Davis, Jerry

    2011-01-01

    , early implementation and following full implementation of this initiative, revealed a decline in the number of units of FFP, cryoprecipitate and single donor (apheresis) platelets administered. We report on the successful development of a model - based on the algorithmic and consultative integration of transfusion medicine and coagulation - that customizes blood component, derivative, and recombinant therapies appropriate for an individual patient's need, resulting in targeted transfusion therapy and associated with reduced blood component utilization.

  17. Solvent-assisted multistage nonequilibrium electron transfer in rigid supramolecular systems: Diabatic free energy surfaces and algorithms for numerical simulations

    Science.gov (United States)

    Feskov, Serguei V.; Ivanov, Anatoly I.

    2018-03-01

    An approach to the construction of diabatic free energy surfaces (FESs) for ultrafast electron transfer (ET) in a supramolecule with an arbitrary number of electron localization centers (redox sites) is developed, supposing that the reorganization energies for the charge transfers and shifts between all these centers are known. The dimensionality of the coordinate space required for the description of multistage ET in this supramolecular system is shown to be equal to N - 1, where N is the number of molecular centers involved in the reaction. The proposed algorithm of FES construction employs metric properties of the coordinate space, namely, the relation between the solvent reorganization energy and the distance between two FES minima. In this space, the ET reaction coordinate znn' associated with electron transfer between the nth and n'th centers is calculated through projection onto the direction connecting the FES minima. The energy-gap reaction coordinates znn' corresponding to different ET processes are not in general orthogonal, so that ET between two molecular centers can create a nonequilibrium distribution not only along its own reaction coordinate but along other reaction coordinates too. This results in the influence of the preceding ET steps on the kinetics of the ensuing ET, which matters when the ensuing reaction is ultrafast and proceeds in parallel with relaxation along the ET reaction coordinates. Efficient algorithms for numerical simulation of multistage ET within the stochastic point-transition model are developed. The algorithms are based on the Brownian simulation technique with a recrossing-event detection procedure. The main advantages of the numerical method are (i) its computational complexity is linear with respect to the number of electronic states involved and (ii) calculations can be naturally parallelized up to the level of individual trajectories. The efficiency of the proposed approach is demonstrated for a model

  18. Accelerated sampling by infinite swapping of path integral molecular dynamics with surface hopping

    Science.gov (United States)

    Lu, Jianfeng; Zhou, Zhennan

    2018-02-01

    To accelerate thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to a Hamiltonian averaged over all possible surface index configurations of the ring polymer, and thus connects the surface hopping approach to mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate the large improvement in sampling efficiency of infinite swapping compared with direct simulation of path-integral molecular dynamics with surface hopping.

  19. A Semiautomated Multilayer Picking Algorithm for Ice-sheet Radar Echograms Applied to Ground-Based Near-Surface Data

    Science.gov (United States)

    Onana, Vincent De Paul; Koenig, Lora Suzanne; Ruth, Julia; Studinger, Michael; Harbeck, Jeremy P.

    2014-01-01

    Snow accumulation over an ice sheet is the sole mass input, making it a primary measurement for understanding past, present, and future mass balance. Near-surface frequency-modulated continuous-wave (FMCW) radars image isochronous firn layers, recording accumulation histories. The Semiautomated Multilayer Picking Algorithm (SAMPA) was designed and developed to trace annual accumulation layers in polar firn from both airborne and ground-based radars. The SAMPA algorithm is based on the Radon transform (RT), computed by blocks and angular orientations over a radar echogram. For each echogram block, the RT maps segmented firn-layer features into peaks, which are picked using amplitude and width thresholds. A backward RT is then computed for each corresponding block, mapping the peaks back into picked segmented layers. The segmented layers are then connected and smoothed to achieve a final layer pick across the echogram. Once input parameters are trained, SAMPA operates autonomously and can process hundreds of kilometers of radar data, picking more than 40 layers. SAMPA's final picks and layer numbering still require a cursory manual adjustment to correct noncontinuous picks, which are likely not annual, and to correct inconsistencies in layer numbering. Despite the manual effort to train and check SAMPA results, it is an efficient tool for picking multiple accumulation layers in polar firn, reducing the time spent on manual digitizing. The trackability of well-detected layers is greater than 90%.

  20. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    Science.gov (United States)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching and partitioning of the strings into a core part and an outer part, which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SP) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires both the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar4-Ar30) in which the argon atoms interact via a Lennard-Jones potential, and (iii) ArmX (m = 12) clusters, where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Arn clusters.

  1. The effect of cutting parameters on surface integrity in milling TI6AL4V

    Directory of Open Access Journals (Sweden)

    Oosthuizen, Tiaan

    2016-12-01

    Full Text Available The objective of machining performance is to reduce operational costs and increase the production rate while maintaining or improving the required surface integrity of the machined component. To achieve this, several benchmark titanium components were selected and machined together with industrial partners. Titanium alloys are used extensively in several industries due to their unique strength-to-weight ratio and corrosion resistance. Their properties, however, also make them susceptible to surface integrity damage during machining operations. The research objective of this study was to understand the effect of cutting parameters on surface integrity, to ensure that machined components remain within the required surface quality tolerances. The effects of cutting speed and feed rate on the surface roughness, micro-hardness, and microstructure of the workpiece were studied for milling Ti6Al4V. The surface roughness increased with a greater feed rate and a decrease in cutting speed. The maximum micro-hardness was 23 per cent harder than the bulk material. Plastic deformation and grain rotation below the machined surface were found, with the grain lines rotated in the direction of feed. There was no evidence of subsurface defects for any of the cutting conditions tested.

  2. Effect of Surface Integrity of Turned GH4169 Superalloy on Fatigue Performance

    Directory of Open Access Journals (Sweden)

    WU Daoxia

    2017-12-01

    Full Text Available Through turning and rotary bending fatigue tests, the effect of turning feed on the surface integrity of GH4169 superalloy, and the effect of surface integrity on fatigue life, were studied. The results show that the surface roughness Ra decreases from 1.497 μm to 0.431 μm when the turning feed decreases from 0.2 mm/r to 0.02 mm/r. The surface residual stresses change from tensile to compressive. The depth of the plastic deformation layer decreases from 8 μm to 2 μm. The surface stress concentration factor has the most significant effect on the fatigue life of GH4169: as the stress concentration factor increases, the fatigue life decreases significantly. When f is 0.13 mm/r, the surface stress concentration factor Kst is 1.166, the surface micro-hardness is 405.27HV0.025, the surface residual stress is 82.08 MPa, and the average fatigue life is 6.98×10^4 cycles. Multiple cracks initiate at machined-surface defects of the GH4169 superalloy specimens.

  3. A hybrid method combining the FDTD and a time domain boundary-integral equation marching-on-in-time algorithm

    Science.gov (United States)

    Becker, A.; Hansen, V.

    2003-05-01

    In this paper a hybrid method combining the FDTD/FIT with a Time Domain Boundary-Integral Marching-on-in-Time Algorithm (TD-BIM) is presented. Inhomogeneous regions are modelled with the FIT method, an alternative formulation of the FDTD. Homogeneous regions (in the presented numerical example, the open space) are modelled using a TD-BIM with equivalent electric and magnetic currents flowing on the boundary between the inhomogeneous and the homogeneous regions. The regions are coupled by the tangential magnetic fields just outside the inhomogeneous regions. These fields are calculated by making use of a Mixed Potential Integral Formulation for the magnetic field. The latter consists of equivalent electric and magnetic currents on the boundary plane between the homogeneous and the inhomogeneous region. The magnetic currents result directly from the electric fields of the Yee lattice. Electric currents in the same plane are calculated by making use of the TD-BIM and using the electric field of the Yee lattice as boundary condition. The presented hybrid method only needs the interpolations inherent in FIT and no additional interpolation. A numerical result is compared to a calculation that models both regions with FDTD.

  4. A hybrid method combining the FDTD and a time domain boundary-integral equation marching-on-in-time algorithm

    Directory of Open Access Journals (Sweden)

    A. Becker

    2003-01-01

    Full Text Available In this paper a hybrid method combining the FDTD/FIT with a Time Domain Boundary-Integral Marching-on-in-Time Algorithm (TD-BIM) is presented. Inhomogeneous regions are modelled with the FIT-method, an alternative formulation of the FDTD. Homogeneous regions (which is in the presented numerical example the open space) are modelled using a TD-BIM with equivalent electric and magnetic currents flowing on the boundary between the inhomogeneous and the homogeneous regions. The regions are coupled by the tangential magnetic fields just outside the inhomogeneous regions. These fields are calculated by making use of a Mixed Potential Integral Formulation for the magnetic field. The latter consists of equivalent electric and magnetic currents on the boundary plane between the homogeneous and the inhomogeneous region. The magnetic currents result directly from the electric fields of the Yee lattice. Electric currents in the same plane are calculated by making use of the TD-BIM and using the electric field of the Yee lattice as boundary condition. The presented hybrid method only needs the interpolations inherent in FIT and no additional interpolation. A numerical result is compared to a calculation that models both regions with FDTD.

  5. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization.

    Science.gov (United States)

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-09-23

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone's acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.
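The autocorrelation-based step counting idea can be sketched as follows. Function names, the sampling rate, and the plausible step-period window are illustrative assumptions, not taken from the paper: the dominant autocorrelation peak inside the window gives the cadence, and steps follow as duration divided by period.

```python
import math

def count_steps(acc_mag, fs=50.0, min_period=0.3, max_period=1.0):
    """Estimate step count from accelerometer magnitude via autocorrelation.

    acc_mag : accelerometer magnitude samples
    fs      : sampling rate in Hz (illustrative default)
    The lag of the strongest autocorrelation peak inside the
    [min_period, max_period] window is taken as the step period.
    """
    n = len(acc_mag)
    mean = sum(acc_mag) / n
    x = [a - mean for a in acc_mag]          # remove gravity/DC offset
    lo, hi = int(min_period * fs), int(max_period * fs)
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, min(hi, n - 1) + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    period = best_lag / fs                   # seconds per step
    return int(round((n / fs) / period))
```

A real PDR pipeline would additionally gate on signal energy to reject non-walking segments before fusing the step count with WiFi fixes in the Unscented Kalman Filter.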

  6. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization

    Directory of Open Access Journals (Sweden)

    Guoliang Chen

    2015-09-01

    Full Text Available Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.

  7. An algorithm for analytical solution of basic problems featuring elastostatic bodies with cavities and surface flaws

    Science.gov (United States)

    Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.

    2018-03-01

    Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.

  8. An Orthogonal Projection Algorithm to Suppress Interference in High-Frequency Surface Wave Radar

    Directory of Open Access Journals (Sweden)

    Zezong Chen

    2018-03-01

    Full Text Available High-frequency surface wave radar (HFSWR has been widely applied in sea-state monitoring, and its performance is known to suffer from various unwanted interferences and clutters. Radio frequency interference (RFI from other radiating sources and ionospheric clutter dominate the various types of unwanted signals because the HF band is congested with many users and the ionosphere propagates interference from distant sources. In this paper, various orthogonal projection schemes are summarized, and three new schemes are proposed for interference cancellation. Simulations and field data recorded by experimental multi-frequency HFSWR from Wuhan University are used to evaluate the cancellation performances of these schemes with respect to both RFI and ionospheric clutter. The processing results may provide a guideline for identifying the appropriate orthogonal projection cancellation schemes in various HFSWR applications.
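The common core of orthogonal projection cancellation schemes is a projector onto the orthogonal complement of an estimated interference subspace. The sketch below is generic, not one of the paper's specific schemes; the subspace matrix `J` is assumed to be given, for example estimated from interference-only range cells.

```python
import numpy as np

def orthogonal_projection_cancel(X, J):
    """Suppress interference by projecting the data onto the orthogonal
    complement of the interference subspace.

    X : (channels, snapshots) array containing signal plus interference.
    J : (channels, k) matrix whose columns span the estimated
        interference subspace.
    """
    Q, _ = np.linalg.qr(J)                        # orthonormal basis of span(J)
    P = np.eye(X.shape[0]) - Q @ Q.conj().T       # projector onto complement
    return P @ X
```

Components of the data lying in the interference subspace are nulled, while any signal component orthogonal to it passes through unchanged; the practical differences between schemes lie in how the subspace is estimated.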

  9. Eco-hydrological process simulations within an integrated surface water-groundwater model

    DEFF Research Database (Denmark)

    Butts, Michael; Loinaz, Maria Christina; Bauer-Gottwein, Peter

    2014-01-01

    Integrated water resources management requires tools that can quantify changes in groundwater, surface water, water quality and ecosystem health, as a result of changes in catchment management. To address these requirements we have developed an integrated eco-hydrological modelling framework...... that allows hydrologists and ecologists to represent the complex and dynamic interactions occurring between surface water, ground water, water quality and freshwater ecosystems within a catchment. We demonstrate here the practical application of this tool to two case studies where the interaction of surface...... water and ground water are important for the ecosystem. In the first, simulations are performed to understand the importance of surface water-groundwater interactions for a restored riparian wetland on the Odense River in Denmark as part of a larger investigation of water quality and nitrate retention...

  10. Planar integrated optical methods for examining thin films and their surface adlayers.

    Science.gov (United States)

    Plowman, T E; Saavedra, S S; Reichert, W M

    1998-03-01

    Thin film integrated optical waveguides (IOWs) have gained acceptance as a method for characterizing ultrathin dielectric films and adlayers bound to the film surface. Here, we present the expressions that govern IOW methods and describe the common experimental configurations used in attenuated total reflection, fluorescence, and Raman applications. The application of these techniques to the study of proteins adsorbed or bound to polymer and glass waveguide surfaces is reviewed.
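A quantity that recurs in such ATR/waveguide measurements is the evanescent-field penetration depth into the adlayer. Below is the standard textbook expression, not taken from the reviewed paper; the refractive indices and angle in the usage are illustrative values.

```python
import math

def penetration_depth(wavelength_nm, n_film, n_cover, theta_deg):
    """Evanescent-field penetration depth at a waveguide/cover interface.

    Standard attenuated-total-reflection expression, valid above the
    critical angle:  d_p = lambda / (2 pi sqrt(n1^2 sin^2(theta) - n2^2)).
    Returns the depth in the same unit as the wavelength.
    """
    theta = math.radians(theta_deg)
    arg = (n_film * math.sin(theta)) ** 2 - n_cover ** 2
    if arg <= 0:
        raise ValueError("angle at or below the critical angle; no evanescent field")
    return wavelength_nm / (2 * math.pi * math.sqrt(arg))
```

For a dense waveguide film (n ≈ 1.8) against water (n ≈ 1.33) at 633 nm and a 75° internal angle this gives a depth on the order of 100 nm, which is why IOW techniques are surface-selective for protein adlayers.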

  11. Cuckoo Search Algorithm with Lévy Flights for Global-Support Parametric Surface Approximation in Reverse Engineering

    Directory of Open Access Journals (Sweden)

    Andrés Iglesias

    2018-03-01

    Full Text Available This paper concerns several important topics of the Symmetry journal, namely, computer-aided design, computational geometry, computer graphics, visualization, and pattern recognition. We also take advantage of the symmetric structure of tensor-product surfaces, where the parametric variables u and v play a symmetric role in shape reconstruction. In this paper we address the general problem of global-support parametric surface approximation from clouds of data points for reverse engineering applications. Given a set of measured data points, the approximation is formulated as a nonlinear continuous least-squares optimization problem. Then, a recent metaheuristic called the Cuckoo Search Algorithm (CSA) is applied to compute all relevant free variables of this minimization problem (namely, the data parameters and the surface poles). The method includes the iterative generation of new solutions by using Lévy flights to promote the diversity of solutions and prevent stagnation. A critical advantage of this method is its simplicity: the CSA requires only two parameters, many fewer than other metaheuristic approaches, so parameter tuning becomes a very easy task. The method is also simple to understand and easy to implement. Our approach has been applied to a benchmark of three illustrative sets of noisy data points corresponding to surfaces exhibiting several challenging features. Our experimental results show that the method performs very well even for the cases of noisy and unorganized data points. Therefore, the method can be directly used for real-world applications for reverse engineering without further pre/post-processing. Comparative work with the most classical mathematical techniques for this problem as well as a recent modification of the CSA called Improved CSA (ICSA) is also reported. Two nonparametric statistical tests show that our method outperforms the classical mathematical techniques and provides equivalent results to ICSA.
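A bare-bones version of Cuckoo Search with Lévy flights (using Mantegna's algorithm for the step length) can be sketched as follows. This is a generic minimizer on a toy objective, not the paper's surface-fitting formulation; the step-scale constants and nest counts are illustrative choices.

```python
import math, random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Lévy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iters=300, bounds=(-5.0, 5.0)):
    """Bare-bones Cuckoo Search: Lévy-flight moves plus abandoning a
    fraction pa of the worst nests each iteration."""
    lo, hi = bounds
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        i = random.randrange(n_nests)
        best = nests[fit.index(min(fit))]
        step = 0.01 * levy_step()
        # Lévy flight around the current nest, biased toward the best nest
        cand = [min(hi, max(lo, x + step * (x - b) + 0.01 * random.gauss(0, 1)))
                for x, b in zip(nests[i], best)]
        if f(cand) < fit[i]:
            nests[i], fit[i] = cand, f(cand)
        # abandon a fraction pa of the worst nests (rebuilt at random)
        order = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in order[:int(pa * n_nests)]:
            if random.random() < pa:
                nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[k] = f(nests[k])
    return min(fit)
```

The two CSA parameters mentioned in the abstract correspond to the population size and the abandonment fraction `pa`; in the paper's setting `f` would be the least-squares residual over the data parameters and surface poles.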

  12. The Parallel SBAS-DInSAR algorithm: an effective and scalable tool for Earth's surface displacement retrieval

    Science.gov (United States)

    Zinno, Ivana; De Luca, Claudio; Elefante, Stefano; Imperatore, Pasquale; Manunta, Michele; Casu, Francesco

    2014-05-01

    been carried out on real data acquired by ENVISAT and COSMO-SkyMed sensors. Moreover, the P-SBAS performances with respect to the size of the input dataset will also be investigated. This kind of analysis is essential for assessing the goodness of the P-SBAS algorithm and gaining insight into its applicability to different scenarios. Besides, such results will also become crucial to identify and evaluate how to appropriately exploit P-SBAS to process the forthcoming large Sentinel-1 data stream. References [1] Massonnet, D., Briole, P., Arnaud, A., "Deflation of Mount Etna monitored by Spaceborne Radar Interferometry", Nature, vol. 375, pp. 567-570, 1995. [2] Berardino, P., G. Fornaro, R. Lanari, and E. Sansosti, "A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms", IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002. [3] Elefante, S., Imperatore, P. , Zinno, I., M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, F. Casu, "SBAS-DINSAR Time series generation on cloud computing platforms", IEEE IGARSS 2013, July 2013, Melbourne (AU). [4] Zinno, P. Imperatore, S. Elefante, F. Casu, M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, "A Novel Parallel Computational Framework for Processing Large INSAR Data Sets", Living Planet Symposium 2013, Sept. 9-13, 2013.

  13. Surface integrity analysis of abrasive water jet-cut surfaces of friction stir welded joints

    Czech Academy of Sciences Publication Activity Database

    Kumar, R.; Chattopadhyaya, S.; Dixit, A. R.; Bora, B.; Zeleňák, Michal; Foldyna, Josef; Hloch, Sergej; Hlaváček, Petr; Ščučka, Jiří; Klich, Jiří; Sitek, Libor; Vilaca, P.

    2017-01-01

    Roč. 88, č. 5 (2017), s. 1687-1701 ISSN 0268-3768 R&D Projects: GA MŠk(CZ) LO1406; GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : friction stir welding (FSW) * abrasive water jet (AWJ) * optical profilometer * topography * surface roughness Subject RIV: JQ - Machines ; Tools OBOR OECD: Mechanical engineering Impact factor: 2.209, year: 2016 http://link.springer.com/article/10.1007/s00170-016-8776-0

  14. Experiences With an Optimal Estimation Algorithm for Surface and Atmospheric Parameter Retrieval From Passive Microwave Data in the Arctic

    DEFF Research Database (Denmark)

    Scarlat, Raul Cristian; Heygster, Georg; Pedersen, Leif Toudal

    2017-01-01

    the brightness temperatures observed by a passive microwave radiometer. The retrieval method inverts the forward model and produces ensembles of the seven parameters, wind speed, integrated water vapor, liquid water path, sea and ice temperature, sea ice concentration and multiyear ice fraction. The method...... compared with the Arctic Systems Reanalysis model data as well as columnar water vapor retrieved from satellite microwave sounders and the Remote Sensing Systems AMSR-E ocean retrieval product in order to determine the feasibility of using the same setup over pure surface with 100% and 0% sea ice cover......, respectively. Sea ice concentration retrieval shows good skill for pure surface cases. Ice types retrieval is in good agreement with scatterometer backscatter data. Deficiencies have been identified in using the forward model over sea ice for retrieving atmospheric parameters, that are connected...

  15. Path integral molecular dynamics with surface hopping for thermal equilibrium sampling of nonadiabatic systems.

    Science.gov (United States)

    Lu, Jianfeng; Zhou, Zhennan

    2017-04-21

    In this work, a novel ring polymer representation for a multi-level quantum system is proposed for thermal average calculations. The proposed representation keeps the discreteness of the electronic states: besides position and momentum, each bead in the ring polymer is also characterized by a surface index indicating the electronic energy surface. A path integral molecular dynamics with surface hopping (PIMD-SH) dynamics is also developed to sample the equilibrium distribution of the ring polymer configurational space. The PIMD-SH sampling method is validated theoretically and by numerical examples.
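The classical isomorphism underlying such path-integral methods maps each quantum particle onto a ring polymer of beads coupled by harmonic springs. Below is a sketch of the standard single-surface ring-polymer energy for a 1D harmonic potential; the per-bead surface index, which is the paper's novelty, is omitted, and units with ħ = 1 are assumed.

```python
import math

def ring_polymer_energy(beads, beta, mass=1.0, hbar=1.0, omega=1.0):
    """Classical-isomorphism potential energy of a ring polymer on a
    single 1D harmonic surface V(q) = 0.5*m*omega^2*q^2.

    beads : bead positions q_1 .. q_n (cyclic: bead n couples to bead 1)
    beta  : inverse temperature
    The Boltzmann weight of a configuration is exp(-beta_n * E) with
    beta_n = beta / n.
    """
    n = len(beads)
    beta_n = beta / n
    omega_n = 1.0 / (beta_n * hbar)   # ring-polymer spring frequency n/(beta*hbar)
    spring = sum(0.5 * mass * omega_n ** 2 * (beads[i] - beads[(i + 1) % n]) ** 2
                 for i in range(n))
    potential = sum(0.5 * mass * omega ** 2 * q ** 2 for q in beads)
    return spring + potential
```

In the PIMD-SH scheme each bead would additionally carry a discrete surface index, with hopping moves changing that index during the sampling dynamics.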

  16. Photonic integrated single-sideband modulator / frequency shifter based on surface acoustic waves

    DEFF Research Database (Denmark)

    Barretto, Elaine Cristina Saraiva; Hvam, Jørn Märcher

    2010-01-01

    Optical frequency shifters are essential components of many systems. In this paper, a compact integrated optical frequency shifter is designed making use of the combination of surface acoustic waves and Mach-Zehnder interferometers. It has a very simple operation setup and can be fabricated in st...

  17. Integrating GPR and RIP Methods for Water Surface Detection of Geological Structures

    Directory of Open Access Journals (Sweden)

    Chieh-Hou Yang

    2006-01-01

    Full Text Available Geophysical surveying in water-covered and swampy areas is particularly challenging. This paper presents a new survey strategy for such areas that integrates ground penetrating radar (GPR) and resistivity image profiling (RIP) methods at the water surface to investigate geologic structures beneath rivers, ponds, and swamps.

  18. MIKE-SHE integrated groundwater and surface water model used to ...

    African Journals Online (AJOL)

    2016-07-03

    INTRODUCTION. Physically-based, distributed-parameter integrated hydrologic codes, such as MIKE SHE/MIKE11, that simulate fully coupled groundwater and surface water flows represent the best available tools to simulate hydrologic flow systems for EWR studies and water management, because they ...

  19. Botswana water and surface energy balance research program. Part 1: Integrated approach and field campaign results

    Science.gov (United States)

    Vandegriend, A. A.; Owe, M.; Vugts, H. F.; Ramothwa, G. K.

    1992-01-01

    The Botswana water and surface energy balance research program was developed to study and evaluate the integrated use of multispectral satellite remote sensing for monitoring the hydrological status of the Earth's surface. Results of the first part of the program (Botswana 1), which ran from 1 Jan. 1988 to 31 Dec. 1990, are summarized. Botswana 1 consisted of two major, mutually related components: a surface energy balance modeling component, built around an extensive field campaign; and a passive microwave research component, which consisted of a retrospective study of large-scale moisture conditions and Nimbus scanning multichannel microwave radiometer signatures. The integrated approach of the two components is described, the activities performed during the surface energy balance modeling component, including the extensive field campaign, are summarized, and the results of the passive microwave component are presented. The key feature of the field campaign was a multilevel approach, whereby measurements by various similar sensors were made at several altitudes and resolutions. Data collection was performed at two adjacent sites of contrasting surface character. The following measurements were made: micrometeorological measurements, surface temperatures, soil temperatures, soil moisture, vegetation (leaf area index and biomass), satellite data, aircraft data, atmospheric soundings, stomatal resistance, and surface emissivity.

  20. The Algorithm Theoretical Basis Document for the Derivation of Range and Range Distributions from Laser Pulse Waveform Analysis for Surface Elevations, Roughness, Slope, and Vegetation Heights

    Science.gov (United States)

    Brenner, Anita C.; Zwally, H. Jay; Bentley, Charles R.; Csatho, Bea M.; Harding, David J.; Hofton, Michelle A.; Minster, Jean-Bernard; Roberts, LeeAnne; Saba, Jack L.; Thomas, Robert H.

    2012-01-01

    The primary purpose of the GLAS instrument is to detect ice elevation changes over time, which are used to derive changes in ice volume. Other objectives include measuring sea ice freeboard, ocean and land surface elevation, surface roughness, and canopy heights over land. This Algorithm Theoretical Basis Document (ATBD) describes the theory and implementation behind the algorithms used to produce the level 1B products for waveform parameters and global elevation, and the level 2 products specific to ice sheet, sea ice, land, and ocean elevations, respectively. These output products are defined in detail, along with the associated quality measures and the constraints and assumptions used to derive them.

  1. Support surfaces in the prevention of pressure ulcers in surgical patients: An integrative review.

    Science.gov (United States)

    de Oliveira, Karoline Faria; Nascimento, Kleiton Gonçalves; Nicolussi, Adriana Cristina; Chavaglia, Suzel Regina Ribeiro; de Araújo, Cleudmar Amaral; Barbosa, Maria Helena

    2017-08-01

    To assess the scientific evidence about the types of support surfaces used in intraoperative surgical practice in the prevention of pressure ulcers due to surgical positioning. This is an integrative literature review. The electronic databases Cochrane, PubMed, Web of Science, Scopus, Lilacs, and CINAHL were used. The descriptors surgical patients, support surfaces, perioperative care, patient positioning, and pressure ulcer were used in the search strategy. Articles that addressed the use of support surfaces intraoperatively, published between 1990 and 2016, were selected. The PRISMA guidelines were used to structure the review. Of the 18 evaluated studies, most were in English, followed by Portuguese and Spanish, and most were performed by nurses. The most commonly cited support surfaces were viscoelastic polymer, micropulse mattresses, gel-based mattresses, and foam devices. There are gaps in knowledge regarding the most efficient support surfaces and the specifications of the products used to prevent pressure ulcers due to surgical positioning. © 2017 John Wiley & Sons Australia, Ltd.

  2. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA to estimate actual evapotranspiration over heterogeneous terrain

    Directory of Open Access Journals (Sweden)

    Z. Q. Gao

    2011-01-01

    Full Text Available Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with some varying kinetic parameters (i.e., roughness and zero-plane displacement). Besides, the dry and wet pixels can be recognized automatically and dynamically in image processing thereby making the SEBTA more sensitive to derive the sensible heat flux for ET estimation. To prove the application potential, the SEBTA was carried out to present the robust estimates of 24 h solar radiation over time, which leads to the smooth simulation of the ET over seasons in northern China where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated by the measured data at the ground level. During validation, it shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.

  3. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA) to estimate actual evapotranspiration over heterogeneous terrain

    Science.gov (United States)

    Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N.-B.

    2011-01-01

    Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with some varying kinetic parameters (i.e., roughness and zero-plane displacement). Besides, the dry and wet pixels can be recognized automatically and dynamically in image processing thereby making the SEBTA more sensitive to derive the sensible heat flux for ET estimation. To prove the application potential, the SEBTA was carried out to present the robust estimates of 24 h solar radiation over time, which leads to the smooth simulation of the ET over seasons in northern China where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated by the measured data at the ground level. During validation, it shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.

  4. A coupled remote sensing and the Surface Energy Balance with Topography Algorithm (SEBTA) to estimate actual evapotranspiration under complex terrain

    Science.gov (United States)

    Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.

    2010-07-01

    Evapotranspiration (ET) may be used as an ecological indicator to address the ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate changes, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate the regional ET with limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of the elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover in concert with some varying kinetic parameters (i.e., roughness and zero-plane displacement) over time. Besides, the dry and wet pixels can be recognized automatically and dynamically in image processing thereby making the SEBTA more sensitive to derive the sensible heat flux for ET estimation. To prove the application potential, the SEBTA was carried out to present the robust estimates of 24 h solar radiation over time, which leads to the smooth simulation of the ET over seasons in northern China where the regional climate and vegetation cover in different seasons compound the ET calculations. The SEBTA was validated by the measured data at the ground level. During validation, it shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.
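The residual form of the surface energy balance that SEBAL-type models, including SEBTA, build on can be sketched as follows. The terrain and roughness corrections described above are not reproduced; the latent heat of vaporization and the unit conversion are standard values, and the fluxes in the usage are illustrative.

```python
def latent_heat_flux(rn, g, h):
    """Surface energy balance residual: LE = Rn - G - H.

    rn : net radiation (W/m^2)
    g  : soil heat flux (W/m^2)
    h  : sensible heat flux (W/m^2)
    Returns the latent heat flux LE in W/m^2.
    """
    return rn - g - h

def et_rate_mm_per_day(le, lambda_v=2.45e6):
    """Convert latent heat flux (W/m^2) to an evapotranspiration rate
    (mm/day): 1 kg/m^2 of water equals 1 mm depth; 86400 s per day;
    lambda_v is the latent heat of vaporization in J/kg."""
    return le / lambda_v * 86400.0
```

The modeling effort in SEBTA goes into estimating H over complex terrain (via the dry/wet pixel selection and terrain-corrected radiation); once H is fixed, ET follows from this residual.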

  5. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    Science.gov (United States)

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely, the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely, naïve Bayes, support vector machines (SVM) with polynomial and radial basis kernels, random forest and rotation forest, are used for classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The largest feature reduction (66%) is achieved by GA for the EMBD and by BPSO for the BD-TFD. The combination of EMBD features and a polynomial-kernel SVM is found to be the most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of EMBD features and a polynomial-kernel SVM could be used to
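As a much simpler stand-in for the time-frequency features above, the classical spectral indicator of muscle fatigue is the median frequency of the power spectrum, which shifts downward as the muscle fatigues. The sketch below is illustrative only; the paper's TFD features, feature selection, and classifiers are not reproduced, and the test signals are synthetic.

```python
import numpy as np

def median_frequency(signal, fs):
    """Median frequency of the power spectrum: the frequency below which
    half of the total spectral power lies. A classical sEMG fatigue
    indicator (fatigue compresses the spectrum toward lower frequencies).
    """
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(spec)
    # first frequency bin at which the cumulative power reaches half the total
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]
```

A fatigue detector in this simplified spirit would threshold the drop in median frequency between early and late contraction windows, rather than feeding TFD features to an SVM as the paper does.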

  6. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    Science.gov (United States)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.
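The per-pixel compositing step described above can be sketched with a cloud-mask-aware mean over acquisition dates. This is a simplified stand-in for the system's mosaicking: band selection, QA-bit decoding, and best-pixel ranking are omitted, and the function name is illustrative.

```python
import numpy as np

def composite(stack, cloud_masks):
    """Cloud-free compositing: for each pixel, average the cloud-free
    observations across acquisition dates.

    stack       : (dates, rows, cols) surface reflectance
    cloud_masks : (dates, rows, cols) boolean, True where cloudy
    Returns a (rows, cols) composite; pixels cloudy on every date are NaN.
    """
    data = np.where(cloud_masks, np.nan, stack.astype(float))
    with np.errstate(invalid="ignore"):
        return np.nanmean(data, axis=0)
```

In the full system, composites like this one are built for each target year-day window and then fed to the decision tree classifiers for each year of interest.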

  7. Investigating the wetting behavior of a surface with periodic reentrant structures using integrated microresonators

    Science.gov (United States)

    Klingel, S.; Oesterschulze, E.

    2017-08-01

    The apparent contact angle is frequently used as an indicator of the wetting state of a surface in contact with a liquid. However, the apparent contact angle is subject to hysteresis, which moreover depends strongly on both the material properties and the roughness and structure of the sample surface. In this work, we show that integrated microresonators can be exploited to determine the wetting state by measuring both the frequency shift caused by the hydrodynamic mass of the liquid and the change in the quality factor as a result of damping. For this, we integrated electrically driven hybrid bridge resonators (HBRs) into a periodically structured surface intended for wetting experiments. We could clearly differentiate between the Wenzel state and the Cassie-Baxter state because the resonant frequency and quality factor of the HBR changed by over 35% and 40%, respectively. This offers the capability to unambiguously distinguish between the different wetting states.

  8. TLC surface integrity affects the detection of alkali adduct ions in TLC-MALDI analysis.

    Science.gov (United States)

    Dong, Yonghui; Ferrazza, Ruggero; Anesi, Andrea; Guella, Graziano; Franceschi, Pietro

    2017-09-01

    Direct coupling of thin-layer chromatography (TLC) with matrix-assisted laser desorption ionization (MALDI) mass spectrometry allows fast and detailed characterization of a large variety of analytes. The use of this technique, however, presents great challenges in semiquantitative applications because of the complex phenomena occurring at the TLC surface. In our laboratory, we recently observed that the ion intensities of several alkali adduct ions were significantly different between the top and interior layer of the TLC plate. This indicates that the integrity of the TLC surface can have an important effect on the reproducibility of TLC-MALDI analyses. Graphical Abstract MALDI imaging reveals that surface integrity affects the detection of alkali adduct ions in TLC-MALDI.

  9. A fuzzy-random programming for integrated closed-loop logistics network design by using priority-based genetic algorithm

    Directory of Open Access Journals (Sweden)

    Keyvan Kamandanipour

    2013-01-01

    Full Text Available Recovery of used products has steadily become an interesting issue for research due to economic reasons and growing environmental or legislative concern. This paper presents a closed-loop logistics network design based on reverse logistics models. A mixed integer linear programming model is implemented to integrate logistics network design in order to prevent the sub-optimality caused by the separate design of the forward and reverse networks. The study presents a single product and multi-stage logistics network problem for the new and return products not only to determine subsets of logistics centers to be opened, but also to determine the transportation strategy, which satisfies demand imposed by facilities and minimizes fixed opening and total shipping costs. Since the deterministic estimation of some parameters such as demand and rate of return of used products in closed loop logistics models is impractical, an uncertain programming approach is proposed. In this case, we assume there are several economic conditions with predefined probabilities calculated from historical data. Then, by means of expert opinion, a fuzzy variable is offered as the customer's demand under each economic condition. In addition, demand and rate of return of products for each customer zone are similarly represented by fuzzy-random variables. Therefore, a fuzzy-random programming is used and a priority-based genetic algorithm is proposed to solve large-scale problems.

  10. A Double-Smoothing Algorithm for Integrating Satellite Precipitation Products in Areas with Sparsely Distributed In Situ Networks

    Directory of Open Access Journals (Sweden)

    Shuoben Bi

    2017-01-01

    Full Text Available The spatial distribution of automatic weather stations in regions of western China (e.g., Tibet and southern Xinjiang) is relatively sparse. Due to the considerable spatial variability of precipitation, estimations of rainfall that are interpolated in these areas exhibit considerable uncertainty based on the current observational networks. In this paper, a new statistical method for estimating precipitation is introduced that integrates satellite products and in situ observation data. This method calculates the differences between raster data and point data based on the theory of data assimilation. In regions in which the spatial distribution of automatic weather stations is sparse, a nonparametric kernel-smoothing method is adopted to process the discontinuous data through correction and spatial interpolation. A comparative analysis of the fusion method based on the double-smoothing algorithm proposed here indicated that the method performed better than those used in previous studies based on the average deviation, root mean square error, and correlation coefficient values. Our results indicate that the proposed method is more rational and effective in terms of both the efficiency coefficient and the spatial distribution of the deviations.
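    The core fusion idea can be sketched in a few lines of numpy. This is a toy illustration, not the paper's full double-smoothing algorithm (which also handles discontinuities before interpolation): compute satellite-minus-gauge differences at station locations, spread them over the grid with a Nadaraya-Watson Gaussian kernel smoother, and subtract the smoothed difference field from the satellite product. All coordinates, values, and the bandwidth are assumed.

```python
import numpy as np

def kernel_correct(sat_grid, grid_xy, stn_xy, stn_obs, sat_at_stn, h=1.0):
    """Return the kernel-smoothed, bias-corrected precipitation grid."""
    diff = sat_at_stn - stn_obs                      # satellite bias at stations
    # Squared distances between every grid cell and every station
    d2 = ((grid_xy[:, None, :] - stn_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / h**2)                     # Gaussian kernel weights
    bias = (w * diff).sum(1) / w.sum(1)              # smoothed bias field
    return sat_grid - bias

grid_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
stn_xy = np.array([[0.0, 0.0], [1.0, 0.0]])
corrected = kernel_correct(np.array([5.0, 5.0, 5.0]), grid_xy,
                           stn_xy, np.array([4.0, 4.0]), np.array([5.0, 5.0]))
print(corrected)  # [4. 4. 4.] — the uniform +1 satellite bias is removed
```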

  11. PubFocus: semantic MEDLINE/PubMed citations analytics through integration of controlled biomedical dictionaries and ranking algorithm

    Directory of Open Access Journals (Sweden)

    Chuong Cheng-Ming

    2006-10-01

    Full Text Available Abstract Background Understanding research activity within any given biomedical field is important. Search outputs generated by MEDLINE/PubMed are not well classified and require lengthy manual citation analysis. Automation of citation analytics can be very useful and timesaving for both novices and experts. Results PubFocus web server automates analysis of MEDLINE/PubMed search queries by enriching them with two widely used human factor-based bibliometric indicators of publication quality: journal impact factor and volume of forward references. In addition to providing basic volumetric statistics, PubFocus also prioritizes citations and evaluates authors' impact on the field of search. PubFocus also analyses the presence and occurrence of biomedical key terms within citations by utilizing controlled vocabularies. Conclusion We have developed a citation prioritisation algorithm based on journal impact factor, forward referencing volume, referencing dynamics, and author's contribution level. It can be applied either to the primary set of PubMed search results or to the subsets of these results identified through key terms from controlled biomedical vocabularies and ontologies. The NCI (National Cancer Institute) thesaurus and MGD (Mouse Genome Database) mammalian gene orthology have been implemented for key term analytics. PubFocus provides a scalable platform for the integration of multiple available ontology databases. PubFocus analytics can be adapted for input sources of biomedical citations other than PubMed.
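    The prioritisation idea can be illustrated with a toy scoring rule. The linear form and the weights below are assumptions for illustration only; the paper's algorithm additionally uses referencing dynamics and author contribution level.

```python
# Hypothetical citation-ranking sketch: score each paper by a weighted sum
# of journal impact factor and forward-citation volume, then sort.
def citation_score(impact_factor, forward_refs, w_if=0.5, w_fr=0.5):
    # Weights are illustrative assumptions, not PubFocus's actual values.
    return w_if * impact_factor + w_fr * forward_refs

papers = [("A", 2.0, 10), ("B", 30.0, 3), ("C", 5.0, 100)]  # (id, IF, refs)
ranked = sorted(papers, key=lambda p: citation_score(p[1], p[2]), reverse=True)
print([p[0] for p in ranked])  # ['C', 'B', 'A']
```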

  12. Effect of processing parameters of rotary ultrasonic machining on surface integrity of potassium dihydrogen phosphate crystals

    Directory of Open Access Journals (Sweden)

    Jianfu Zhang

    2015-09-01

    Full Text Available Potassium dihydrogen phosphate is an important optical crystal. However, high-precision processing of large potassium dihydrogen phosphate crystal workpieces is difficult. In this article, surface roughness and subsurface damage characteristics of a (001) potassium dihydrogen phosphate crystal surface produced by traditional and rotary ultrasonic machining are studied. The influence of process parameters, including spindle speed, feed speed, type and size of sintered diamond wheel, ultrasonic power, and selection of cutting fluid on potassium dihydrogen phosphate crystal surface integrity, was analyzed. The surface integrity, especially the subsurface damage depth, was affected significantly by the ultrasonic power. Metal-sintered diamond tools with high granularity were most suitable for machining potassium dihydrogen phosphate crystal. Cutting fluid played a key role in potassium dihydrogen phosphate crystal machining. A more precise surface can be obtained in machining with a higher spindle speed, lower feed speed, and using kerosene as cutting fluid. Based on the provided optimized process parameters for machining potassium dihydrogen phosphate crystal, a processed surface quality with Ra value of 33 nm and subsurface damage depth value of 6.38 μm was achieved.

  13. Cardiac MRI in mice at 9.4 Tesla with a transmit-receive surface coil and a cardiac-tailored intensity-correction algorithm.

    Science.gov (United States)

    Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi

    2007-08-01

    To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.
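    A generic form of surface-coil intensity correction can be sketched as below. This is not the authors' cardiac-tailored algorithm, only the common sensitivity-profile division it builds on: the coil profile is estimated as a heavily smoothed version of the image and divided out, with a crude noise floor standing in for the cardiac-specific noise/flow handling. All parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_intensity(img, sigma=8, noise_floor=0.05):
    profile = gaussian_filter(img, sigma)        # coil sensitivity estimate
    profile = np.maximum(profile, noise_floor)   # avoid dividing by noise
    corrected = img / profile
    corrected[img < noise_floor] = 0.0           # suppress background regions
    return corrected

# Toy image: uniform tissue shaded by a coil whose signal falls off with depth.
depth = np.linspace(1.0, 0.2, 256)
img = np.tile(depth, (64, 1))
out = correct_intensity(img)
print(round(out.mean(), 3))  # close to 1: the depth shading is removed
```

    The cardiac-tailored part of the published method lies in how such a mask is chosen so that noise and flow artifacts are not amplified by the division.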

  14. Integration of texture and disparity cues to surface slant in dorsal visual cortex.

    Science.gov (United States)

    Murphy, Aidan P; Ban, Hiroshi; Welchman, Andrew E

    2013-07-01

    Reliable estimation of three-dimensional (3D) surface orientation is critical for recognizing and interacting with complex 3D objects in our environment. Human observers maximize the reliability of their estimates of surface slant by integrating multiple depth cues. Texture and binocular disparity are two such cues, but they are qualitatively very different. Existing evidence suggests that representations of surface tilt from each of these cues coincide at the single-neuron level in higher cortical areas. However, the cortical circuits responsible for 1) integration of such qualitatively distinct cues and 2) encoding the slant component of surface orientation have not been assessed. We tested for cortical responses related to slanted plane stimuli that were defined independently by texture, disparity, and combinations of these two cues. We analyzed the discriminability of functional MRI responses to two slant angles using multivariate pattern classification. Responses in visual area V3B/KO to stimuli containing congruent cues were more discriminable than those elicited by single cues, in line with predictions based on the fusion of slant estimates from component cues. This improvement was specific to congruent combinations of cues: incongruent cues yielded lower decoding accuracies, which suggests the robust use of individual cues in cases of large cue conflicts. These data suggest that area V3B/KO is intricately involved in the integration of qualitatively dissimilar depth cues.

  15. Reduced-volume antennas with integrated high-impedance electromagnetic surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Forman, Michael A.

    2006-11-01

    Several antennas with integrated high-impedance surfaces are presented. The high-impedance surface is implemented as a composite right/left-handed (CRLH) metamaterial fabricated from a periodic structure characterized by a substrate, filled with an array of vertical vias and capped by capacitive patches. Omnidirectional antennas placed in close proximity to the high-impedance surface radiate hemispherically with an increase in boresight far-field pattern gain of up to 10 dB and a front-to-back ratio as high as 13 dB at 2.45 GHz. Several TEM rectangular horn antennas are realized by replacing conductor walls with high-impedance surfaces. The TEM horn antennas are capable of operating below the TE10 cutoff frequency of a standard all-metal horn antenna, enabling a reduction in antenna volume. Above the cutoff frequency the TEM horn antennas function similarly to standard rectangular horn antennas.

  16. Muscle-tendon units localization and activation level analysis based on high-density surface EMG array and NMF algorithm

    Science.gov (United States)

    Huang, Chengjun; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-12-01

    Objective. Some skeletal muscles can be subdivided into smaller segments called muscle-tendon units (MTUs). The purpose of this paper is to propose a framework to locate the active region of the corresponding MTUs within a single skeletal muscle and to analyze the activation level varieties of different MTUs during a dynamic motion task. Approach. Biceps brachii and gastrocnemius were selected as targeted muscles and three dynamic motion tasks were designed and studied. Eight healthy male subjects participated in the data collection experiments, and 128-channel surface electromyographic (sEMG) signals were collected with a high-density sEMG electrode grid (8 rows by 16 columns). Then the sEMG envelopes matrix was factorized into a matrix of weighting vectors and a matrix of time-varying coefficients by the nonnegative matrix factorization algorithm. Main results. The experimental results demonstrated that the weighting vectors, which represent an invariant pattern of muscle activity across all channels, could be used to estimate the location of MTUs, and the time-varying coefficients could be used to depict the variation of MTU activation levels during a dynamic motion task. Significance. The proposed method provides one way to analyze in depth the functional state of MTUs during dynamic tasks and thus can be employed in multiple noteworthy sEMG-based applications such as muscle force estimation, muscle fatigue research and the control of myoelectric prostheses. This work was supported by the National Nature Science Foundation of China under Grants 61431017 and 61271138.
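    The factorisation step described above can be sketched with scikit-learn's NMF. The channel layout and data are simulated here; only the decomposition pattern (envelopes ≈ spatial weighting vectors × time-varying coefficients) follows the abstract.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_channels, n_samples, n_mtus = 128, 400, 2
W_true = rng.random((n_channels, n_mtus))   # simulated spatial patterns
H_true = rng.random((n_mtus, n_samples))    # simulated activation time courses
envelopes = W_true @ H_true                 # simulated 128-channel sEMG envelopes

model = NMF(n_components=n_mtus, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(envelopes)          # weighting vectors -> MTU location
H = model.components_                       # coefficients -> activation level
print(W.shape, H.shape)  # (128, 2) (2, 400)
```

    Reshaping each column of W back onto the 8 x 16 electrode grid gives a spatial activity map per MTU, which is how the active regions are localized.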

  17. An Integrated Transcriptome-Wide Analysis of Cave and Surface Dwelling Astyanax mexicanus

    Science.gov (United States)

    Gross, Joshua B.; Furterer, Allison; Carlson, Brian M.; Stahl, Bethany A.

    2013-01-01

    Numerous organisms around the globe have successfully adapted to subterranean environments. A powerful system in which to study cave adaptation is the freshwater characin fish, Astyanax mexicanus. Prior studies in this system have established a genetic basis for the evolution of numerous regressive traits, most notably vision and pigmentation reduction. However, identification of the precise genetic alterations that underlie these morphological changes has been delayed by limited genetic and genomic resources. To address this, we performed a transcriptome analysis of cave and surface dwelling Astyanax morphs using Roche/454 pyrosequencing technology. Through this approach, we obtained 576,197 Pachón cavefish-specific reads and 438,978 surface fish-specific reads. Using this dataset, we assembled transcriptomes of cave and surface fish separately, as well as an integrated transcriptome that combined 1,499,568 reads from both morphotypes. The integrated assembly was the most successful approach, yielding 22,596 high quality contiguous sequences comprising a total transcriptome length of 21,363,556 bp. Sequence identities were obtained through exhaustive blast searches, revealing an adult transcriptome represented by highly diverse Gene Ontology (GO) terms. Our dataset facilitated rapid identification of sequence polymorphisms between morphotypes. These data, along with positional information collected from the Danio rerio genome, revealed several syntenic regions between Astyanax and Danio. We demonstrated the utility of this positional information through a QTL analysis of albinism in a surface x Pachón cave F2 pedigree, using 65 polymorphic markers identified from our integrated assembly. We also adapted our dataset for an RNA-seq study, revealing many genes responsible for visual system maintenance in surface fish, whose expression was not detected in adult Pachón cavefish. Conversely, several metabolism-related genes expressed in cavefish were not detected in

  18. Redatuming borehole-to-surface electromagnetic data using Stratton-Chu integral transforms

    DEFF Research Database (Denmark)

    Zhdanov, Michael; Cai, Hongzhu

    2012-01-01

    We present a new method of analyzing borehole-to-surface electromagnetic (BSEM) survey data based on redatuming of the observed data from receivers distributed over the surface of the earth onto virtual receivers located within the subsurface. The virtual receivers can be placed close to the target of interest, such as just above a hydrocarbon reservoir, which increases the sensitivity of the EM data to the target. The method is based on the principles of downward analytical continuation of EM fields. We use Stratton-Chu type integral transforms to calculate the EM fields at the virtual receivers. Model...

  19. Green's function surface-integral method for nonlocal response of plasmonic nanowires in arbitrary dielectric environments

    DEFF Research Database (Denmark)

    Yan, Wei; Mortensen, N. Asger; Wubs, Martijn

    2013-01-01

    We develop a nonlocal-response generalization to the Green's function surface-integral method (GSIM), also known as the boundary-element method. This numerically efficient method can accurately describe the linear hydrodynamic nonlocal response of arbitrarily shaped plasmonic nanowires in arbitrary...... and the longitudinal wave number become smaller, or when the effective background permittivity or the mode inhomogeneity increase. The inhomogeneity can be expressed in terms of an effective angular momentum of the surface-plasmon mode. We compare local and nonlocal response of freestanding nanowires, and of nanowires...

  20. An Electric Field Volume Integral Equation Approach to Simulate Surface Plasmon Polaritons

    Directory of Open Access Journals (Sweden)

    R. Remis

    2013-02-01

    Full Text Available In this paper we present an electric field volume integral equation approach to simulate surface plasmon propagation along metal/dielectric interfaces. Metallic objects embedded in homogeneous dielectric media are considered. The starting point is a so-called weak form of the electric field integral equation. This form is discretized on a uniform tensor-product grid, resulting in a system matrix whose action on a vector can be computed via the fast Fourier transform. The GMRES iterative solver is used to solve the discretized set of equations, and numerical examples illustrating surface plasmon propagation are presented. The convergence rate of GMRES is discussed in terms of the spectrum of the system matrix, and through numerical experiments we show how the eigenvalues of the discretized volume scattering operator are related to plasmon propagation and the medium parameters of a metallic object.
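    The solver pattern in the abstract can be shown in miniature: a discretised integral equation of the form (I + K)e = e_inc, where K is a convolution, so the matrix-vector product is applied with FFTs and GMRES solves the system matrix-free. The kernel below is a generic smooth placeholder, not the actual electromagnetic Green's tensor, and the 1-D setting is purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 64
kernel = 0.1 * np.exp(-np.arange(n) / 4.0)   # assumed smooth scattering kernel
k_hat = np.fft.fft(kernel)

def matvec(x):
    # Apply (I + K)x, with Kx computed as a circular convolution via the FFT,
    # so the dense system matrix is never formed.
    return x + np.real(np.fft.ifft(k_hat * np.fft.fft(x)))

A = LinearOperator((n, n), matvec=matvec, dtype=float)
e_inc = np.ones(n)                           # toy incident-field right-hand side
e, info = gmres(A, e_inc)
print(info, np.allclose(matvec(e), e_inc, atol=1e-3))  # 0 True
```

    As in the paper, the convergence rate of GMRES is governed by the spectrum of the operator, here the circulant eigenvalues 1 + k_hat.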

  1. Solution of volume-surface integral equations using higher-order hierarchical Legendre basis functions

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Meincke, Peter; Breinbjerg, Olav

    2007-01-01

    The problem of electromagnetic scattering by composite metallic and dielectric objects is solved using the coupled volume-surface integral equation (VSIE). The method of moments (MoM) based on higher-order hierarchical Legendre basis functions and higher-order curvilinear geometrical elements... with the analytical Mie series solution. Scattering by more complex metal-dielectric objects is also considered to compare the presented technique with other numerical methods....

  2. Knowledge discovery and sequence-based prediction of pandemic influenza using an integrated classification and association rule mining (CBA) algorithm.

    Science.gov (United States)

    Kargarfard, Fatemeh; Sami, Ashkan; Ebrahimie, Esmaeil

    2015-10-01

    Pandemic influenza is a major concern worldwide. Availability of advanced technologies and the nucleotide sequences of a large number of pandemic and non-pandemic influenza viruses in 2009 provide a great opportunity to investigate the underlying rules of pandemic induction through data mining tools. Here, for the first time, an integrated classification and association rule mining algorithm (CBA) was used to discover the rules underpinning alteration of non-pandemic sequences to pandemic ones. We hypothesized that the extracted rules can lead to the development of an efficient expert system for prediction of influenza pandemics. To this end, we used a large dataset containing 5373 HA (hemagglutinin) segments of the 2009 H1N1 pandemic and non-pandemic influenza sequences. The analysis was carried out for both nucleotide and protein sequences. We found a number of new rules which potentially present the undiscovered antigenic sites of the influenza structure. At the nucleotide level, alteration of thymine (T) at position 260 was the key discriminating feature in distinguishing non-pandemic from pandemic sequences. At the protein level, rules including I233K and M334L were the differentiating features. CBA efficiently classifies pandemic and non-pandemic sequences with high accuracy at both the nucleotide and protein level. Identifying hotspots in influenza sequences is significant, as they represent regions with low antibody reactivity. We argue that the virus breaks the host immunity response by mutation at these spots. Based on the discovered rules, we developed the software, "Prediction of Pandemic Influenza", for discrimination of pandemic from non-pandemic sequences. This study opens a new vista in the discovery of association rules between mutation points during evolution of pandemic influenza. Copyright © 2015 Elsevier Inc. All rights reserved.
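    How position-specific rules discriminate sequences can be illustrated with a toy rule set. The rules below are hypothetical simplifications inspired by the mutations the paper reports (e.g. I233K, M334L), not the actual CBA output: the residue found at a key HA position decides the class, with a default class when no rule fires.

```python
# Toy rule-based classifier: each rule is a (1-based position, residue,
# class) triple, checked in order; unmatched sequences get the default class.
def classify(protein_seq, rules=((233, "K", "pandemic"),
                                 (334, "L", "pandemic"))):
    for pos, residue, label in rules:
        if len(protein_seq) >= pos and protein_seq[pos - 1] == residue:
            return label
    return "non-pandemic"

seq = "A" * 232 + "K" + "A" * 200   # synthetic sequence with K at position 233
print(classify(seq))                # pandemic
print(classify("A" * 500))          # non-pandemic
```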

  3. [Interventional emergency embolization for severe pelvic ring fractures with arterial bleeding. Integration into the early clinical treatment algorithm].

    Science.gov (United States)

    Westhoff, J; Laurer, H; Wutzler, S; Wyen, H; Mack, M; Maier, B; Marzi, I

    2008-10-01

    Presentation of our own experiences and results of an early clinical algorithm for treatment integrating emergency embolization (TAE) in cases of unstable pelvic ring fractures with arterial bleeding. Consecutive patient series from April 2002 to December 2006 at a level 1 trauma center. The data of the online shock room documentation (Traumawatch) of patients with a pelvic fracture and arterial bleeding detected on multislice computed tomography (MSCT) were examined for the following parameters: demographic data, injury mechanism, fracture classification according to Tile/AO and severity of the pelvic injury assessed with the Abbreviated Injury Score (AIS), accompanying injuries with elevation of the cumulative injury severity according to the Injury Severity Score (ISS), physiological admission parameters (circulatory parameters and initial Hb value) as well as transfusion requirement during treatment in the shock room, time until embolization, duration of embolization, and source of bleeding. Of a total of 162 patients, arterial bleeding was detected in 21 patients by contrast medium extravasation on MSCT, 12 of whom were men and 9 women with an average age of 45 (14-80) years. The mechanism of injury was high energy trauma in all cases. In 33% it involved type B pelvic fractures and in 67% type C fractures with an average AIS pelvis of 4.4 points (3-5) and a total severity of injury with the ISS of 37 points (21-66). Upon admission 47.6% presented hemodynamic instability with an average Hb value of 7.8 g/dl (3.2-12.4) and an average transfusion requirement of 6 red blood cell units (4-13). The time until the TAE was started was on average 62 min (25-115) with a duration period of the TAE of 25 min (15-67). Branches of the internal iliac artery were identified as the sole source of bleeding. The success rate of TAE amounted to over 90%. Interventional TAE represents an effective as well as a fast procedure for hemostasis of arterial bleeding detected on MSCT in...

  4. Application of the nonlinear time series prediction method of genetic algorithm for forecasting surface wind of point station in the South China Sea with scatterometer observations

    International Nuclear Information System (INIS)

    Zhong Jian; Dong Gang; Sun Yimei; Zhang Zhaoyang; Wu Yuqin

    2016-01-01

    The present work reports the development of a nonlinear time series prediction method using a genetic algorithm (GA) with singular spectrum analysis (SSA) for forecasting the surface wind of a point station in the South China Sea (SCS) with scatterometer observations. Before the nonlinear GA technique is used for forecasting the time series of surface wind, SSA is applied to reduce the noise. The surface wind speed and surface wind components from scatterometer observations at three locations in the SCS have been used to develop and test the technique. The predictions have been compared with persistence forecasts in terms of root mean square error. The surface wind predictions made with GA and SSA up to four days in advance (longer for some point stations) have been found to be significantly superior to those made by the persistence model. This method can serve as a cost-effective alternate prediction technique for forecasting the surface wind of a point station in the SCS basin. (paper)
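    The SSA noise-reduction step can be sketched compactly. The window length, rank, and test signal below are illustrative assumptions: embed the wind series in a Hankel trajectory matrix, keep the leading singular components, and reconstruct by anti-diagonal averaging.

```python
import numpy as np

def ssa_denoise(x, window=20, rank=2):
    n = len(x)
    k = n - window + 1
    # Hankel trajectory matrix: lagged copies of the series as columns
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]    # low-rank signal part
    # Anti-diagonal (Hankel) averaging back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.linspace(0, 6 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(200)
clean = ssa_denoise(noisy)
print(np.mean((clean - np.sin(t)) ** 2) < np.mean((noisy - np.sin(t)) ** 2))
```

    In the paper's setup, the denoised series would then be handed to the GA-based nonlinear predictor.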

  5. Fully Integrated Atmospheric, Surface, and Subsurface Model of the California Basin

    Science.gov (United States)

    Davison, J. H.; Hwang, H. T.; Sudicky, E. A.; Mallia, D. V.; Lin, J. C.

    2016-12-01

    The recent drought in the Western United States has crippled agriculture in California's Central Valley. Farmers, facing reduced surface water flow, have turned to groundwater as their primary solution to the water crisis. However, the unsustainable pumping rates seen throughout California have drastically decreased the surface and subsurface water levels. For this reason, we developed a coupled subsurface, surface, and atmospheric model for the entire California Basin that captures the feedbacks between the three domains at an extremely high spatial and temporal resolution. Our coupled model framework integrates HydroGeoSphere (HGS), a fully implicit three-dimensional control-volume finite element surface and variably saturated subsurface model with evapotranspiration process, to Weather Research and Forecasting (WRF), a three-dimensional mesoscale nonhydrostatic atmospheric model. HGS replaces the land surface component within WRF, and provides WRF with the actual evapotranspiration (AET) and soil saturation. In return, WRF provides HGS with the potential evapotranspiration (PET) and precipitation fluxes. The flexible coupling technique allows HGS and WRF to have unique meshing and projection characteristics and links the domains based on their geographic coordinates (i.e., latitude and longitude). The California Basin model successfully simulated similar drawdown rates to the Gravity Recovery and Climate Experiment (GRACE) and replicated the Klamath and Sacramento River hydrographs. Furthermore, our simulation results reproduced field measured precipitation and evapotranspiration. Currently, our coupled California Basin model is the most complete water resource simulator because we combine the surface, subsurface, and atmosphere into a single domain.
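    The two-way exchange the abstract describes can be shown schematically. This is purely illustrative and is not the HGS/WRF interface: both component models below are toy functions, and all coefficients are invented. Each coupling step the atmosphere hands precipitation and PET to the land model, which returns actual ET and soil saturation that feed back into the next atmospheric step.

```python
def atmosphere_step(saturation):
    # Toy atmosphere: wetter soils recycle more moisture into rainfall
    # and lower the evaporative demand.
    precip = 2.0 + 3.0 * saturation
    pet = 5.0 - 2.0 * saturation
    return precip, pet

def land_step(storage, precip, pet):
    # Toy land surface: actual ET is moisture-limited PET; storage is bounded.
    saturation = min(storage / 100.0, 1.0)
    aet = pet * saturation
    storage = max(storage + precip - aet, 0.0)
    return storage, aet, min(storage / 100.0, 1.0)

storage, saturation = 50.0, 0.5
for step in range(10):                    # the coupling loop
    precip, pet = atmosphere_step(saturation)
    storage, aet, saturation = land_step(storage, precip, pet)
print(round(storage, 1))                  # storage evolves under the feedback
```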

  6. Rewetting analysis of hot surfaces with internal heat source by the heat balance integral method

    Energy Technology Data Exchange (ETDEWEB)

    Sahu, S.K.; Das, P.K.; Bhattacharyya, Souvik [IIT Kharagpur (India). Department of Mechanical Engineering

    2008-08-15

    A two region conduction-controlled rewetting model of hot vertical surfaces with internal heat generation and boundary heat flux subjected to constant but different heat transfer coefficient in both wet and dry region is solved by the Heat Balance Integral Method (HBIM). The HBIM yields the temperature field and quench front temperature as a function of various model parameters such as Peclet number, Biot number and internal heat source parameter of the hot surface. Further, the critical (dry out) internal heat source parameter is obtained by setting Peclet number equal to zero, which yields the minimum internal heat source parameter to prevent the hot surface from being rewetted. Using this method, it has been possible to derive a unified relationship for a two-dimensional slab and tube with both internal heat generation and boundary heat flux. The solutions are found to be in good agreement with other analytical results reported in literature. (orig.)

  7. Integrated modelling for assessing the risk of groundwater contaminants to human health and surface water ecosystems

    DEFF Research Database (Denmark)

    McKnight, Ursula S.; Rasmussen, Jes; Funder, Simon G.

    2010-01-01

    of contamination. In particular, adaptive management tools designed to work with sparse data sets from preliminary site assessments are needed which can explicitly link contaminant point sources with groundwater, surface water and ecological impacts. Here, a novel integrated modelling approach was employed for evaluating the impact of a TCE groundwater plume, located in an area with protected drinking water interests, to human health and surface water ecosystems. This is accomplished by coupling the system dynamics-based decision support system CARO-Plus to the aquatic ecosystem model AQUATOX via an analytical volatilisation model for the stream. The model is tested on a Danish case study involving a 750 m long TCE groundwater plume discharging into a stream. The initial modelling results indicate that TCE contaminant plumes with μg L-1 concentrations entering surface water systems do not pose a significant risk...

  8. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    Science.gov (United States)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of a wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population in moving forward to the Pareto front rapidly and to maintain population diversity with high efficiency. For example, two- and three-objective designs of a 1.5 MW wind turbine blade are subsequently carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining the population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays dramatic improvement of algorithm performance in both convergence and diversity preservation for handling complex problems with multiple variables, objectives, and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.
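    The vector-guided selection idea can be sketched minimally. The details below are assumptions (the paper's operator is more elaborate): uniformly spread reference vectors partition the objective space, each candidate is assigned to its nearest vector by angle, and per vector the candidate with the smallest projection along that vector survives, balancing diversity and convergence.

```python
import numpy as np

def vector_select(objs, n_vectors=5):
    # Uniform 2-D reference vectors on the first quadrant of the unit circle
    angles = np.linspace(0, np.pi / 2, n_vectors)
    vecs = np.column_stack([np.cos(angles), np.sin(angles)])
    unit = objs / np.linalg.norm(objs, axis=1, keepdims=True)
    nearest = np.argmax(unit @ vecs.T, axis=1)   # max cosine = nearest vector
    survivors = []
    for v in range(n_vectors):
        members = np.flatnonzero(nearest == v)
        if members.size:
            proj = objs[members] @ vecs[v]       # convergence measure
            survivors.append(members[np.argmin(proj)])
    return survivors

# Four candidates, both objectives to be minimised; the dominated point
# [3, 3] shares a vector with [2, 2] and is eliminated.
objs = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(vector_select(objs))  # one survivor per populated reference vector
```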

  9. Study of pollutant transport in surface boundary layer by generalized integral transform technique

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, Jesus S.P.; Heilbron Filho, Paulo F.L. [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Pimentel, Luiz C.G. [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil). Dept. de Meteorologia. Lab. de Modelagem de Processos Marinhos e Atmosfericos (LAMMA); Cataldi, Marcio [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE)

    2001-07-01

    A theoretical study was developed to obtain solutions of the atmospheric diffusion equation for various point sources, considering radioactive decay and axial diffusion under neutral atmospheric conditions. An algebraic turbulence model available in the literature, based on Monin-Obukhov similarity theory, was used to represent turbulent transport in the vertical direction, while a constant mass eddy diffusivity was assumed in the longitudinal direction. The two-dimensional transient partial differential equation representing the physical phenomena was transformed into a coupled system of one-dimensional transient equations by applying the Generalized Integral Transform Technique. The coupled system was solved numerically using a subroutine based on the method of lines. Some representative physical situations were analyzed in order to evaluate the computational algorithm. (author)

  10. Study of pollutant transport in surface boundary layer by generalized integral transform technique

    International Nuclear Information System (INIS)

    Guerrero, Jesus S.P.; Heilbron Filho, Paulo F.L.; Pimentel, Luiz C.G.; Cataldi, Marcio

    2001-01-01

    A theoretical study was developed to obtain solutions of the atmospheric diffusion equation for various point sources, considering radioactive decay and axial diffusion under neutral atmospheric conditions. An algebraic turbulence model available in the literature, based on Monin-Obukhov similarity theory, was used to represent turbulent transport in the vertical direction, while a constant mass eddy diffusivity was assumed in the longitudinal direction. The two-dimensional transient partial differential equation representing the physical phenomena was transformed into a coupled system of one-dimensional transient equations by applying the Generalized Integral Transform Technique. The coupled system was solved numerically using a subroutine based on the method of lines. Some representative physical situations were analyzed in order to evaluate the computational algorithm. (author)
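
    The solution strategy described above (semi-discretize the transport equation, then march the resulting ODE system in time) can be illustrated with a minimal method-of-lines sketch for a 1-D diffusion equation with first-order (radioactive) decay; the diffusivity, decay rate, grid and initial plume below are all assumed toy values, not the authors' model.

```python
import numpy as np

# dc/dt = D d2c/dx2 - lam*c, discretized in x (method of lines) and
# integrated with a stability-limited explicit Euler step.
D, lam = 1.0, 0.1
x = np.linspace(0.0, 10.0, 101)
dx = x[1] - x[0]
c = np.exp(-((x - 5.0) ** 2))        # initial Gaussian concentration plume
c0_peak = c.max()

dt = 0.2 * dx ** 2 / D               # well inside the explicit stability limit
t, t_end = 0.0, 2.0
while t < t_end:
    d2c = np.zeros_like(c)
    d2c[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2  # interior Laplacian
    c = c + dt * (D * d2c - lam * c)  # boundary nodes undergo pure decay
    t += dt
```

    The plume spreads and decays, so the peak drops below its initial value while concentrations stay non-negative.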

  11. Influence of steel implant surface microtopography on soft and hard tissue integration.

    Science.gov (United States)

    Hayes, J S; Klöppel, H; Wieling, R; Sprecher, C M; Richards, R G

    2018-02-01

    After implantation of an internal fracture fixation device, blood contacts the surface, followed by protein adsorption, resulting in either soft-tissue adhesion or matrix adhesion and mineralization. Without protein adsorption and cell adhesion, in the presence of micro-motion, fibrous capsule formation can occur, often surrounding a liquid-filled void at the implant-tissue interface. Clinically, fibrous capsule formation is more prevalent with electropolished stainless steel (EPSS) plates than with current commercially pure titanium (cpTi) plates. We hypothesize that this is due to a lack of micro-discontinuities on the standard EPSS plates. To test our hypothesis, four EPSS experimental surfaces with varying microtopographies were produced and characterized for morphology using the scanning electron microscope, quantitative roughness analysis using laser profilometry and chemical analysis using X-ray photoelectron spectroscopy. Clinically used EPSS (smooth) and cpTi (microrough) were included as controls. Six plates of each type were randomly implanted, one on both the left and right intact tibia of 18 white New Zealand rabbits for 12 weeks, to allow for a surface interface study. The results demonstrate that the micro-discontinuities on the upper surface of internal steel fixation plates reduced the presence of liquid-filled voids within soft-tissue capsules. The micro-discontinuities on the plate under-surface increased bony integration without the presence of a fibrous tissue interface. These results support the hypothesis that fibrous capsule and liquid-filled void formation occurs mainly due to the lack of micro-discontinuities on the polished smooth steel plates, and that bony integration increases on surfaces with higher amounts of micro-discontinuities. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 705-715, 2018.

  12. Integrating remotely sensed leaf area index and leaf nitrogen accumulation with RiceGrow model based on particle swarm optimization algorithm for rice grain yield assessment

    Science.gov (United States)

    Wang, Hang; Zhu, Yan; Li, Wenlong; Cao, Weixing; Tian, Yongchao

    2014-01-01

    A regional rice (Oryza sativa) grain yield prediction technique was proposed by integration of ground-based and spaceborne remote sensing (RS) data with the rice growth model (RiceGrow) through a new particle swarm optimization (PSO) algorithm. Based on an initialization/parameterization strategy (calibration), two agronomic indicators, leaf area index (LAI) and leaf nitrogen accumulation (LNA) remotely sensed by field spectra and satellite images, were combined to serve as an external assimilation parameter and integrated with the RiceGrow model for inversion of three model management parameters, including sowing date, sowing rate, and nitrogen rate. Rice grain yield was then predicted by inputting these optimized parameters into the reinitialized model. PSO was used for the parameterization and regionalization of the integrated model and compared with the shuffled complex evolution-University of Arizona (SCE-UA) optimization algorithm. The test results showed that LAI together with LNA as the integrated parameter performed better than each alone for crop model parameter initialization. PSO also performed better than SCE-UA in terms of running efficiency and assimilation results, indicating that PSO is a reliable optimization method for assimilating RS information and the crop growth model. The integrated model also had improved precision for predicting rice grain yield.
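
    A minimal particle swarm optimization loop of the kind used above for parameter inversion might look as follows; the toy quadratic cost stands in for the model-versus-remote-sensing mismatch, and the inertia/acceleration coefficients are conventional assumed values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(p):
    # Stand-in "calibration error": distance to an assumed optimum.
    return np.sum((p - np.array([1.5, -0.5])) ** 2)

n, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                     # per-particle best positions
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

    In the paper's setting, `cost` would run RiceGrow with candidate sowing date, sowing rate and nitrogen rate and compare predicted LAI/LNA against the remotely sensed values.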

  13. A simple iterative method for estimating evapotranspiration with integrated surface/subsurface flow models

    Science.gov (United States)

    Hwang, H.-T.; Park, Y.-J.; Frey, S. K.; Berg, S. J.; Sudicky, E. A.

    2015-12-01

    This work presents an iterative, water-balance-based approach to estimating actual evapotranspiration (ET) with integrated surface/subsurface flow models. Traditionally, groundwater level fluctuation methods have been widely accepted and used for estimating ET and net groundwater recharge; however, in watersheds where interactions between surface and subsurface flow regimes are highly dynamic, the traditional method may be overly simplistic. Here, an innovative methodology is derived and demonstrated for using the water balance equation in conjunction with a fully integrated surface and subsurface hydrologic model (HydroGeoSphere) in order to estimate ET at watershed and sub-watershed scales. The method invokes a simple and robust iterative numerical solution. For proof-of-concept demonstrations, the method is used to estimate ET for a simple synthetic watershed and then for a real, highly characterized 7000 km² watershed in Southern Ontario, Canada (the Grand River Watershed). The results for the Grand River Watershed show that with three to five iterations, the solution converges to a result with less than 1% relative error in stream flow calibration at 16 stream gauging stations. The spatially averaged ET estimated using the iterative method shows a high level of agreement (R2 = 0.99) with that from a benchmark case simulated with an ET model embedded directly in HydroGeoSphere. The new approach presented here is applicable to any watershed that is suited to integrated surface water/groundwater flow modelling and where spatially averaged ET estimates are useful for calibrating modelled stream discharge.
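
    The iterative water-balance idea can be sketched as a fixed-point loop: guess ET, run the flow model to get discharge Q, then update ET = P - Q - ΔS until the estimate stops changing. The linear toy "model" and all numbers below are assumptions for illustration; the paper uses HydroGeoSphere, not this stand-in.

```python
# Annual water balance, mm/yr: precipitation P, storage change dS.
P, dS = 1000.0, 50.0

def simulate_streamflow(et):
    """Toy 'integrated model': discharge responds linearly to assumed ET."""
    return 0.5 * (P - et)

et = 500.0                        # initial ET guess
for i in range(50):
    q = simulate_streamflow(et)
    et_new = P - q - dS           # water-balance update: ET = P - Q - dS
    if abs(et_new - et) < 1e-6:   # converged
        break
    et = et_new
```

    With this toy response the loop contracts geometrically to the closure value ET = 900 mm/yr; the paper reports similar convergence in three to five model runs at real gauging stations.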

  14. Uniform surface-to-line integral reduction of physical optics for curved surfaces by modified edge representation with higher-order correction

    Science.gov (United States)

    Lyu, Pengfei; Ando, Makoto

    2017-09-01

    The modified edge representation is one of the equivalent edge current approximation methods for calculating physical optics surface radiation integrals in diffraction analysis. Stokes' theorem is used in deriving the modified edge representation from physical optics for the planar scatterer case, which implies that the surface integral is rigorously reduced to the line integral of the modified edge representation equivalent edge currents, defined in terms of the local shape of the edge. For curved surfaces, on the contrary, the results of the radiation integrals depend on the global shape of the scatterer. The physical optics surface integral consists of two components, from the inner stationary phase point and from the edge. The modified edge representation is defined independently of the orientation of the actual edge, and therefore it is available not only at the edge but also at arbitrary points on the scatterer, except at the stationary phase point, where the modified edge representation equivalent edge currents become infinite. If a stationary phase point exists inside the illuminated region, the physical optics surface integration is reduced to two kinds of modified edge representation line integrations: along the edge and around an infinitesimally small contour at the inner stationary phase point; the former and the latter give the diffraction and reflection components, respectively. The accuracy of the latter has been discussed for curved surfaces and published. This paper focuses on the errors of the former and discusses their correction. It has been numerically observed that the modified edge representation works well for physical optics diffraction from flat and concave surfaces; errors appear especially for observers near the reflection shadow boundary when the frequency is low for convex scatterers. This paper gives the explicit expression of the higher-order correction for the modified edge representation.

  15. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    KAUST Repository

    Ajami, H.

    2014-12-12

    One of the main challenges in the application of coupled or integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth-to-water table (DTWT) distributions. One approach to reducing uncertainty in model initialization is to run the model recursively using either a single year or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of the spin-up procedure by using a combination of model simulations and an empirical DTWT function. The methodology is examined across two distinct catchments located in a temperate region of Denmark and a semi-arid region of Australia. Our results illustrate that the hybrid approach reduced the spin-up period required for an integrated groundwater–surface water–land surface model (ParFlow.CLM) by up to 50%. To generalize results to different climate and catchment conditions, we outline a methodology that is applicable to other coupled or integrated modeling frameworks when initialization from an equilibrium state is required.

  16. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    KAUST Repository

    Ajami, H.

    2014-06-26

    One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.
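
    The plain spin-up procedure that both notes seek to shorten can be sketched as a loop that replays one forcing year until the year-end state equilibrates; the exponential-relaxation "model" and the thresholds below are assumed toy values, not ParFlow.CLM behaviour.

```python
# Toy one-year model update: depth-to-water-table relaxes toward an
# (assumed) 3 m equilibrium under the repeated forcing year.
def one_year(dtwt):
    return dtwt + 0.5 * (3.0 - dtwt)

dtwt, years = 10.0, 0                  # initial guess far from equilibrium
while True:
    new = one_year(dtwt)
    years += 1
    if abs(new - dtwt) < 0.01:         # year-to-year change below criterion
        dtwt = new
        break
    dtwt = new
```

    The hybrid approach in the abstract replaces part of this loop with an empirical DTWT function, cutting the number of replayed years roughly in half.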

  17. Geostatistical integration and uncertainty in pollutant concentration surface under preferential sampling

    Directory of Open Access Journals (Sweden)

    Laura Grisotto

    2016-04-01

    In this paper the focus is on environmental statistics, with the aim of estimating the concentration surface and related uncertainty of an air pollutant. We used air quality data recorded by a network of monitoring stations within a Bayesian framework to overcome difficulties in accounting for prediction uncertainty and to integrate information provided by deterministic models based on emissions, meteorology and the chemico-physical characteristics of the atmosphere. Several authors have proposed such integration, but all the proposed approaches rely on the representativeness and completeness of existing air pollution monitoring networks. We considered the situation in which the spatial process of interest and the sampling locations are not independent. This is known in the literature as the preferential sampling problem, which, if ignored in the analysis, can bias geostatistical inferences. We developed a Bayesian geostatistical model to account for preferential sampling, with the main interest in statistical integration and uncertainty. We used PM10 data arising from the air quality network of the Environmental Protection Agency of Lombardy Region (Italy) and numerical outputs from the deterministic model. We specified an inhomogeneous Poisson process for the sampling location intensities and a shared spatial random component model for the dependence between the spatial location of monitors and the pollution surface. We found greater predicted standard deviation differences in areas not properly covered by the air quality network. In conclusion, in this context inferences on prediction uncertainty may be misleading when geostatistical modelling does not take preferential sampling into account.

  18. An algorithm for hyperspectral remote sensing of aerosols: 2. Information content analysis for aerosol parameters and principal components of surface spectra

    Science.gov (United States)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.

    2017-05-01

    This paper describes the second part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from future hyperspectral and geostationary satellite sensors such as Tropospheric Emissions: Monitoring of POllution (TEMPO). The information content in these hyperspectral measurements is analyzed for 6 principal components (PCs) of surface spectra and a total of 14 aerosol parameters that describe the columnar aerosol volume Vtotal, the fine-mode aerosol volume fraction, and the size distribution and wavelength-dependent index of refraction of both coarse- and fine-mode aerosols. Forward simulations of atmospheric radiative transfer are conducted for 5 surface types (green vegetation, bare soil, rangeland, concrete and a mixed surface case) and a wide range of aerosol mixtures. It is shown that the PCs of surface spectra in the atmospheric window channels can be derived from the top-of-the-atmosphere reflectance under conditions of low aerosol optical depth (AOD ≤ 0.2 at 550 nm), with a relative error of 1%. With degrees of freedom for signal analysis and the sequential forward selection method, common bands for different aerosol mixture types and surface types can be selected for aerosol retrieval. The first 20% of our selected bands accounts for more than 90% of the information content for aerosols, and only 4 PCs are needed to reconstruct surface reflectance. However, the information content in these common bands from each individual TEMPO observation is insufficient for the simultaneous retrieval of the surface's PC weight coefficients and multiple aerosol parameters (other than Vtotal). In contrast, with multiple observations of the same location from TEMPO on multiple consecutive days, 1-3 additional aerosol parameters could be retrieved. Consequently, a self-adjustable aerosol retrieval algorithm that accounts for surface types, AOD conditions, and multiple consecutive observations is recommended to derive
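
    The sequential forward selection step mentioned above is a generic greedy procedure and can be sketched as follows; the toy coverage-based score and band set are invented stand-ins for the paper's information-content (degrees of freedom for signal) measure.

```python
def sfs(candidates, score, k):
    """Greedy sequential forward selection: add the band that most
    improves the subset score, k times."""
    selected = []
    for _ in range(k):
        best = max((b for b in candidates if b not in selected),
                   key=lambda b: score(selected + [b]))
        selected.append(best)
    return selected

# Toy example: each band (nm) "covers" some aerosol parameters;
# the score is the number of distinct parameters covered.
coverage = {400: {"V", "fmf"}, 550: {"V"}, 670: {"reff"}, 860: {"fmf", "reff"}}
score = lambda bands: len(set().union(*(coverage[b] for b in bands)))
picked = sfs(list(coverage), score, 2)
```

    With a real information-content score, the same loop ranks hyperspectral channels so that a small leading fraction of bands captures most of the retrievable aerosol information.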

  19. Integration of structure-from-motion and symmetry during surface perception.

    Science.gov (United States)

    Treder, Matthias S; Meulenbroek, Ruud G J

    2010-04-14

    Sinusoidal motion of elements in a random-dot pattern can elicit a striking percept of a rotating volume, a phenomenon known as structure-from-motion (SFM). We demonstrate that if the dots defining the volume are 2D mirror-symmetric, novel 3D interpretations arise. In addition to the classical rotating cylinder, one can perceive mirror-symmetric, flexible surfaces bending along the path of movement. In three experiments, we measured the perceptual durations of the different interpretations in a voluntary control task. The results suggest that motion signals and symmetry signals are integrated during surface interpolation. Furthermore, the competition between the rotating cylinder percept and the symmetric surfaces percept is resolved at the level of surface perception rather than at the level of individual stimulus elements. Concluding, structure-from-motion is an interactive process that incorporates not only motion cues but also form cues. The neurofunctional implication of this is that surface interpolation is not fully completed in its designated neural "engine," MT/V5, but rather in a higher tier area such as LOC, which receives input from MT/V5 and which is also involved in symmetry detection.

  20. High-pressure coolant effect on the surface integrity of machining titanium alloy Ti-6Al-4V: a review

    Science.gov (United States)

    Liu, Wentao; Liu, Zhanqiang

    2018-03-01

    Machinability improvement of the titanium alloy Ti-6Al-4V is a challenging task in academic and industrial applications owing to its low thermal conductivity, low elastic modulus and high chemical affinity at high temperatures. Surface integrity is prominent in estimating the quality of machined Ti-6Al-4V components. The surface topography (surface defects and surface roughness) and the residual stress induced by machining Ti-6Al-4V occupy pivotal roles in the sustainability of Ti-6Al-4V components. High-pressure coolant (HPC) is a potential choice for meeting the requirements of the manufacture and application of Ti-6Al-4V. This paper reviews progress towards the improvement of Ti-6Al-4V surface integrity under HPC. Various studies of surface integrity characteristics have been reported. In particular, surface roughness, surface defects, residual stress and work hardening are investigated in order to evaluate machined surface quality. Several coolant parameters (including coolant type, coolant pressure and the injection position) deserve investigation to provide guidance for a satisfactory machined surface. The review also provides a clear roadmap for applications of HPC in machining Ti-6Al-4V. Experimental studies and analyses are reviewed to better understand surface integrity under the HPC machining process. A distinct discussion is presented regarding the limitations of, and prospects for, machining Ti-6Al-4V under HPC.

  1. An Efficient Vector-Raster Overlay Algorithm for High-Accuracy and High-Efficiency Surface Area Calculations of Irregularly Shaped Land Use Patches

    Directory of Open Access Journals (Sweden)

    Peng Xie

    2017-05-01

    The Earth’s surface is uneven, yet conventional area calculation methods compute the projected plane area without considering the actual undulation of the Earth’s surface, simplifying the Earth’s shape to a standard ellipsoid. However, the true surface area is important for investigating and evaluating land resources. In this study, the authors propose a new method based on an efficient vector-raster overlay algorithm (the VROA-based method) to calculate the surface areas of irregularly shaped land use patches. In this method, a surface area raster file is first generated from a raster-based digital elevation model (raster-based DEM). Then, a vector-raster overlay algorithm (VROA) is used that performs precise clipping of raster cells with the vector polygon boundary. Xiantao City, Luotian County, and the Shennongjia Forestry District, which are representative of a plain landform, a hilly topography, and a mountain landscape, respectively, were selected for the surface area calculation. Compared with a traditional method based on triangulated irregular networks (the TIN-based method), our method significantly reduces processing time. In addition, our method effectively improves accuracy compared with another traditional method based on raster-based DEMs (the raster-based method). Therefore, the method satisfies the requirements of large-scale engineering applications.
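
    The per-cell surface-area correction underlying such raster methods can be sketched with the common slope-correction formula A = dx·dy·sqrt(1 + p² + q²), where p and q are the DEM gradients; the tiny tilted-plane DEM below is a toy, and the vector-polygon clipping step (VROA itself) is not shown.

```python
import numpy as np

def surface_area(dem, dx, dy):
    """Approximate true surface area of a raster DEM patch: each cell's
    planimetric area is scaled by sec(slope) = sqrt(1 + p^2 + q^2)."""
    p, q = np.gradient(dem, dx, dy)
    return float(np.sum(np.sqrt(1.0 + p ** 2 + q ** 2)) * dx * dy)

dx = dy = 1.0
flat = np.zeros((4, 4))                                  # level terrain
tilted = np.repeat(np.arange(4.0)[:, None], 4, axis=1)   # uniform slope of 1

a_flat = surface_area(flat, dx, dy)    # equals the planimetric area
a_tilt = surface_area(tilted, dx, dy)  # larger by a factor of sqrt(2)
```

    A 45° uniform slope inflates the area by exactly sqrt(2), which is the kind of difference between projected and true area that motivates the paper.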

  2. Calibration of an integrated land surface process and radiobrightness (LSP/R) model during summertime

    Science.gov (United States)

    Judge, Jasmeet; England, Anthony W.; Metcalfe, John R.; McNichol, David; Goodison, Barry E.

    2008-01-01

    In this study, a soil-vegetation-atmosphere transfer (SVAT) model was linked with a microwave emission model to simulate microwave signatures for different terrain during summertime, when the energy and moisture fluxes at the land surface are strong. The integrated model, land surface process/radiobrightness (LSP/R), was forced with weather and initial conditions observed during a field experiment. It simulated the fluxes and brightness temperatures for bare soil and brome grass in the Northern Great Plains. The model estimates of soil temperature and moisture profiles and terrain brightness temperatures were compared with the observed values. Overall, the LSP model provides realistic estimates of soil moisture and temperature profiles for use with a microwave model. The maximum mean differences and standard deviations between the modeled and the observed temperatures (canopy and soil) were 2.6 K and 6.8 K, respectively; those for the volumetric soil moisture were 0.9% and 1.5%, respectively. Brightness temperatures at 19 GHz matched the observations well for bare soil when a rough surface model was incorporated, indicating that surface roughness reduces the dielectric sensitivity to soil moisture. The brightness temperatures of the brome grass matched the observations well, indicating that a simple emission model was sufficient to simulate accurate brightness temperatures for grass typical of that region and that surface roughness was not a significant issue for grass-covered soil at 19 GHz. Such integrated SVAT-microwave models allow for direct assimilation of microwave observations and can also be used to understand the sensitivity of microwave signatures to changes in weather forcings and soil conditions for different terrain types.

  3. Integration of CubeSat Systems with Europa Surface Exploration Missions

    Science.gov (United States)

    Erdoǧan, Enes; Inalhan, Gokhan; Kemal Üre, Nazım

    2016-07-01

    Recent studies show that there is a high probability that a liquid ocean exists under the thick icy surface of Jupiter's moon Europa. The findings also show that Europa has features similar to Earth's, such as geological activity. As a result of these studies, Europa is considered a promising candidate for habitability, and there are currently many missions targeting Europa at both the planning and execution levels. However, these missions usually involve extremely high budgets over extended periods of time. The objective of this talk is to argue that mission costs can be reduced significantly by integrating CubeSat systems within Europa exploration missions. In particular, we introduce an integrated CubeSat-micro probe system, which can be used for measuring the size and depth of the hypothetical liquid ocean under the icy surface of Europa. The system consists of an entry module that houses a CubeSat combined with driller measurement probes. The driller measurement probes deploy before the system hits the surface and penetrate the surface layers of Europa. Moreover, a micro laser probe could be used to examine the layers. This process enables investigation of the properties of the icy layer and the environment beneath the surface. Through examination of different scenarios and cost analysis of the components, we show that the proposed CubeSat system has significant potential to reduce the cost of the overall mission. Both the subsystem requirements and launch prices of CubeSats are dramatically cheaper than those of currently used satellites. In addition, multiple CubeSats may be used to cover a wider area in space, and they are expendable in the face of potential failures. In this talk we discuss both the mission design and cost reduction aspects.

  4. Automatic monitoring of ecosystem structure and functions using integrated low-cost near surface sensors

    Science.gov (United States)

    Kim, J.; Ryu, Y.; Jiang, C.; Hwang, Y.

    2016-12-01

    Near surface sensors are able to acquire more reliable and detailed information with higher temporal resolution than satellite observations. Conventional near surface sensors usually work individually, and thus they require considerable manpower from data collection through information extraction and sharing. Recent advances in the Internet of Things (IoT) provide unprecedented opportunities to integrate various low-cost sensors into an intelligent near surface observation system for monitoring ecosystem structure and functions. In this study, we developed a Smart Surface Sensing System (4S), which can automatically collect, transfer, process and analyze data, and then publish time series results on a publicly available website. The system is composed of the micro-computer Raspberry Pi, the micro-controller Arduino, multi-spectral spectrometers made from light emitting diodes (LEDs), visible and near infrared cameras, and an Internet module. All components are connected with each other, and the Raspberry Pi intelligently controls the automatic data production chain. We conducted intensive tests and calibrations in the lab, and then carried out in-situ observations at a rice paddy field and a deciduous broadleaf forest. During the whole growth season, 4S continuously obtained landscape images, spectral reflectance in red, green, blue, and near infrared, the normalized difference vegetation index (NDVI), the fraction of photosynthetically active radiation (fPAR), and the leaf area index (LAI). We also compared 4S data with other independent measurements. NDVI obtained from 4S agreed well with a Jaz hyperspectrometer at both diurnal and seasonal scales (R2 = 0.92, RMSE = 0.059), and 4S-derived fPAR and LAI were comparable to LAI-2200 and destructive measurements in both magnitude and seasonal trajectory. We believe that integrated low-cost near surface sensors could help the research community monitor ecosystem structure and functions more closely and easily through a network system.

  5. Rotary ultrasonic elliptical machining for side milling of CFRP: tool performance and surface integrity.

    Science.gov (United States)

    Geng, Daxi; Zhang, Deyuan; Xu, Yonggang; He, Fengtao; Liu, Dapeng; Duan, Zuoheng

    2015-05-01

    Rotary ultrasonic elliptical machining (RUEM) has been recognized as an effective new process for machining circular holes in CFRP materials. In CFRP face machining, the application of grinding tools is restricted by tool clogging and poor machined surface integrity. In this paper, we propose a novel approach that extends the RUEM process to side milling of CFRP for the first time, retaining the effect of elliptical vibration in RUEM. The experimental apparatus was developed, and preliminary experiments were designed and conducted, with comparison to conventional grinding (CG). The experimental results showed that when elliptical vibration was applied in RUEM, a superior cutting process was obtained compared with CG, with reduced cutting forces (a 2-43% decrement), extended tool life (1.98 times), and improved surface integrity due to the intermittent material removal mechanism and the excellent chip removal conditions achieved in RUEM. It was concluded that the RUEM process is suitable for milling flat surfaces on CFRP composites. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology

    Science.gov (United States)

    Allen, P. A.; Wells, D. N.

    2013-01-01

    No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
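
    The interpolation idea, evaluating tabulated solutions between grid points of the geometric and material parameters, can be illustrated with plain bilinear interpolation over two of the four variables; the table values below are invented placeholders, not the authors' J-integral solutions.

```python
import numpy as np

ac = np.array([0.2, 0.6, 1.0])            # crack shape a/c grid
aB = np.array([0.2, 0.5, 0.8])            # crack depth a/B grid
J = np.array([[1.0, 1.4, 2.0],            # J[i, j] tabulated at (ac[i], aB[j])
              [1.2, 1.7, 2.5],
              [1.5, 2.1, 3.0]])

def interp_J(x, y):
    """Bilinear interpolation of the tabulated J over (a/c, a/B)."""
    i = np.searchsorted(ac, x) - 1; i = min(max(i, 0), len(ac) - 2)
    j = np.searchsorted(aB, y) - 1; j = min(max(j, 0), len(aB) - 2)
    tx = (x - ac[i]) / (ac[i + 1] - ac[i])
    ty = (y - aB[j]) / (aB[j + 1] - aB[j])
    return ((1 - tx) * (1 - ty) * J[i, j] + tx * (1 - ty) * J[i + 1, j]
            + (1 - tx) * ty * J[i, j + 1] + tx * ty * J[i + 1, j + 1])
```

    The published methodology interpolates across all four variables (a/c, a/B, E/ys, n); the same scheme extends dimension by dimension.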

  7. Contributions to understanding the high speed machining effects on aeronautic part surface integrity

    Science.gov (United States)

    Jomaa, Walid

    To remain competitive, the aeronautic industry has increasing requirements for mechanical components and parts with high functional performance and longer in-service life. The improvement of the in-service life of components can be achieved by mastering and optimizing the surface integrity of the manufactured parts. Thus, the present study investigated, experimentally and theoretically, the effects of tool/work-material interactions on part surface integrity during the machining of aluminium alloys and hardened materials (low-alloy steels), using orthogonal machining test data. The studied materials are two aluminium alloys (6061-T6 and 7075-T651) and AISI 4340 steel. The AISI 4340 steel was machined after being induction heat treated to 58-60 HRC. These materials were selected in an attempt to provide a comprehensive study of the machining of metals with different behaviours (ductile and hard materials). The proposed approach is built on three steps. First, we proposed a design of experiments (DOE) to analyse, experimentally, the chip formation and the resulting surface integrity during high speed machining under dry conditions. The orthogonal cutting mode adopted in these experiments allowed us to explore, theoretically, the effects of technological (cutting speed and feed) and physical (cutting forces, temperature, shear angle, friction angle, and tool/chip contact length) parameters on the chip formation mechanisms and the machined surface characteristics (residual stress, plastic deformation, phase transformation, etc.). The cutting conditions were chosen while maintaining a central composite design (CCD) with two factors (cutting speed and feed per revolution). For the aluminium 7075-T651, the results showed that the formation of built-up edge (BUE) and the interaction between the tool edge and the iron-rich intermetallic particles are the main causes of machined surface damage. BUE formation increases with the cutting feed while the increase of the cutting speed

  8. Comparison of Satellite Reflectance Algorithms for Estimating Phycocyanin Values and Cyanobacterial Total Biovolume in a Temperate Reservoir Using Coincident Hyperspectral Aircraft Imagery and Dense Coincident Surface Observations

    Directory of Open Access Journals (Sweden)

    Richard Beck

    2017-05-01

    Full Text Available We analyzed 27 established and new simple and therefore perhaps portable satellite phycocyanin pigment reflectance algorithms for estimating cyanobacterial values in a temperate 8.9 km2 reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense coincident water surface observations collected from 44 sites within 1 h of image acquisition. The algorithms were adapted to real Compact Airborne Spectrographic Imager (CASI, synthetic WorldView-2, Sentinel-2, Landsat-8, MODIS and Sentinel-3/MERIS/OLCI imagery resulting in 184 variants and corresponding image products. Image products were compared to the cyanobacterial coincident surface observation measurements to identify groups of promising algorithms for operational algal bloom monitoring. Several of the algorithms were found useful for estimating phycocyanin values with each sensor type except MODIS in this small lake. In situ phycocyanin measurements correlated strongly (r2 = 0.757 with cyanobacterial sum of total biovolume (CSTB allowing us to estimate both phycocyanin values and CSTB for all of the satellites considered except MODIS in this situation.
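Band-ratio reflectance algorithms of the kind surveyed in this record reduce to simple per-pixel band arithmetic followed by a regression against in situ pigment measurements. A minimal sketch, where the band choice and linear fitting are illustrative assumptions and not one of the study's 27 algorithms:

```python
import numpy as np

def band_ratio_index(r_num, r_den):
    """Per-pixel two-band reflectance ratio used as a pigment proxy.
    Which bands to use is an illustrative assumption here."""
    return np.asarray(r_num, dtype=float) / np.asarray(r_den, dtype=float)

def fit_pigment_model(index, pc_obs):
    """Least-squares linear fit of the index against observed
    phycocyanin values; returns slope, intercept and r^2."""
    index = np.asarray(index, dtype=float)
    pc_obs = np.asarray(pc_obs, dtype=float)
    slope, intercept = np.polyfit(index, pc_obs, 1)
    pred = slope * index + intercept
    ss_res = np.sum((pc_obs - pred) ** 2)
    ss_tot = np.sum((pc_obs - pc_obs.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

The same fitting step applies to the phycocyanin-to-biovolume relationship reported above (r2 = 0.757).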

  9. DEMON-type algorithms for determination of hydro-acoustic signatures of surface ships and of divers

    Science.gov (United States)

    Slamnoiu, G.; Radu, O.; Rosca, V.; Pascu, C.; Damian, R.; Surdu, G.; Curca, E.; Radulescu, A.

    2016-08-01

    With the project “System for detection, localization, tracking and identification of risk factors for strategic importance in littoral areas”, developed in the National Programme II, the members of the research consortium intend to develop a functional model of a passive hydroacoustic subsystem for determining the acoustic signatures of targets such as fast boats and autonomous divers. This paper presents some of the results obtained in the area of hydroacoustic signal processing using DEMON-type algorithms (Detection of Envelope Modulation On Noise). To evaluate the performance of the various algorithm variants, we used both audio recordings of the underwater noise generated by ships and divers in real situations and simulated noises. We analysed the results of processing these signals using four DEMON algorithm structures from the reference literature and a fifth DEMON algorithm structure proposed by the authors of this paper. The proposed algorithm generates results similar to those of the traditional algorithms, but requires fewer computing resources and has proven more resilient to random noise.
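A DEMON processor band-limits the broadband propeller noise, detects its envelope, and takes the spectrum of that envelope to reveal the propeller modulation lines. A minimal numpy-only sketch under stated assumptions (the band edges, the square-law detector, and FFT-mask filtering are illustrative choices, not taken from the paper):

```python
import numpy as np

def demon_spectrum(x, fs, band=(2000.0, 8000.0)):
    """Minimal DEMON sketch: band-limit the signal with an FFT mask,
    apply square-law envelope detection, then take the spectrum of
    the demeaned envelope. Band edges are illustrative defaults."""
    n = len(x)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    x_bp = np.fft.irfft(spec * mask, n)   # band-passed carrier noise
    env = x_bp ** 2                        # square-law envelope detector
    env -= env.mean()                      # remove DC before the spectrum
    env_spec = np.abs(np.fft.rfft(env))
    return freqs, env_spec
```

The dominant low-frequency peak of the envelope spectrum then corresponds to the shaft or blade modulation rate.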

  10. Magnetic field integral equation analysis of surface plasmon scattering by rectangular dielectric channel discontinuities.

    Science.gov (United States)

    Chremmos, Ioannis

    2010-01-01

    The scattering of a surface plasmon polariton (SPP) by a rectangular dielectric channel discontinuity is analyzed through a rigorous magnetic field integral equation method. The scattering phenomenon is formulated by means of the magnetic-type scalar integral equation, which is subsequently treated through an entire-domain Galerkin method of moments (MoM), based on a Fourier-series plane wave expansion of the magnetic field inside the discontinuity. The use of Green's function Fourier transform allows all integrations over the area and along the boundary of the discontinuity to be performed analytically, resulting in a MoM matrix with entries that are expressed as spectral integrals of closed-form expressions. Complex analysis techniques, such as Cauchy's residue theorem and the saddle-point method, are applied to obtain the amplitudes of the transmitted and reflected SPP modes and the radiated field pattern. Through numerical results, we examine the wavelength selectivity of transmission and reflection against the channel dimensions as well as the sensitivity to changes in the refractive index of the discontinuity, which is useful for sensing applications.

  11. Effect of surface treatment on marginal integrity of amalgam restorations (in vitro study).

    Science.gov (United States)

    Kamel, F M

    1995-07-01

    A total of 80 freshly extracted human molars, free from caries, cracks and decalcification, were used in this study. Conservative Class I cavities were prepared in the occlusal surface. Two types of amalgam alloy were used: high copper (Dispersalloy) and conventional (Velvalloy). The prepared cavities were divided into 5 groups of 16 each: carve (C), carve and polish (CP), pre-carve burnish (BC), post-carve burnish (CB), and pre- and post-carve burnish (BCB). The specimens were thermally stressed using a stress fatigue device. The marginal integrity of the amalgam-enamel interface was evaluated using SEM for four marginal qualities: 1--excellent margin, 2--open margin, 3--enamel fracture, and 4--amalgam fracture. The results of this study revealed that the high copper amalgam demonstrated superior marginal integrity compared with the conventional one. The pre- and post-carve burnish group showed the highest percentage of excellent margins among the groups.

  12. Total luminous flux measurement for flexible surface sources with an integrating sphere photometer

    International Nuclear Information System (INIS)

    Yu, Hsueh-Ling; Liu, Wen-Chun

    2014-01-01

    Applying an integrating sphere photometer for total luminous flux measurement is a widely used method. However, the measurement accuracy depends on the spatial uniformity of the integrating sphere, especially when the test sample has a different light distribution from that of the standard source. Therefore, spatial correction is needed to eliminate the effect caused by non-uniformity. To reduce the inconvenience of spatial correction but retain the measurement accuracy, a new type of working standard is designed for flexible and curved surface sources. Applying this new type of standard source, the measurement deviation due to different orientations is reduced by an order of magnitude compared with using a naked incandescent lamp as the standard source. (paper)

  13. Real Time Metrics and Analysis of Integrated Arrival, Departure, and Surface Operations

    Science.gov (United States)

    Sharma, Shivanjli; Fergus, John

    2017-01-01

    To address the Integrated Arrival, Departure, and Surface (IADS) challenge, NASA is developing and demonstrating trajectory-based departure automation under a collaborative effort with the FAA and industry known as Airspace Technology Demonstration 2 (ATD-2). ATD-2 builds upon and integrates previous NASA research capabilities that include the Spot and Runway Departure Advisor (SARDA), the Precision Departure Release Capability (PDRC), and the Terminal Sequencing and Spacing (TSAS) capability. As trajectory-based departure scheduling and collaborative decision-making tools are introduced to reduce delays and uncertainties in taxi and climb operations across the National Airspace System, users of the tools across a number of roles benefit from a real-time system that enables common situational awareness. A real-time dashboard was developed to inform users and present them with notifications and integrated information regarding airport surface operations. The dashboard is a supplement to capabilities and tools that incorporate arrival, departure, and surface air-traffic operations concepts in a NextGen environment. In addition to shared situational awareness, the dashboard offers the ability to compute real-time metrics and analysis to inform users about the capacity, predictability, and efficiency of the system as a whole. This paper describes the architecture of the real-time dashboard as well as an initial proposed set of metrics. The potential impact of the real-time dashboard is studied at the site identified for initial deployment and demonstration in 2017: Charlotte-Douglas International Airport (CLT). The architecture of implementing such a tool as well as potential uses are presented for operations at CLT. Metrics computed in real time illustrate the opportunity to provide common situational awareness and inform users of system delay, throughput, taxi time, and airport capacity. In addition, common awareness of delays and the impact of takeoff and departure

  14. Softlithographic partial integration of surface-active nanoparticles in a PDMS matrix for microfluidic biodevices

    Energy Technology Data Exchange (ETDEWEB)

    Demming, Stefanie; Buettgenbach, Stephanus [Institute for Microtechnology (IMT), Technische Universitaet Braunschweig, Alte Salzdahlumer Strasse 203, 38124 Braunschweig (Germany); Hahn, Anne; Barcikowski, Stephan [Nanotechnology Department, Laser Zentrum Hannover e.V. (LZH), Hollerithallee 8, 30419 Hannover (Germany); Edlich, Astrid; Franco-Lara, Ezequiel; Krull, Rainer [Institute of Biochemical Engineering (IBVT), Technische Universitaet Braunschweig, Gaussstrasse 17, 38106 Braunschweig (Germany)

    2010-04-15

    The convergence of microfluidics and nanocomposite materials and their in situ structuring leads to a higher level of integration within microsystems technology. Nanoparticles (Cu and Ag) produced via laser radiation were suspended in poly(dimethylsiloxane) (PDMS) to permanently modify the surface material. A microstructuring process was implemented which allows the incorporation of these nanomaterials globally, or partially at defined locations, within a microbioreactor (MBR) for the determination of their antiseptic and toxic effects on the growth of biomass. Partially structured PDMS with nanoparticle-PDMS composite. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  15. Surface integrity evaluation of brass CW614N after impact of acoustically

    Czech Academy of Sciences Publication Activity Database

    Lehocká, D.; Klich, Jiří; Foldyna, Josef; Hloch, Sergej; Hvizdoš, P.; Fides, M.; Botko, F.; Cárach, J.

    2016-01-01

    Roč. 149, č. 149 (2016), s. 236-244 E-ISSN 1877-7058. [International Conference on Manufacturing Engineering and Materials, ICMEM 2016. Nový Smokovec, 06.06.2016-10.06.2016] R&D Projects: GA MŠk(CZ) LO1406; GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : pulsating water jet * surface integrity * mass material removal * brass * nanoindentation Subject RIV: JQ - Machines ; Tools http://www.sciencedirect.com/science/article/pii/S1877705816311705

  16. J-Integral Calculation by Finite Element Processing of Measured Full-Field Surface Displacements

    OpenAIRE

    Barhli, S. M.; Mostafavi, Mahmoud; Cinar, Ahmet; Hollis, David; Marrow, James

    2017-01-01

    © 2017 The Author(s). A novel method has been developed based on the conjoint use of digital image correlation (DIC) to measure full-field displacements and finite element simulations to extract the strain energy release rate of surface cracks. In this approach, a finite element model with imported full-field displacements measured by DIC is solved and the J-integral is calculated, without knowledge of the specimen geometry and applied loads. This can be done even in a specimen that develops crack ti...

  17. Surface plasmon polariton band gap structures: implications to integrated plasmonic circuits

    DEFF Research Database (Denmark)

    Bozhevolnyi, S. I.; Volkov, V. S.; Østergaard, John Erland

    2001-01-01

    Conventional photonic band gap (PBG) structures are composed of regions with periodic modulation of the refractive index that do not allow the propagation of electromagnetic waves in a certain interval of wavelengths, i.e., that exhibit the PBG effect. The PBG effect is essentially an interference phenomenon related to strong multiple scattering of light in periodic media. Interest in PBG structures has risen dramatically since the possibility of efficient waveguiding around a sharp corner of a line defect in a PBG structure was pointed out. Given the perspective of integrating various PBG-based components within a few hundred micrometers, we realized that other two-dimensional waves, e.g., surface plasmon polaritons (SPPs), might be employed for the same purpose. The SPP band gap (SPPBG) has been observed for textured silver surfaces by performing angular measurements

  18. Microstructural Analysis of Machined Surface Integrity in Drilling a Titanium Alloy

    Science.gov (United States)

    Varote, Nilesh; Joshi, Suhas S.

    2017-09-01

    Severe mechanical deformation coupled with high heat generation prevails during drilling. Establishing correlations between microstructure and surface integrity has always been a challenge, and is the main focus of this work. High-speed drilling experiments were performed by varying speed, feed rate and machining environment (dry and wet). The changes in microhardness, residual stresses and microstructure on the drilled surfaces were analyzed. Dominant mechanical deformation is found to lower the grain size and increase the grain boundary misorientation angle, whereas under dominant thermal deformation a higher grain size and lower grain boundary misorientation angle were evident. In dry drilling, under the combined effect of temperature and mechanical deformation, the deformed and then recrystallized grains are observed to have a preferred orientation. The drilling parameters that increase the strain rate enlarge the machining-affected zone, whereas heat accumulation increases the heat-affected zone, but only in dry drilling. An empirical model for predicting grain size has been developed.

  19. A Calderón multiplicative preconditioner for coupled surface-volume electric field integral equations

    KAUST Repository

    Bagci, Hakan

    2010-08-01

    A well-conditioned coupled set of surface (S) and volume (V) electric field integral equations (S-EFIE and V-EFIE) for analyzing wave interactions with densely discretized composite structures is presented. Whereas the V-EFIE operator is well-posed even when applied to densely discretized volumes, a classically formulated S-EFIE operator is ill-posed when applied to densely discretized surfaces. This renders the discretized coupled S-EFIE and V-EFIE system ill-conditioned, and its iterative solution inefficient or even impossible. The proposed scheme regularizes the coupled set of S-EFIE and V-EFIE using a Calderón multiplicative preconditioner (CMP)-based technique. The resulting scheme enables the efficient analysis of electromagnetic interactions with composite structures containing fine/subwavelength geometric features. Numerical examples demonstrate the efficiency of the proposed scheme. © 2006 IEEE.

  20. Prediction of Cancer Proteins by Integrating Protein Interaction, Domain Frequency, and Domain Interaction Data Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Chien-Hung Huang

    2015-01-01

    Full Text Available Many proteins are known to be associated with cancer. Quite often, their precise functional role in disease pathogenesis remains unclear. A strategy to gain a better understanding of the function of these proteins is to make use of a combination of different aspects of proteomics data types. In this study, we extended Aragues’s method by employing protein-protein interaction (PPI) data, domain-domain interaction (DDI) data, weighted domain frequency score (DFS), and cancer linker degree (CLD) data to predict cancer proteins. Performance was benchmarked based on three kinds of experiments: (I) using individual algorithms, (II) combining algorithms, and (III) combining the same classification types of algorithms. When compared with Aragues’s method, our proposed methods, that is, machine learning algorithms and voting with the majority, are significantly superior in all seven performance measures. We demonstrated the accuracy of the proposed method on two independent datasets. The best algorithm can achieve a hit ratio of 89.4% and 72.8% for the lung cancer dataset and the lung cancer microarray study, respectively. It is anticipated that the current research could help understand disease mechanisms and diagnosis.
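The "voting with the majority" combination scheme mentioned in this record can be sketched as follows. Classifier outputs are assumed here to be binary labels (1 for a predicted cancer protein), and resolving ties to 0 is an assumption of this sketch, not a detail from the paper:

```python
from collections import Counter

def majority_vote(votes):
    """Majority vote over binary labels (1 = predicted cancer protein).
    Ties resolve conservatively to 0 (an assumption of this sketch)."""
    counts = Counter(votes)
    return 1 if counts[1] > counts[0] else 0

def ensemble_predict(classifiers, sample):
    """Apply each base classifier to the sample and combine by majority."""
    return majority_vote([clf(sample) for clf in classifiers])
```

Any set of trained base classifiers (e.g. models built on PPI, DDI, and DFS features) can be passed in as callables.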

  1. Effect of time sequences in scanning algorithms on the surface temperature during corneal laser surgery with high-repetition-rate excimer laser.

    Science.gov (United States)

    Mrochen, Michael; Schelling, Urs; Wuellner, Christian; Donitzky, Christof

    2009-04-01

    To investigate the influence of temporal and spatial spot sequences on the increase in ocular surface temperature during corneal laser surgery with a high-repetition-rate excimer laser. Institute for Refractive and Ophthalmic Surgery, Zurich, Switzerland, and WaveLight AG, Erlangen, Germany. An argon-fluoride excimer laser system working at a repetition rate of 1050 Hz was used to photoablate bovine corneas with various myopic, hyperopic, and phototherapeutic ablation profiles. The temporal distribution of the ablation profiles was modified by 4 spot sequences: line, circumferential, random, and an optimized scan algorithm. The increase in ocular surface temperature was measured using an infrared camera. The maximum and mean ocular surface temperature increases depended primarily on the spatial and temporal distribution of the spots during photoablation and on the amount of refractive correction. The highest temperature increases occurred with the line and circumferential scan sequences. Significantly lower temperature increases were found with the optimized and random scan algorithms. High-repetition-rate excimer laser systems require spot sequences with optimized temporal and spatial spot distribution to minimize the increase in ocular surface temperature. An increase in ocular surface temperature will always occur, depending on the amount of refractive correction, the type of ablation profile, the radiant exposure, and the repetition rate of the laser system.
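The effect of spot ordering on heat accumulation can be illustrated with a toy model: each pulse deposits a Gaussian temperature bump that cools exponentially between pulses, so sequences that revisit neighboring locations in quick succession (line scanning) build higher peaks than sequences that scatter consecutive spots. All parameters below are illustrative assumptions, not values or the thermal model from the study:

```python
import numpy as np

def peak_temperature_rise(order, n_spots=100, pitch=0.05, rep_rate=1050.0,
                          tau=0.01, sigma=0.1, dT=1.0):
    """Toy model: ablation spots on a line; each pulse adds a Gaussian
    temperature bump (width sigma) that cools exponentially (time
    constant tau) between pulses. Returns the highest instantaneous
    temperature rise seen at any spot. All parameters illustrative."""
    pos = np.arange(n_spots) * pitch
    dt = 1.0 / rep_rate                      # time between pulses
    temp = np.zeros(n_spots)
    peak = 0.0
    for idx in order:
        temp *= np.exp(-dt / tau)            # cooling since last pulse
        temp += dT * np.exp(-((pos - pos[idx]) ** 2) / (2 * sigma ** 2))
        peak = max(peak, temp.max())
    return peak
```

With these defaults, a sequential (line) ordering accumulates a noticeably higher peak than a random permutation of the same spots, mirroring the trend reported above.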

  2. An integrated evaluation of land surface energy fluxes over China in seven reanalysis/modeling products

    Science.gov (United States)

    Li, Hongyu; Fu, Congbin; Guo, Weidong

    2017-08-01

    An integrated evaluation of monthly mean land surface energy fluxes over China in seven reanalysis and land model products during the period 1979-2015 is conducted. Observations from seven field sites are used to evaluate these flux products, which include four reanalysis data sets and three produced by off-line land surface models. In general, the expected seasonal variations and spatial patterns in the major climatic regimes are well reproduced by all reanalysis and modeling products. However, large differences among the four reanalysis products are found, while the three off-line land surface modeling products correlate well with each other. Looking at the Bowen ratio, it is found that the off-line land surface models convert a larger fraction of surface available energy into sensible heat flux compared with the reanalysis products in all climatic regimes. There are three centers of high interannual variability in sensible heat, located in West China, Northeast China, and eastern Inner Mongolia. In addition, the sensible heat flux agrees better with observations at grassland sites than at forest sites, while the latent heat flux and net radiation are significantly overestimated at forest sites in all the flux products. The mean square errors of the fluxes are decomposed into biases, correlations, and differences in standard deviation. Finally, based on a ranking system adopted to quantitatively evaluate the performance of each data set, it is found that the surface energy fluxes in ERA-Interim and JRA-25 agree well with observations, and the ensemble mean of all the products remains reasonably realistic as well.
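The decomposition of mean square error into bias, standard-deviation, and correlation terms mentioned in this record follows the standard identity MSE = (mean_m - mean_o)^2 + (s_m - s_o)^2 + 2 s_m s_o (1 - r). A minimal sketch, assuming population statistics (ddof = 0):

```python
import numpy as np

def mse_decomposition(model, obs):
    """Decompose mean square error into a squared bias, a squared
    standard-deviation difference, and a correlation term:
        MSE = (mean_m - mean_o)^2 + (s_m - s_o)^2 + 2*s_m*s_o*(1 - r)
    Population standard deviations (ddof=0) are assumed."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = model.mean() - obs.mean()
    s_m, s_o = model.std(), obs.std()
    r = np.corrcoef(model, obs)[0, 1]
    return {"bias_sq": bias ** 2,
            "std_diff_sq": (s_m - s_o) ** 2,
            "corr_term": 2.0 * s_m * s_o * (1.0 - r),
            "mse": np.mean((model - obs) ** 2)}
```

The three components sum exactly to the MSE, which makes the decomposition convenient for attributing model error to offset, variability, or phase mismatch.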

  3. Four chemical methods of porcelain conditioning and their influence over bond strength and surface integrity

    Science.gov (United States)

    Stella, João Paulo Fragomeni; Oliveira, Andrea Becker; Nojima, Lincoln Issamu; Marquezan, Mariana

    2015-01-01

    OBJECTIVE: To assess four different chemical surface conditioning methods for ceramic material before bracket bonding, and their impact on shear bond strength and surface integrity at debonding. METHODS: Four experimental groups (n = 13) were set up according to the ceramic conditioning method: G1 = 37% phosphoric acid etching followed by silane application; G2 = 37% liquid phosphoric acid etching, no rinsing, followed by silane application; G3 = 10% hydrofluoric acid etching alone; and G4 = 10% hydrofluoric acid etching followed by silane application. After surface conditioning, metal brackets were bonded to porcelain by means of the Transbond XP system (3M Unitek). Samples were submitted to shear bond strength tests in a universal testing machine and the surfaces were later assessed with a microscope under 8× magnification. ANOVA/Tukey tests were performed to establish the difference between groups (α = 5%). RESULTS: The highest shear bond strength values were found in groups G3 and G4 (22.01 ± 2.15 MPa and 22.83 ± 3.32 MPa, respectively), followed by G1 (16.42 ± 3.61 MPa) and G2 (9.29 ± 1.95 MPa). As regards surface evaluation after bracket debonding, the use of liquid phosphoric acid followed by silane application (G2) produced the least damage to the porcelain. When hydrofluoric acid and silane were applied, the risk of ceramic fracture increased. CONCLUSIONS: Acceptable levels of bond strength for clinical use were reached by all methods tested; however, liquid phosphoric acid etching followed by silane application (G2) resulted in the least damage to the ceramic surface. PMID:26352845

  4. Integrated Modeling of Groundwater and Surface Water Interactions in a Manmade Wetland

    Directory of Open Access Journals (Sweden)

    Guobiao Huang; Gour-Tsyh Yeh

    2012-01-01

    Full Text Available A manmade pilot wetland in south Florida, the Everglades Nutrient Removal (ENR) project, was modeled with a physics-based integrated approach using WASH123D (Yeh et al. 2006). Storm water is routed into the treatment wetland for phosphorus removal by plant and sediment uptake. It overlies a highly permeable surficial groundwater aquifer. Strong surface water and groundwater interactions are a key component of the hydrologic processes. The site has extensive field measurement and monitoring tools that provide point-scale and distributed data on surface water levels, groundwater levels, and the physical range of hydraulic parameters and hydrologic fluxes. Previous hydrologic and hydrodynamic modeling studies have treated seepage losses empirically with simple regression equations, and only surface water flows are modeled in detail. Several years of operational data are available and were used in model historical matching and validation. The validity of a diffusion wave approximation for two-dimensional overland flow (in the region with very flat topography) was also tested. This modeling study is notable for (1) the point-scale and distributed comparison of model results with observed data; (2) model parameters based on available field test data; and (3) water flows in the study area that include two-dimensional overland flow, hydraulic structures/levees, three-dimensional subsurface flow and one-dimensional canal flow and their interactions. This study demonstrates the need for and the utility of a physics-based modeling approach for strong surface water and groundwater interactions.

  5. Prolonged silicon carbide integrated circuit operation in Venus surface atmospheric conditions

    Directory of Open Access Journals (Sweden)

    Philip G. Neudeck

    2016-12-01

    Full Text Available The prolonged operation of semiconductor integrated circuits (ICs) needed for long-duration exploration of the surface of Venus has proven insurmountably challenging to date due to the ∼ 460 °C, ∼ 9.4 MPa caustic environment. Past and planned Venus landers have been limited to a few hours of surface operation, even when the IC electronics needed for basic lander operation are protected with heavily cumbersome pressure vessels and cooling measures. Here we demonstrate vastly longer (weeks) electrical operation of two silicon carbide (4H-SiC) junction field effect transistor (JFET) ring oscillator ICs tested with chips directly exposed (no cooling and no protective chip packaging) to a high-fidelity physical and chemical reproduction of Venus’ surface atmosphere. This represents a more than 100-fold extension of demonstrated Venus environment electronics durability. With further technology maturation, such SiC IC electronics could drastically improve Venus lander designs and mission concepts, fundamentally enabling long-duration enhanced missions to the surface of Venus.

  6. Full Coupling Between the Atmosphere, Surface, and Subsurface for Integrated Hydrologic Simulation

    Science.gov (United States)

    Davison, Jason Hamilton; Hwang, Hyoun-Tae; Sudicky, Edward A.; Mallia, Derek V.; Lin, John C.

    2018-01-01

    An ever increasing community of earth system modelers is incorporating new physical processes into numerical models. This trend is facilitated by advancements in computational resources, improvements in simulation skill, and the desire to build numerical simulators that represent the water cycle with greater fidelity. In this quest to develop a state-of-the-art water cycle model, we coupled HydroGeoSphere (HGS), a 3-D control-volume finite element surface and variably saturated subsurface flow model that includes evapotranspiration processes, to the Weather Research and Forecasting (WRF) Model, a 3-D finite difference nonhydrostatic mesoscale atmospheric model. The two-way coupled model, referred to as HGS-WRF, exchanges the actual evapotranspiration fluxes and soil saturations calculated by HGS to WRF; conversely, the potential evapotranspiration and precipitation fluxes from WRF are passed to HGS. The flexible HGS-WRF coupling method allows for unique meshes used by each model, while maintaining mass and energy conservation between the domains. Furthermore, the HGS-WRF coupling implements a subtime stepping algorithm to minimize computational expense. As a demonstration of HGS-WRF's capabilities, we applied it to the California Basin and found a strong connection between the depth to the groundwater table and the latent heat fluxes across the land surface.
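The subtime-stepping described in this record, where the faster model takes several small steps per outer step while fluxes are exchanged in both directions, can be sketched generically. All function names and the exchanged quantities below are placeholders, not the actual HGS-WRF interfaces:

```python
def coupled_step(atm_state, hyd_state, dt_atm, n_sub,
                 atm_step, hyd_step, atm_to_hyd, hyd_to_atm):
    """One two-way coupled step: the atmospheric model advances with
    dt_atm while the hydrologic model substeps with dt_atm / n_sub.
    Generic sketch of a subtime-stepping scheme; the step functions
    and exchange functions are user-supplied placeholders."""
    forcing = atm_to_hyd(atm_state)        # e.g. precipitation, potential ET
    dt_sub = dt_atm / n_sub
    for _ in range(n_sub):                 # hydrologic substeps
        hyd_state = hyd_step(hyd_state, forcing, dt_sub)
    feedback = hyd_to_atm(hyd_state)       # e.g. actual ET, soil saturation
    atm_state = atm_step(atm_state, feedback, dt_atm)
    return atm_state, hyd_state
```

Holding the atmospheric forcing fixed across the substeps is the usual simplification that keeps the exchange mass-conservative per outer step while reducing the number of expensive couplings.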

  7. Systematic Integration of Innovation in Process Improvement Projects Using the Enhanced Sigma-TRIZ Algorithm and Its Effective Use by Means of a Knowledge Management Software Platform

    Directory of Open Access Journals (Sweden)

    Mircea FULEA

    2009-01-01

    Full Text Available In an evolving, highly turbulent and uncertain socio-economic environment, organizations must consider strategies of systematic and continuous integration of innovation within their business systems, as a fundamental condition for sustainable development. Adequate methodologies are required in this respect. A mature framework for integrating innovative problem solving approaches within business process improvement methodologies is proposed in this paper. It considers a TRIZ-centred algorithm in the improvement phase of the DMAIC methodology. The new tool is called enhanced sigma-TRIZ. A case study reveals the practical application of the proposed methodology. The integration of enhanced sigma-TRIZ within a knowledge management software platform (KMSP is further described. Specific developments to support processes of knowledge creation, knowledge storage and retrieval, knowledge transfer and knowledge application in a friendly and effective way within the KMSP are also highlighted.

  8. NOAA JPSS Microwave Integrated Retrieval System (MIRS) Advanced Technology Microwave Sounder (ATMS) Precipitation and Surface Products from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains two-dimensional precipitation and surface products from the JPSS Microwave Integrated Retrieval System (MIRS) using sensor data from the...

  9. Machining the Integral Impeller and Blisk of Aero-Engines: A Review of Surface Finishing and Strengthening Technologies

    Science.gov (United States)

    Fu, Youzhi; Gao, Hang; Wang, Xuanping; Guo, Dongming

    2017-05-01

    The integral impeller and blisk of an aero-engine are high performance parts with complex structure and made of difficult-to-cut materials. The blade surfaces of the integral impeller and blisk are functional surfaces for power transmission, and their surface integrity has significant effects on the aerodynamic efficiency and service life of an aero-engine. Thus, it is indispensable to finish and strengthen the blades before use. This paper presents a comprehensive literature review of studies on finishing and strengthening technologies for the impeller and blisk of aero-engines. The review includes independent and integrated finishing and strengthening technologies and discusses advanced rotational abrasive flow machining with back-pressure used for finishing the integral impeller and blisk. A brief assessment of future research problems and directions is also presented.

  10. Self-consistent predictor/corrector algorithms for stable and efficient integration of the time-dependent Kohn-Sham equation

    Science.gov (United States)

    Zhu, Ying; Herbert, John M.

    2018-01-01

    The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
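A self-consistent modified-midpoint step of the kind described can be sketched on a toy two-level density matrix: propagate with the Hamiltonian built from the current density (predictor), rebuild the midpoint Hamiltonian from the averaged density, and iterate until it stops changing (corrector). The model Hamiltonian below is an illustrative stand-in for a Fock build, not the authors' implementation:

```python
import numpy as np

def propagate(P, H, dt):
    """Unitary step P(t+dt) = U P U^dagger with U = exp(-i H dt),
    built from the eigendecomposition of the Hermitian H."""
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T
    return U @ P @ U.conj().T

def hamiltonian(P, t):
    """Model time- and density-dependent two-level Hamiltonian
    (an illustrative stand-in for a Fock build)."""
    field = 0.1 * np.cos(0.5 * t)
    H0 = np.diag([0.0, 1.0]).astype(complex)
    V = field * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    # crude density dependence: diagonal shift by the populations
    return H0 + V + 0.05 * np.diag(np.real(np.diag(P)))

def scmm_step(P, t, dt, tol=1e-10, max_iter=50):
    """Self-consistent modified midpoint: iterate the midpoint
    Hamiltonian until the propagated density matrix converges."""
    H_mid = hamiltonian(P, t + 0.5 * dt)       # predictor from P(t)
    for _ in range(max_iter):
        P_new = propagate(P, H_mid, dt)
        P_mid = 0.5 * (P + P_new)              # midpoint density estimate
        H_next = hamiltonian(P_mid, t + 0.5 * dt)
        if np.linalg.norm(H_next - H_mid) < tol:
            break
        H_mid = H_next                         # corrector iteration
    return P_new
```

Because each step is exactly unitary, the trace and purity of the density matrix are preserved, which makes a convenient sanity check on the propagation.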

  11. Model-based surface soil moisture (SSM) retrieval algorithm using multi-temporal RISAT-1 C-band SAR data

    Science.gov (United States)

    Pandey, Dharmendra K.; Maity, Saroj; Bhattacharya, Bimal; Misra, Arundhati

    2016-05-01

    Accurate measurement of the surface soil moisture of bare and vegetation-covered soil over agricultural fields, and monitoring of changes in surface soil moisture, is vital for managing and mitigating risk to agricultural crops, which requires information to assess risk potential, implement risk-reduction strategies, and deliver essential responses. The empirical and semi-empirical model-based soil moisture inversion approaches developed in the past are either sensor- or region-specific, vegetation-type specific, or have a limited validity range, and have limited scope to explain physical scattering processes. Hence, there is a need for more robust, physical polarimetric radar backscatter model-based retrieval methods, which are sensor and location independent and have a wide range of validity over soil properties. In the present study, the Integral Equation Model (IEM) and a Vector Radiative Transfer (VRT) model were used to simulate averaged backscatter coefficients for various soil moisture (dry, moist and wet soil), soil roughness (smooth to very rough) and crop conditions (low to high vegetation water content) over selected regions of Gujarat state of India, and the results were compared with multi-temporal Radar Imaging Satellite-1 (RISAT-1) C-band Synthetic Aperture Radar (SAR) data in σ°HH and σ°HV polarizations, in sync with field-measured soil and crop conditions. High correlations were observed between RISAT-1 σ°HH and σ°HV and the model-simulated σ°HH and σ°HV based on field-measured soil, with the coefficient of determination R2 varying from 0.84 to 0.77 and RMSE varying from 0.94 dB to 2.1 dB for bare soil. For the winter wheat crop, R2 varied from 0.84 to 0.79 and RMSE from 0.87 dB to 1.34 dB, for vegetation water content values up to 3.4 kg/m2. Artificial Neural Network (ANN) methods were adopted for model-based soil moisture inversion. The training datasets for the NNs were

  12. A branch-and-price algorithm to solve the integrated berth allocation and yard assignment problem in bulk ports

    DEFF Research Database (Denmark)

    Robenek, Tomáš; Umang, Nitish; Bierlaire, Michel

    2014-01-01

    In this research, two crucial optimization problems of berth allocation and yard assignment in the context of bulk ports are studied. We discuss how these problems are interrelated and can be combined and solved as a single large-scale optimization problem. More importantly, we highlight the differences in operations between bulk ports and container terminals, which underlines the need to devise specific solutions for bulk ports. The objective is to minimize the total service time of vessels berthing at the port. We propose an exact solution algorithm based on a branch-and-price framework to solve ... -shaking neighborhood search is presented. The proposed algorithms are tested and validated through numerical experiments based on instances inspired from real bulk port data. The results indicate that the algorithms can be successfully used to solve instances containing up to 40 vessels within reasonable computational ...

  13. Optimal number and location of heaters in 2-D radiant enclosures composed of specular and diffuse surfaces using micro-genetic algorithm

    International Nuclear Information System (INIS)

    Safavinejad, A.; Mansouri, S.H.; Sakurai, A.; Maruyama, S.

    2009-01-01

    In this study, a combinatorial optimization methodology is presented for determining the optimal number and location of equally powered heaters over some parts of the boundary, called the heater surface, to satisfy the desired heat flux and temperature profiles over the design surface while keeping the total heater power constant but letting the number of heaters float. In a typical enclosure, candidate locations for placing the heaters are numerous. The optimal number and location could be found by checking all possible combinations of heater power ranges and locations on the heater surface, so the ability to check only a small portion of the total search space is highly desirable for finding an overall optimal solution. The micro-genetic algorithm is a candidate method which displays significant potential for this task. It is used to minimize an objective function expressed as the sum of squared errors between estimated and desired heat fluxes on the design surface. The radiation element method by ray emission model (REM²) is used to calculate the radiative heat flux on the design surface, enabling the effects of specular surfaces and blockage radiation due to enclosure geometry to be handled. The capabilities of this methodology are demonstrated by finding the optimal number and position of heaters in two irregular enclosures, and the effects of refractory surface characteristics (i.e., diffuse and/or specular) on the optimal solution are studied in detail. The results show that the refractory surface characteristics have profound effects on the optimal number and location of heaters
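The objective function the micro-genetic algorithm minimizes, and the micro-GA's defining restart-on-convergence behavior, can be sketched as follows. The flux-contribution matrix `F` is a random stand-in for the REM² view factors, and all sizes and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_design = 12, 8

# Made-up flux-contribution matrix: entry (j, i) is the flux a unit heater
# at candidate site i deposits on design-surface element j.
F = rng.uniform(0.0, 1.0, size=(n_design, n_sites))
true_layout = rng.random(n_sites) < 0.4       # hidden "ideal" heater layout
q_desired = F @ true_layout                   # desired flux profile

def objective(layout):
    """Sum of squared flux errors on the design surface (to be minimized)."""
    return float(np.sum((F @ layout - q_desired) ** 2))

def micro_ga(pop_size=5, generations=300):
    """Tiny-population GA that restarts on convergence (the micro-GA trait)."""
    best, best_err = None, np.inf
    pop = rng.random((pop_size, n_sites)) < 0.5
    for _ in range(generations):
        errs = np.array([objective(ind) for ind in pop])
        order = np.argsort(errs)
        if errs[order[0]] < best_err:
            best, best_err = pop[order[0]].copy(), float(errs[order[0]])
        elite = pop[order[0]].copy()
        if np.all(pop == elite):
            # Population has converged: restart with fresh random
            # individuals, keeping only the elite.
            pop = rng.random((pop_size, n_sites)) < 0.5
            pop[0] = elite
            continue
        # Uniform crossover of the elite with random mates; micro-GAs rely
        # on restarts rather than mutation for diversity.
        mates = pop[rng.integers(0, pop_size, pop_size)]
        mask = rng.random((pop_size, n_sites)) < 0.5
        pop = np.where(mask, elite, mates)
        pop[0] = elite
    return best, best_err

layout, err = micro_ga()
print("best on/off layout:", layout.astype(int), "flux error:", round(err, 4))
```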

  14. HESS Opinions "Integration of groundwater and surface water research: an interdisciplinary problem?"

    Science.gov (United States)

    Barthel, R.

    2014-07-01

    Today there is a great consensus that water resource research needs to become more holistic, integrating perspectives of a large variety of disciplines. Groundwater and surface water (hereafter: GW and SW) are typically identified as different compartments of the hydrological cycle and were traditionally often studied and managed separately. However, despite this separation, these respective fields of study are usually not considered to be different disciplines. They are often seen as different specializations of hydrology with a different focus yet similar theory, concepts, and methodology. The present article discusses how this notion may form a substantial obstacle in the further integration of GW and SW research and management. The article focuses on the regional scale (areas of approximately 10³ to 10⁶ km²), which is identified as the scale where integration is most needed, but ironically where the least amount of fully integrated research seems to be undertaken. The state of research on integrating GW and SW is briefly reviewed and the most essential differences between GW hydrology (or hydrogeology, geohydrology) and SW hydrology are presented. Groundwater recharge and baseflow are used as examples to illustrate different perspectives on similar phenomena that can cause severe misunderstandings and errors in the conceptualization of integration schemes. The fact that integration of GW and SW research on the regional scale necessarily must move beyond the hydrological aspects, by collaborating with the social sciences and increasing the interaction between science and society in general, is also discussed. The typical elements of an ideal interdisciplinary workflow are presented and their relevance with respect to the integration of GW and SW is discussed. The overall conclusions are that GW hydrology (hydrogeology) and SW hydrology study rather different objects of interest, using different types of observation, working on different problem settings

  15. High colored dissolved organic matter (CDOM) absorption in surface waters of the central-eastern Arctic Ocean: Implications for biogeochemistry and ocean color algorithms.

    Science.gov (United States)

    Gonçalves-Araujo, Rafael; Rabe, Benjamin; Peeken, Ilka; Bracher, Astrid

    2018-01-01

    As consequences of global warming, sea-ice shrinking, permafrost thawing and changes in freshwater and terrestrial material export have already been reported in the Arctic environment. These processes impact light penetration and primary production. To reach a better understanding of the current status and to provide accurate forecasts, Arctic biogeochemical and physical parameters need to be extensively monitored. In this sense, bio-optical properties are useful to measure because optical instrumentation can be deployed on autonomous platforms, including satellites. This study characterizes the non-water absorbers and their coupling to hydrographic conditions in the poorly sampled surface waters of the central and eastern Arctic Ocean. Over the entire sampled area colored dissolved organic matter (CDOM) dominates the light absorption in surface waters. The distribution of CDOM, phytoplankton and non-algal particle absorption reproduces the hydrographic variability in this region of the Arctic Ocean, which suggests a subdivision into five major bio-optical provinces: Laptev Sea Shelf, Laptev Sea, Central Arctic/Transpolar Drift, Beaufort Gyre and Eurasian/Nansen Basin. Evaluating ocean color algorithms commonly applied in the Arctic Ocean shows that global and regionally tuned empirical algorithms provide poor chlorophyll-a (Chl-a) estimates. The semi-analytical algorithms Generalized Inherent Optical Property model (GIOP) and Garver-Siegel-Maritorena (GSM), on the other hand, provide robust estimates of Chl-a and absorption of colored matter. Applying GSM with modifications proposed for the western Arctic Ocean produced reliable information on the absorption by colored matter, and specifically by CDOM. These findings highlight that only semi-analytical ocean color algorithms are able to identify with low uncertainty the distribution of the different optical water constituents in these high-CDOM-absorbing waters. In addition, a clustering of the Arctic Ocean

  16. Adaptive Sliding Mode Control Method Based on Nonlinear Integral Sliding Surface for Agricultural Vehicle Steering Control

    Directory of Open Access Journals (Sweden)

    Taochang Li

    2014-01-01

    Automatic steering control is the key factor and an essential condition for realizing automatic navigation control of agricultural vehicles. In order to obtain satisfactory steering control performance, an adaptive sliding mode control method based on a nonlinear integral sliding surface is proposed in this paper for agricultural vehicle steering control. First, the vehicle steering system is modeled as a second-order mathematical model; the system uncertainties and unmodeled dynamics, as well as the external disturbances, are regarded as equivalent disturbances satisfying a certain bound. Second, a transient process of the desired system response is constructed in each navigation control period. Based on the transient process, a nonlinear integral sliding surface is designed. The corresponding sliding mode control law is then proposed to guarantee fast response characteristics with no overshoot in the closed-loop steering control system. Meanwhile, the switching gain of the sliding mode control is adaptively adjusted to alleviate control input chattering by using the fuzzy control method. Finally, the effectiveness and the superiority of the proposed method are verified by a series of simulation and actual steering control experiments.
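The structure of such a control law (integral sliding surface, smoothed switching term, adaptive gain) can be illustrated on a generic second-order plant. All plant and controller constants below are assumed values, and a simple gain-growth rule stands in for the fuzzy adjustment:

```python
import math

# Assumed plant: theta'' = -a*theta' + b*u + d (generic second-order
# steering dynamics with a bounded disturbance d); constants illustrative.
a, b = 2.0, 5.0
lam, ki = 6.0, 9.0            # sliding-surface coefficients
dt, t_end = 0.001, 5.0
theta_ref = 0.3               # desired steering angle (rad)

theta = theta_dot = integ = 0.0
k = 1.0                       # switching gain, adapted online
t = 0.0
while t < t_end:
    d = 0.5 * math.sin(2.0 * t)              # bounded external disturbance
    e, e_dot = theta_ref - theta, -theta_dot
    integ += e * dt
    s = e_dot + lam * e + ki * integ         # integral sliding surface
    k = min(5.0, k + 2.0 * abs(s) * dt)      # grow gain while off the surface
    # Equivalent control plus a tanh-smoothed switching term (anti-chatter).
    u = (a * theta_dot + lam * e_dot + ki * e + k * math.tanh(s / 0.05)) / b
    theta_ddot = -a * theta_dot + b * u + d
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt
    t += dt

print(f"final steering angle: {theta:.3f} rad (target {theta_ref} rad)")
```

With this law the surface dynamics reduce to ds/dt = -k·tanh(s/0.05) - d, so the state is driven into a boundary layer around s = 0 and the integral term then removes the steady-state error.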

  17. Sulfatide Preserves Insulin Crystals Not by Being Integrated in the Lattice but by Stabilizing Their Surface

    Directory of Open Access Journals (Sweden)

    Karsten Buschard

    2016-01-01

    Background. Sulfatide is known to chaperone insulin crystallization within the pancreatic beta cell, but it is not known if this results from sulfatide being integrated inside the crystal structure or by binding the surface of the crystal. With this study, we aimed to characterize the molecular mechanisms underlying the integral role for sulfatide in stabilizing insulin crystals prior to exocytosis. Methods. We cocrystallized human insulin in the presence of sulfatide and solved the structure by molecular replacement. Results. The crystal structure of insulin crystallized in the presence of sulfatide does not reveal ordered occupancy representing sulfatide in the crystal lattice, suggesting that sulfatide does not permeate the crystal lattice but exerts its stabilizing effect by alternative interactions such as on the external surface of insulin crystals. Conclusions. Sulfatide is known to stabilize insulin crystals, and we demonstrate here that in beta cells sulfatide is likely coating insulin crystals. However, there is no evidence for sulfatide to be built into the crystal lattice.

  18. Sulfatide Preserves Insulin Crystals Not by Being Integrated in the Lattice but by Stabilizing Their Surface

    Science.gov (United States)

    Buschard, Karsten; Bracey, Austin W.; McElroy, Daniel L.; Magis, Andrew T.; Osterbye, Thomas; Atkinson, Mark A.; Bailey, Kate M.; Posgai, Amanda L.; Ostrov, David A.

    2016-01-01

    Background. Sulfatide is known to chaperone insulin crystallization within the pancreatic beta cell, but it is not known if this results from sulfatide being integrated inside the crystal structure or by binding the surface of the crystal. With this study, we aimed to characterize the molecular mechanisms underlying the integral role for sulfatide in stabilizing insulin crystals prior to exocytosis. Methods. We cocrystallized human insulin in the presence of sulfatide and solved the structure by molecular replacement. Results. The crystal structure of insulin crystallized in the presence of sulfatide does not reveal ordered occupancy representing sulfatide in the crystal lattice, suggesting that sulfatide does not permeate the crystal lattice but exerts its stabilizing effect by alternative interactions such as on the external surface of insulin crystals. Conclusions. Sulfatide is known to stabilize insulin crystals, and we demonstrate here that in beta cells sulfatide is likely coating insulin crystals. However, there is no evidence for sulfatide to be built into the crystal lattice. PMID:26981544

  19. FE Calculations of J-Integrals in a Constrained Elastomeric Disk with Crack Surface Pressure and Isothermal Load

    National Research Council Canada - National Science Library

    Ching, H. K; Liu, C. T; Yen, S. C

    2004-01-01

    .... For the linear analysis, material compressibility was modeled with Poisson's ratio varying from 0.48 to 0.4999. In addition, with the presence of the crack surface pressure, the J-integral was modified by including an additional line integral...

  20. Theory of differential and integral scattering of laser radiation by a dielectric surface taking a defect layer into account

    NARCIS (Netherlands)

    Azarova, VV; Dmitriev, VG; Lokhov, YN; Malitskii, KN

    The differential and integral light scattering by dielectric surfaces is studied theoretically, taking a thin near-surface defect layer into account. The expressions for the intensities of differential and total integral scattering are found by the Green function method. Conditions are found under

  1. A novel algorithm for delineating wetland depressions and mapping surface hydrologic flow pathways using LiDAR data

    Science.gov (United States)

    In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...

  2. Integrated assessment of climate change impact on surface runoff contamination by pesticides.

    Science.gov (United States)

    Gagnon, Patrick; Sheedy, Claudia; Rousseau, Alain N; Bourgeois, Gaétan; Chouinard, Gérald

    2016-07-01

    Pesticide transport by surface runoff depends on climate, agricultural practices, topography, soil characteristics, crop type, and pest phenology. To accurately assess the impact of climate change, these factors must be accounted for in a single framework by integrating their interaction and uncertainty. This article presents the development and application of a framework to assess the impact of climate change on pesticide transport by surface runoff in southern Québec (Canada) for the 1981-2040 period. The crop enemies investigated were: weeds for corn (Zea mays); and for apple orchard (Malus pumila), 3 insect pests (codling moth [Cydia pomonella], plum curculio [Conotrachelus nenuphar], and apple maggot [Rhagoletis pomonella]) and 2 diseases (apple scab [Venturia inaequalis] and fire blight [Erwinia amylovora]). A total of 23 climate simulations, 19 sites, and 11 active ingredients were considered. The relationship between climate and phenology was accounted for by bioclimatic models of the Computer Centre for Agricultural Pest Forecasting (CIPRA) software. Exported loads of pesticides were evaluated at the edge-of-field scale using the Pesticide Root Zone Model (PRZM), simulating both hydrology and chemical transport. A stochastic model was developed to account for PRZM parameter uncertainty. Results of this study indicate that for the 2011-2040 period, application dates would be advanced by 3 to 7 days on average with respect to the 1981-2010 period. However, the impact of climate change on maximum daily rainfall during the application window is not statistically significant, mainly due to the high variability of extreme rainfall events. Hence, for the studied sites and crop enemies considered, the impact of climate change on pesticide transport in surface runoff is not statistically significant throughout the 2011-2040 period. Integr Environ Assess Manag 2016;12:559-571. © Her Majesty the Queen in Right of Canada 2015; Published 2015 SETAC.
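The stochastic (Monte Carlo) treatment of parameter uncertainty can be sketched with a toy export relation; the runoff formula here is the textbook SCS curve-number equation, while the distributions, constants, and the sorption scaling are illustrative placeholders for the PRZM parameterization:

```python
import random
import statistics

random.seed(42)

def exported_load(rain_mm, curve_number, koc):
    """Hypothetical pesticide export (g/ha) for one runoff event:
    SCS curve-number runoff depth scaled down by a sorption factor."""
    storage = 25.4 * (1000.0 / curve_number - 10.0)   # retention S (mm)
    if rain_mm <= 0.2 * storage:
        runoff = 0.0
    else:
        runoff = (rain_mm - 0.2 * storage) ** 2 / (rain_mm + 0.8 * storage)
    return runoff * 100.0 / koc   # higher Koc (more sorption) -> less export

# Propagate uncertainty in two parameters through the model.
loads = []
for _ in range(5000):
    cn = random.gauss(78.0, 4.0)             # uncertain curve number
    koc = random.lognormvariate(4.6, 0.3)    # uncertain sorption coefficient
    loads.append(exported_load(40.0, cn, koc))

median = statistics.median(loads)
p90 = sorted(loads)[int(0.9 * len(loads))]
print(f"median load {median:.1f} g/ha, 90th percentile {p90:.1f} g/ha")
```

Summarizing the ensemble by quantiles rather than a single run is what allows the significance statements quoted in the abstract.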

  3. Identifying Key Issues and Potential Solutions for Integrated Arrival, Departure, Surface Operations by Surveying Stakeholder Preferences

    Science.gov (United States)

    Aponso, Bimal; Coppenbarger, Richard A.; Jung, Yoon; Quon, Leighton; Lohr, Gary; O’Connor, Neil; Engelland, Shawn

    2015-01-01

    NASA's Aeronautics Research Mission Directorate (ARMD) collaborates with the FAA and industry to provide concepts and technologies that enhance the transition to the next-generation air-traffic management system (NextGen). To facilitate this collaboration, ARMD has a series of Airspace Technology Demonstration (ATD) sub-projects that develop, demonstrate, and transition NASA technologies and concepts for implementation in the National Airspace System (NAS). The second of these sub-projects, ATD-2, is focused on the potential benefits to NAS stakeholders of integrated arrival, departure, and surface (IADS) operations. To determine the project objectives and assess the benefits of a potential solution, NASA surveyed NAS stakeholders to understand the existing issues in arrival, departure, and surface operations, and the perceived benefits of better integrating these operations. NASA surveyed a broad cross-section of stakeholders representing the airlines, airports, air-navigation service providers, and industry providers of NAS tools. The survey indicated that improving the predictability of flight times (schedules) could improve efficiency in arrival, departure, and surface operations. Stakeholders also mentioned the need for better strategic and tactical information on traffic constraints as well as better information sharing and a coupled collaborative planning process that allows stakeholders to coordinate IADS operations. To assess the impact of a potential solution, NASA sketched an initial departure scheduling concept and assessed its viability by surveying a select group of stakeholders for a second time. The objective of the departure scheduler was to enable flights to move continuously from gate to cruise with minimal interruption in a busy metroplex airspace environment using strategic and tactical scheduling enhanced by collaborative planning between airlines and service providers. The stakeholders agreed that this departure concept could improve schedule

  4. Calculating all local minima on liquidus surfaces using the FactSage software and databases and the Mesh Adaptive Direct Search algorithm

    International Nuclear Information System (INIS)

    Gheribi, Aimen E.; Robelin, Christian; Digabel, Sebastien Le; Audet, Charles; Pelton, Arthur D.

    2011-01-01

    Highlights: systematic search for low melting temperatures in multicomponent systems; calculation of eutectics in multicomponent systems; the FactSage software and the direct search algorithm are used simultaneously. Abstract: It is often of interest, for a multicomponent system, to identify the low melting compositions at which local minima of the liquidus surface occur. The experimental determination of these minima can be very time-consuming. An alternative is to employ the CALPHAD approach, using evaluated thermodynamic databases containing optimized model parameters that give the thermodynamic properties of all phases as functions of composition and temperature. Liquidus temperatures are then calculated by Gibbs free energy minimization algorithms which access the databases. Several such large databases for many multicomponent systems have been developed over the last 40 years, and calculated liquidus temperatures are generally quite accurate. In principle, one could then search for local liquidus minima by simply calculating liquidus temperatures over a compositional grid. In practice, such an approach is prohibitively time-consuming for all but the simplest systems, since the required number of grid points is extremely large. In the present article, the FactSage database computing system is coupled with the powerful Mesh Adaptive Direct Search (MADS) algorithm in order to search for and calculate automatically all liquidus minima in a multicomponent system. Sample calculations for a 4-component oxide system, a 7-component chloride system, and a 9-component ferrous alloy system are presented. It is shown that the algorithm is robust and rapid.
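The idea of coupling a direct search to an expensive liquidus evaluation can be miniaturized as follows: a simple compass search (a simplified relative of MADS, with mesh refinement on failure) is run from several starting "compositions", and the distinct local minima it converges to are collected. The two-welled toy function stands in for a liquidus surface with two eutectic-like minima:

```python
import itertools

def liquidus(x, y):
    """Toy two-welled surface standing in for a liquidus projection:
    local minima at (0.2, 0.3) -> 900 and (0.7, 0.6) -> 899."""
    return min(40.0 * ((x - 0.2) ** 2 + (y - 0.3) ** 2) + 900.0,
               40.0 * ((x - 0.7) ** 2 + (y - 0.6) ** 2) + 899.0)

def compass_search(x, y, step=0.1, tol=1e-4):
    """Greedy axis-direction search with mesh refinement on failure."""
    fx = liquidus(x, y)
    while step > tol:
        improved = False
        for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            f_new = liquidus(x + dx, y + dy)
            if f_new < fx:
                x, y, fx = x + dx, y + dy, f_new
                improved = True
        if not improved:
            step *= 0.5       # no descent direction found: refine the mesh
    rx, ry = round(x, 2), round(y, 2)
    return rx, ry, round(liquidus(rx, ry), 6)

# Multi-start: run the search from a grid of starting points and keep the
# distinct minima reached.
minima = {compass_search(x0, y0)
          for x0, y0 in itertools.product((0.1, 0.5, 0.9), repeat=2)}
print(sorted(minima))   # -> [(0.2, 0.3, 900.0), (0.7, 0.6, 899.0)]
```

In the paper, each call to the objective is replaced by a FactSage Gibbs-energy minimization, which is exactly why an economical direct search beats exhaustive gridding.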

  5. Sensitivity of Global Sea-Air CO2 Flux to Gas Transfer Algorithms, Climatological Wind Speeds, and Variability of Sea Surface Temperature and Salinity

    Science.gov (United States)

    McClain, Charles R.; Signorini, Sergio

    2002-01-01

    Sensitivity analyses of sea-air CO2 flux to gas transfer algorithms, climatological wind speeds, sea surface temperature (SST) and salinity (SSS) were conducted for the global oceans and selected regional domains. Large uncertainties in the global sea-air flux estimates are identified due to different gas transfer algorithms, global climatological wind speeds, and seasonal SST and SSS data. The global sea-air flux ranges from -0.57 to -2.27 Gt/yr, depending on the combination of gas transfer algorithms and global climatological wind speeds used. Different combinations of SST and SSS global fields resulted in changes as large as 35% in the global sea-air flux. An error as small as plus or minus 0.2 in SSS translates into a plus or minus 43% deviation in the mean global CO2 flux. This result emphasizes the need for highly accurate satellite SSS observations for the development of remote sensing sea-air flux algorithms.
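The sensitivity to the gas transfer algorithm is easy to illustrate numerically: two widely cited parameterizations of the transfer velocity k evaluated at the same wind speed differ by nearly a factor of two, and the sea-air flux scales linearly with k. The coefficients follow the commonly cited forms of Wanninkhof (1992) and Liss and Merlivat (1986); treat the exact numbers as illustrative:

```python
def k_wanninkhof92(u10):
    """Transfer velocity (cm/h) for short-term winds: 0.39 * U10^2."""
    return 0.39 * u10 ** 2

def k_liss_merlivat86(u10):
    """Piecewise-linear transfer velocity (cm/h) in three wind regimes."""
    if u10 <= 3.6:
        return 0.17 * u10
    if u10 <= 13.0:
        return 2.85 * u10 - 9.65
    return 5.9 * u10 - 49.3

u = 7.0  # m/s, a typical mid-latitude climatological wind speed
kw, klm = k_wanninkhof92(u), k_liss_merlivat86(u)
print(f"k(W92) = {kw:.1f} cm/h, k(LM86) = {klm:.1f} cm/h, "
      f"ratio = {kw / klm:.2f}")
```

Since flux = k · solubility · ΔpCO2, this ratio propagates directly into the global flux spread quoted above.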

  6. Upper Crustal Shear Structure of NE Wyoming Inverted by Regional Surface Waves From Mining Explosions-Comparison of Niching Genetic Algorithms and Least-Squares Inversion

    Science.gov (United States)

    Zhou, R.; Stump, B. W.

    2001-12-01

    Surface-wave dispersion analysis of regional seismograms from mining explosions is used to extract shallow subsurface structural models. Seismograms along a number of azimuths were recorded at near-regional distances from mining explosions in Northeast Wyoming. The group velocities of the fundamental-mode Rayleigh wave were determined using Multiple Filter Analysis (MFA) and refined with the Phase Matched Filtering (PMF) technique. The surface-wave dispersion curves covered the period range of 2 to 12 s, with group velocities ranging from 1.3 to 2.9 km/s. Besides least-squares inversion, a niching genetic algorithm (NGA) was introduced for crustal shear-wave velocity inversion. Niching methods are techniques designed to maintain diversity and promote the formation and maintenance of stable sub-populations in the traditional genetic algorithm. This methodology identifies multiple candidate solutions when applied to both multimodal optimization and classification problems. Considering the nonuniqueness of the inversion problem, the capacity of the NGA to retrieve classes of S-wave velocity structural profiles from the dispersion curves is explored. Synthetic tests illustrate the range of nonuniqueness in linear surface-wave inversion problems. Application of this new technique to regional surface-wave observations from the Powder River Basin provides classes of models, from which the one most consistent with geologic constraints can be chosen.
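The fitness-sharing mechanism behind such niching GAs can be shown in a few lines: each individual's raw fitness is divided by a "niche count", so copies crowded onto one optimum share credit while an isolated individual on another optimum keeps full fitness. The population and values below are purely illustrative:

```python
def niche_count(i, population, sigma_share=0.2, alpha=1.0):
    """Sum of triangular sharing-function values between individual i
    and every member of the population (including itself)."""
    total = 0.0
    for x in population:
        d = abs(population[i] - x)
        if d < sigma_share:
            total += 1.0 - (d / sigma_share) ** alpha
    return total

def shared_fitness(raw, population):
    """Raw fitness deflated by crowding: the core of fitness sharing."""
    return [raw[i] / niche_count(i, population) for i in range(len(raw))]

# Two equally fit optima; four individuals crowd the first, one sits alone
# on the second.
population = [0.30, 0.30, 0.30, 0.30, 0.80]
raw = [1.0, 1.0, 1.0, 1.0, 1.0]
shared = shared_fitness(raw, population)
print([round(f, 3) for f in shared])   # -> [0.25, 0.25, 0.25, 0.25, 1.0]
```

Because selection acts on the shared values, the lone individual is now strongly favored, which is what keeps sub-populations alive on several velocity-model solutions at once.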

  7. The influence of log soaking temperature on surface quality and integrity performance of birch (Betula pendula Roth) veneer

    Science.gov (United States)

    Anti Rohumaa; Toni Antikainen; Christopher G. Hunt; Charles R. Frihart; Mark Hughes

    2016-01-01

    Wood material surface properties play an important role in adhesive bond formation and performance. In the present study, a test method was developed to evaluate the integrity of the wood surface, and the results were used to understand bond performance. Materials used were rotary cut birch (Betula pendula Roth) veneers, produced from logs soaked at 20 or 70 °C prior...

  8. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    Science.gov (United States)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.

  9. NASA Research on an Integrated Concept for Airport Surface Operations Management

    Science.gov (United States)

    Gupta, Gautam

    2012-01-01

    Surface operations at airports in the US are based on tactical operations, where departure aircraft primarily queue up and wait at the departure runways. There have been attempts to address the resulting inefficiencies with both strategic and tactical tools for metering departure aircraft. This presentation gives an overview of Spot And Runway Departure Advisor with Collaborative Decision Making (SARDA-CDM): an integrated strategic and tactical system for improving surface operations by metering departure aircraft. SARDA-CDM is the augmentation of ground and local controller advisories through sharing of flight movement and related operations information between airport operators, flight operators and air traffic control at the airport. The goal is to enhance the efficiency of airport surface operations by exchanging information between air traffic control and airline operators, while minimizing adverse effects on stakeholders and passengers. The presentation motivates the need for departure metering and provides a brief background on the previous work on SARDA. The concept of operations for SARDA-CDM is then described, followed by preliminary results from testing the concept in a real-time automated simulation environment. Results indicate benefits such as reduction in taxiing delay and fuel consumption. Further, the preliminary implementation of SARDA-CDM appears robust to a two-minute delay in gate push-back times.

  10. The integration of surface electromyography in the clinical decision making process: a case report

    Science.gov (United States)

    Nicholson, W Reg

    1998-01-01

    Objective: To demonstrate how the findings of surface electromyography (S.E.M.G.) were integrated into the clinical decision-making process. Clinical Features: This is a retrospective review of the file of a 27-year-old male suffering from mechanical low back pain. He was evaluated on 3 separate occasions over a 3 year period. History, radiography, functional outcome studies, visual-numerical pain score, pain drawing, physical examination and surface electromyography were utilized in evaluating this patient. Intervention and Outcome: The two clinical interventions of spinal manipulative therapy (S.M.T.) had positive results in that the patient achieved an asymptomatic state and returned to his position of employment. The S.E.M.G. data collected during the industrial assessment, did not provide the outcome that the patient had anticipated. Conclusion: Surface electromyography is a useful clinical tool in the author’s decision-making process for the treatment of mechanical lower back pain. Therapeutic intervention by S.M.T., therapeutic exercises and rating risk factors were influenced by the S.E.M.G. findings.

  11. An Integrated Optimal Energy Management/Gear-Shifting Strategy for an Electric Continuously Variable Transmission Hybrid Powertrain Using Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Syuan-Yi Chen

    2016-01-01

    This study developed an integrated energy management/gear-shifting strategy using a bacterial foraging algorithm (BFA) in an engine/motor hybrid powertrain with electric continuously variable transmission. A control-oriented vehicle model was constructed on the Matlab/Simulink platform for further integration with the developed control strategies. A baseline control strategy with four modes was developed for comparison with the proposed BFA. The BFA was used with five bacterial populations to search for the optimal gear ratio and power-split ratio minimizing the cost, the equivalent fuel consumption. Three main procedures were followed: chemotaxis, reproduction, and elimination-dispersal. After the vehicle model was integrated with the vehicle control unit running the BFA, two driving patterns, the New European Driving Cycle and the Federal Test Procedure, were used to evaluate the energy consumption improvement and equivalent fuel consumption relative to the baseline. The results show improvements of 18.35-21.77% and 8.76-13.81% for the optimal energy management and the integrated optimization on the first and second driving cycles, respectively. Real-time platform designs and vehicle integration for a dynamometer test will be investigated in the future.
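The three BFA procedures named in the abstract can be sketched on a toy cost function standing in for equivalent fuel consumption; population size, step length, and loop counts are illustrative, not the paper's settings:

```python
import random

random.seed(7)

def cost(x):
    """Toy 2-D cost standing in for equivalent fuel consumption."""
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def tumble(x, step=0.1):
    """Move a fixed step along a random unit direction."""
    d = [random.uniform(-1.0, 1.0) for _ in x]
    norm = sum(v * v for v in d) ** 0.5
    return [xi + step * di / norm for xi, di in zip(x, d)]

def bfa(pop_size=10, n_ed=2, n_re=4, n_ch=20, p_ed=0.2):
    bacteria = [[random.uniform(-3, 3), random.uniform(-3, 3)]
                for _ in range(pop_size)]
    best = min(bacteria, key=cost)[:]
    for _ in range(n_ed):                     # elimination-dispersal events
        for _ in range(n_re):                 # reproduction loops
            for _ in range(n_ch):             # chemotaxis: tumble, keep if better
                for i, b in enumerate(bacteria):
                    cand = tumble(b)
                    if cost(cand) < cost(b):
                        bacteria[i] = cand
            bacteria.sort(key=cost)           # reproduction: best half splits
            if cost(bacteria[0]) < cost(best):
                best = bacteria[0][:]
            half = bacteria[: pop_size // 2]
            bacteria = [b[:] for b in half] + [b[:] for b in half]
        for i in range(pop_size):             # dispersal: random relocation
            if random.random() < p_ed:
                bacteria[i] = [random.uniform(-3, 3), random.uniform(-3, 3)]
    return best

best = bfa()
print("best point:", [round(v, 2) for v in best],
      "cost:", round(cost(best), 4))
```

In the study the decision variables are the gear ratio and power-split ratio and the cost is evaluated by the vehicle model, but the chemotaxis/reproduction/dispersal skeleton is the same.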

  12. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  13. Assessment of a chair-side argon-based non-thermal plasma treatment on the surface characteristics and integration of dental implants with textured surfaces.

    Science.gov (United States)

    Teixeira, Hellen S; Marin, Charles; Witek, Lukasz; Freitas, Amilcar; Silva, Nelson R F; Lilin, Thomas; Tovar, Nick; Janal, Malvin N; Coelho, Paulo G

    2012-05-01

    The biomechanical effects of a non-thermal plasma (NTP) treatment, suitable for use in a dental office, on the surface character and integration of a textured dental implant surface in a beagle dog model were evaluated. The experiment compared a control treatment, which presented an alumina-blasted/acid-etched (AB/AE) surface, to two experimental treatments, in which the same AB/AE surface also received NTP treatment for a period of 20 or 60 s per implant quadrant (PLASMA 20' and PLASMA 60' groups, respectively). The surface of each specimen was characterized by electron microscopy and optical interferometry, and surface energy and surface chemistry were determined prior to and after plasma treatment. Two implants of each type were then placed at six bilateral locations in 6 dogs and allowed to heal for 2 or 4 weeks. Following sacrifice, removal torque was evaluated as a function of animal, implant surface and time in vivo in a mixed model ANOVA. Compared to the CONTROL group, the PLASMA 20' and 60' groups presented substantially higher surface energy levels, lower amounts of adsorbed C species and significantly higher torque levels (p=.001). Results indicated that the NTP treatment increased the surface energy and the biomechanical fixation of textured-surface dental implants at early times in vivo. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Dedicated algorithm and software for the integrated analysis of AC and DC electrical outputs of piezoelectric vibration energy harvesters

    International Nuclear Information System (INIS)

    Kim, Jae Eum

    2014-01-01

    DC electrical outputs of a piezoelectric vibration energy harvester by nonlinear rectifying circuitry can hardly be obtained either by any mathematical models developed so far or by finite element analysis. To address the issue, this work used an equivalent electrical circuit model and newly developed an algorithm to efficiently identify relevant circuit parameters of arbitrarily-shaped cantilevered piezoelectric energy harvesters. The developed algorithm was then realized as a dedicated software module by adopting ANSYS finite element analysis software for the parameters identification and the Tcl/Tk programming language for a graphical user interface and linkage with ANSYS. For verifications, various AC electrical outputs by the developed software were compared with those by traditional finite element analysis. DC electrical outputs through rectifying circuitry were also examined for varying values of the smoothing capacitance and load resistance.

  15. Surface integrity and part accuracy in reaming and tapping stainless steel with new vegetable based cutting oils

    DEFF Research Database (Denmark)

    Belluco, Walter; De Chiffre, Leonardo

    2002-01-01

    This paper presents an investigation on the effect of new formulations of vegetable oils on surface integrity and part accuracy in reaming and tapping operations with AISI 316L stainless steel. Surface integrity was assessed with measurements of roughness and microhardness, and using metallographic techniques, while part accuracy was measured on a coordinate measuring machine. A widely diffused commercial mineral oil was used as reference for all measurements. Cutting fluid was found to have a significant effect on surface integrity and the thickness of the strain-hardened layer in the sub-surface, as well as on part accuracy. Cutting fluids based on vegetable oils showed comparable or better performance than mineral oils. © 2002 Published by Elsevier Science Ltd.

  16. Knowledge extraction algorithm for variances handling of CP using integrated hybrid genetic double multi-group cooperative PSO and DPSO.

    Science.gov (United States)

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2012-04-01

    Although a clinical pathway (CP) predefines a predictable, standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization (PSO) algorithm and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationship between key parameters and the variance-handling measures of a CP. The extracted rules can then provide abnormal-variance warnings for medical professionals. Three numerical experiments, on the UCI Iris and Wisconsin breast cancer data sets and on a CP variances data set of osteosarcoma preoperative chemotherapy, are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains higher prediction accuracy, shorter computing time and greater stability, and its rules are more easily comprehended by users; it is thus an effective knowledge extraction tool for CP variance handling.
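    The continuous-parameter half of such a hybrid scheme rests on standard particle swarm optimization. A minimal generic PSO (not the HGDMCPSO/DPSO variant itself, whose grouping and genetic operators are not detailed in the abstract) might look like:

```python
import random

def pso(fitness, dim, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `fitness` over a box; returns (best position, best value)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:                # update personal/global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)  # sphere function
```

    In a rule-extraction setting, `fitness` would instead score a candidate rule's prediction accuracy on the CP variance data, with the discrete PSO handling the rule's categorical parts.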

  17. Atomistic modeling of metal surfaces under electric fields: direct coupling of electric fields to a molecular dynamics algorithm

    CERN Document Server

    Djurabekova, Flyura; Pohjonen, Aarne; Nordlund, Kai

    2011-01-01

    The effect of electric fields on metal surfaces is fairly well studied, and numerous analytical models have been developed to understand the mechanisms of ionization of surface atoms observed at very high electric fields, as well as the general behavior of a metal surface under these conditions. However, the derivation of these analytical models does not explicitly include the structural properties of the metal, missing the link between the instantaneous effects of the applied field and the consequent response observed in the metal surface over an extended application of an electric field. In the present work, we have developed a concurrent electrodynamic–molecular dynamics model for the dynamical simulation of electric-field effects and the subsequent modification of a metal surface, in the framework of an atomistic molecular dynamics (MD) approach. The partial charge induced on the surface atoms by the electric field is assessed by applying the classical Gauss law. The electric forces acting on the partially...
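    The Gauss-law charge assignment mentioned above reduces to a short calculation: the induced surface charge density is σ = ε0·E, each surface atom carries q = σ·A for its exposed area A, and the outward pull per atom follows from the field acting on that charge sheet. The numbers below are illustrative assumptions, not the paper's parameters.

```python
# Gauss-law partial-charge sketch for one surface atom (illustrative values).
EPS0 = 8.854e-12      # F/m, vacuum permittivity
E = 1.0e10            # V/m, applied field, order of vacuum-breakdown studies
a_atom = 6.25e-20     # m^2, exposed area per surface atom (~2.5 A x 2.5 A)

sigma = EPS0 * E      # induced surface charge density (Gauss law)
q = sigma * a_atom    # partial charge assigned to the atom
f_atom = q * E / 2.0  # force on the atom: the sheet feels the average field E/2

print(f"q = {q:.2e} C, force per atom = {f_atom:.2e} N")
```

    In the MD loop this force would simply be added to the interatomic forces on each charged surface atom at every time step.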

  18. Development of an integrated surface stimulation device for systematic evaluation of wound electrotherapy.

    Science.gov (United States)

    Howe, D S; Dunning, J; Zorman, C; Garverick, S L; Bogie, K M

    2015-02-01

    Ideally, all chronic wounds would be prevented, as they can become life-threatening complications. The concept that a wound produces a 'current of injury', due to the discontinuity in the electrical field of intact skin, provides the basis for the idea that electrical stimulation (ES) may be an effective treatment for chronic wounds. The optimal stimulation waveform parameters are unknown, limiting the reliability of achieving a successful clinical therapeutic outcome. To gain a more thorough understanding of ES for chronic wound therapy, systematic evaluation using a valid in vivo model is required. The focus of the current paper is the development of the flexible modular surface stimulation (MSS) device by our group. This device can be programmed to deliver a variety of clinically relevant stimulation paradigms and is essential to facilitate systematic in vivo studies. The MSS version 2.0 for small-animal use provides all components of a single-channel, programmable, current-controlled ES system within a lightweight, flexible, independently powered portable device. Benchtop testing and validation indicate that the custom electronics and control algorithms support the generation of high-voltage, low-duty-cycle current pulses in a power-efficient manner, extending battery life and allowing ES therapy to be delivered for up to 7 days without needing to replace or disturb the wound dressing.
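    The benefit of low-duty-cycle pulsing for battery life follows from simple bookkeeping: the average current drawn from the supply scales with the duty cycle. A sketch with assumed pulse parameters (not the MSS 2.0 specification):

```python
# Pulse-parameter bookkeeping for a current-controlled stimulator (assumed values).
amplitude_ma = 9.0       # pulse amplitude, mA
pulse_width_us = 140.0   # pulse width, microseconds
rate_hz = 100.0          # pulse repetition rate, Hz

charge_per_pulse_uc = amplitude_ma * pulse_width_us / 1000.0  # uC per pulse
duty_cycle = pulse_width_us * 1e-6 * rate_hz                  # on-time fraction
avg_current_ma = amplitude_ma * duty_cycle                    # mean supply load

print(f"{charge_per_pulse_uc:.2f} uC/pulse, duty {duty_cycle:.1%}, "
      f"average current {avg_current_ma:.3f} mA")
```

    With a duty cycle around 1%, a 9 mA pulse train loads the battery like a steady current of roughly a tenth of a milliamp, which is what makes multi-day unattended operation plausible.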

  19. Integrated-Optics Components Utilizing Long-Range Surface Plasmon Polaritons

    DEFF Research Database (Denmark)

    Boltasseva, Alexandra

    2004-01-01

    This thesis describes a new class of components for integrated optics, based on the propagation of long-range surface plasmon polaritons (LR-SPPs) along metal stripes embedded in a dielectric. These novel components can provide guiding of light as well as coupling and splitting from/into a number ... fabricated and optically characterized. At 1570 nm, coupling lengths of 1.9 and 0.8 mm are found for directional couplers with waveguides separated by 4 and 0 µm, respectively. LR-SPP-based waveguides and waveguide components are modeled using the effective-refractive-index method, and a good agreement with experimental results is obtained. The interaction of LR-SPPs with photonic crystals (PCs) is also studied. The PC structures are formed by periodic arrays of gold bumps that are arranged in a triangular lattice and placed symmetrically on both sides of a thin gold film. The LR-SPP transmission through ...
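    In the effective-refractive-index picture used above, the coupling length of a directional coupler follows from the index splitting of the even and odd supermodes, Lc = λ / (2·Δneff). The index values below are assumptions, chosen only so that Lc lands near the ~1.9 mm reported at 1570 nm:

```python
# Directional-coupler coupling length from supermode effective indices.
wavelength_nm = 1570.0
n_even = 1.44510   # assumed effective index of the symmetric supermode
n_odd = 1.44469    # assumed effective index of the antisymmetric supermode

# Lc = lambda / (2 * delta_n_eff); nm -> mm conversion via 1e-6
Lc_mm = wavelength_nm * 1e-6 / (2.0 * (n_even - n_odd))

print(f"coupling length: {Lc_mm:.2f} mm")
```

    The trend in the abstract follows directly: bringing the waveguides closer (0 µm vs. 4 µm separation) increases the supermode splitting Δneff, which shortens Lc from 1.9 mm toward 0.8 mm.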

  20. Integration of thin film giant magnetoimpedance sensor and surface acoustic wave transponder

    KAUST Repository

    Li, Bodong

    2012-03-09

    Passive and remote sensing technology has many potential applications in implantable devices, automation and structural monitoring. In this paper, a tri-layer thin film giant magnetoimpedance (GMI) sensor with a maximum sensitivity of 16%/Oe and a GMI ratio of 44% was combined with a two-port surface acoustic wave (SAW) transponder on a common substrate using standard microfabrication technology, resulting in a fully integrated sensor for passive and remote operation. The implementation of the two devices was optimized by on-chip matching circuits. The measurement results clearly show a magnetic-field response at the input port of the SAW transponder that reflects the impedance change of the GMI sensor.
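    The two figures of merit quoted above are conventionally defined against the impedance at the maximum applied field: GMI ratio = 100·[Zmax − Z(Hmax)] / Z(Hmax), and sensitivity as the steepest impedance change per unit field. The impedance sweep below is invented purely for illustration (its numbers are chosen to land on the quoted 44% and 16%/Oe, they are not the paper's data):

```python
# GMI figures of merit from an assumed impedance-vs-field sweep.
fields_oe = [0.0, 1.0, 2.0, 4.0, 8.0, 16.0]     # applied field, Oe
z_ohm = [36.0, 32.0, 29.0, 27.0, 26.0, 25.0]    # impedance magnitude, ohm

z_max, z_ref = max(z_ohm), z_ohm[-1]            # reference: Z at maximum field
gmi_ratio = 100.0 * (z_max - z_ref) / z_ref     # GMI ratio, percent

# Sensitivity: largest normalized impedance change per unit field, %/Oe
sens = max(
    abs(100.0 * (z2 - z1) / z_ref) / (h2 - h1)
    for (h1, z1), (h2, z2) in zip(zip(fields_oe, z_ohm),
                                  zip(fields_oe[1:], z_ohm[1:]))
)

print(f"GMI ratio = {gmi_ratio:.0f}%, sensitivity = {sens:.0f}%/Oe")
```

    In the integrated device, this impedance change loads the SAW transponder's output port, so the magnetic field is read out remotely as a change in the reflected SAW response.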