Nikolic, Dejan; Stojkovic, Nikola; Lekic, Nikola
2018-04-09
Obtaining a complete operational picture of the maritime situation in the Exclusive Economic Zone (EEZ), which lies over the horizon (OTH), requires the integration of data obtained from various sensors. These sensors include high-frequency surface-wave radar (HFSWR), the satellite automatic identification system (SAIS) and the land automatic identification system (LAIS). The algorithm proposed in this paper utilizes radar tracks obtained from a network of HFSWRs, already processed by a multi-target tracking algorithm, and associates SAIS and LAIS data with the corresponding radar tracks, thus forming integrated data pairs. During the integration process, all HFSWR targets in the vicinity of an AIS report are evaluated, and the one with the highest matching factor is used for data association. Conversely, if there are multiple AIS reports in the vicinity of a single HFSWR track, the algorithm still forms only one data pair, consisting of the AIS and HFSWR data with the highest mutual matching factor. During design and testing, special attention is given to the latency of AIS data, which can be very high in the EEZs of developing countries. The algorithm was designed, implemented and tested in a real working environment. The testing environment is located in the Gulf of Guinea and includes a network of two HFSWRs, several coastal sites with LAIS receivers, and SAIS data supplied by a commercial provider.
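The one-to-one association logic described in this abstract can be sketched as a greedy mutual-best pairing. The `matching_factor` below is a placeholder based on position distance only; the paper's actual factor (which also accounts for course, speed, and AIS latency) is not specified here, so all names and thresholds are illustrative assumptions.

```python
import math

def matching_factor(radar_track, ais_msg, max_dist_km=10.0):
    """Toy matching factor in [0, 1]: closer positions score higher.
    A real factor would also weigh course, speed, and AIS latency."""
    d = math.hypot(radar_track["x"] - ais_msg["x"],
                   radar_track["y"] - ais_msg["y"])
    return max(0.0, 1.0 - d / max_dist_km)

def associate(radar_tracks, ais_msgs):
    """Greedy one-to-one pairing: highest mutual matching factor wins,
    so a single AIS report is never paired with two radar tracks."""
    scored = [(matching_factor(t, a), ti, ai)
              for ti, t in enumerate(radar_tracks)
              for ai, a in enumerate(ais_msgs)]
    scored.sort(reverse=True)            # best matches first
    used_t, used_a, pairs = set(), set(), []
    for score, ti, ai in scored:
        if score > 0 and ti not in used_t and ai not in used_a:
            pairs.append((ti, ai, score))
            used_t.add(ti)
            used_a.add(ai)
    return pairs
```

In use, each radar track ends up paired with at most one AIS report and vice versa, mirroring the abstract's "only one data pair" rule.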
An integrated study of surface roughness in EDM process using regression analysis and GSO algorithm
Zainal, Nurezayana; Zain, Azlan Mohd; Sharif, Safian; Nuzly Abdull Hamed, Haza; Mohamad Yusuf, Suhaila
2017-09-01
The aim of this study is to develop an integrated study of surface roughness (Ra) in the die-sinking electrical discharge machining (EDM) process of Ti-6Al-4V titanium alloy with a positive-polarity copper-tungsten (Cu-W) electrode. Regression analysis and the glowworm swarm optimization (GSO) algorithm were considered for the modelling and optimization process. Pulse on time (A), pulse off time (B), peak current (C) and servo voltage (D) were selected as the machining parameters, with various levels. The experiments were conducted based on a two-level full factorial design with an added center point design of experiments (DOE). Moreover, mathematical models with linear and two-factor interaction (2FI) effects of the chosen parameters were developed. The validity of the fit and the adequacy of the developed mathematical models were tested using analysis of variance (ANOVA) and the F-test. The statistical analysis showed that the 2FI model outperformed the linear model, yielding the lowest Ra value relative to both the linear model and the experimental result.
A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver
Liu, Yang
2015-10-26
© 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT)-based surface integral equation (SIE) solvers, it reduces the computational and memory costs of transient analysis from O(Nt Ns^2) and O(Ns^2) to O(Nt Ns log^2 Ns) and O(Ns^1.5), respectively, where Nt and Ns denote the number of temporal and spatial unknowns (Ergin et al., IEEE Antennas Propag. Mag., 41, 39-52, 1999). In the past, PWTD-accelerated MOT-SIE solvers have been applied to transient problems involving half a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). Recently, a scalable parallel PWTD-accelerated MOT-SIE solver that leverages a hierarchical parallelization strategy has been developed and successfully applied to transient problems involving ten million spatial unknowns (Liu et al., in URSI Digest, 2013). We further enhanced the capabilities of this solver by implementing a compression scheme based on local cosine wavelet bases (LCBs) that exploits the sparsity in the temporal dimension (Liu et al., in URSI Digest, 2014). Specifically, the LCB compression scheme was used to reduce the memory requirement of the PWTD ray data and the computational cost of operations in the PWTD translation stage.
A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver
Liu, Yang; Yucel, Abdulkadir C.; Gilbert, Anna C.; Bagci, Hakan; Michielssen, Eric
2015-01-01
Lo, Ching F.
1999-01-01
Radial basis function networks and back-propagation neural networks have been integrated with multiple linear regression to map nonlinear response surfaces over a wide range of independent variables in the Modern Design of Experiments process. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.
Directory of Open Access Journals (Sweden)
Ji Zhou
2014-06-01
Full Text Available The land surface temperature (LST) is one of the most important parameters of surface-atmosphere interactions. Methods for retrieving LSTs from satellite remote sensing data are beneficial for modeling hydrological, ecological, agricultural and meteorological processes on Earth's surface. Many split-window (SW) algorithms, which can be applied to satellite sensors with two adjacent thermal channels located in the atmospheric window between 10 μm and 12 μm, require auxiliary atmospheric parameters (e.g., water vapor content). In this research, the Heihe River basin, one of the most arid regions in China, is selected as the study area, and the Moderate-resolution Imaging Spectroradiometer (MODIS) is selected as a test case. The Global Data Assimilation System (GDAS) atmospheric profiles of the study area are used to generate the training dataset through radiative transfer simulation. Significant correlations between the atmospheric upwelling radiance in MODIS channel 31 and three other atmospheric parameters, namely the transmittance in channel 31 and the transmittance and upwelling radiance in channel 32, are trained on the simulation dataset and formulated with three regression models (RM). Next, a genetic algorithm (GA) is applied together with these regression models to estimate the LST; the combined approach is referred to as RM-GA. Validations of the RM-GA method are based on the simulation dataset generated from in situ measured radiosonde profiles and GDAS atmospheric profiles, the in situ measured LSTs, and a pair of daytime and nighttime MOD11A1 products in the study area. The results demonstrate that RM-GA has a good ability to estimate the LSTs directly from the MODIS data without any auxiliary atmospheric parameters. Although this research is a local application in the Heihe River basin, the findings and the proposed method can easily be extended to other satellite sensors and to other regions with arid climates and high elevations.
Algorithm FIRE-Feynman Integral REduction
International Nuclear Information System (INIS)
Smirnov, A.V.
2008-01-01
The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.
Integrated Surface Dataset (Global)
National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithms For Integrating Nonlinear Differential Equations
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms have been developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with other integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immunodeficiency syndrome (AIDS) virus, and predator/prey populations.
Collective probabilities algorithm for surface hopping calculations
International Nuclear Information System (INIS)
Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto
2003-01-01
General equations are derived that the transition probabilities of the hopping algorithms in surface hopping calculations must obey in order to ensure equality between the average quantum and classical populations. These equations are solved for two particular cases. In the first, it is assumed that the probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations over all trajectories. In the second case, the probabilities for each trajectory are assumed to be completely independent of the results from the other trajectories. There is then a unique solution of the general equations ensuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm that takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoided crossings, which shows the accuracy and computational efficiency of the proposed collective probabilities algorithm, the limitations of the FS algorithm, and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method.
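The CP idea sketched above can be caricatured for a two-state ensemble: force the fraction of classical trajectories in the excited state to track the ensemble-averaged quantum population while performing the fewest possible hops. This toy version (all names are illustrative; the actual CP algorithm derives per-step transition probabilities rather than counting trajectories) is:

```python
import random

def cp_hops(states, p_target, rng=random.Random(0)):
    """Collective-probabilities step (sketch): adjust the number of
    trajectories in state 1 to match the ensemble-averaged quantum
    population p_target, using as few hops as possible."""
    n = len(states)
    want = round(p_target * n)           # desired trajectories in state 1
    have = sum(states)
    if want > have:                      # hop some trajectories 0 -> 1
        idx = [i for i, s in enumerate(states) if s == 0]
        for i in rng.sample(idx, want - have):
            states[i] = 1
    elif want < have:                    # hop some trajectories 1 -> 0
        idx = [i for i, s in enumerate(states) if s == 1]
        for i in rng.sample(idx, have - want):
            states[i] = 0
    return states
```

By construction the classical populations equal the averaged quantum populations after every step, which is exactly the constraint the paper's general equations encode.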
An algorithm of computing inhomogeneous differential equations for definite integrals
Nakayama, Hiromasa; Nishiyama, Kenta
2010-01-01
We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for $D$-modules by Oaku. The main tool in the algorithm is the Gröbner basis method in the ring of differential operators.
[Ocular surface system integrity].
Safonova, T N; Pateyuk, L S
2015-01-01
The interplay of different structures belonging to either the anterior segment of the eye or its accessory visual apparatus, which all share common embryological, anatomical, functional, and physiological features, is discussed. Explanation of such terms, as ocular surface, lacrimal functional unit, and ocular surface system, is provided.
A Novel Algorithm of Surface Eliminating in Undersurface Optoacoustic Imaging
Directory of Open Access Journals (Sweden)
Zhulina Yulia V
2004-01-01
Full Text Available This paper analyzes the task of optoacoustic imaging of objects located under a covering surface. We suggest an algorithm for surface elimination based on the fact that the intensity of the image, as a function of the spatial point, should change slowly inside the local objects and will suffer a discontinuity of the spatial gradients on their boundaries. The algorithm forms the two-dimensional curves along which the discontinuity of the signal derivatives is detected. Then, the algorithm divides the signal space into areas along these curves. The signals inside the areas with the maximum level of signal amplitudes and the maximum absolute gradient values on their edges are set to zero. The remaining signals are used for the image restoration. This method makes it possible to reconstruct the surface boundaries with a higher contrast than surface detection based on the maxima of the received signals. The algorithm does not require any prior knowledge of the signal statistics inside and outside the local objects. It may be used for reconstructing any images from signals representing an integral over the object's volume. Simulation and real data are also provided to validate the proposed method.
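The surface-elimination idea can be illustrated with a one-dimensional caricature: treat sharp jumps of the signal as segment boundaries, then zero out the segment with the largest mean amplitude (the covering surface) while keeping the weaker undersurface objects. The real algorithm works on 2-D curves of gradient discontinuity; the threshold and data below are purely illustrative assumptions.

```python
def eliminate_surface(signal, jump=5.0):
    """1-D sketch: split the signal where its first difference jumps
    by more than `jump`, then zero the segment with the largest mean
    amplitude, keeping the weaker signals for image restoration."""
    diffs = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    cuts = ([0]
            + [i + 1 for i, d in enumerate(diffs) if abs(d) > jump]
            + [len(signal)])
    segments = [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]
    # the "surface" is the segment with the highest mean amplitude
    lo, hi = max(segments,
                 key=lambda s: sum(abs(v) for v in signal[s[0]:s[1]])
                               / (s[1] - s[0]))
    return [0.0 if lo <= i < hi else v for i, v in enumerate(signal)]
```

On a toy trace with a strong surface echo and a weak buried object, the strong segment is removed and the weak one survives.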
A Source Identification Algorithm for INTEGRAL
Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.
2008-12-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the Transient Matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples.
Application of integral-separated PID algorithm in orbit feedback
International Nuclear Information System (INIS)
Xuan, K.; Bao, X.; Li, C.; Li, W.; Liu, G.; Wang, J.; Wang, L.
2012-01-01
The algorithm in the feedback system has an important influence on the performance of the beam orbit. The PID (proportional-integral-derivative) algorithm is widely used in beam orbit feedback systems; however, its deficiency is a large overshoot under strong perturbations. To overcome this deficiency, the integral-separated PID algorithm was developed. When the closed-orbit distortion is too large, it suspends the integration action until the closed-orbit distortion falls below the separation threshold value. The implementation of the integral-separated PID algorithm in MATLAB is described in this paper. The simulation results show that this algorithm can improve the control precision. (authors)
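The integral-separation rule described above is simple to state in code: the integral term only accumulates while the error is below the separation threshold. This is a minimal sketch (gains and threshold are illustrative, not values from the paper):

```python
class IntegralSeparatedPID:
    """PID controller that disables the integral term while the error
    magnitude exceeds a separation threshold, limiting the overshoot
    that plain PID shows after strong perturbations."""
    def __init__(self, kp, ki, kd, threshold):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold = threshold
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        if abs(error) < self.threshold:  # integrate only near the target
            self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

With a large error the controller behaves as a PD loop; once the error drops under the threshold, the integral term resumes and removes the steady-state offset.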
ISINA: INTEGRAL Source Identification Network Algorithm
Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.
2008-11-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain), Czech Republic and Poland, and the participation of Russia and the USA. E-mail: simo@astro.soton.ac.uk
Parallel Algorithm for Adaptive Numerical Integration
International Nuclear Information System (INIS)
Sujatmiko, M.; Basarudin, T.
1997-01-01
This paper presents an automated algorithm for integration using the adaptive trapezoidal method. The interval is divided adaptively, so that the widths of the subintervals differ and fit the behavior of the function. For a function f, an integral over the interval [a,b] can be obtained, with maximum tolerance ε, using the estimation (f, a, b, ε). The estimated solution is valid if the error is still in a reasonable range and fulfils certain criteria. If the error is too big, however, the problem is solved by dividing it into two similar and independent subproblems on the separate intervals [a, (a+b)/2] and [(a+b)/2, b], i.e., the estimations (f, a, (a+b)/2, ε/2) and (f, (a+b)/2, b, ε/2). The problems are solved by two different kinds of processors: a root processor and worker processors. The root processor's function is to divide a main problem into subproblems and distribute them to worker processors. The division mechanism may go further until all of the subproblems are resolved. The solution of each subproblem is then submitted to the root processor so that the solution of the main problem can be obtained. The algorithm is implemented on a C-programming-based distributed computer networking system under the parallel virtual machine platform.
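The recursive estimation (f, a, b, ε) described above can be sketched serially; the paper distributes the two subproblems to worker processors under PVM, whereas this sketch simply recurses in one process (the factor 3 in the acceptance test is a common error heuristic, not taken from the paper):

```python
def trapezoid(f, a, b):
    """One-panel trapezoidal estimate of the integral of f over [a, b]."""
    return (b - a) * (f(a) + f(b)) / 2.0

def adaptive(f, a, b, eps):
    """Adaptive trapezoidal rule: accept the two-panel estimate if it
    agrees with the one-panel estimate within the tolerance, otherwise
    split into two independent subproblems with eps/2 each."""
    m = (a + b) / 2.0
    whole = trapezoid(f, a, b)
    halves = trapezoid(f, a, m) + trapezoid(f, m, b)
    if abs(whole - halves) < 3.0 * eps:  # error estimate small enough
        return halves
    return adaptive(f, a, m, eps / 2.0) + adaptive(f, m, b, eps / 2.0)
```

The two recursive calls are exactly the independent subproblems that the root processor would hand to separate workers.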
Integrated Association Rules Complete Hiding Algorithms
Directory of Open Access Journals (Sweden)
Mohamed Refaat Abdellah
2017-01-01
Full Text Available This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the needed database modifications and support complete hiding, and they also reduce knowledge distortion and data distortion. The complete weighted hiding algorithms reduce the hiding failure by 100%; they have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time as needed.
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and the Cross-track Infrared Sounder (CrIS), both on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), scheduled for launch in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
High speed numerical integration algorithm using FPGA | Razak ...
African Journals Online (AJOL)
Conventionally, numerical integration algorithm is executed in software and time consuming to accomplish. Field Programmable Gate Arrays (FPGAs) can be used as a much faster, very efficient and reliable alternative to implement the numerical integration algorithm. This paper proposed a hardware implementation of four ...
International Nuclear Information System (INIS)
Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars
2012-01-01
To inspect in detail how surface precision modeling performs under different external parameter conditions, integrated chip surfaces should be evaluated and assessed during topographic spatial modeling. The choice of surface fitting algorithm exerts a considerable influence on the topographic mathematical features, and the influence mechanisms of different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting coordinate information from selected physical control points with a precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used to construct micro-topographic models from the obtained point cloud. After computing the newly proposed mathematical features on the surface models, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method, through which the variation tendencies of the topographic features can be clearly quantified. The fuzzy influence relationships among the surface fitting algorithms, the topographic spatial features, and the external parameter conditions can thus be analyzed quantitatively and in detail. In addition, the quantitative analysis yields conclusions on the inherent influence mechanism and the internal mathematical relations among the surface fitting algorithms, the topographic spatial features, and their parameter conditions in surface micro-modeling. The performance inspection of surface precision modeling is thereby facilitated and optimized, offering a new research approach for micro-surface reconstruction that can be monitored during the modeling process.
Energy Technology Data Exchange (ETDEWEB)
Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)
2012-10-15
Improved algorithm for surface display from volumetric data
International Nuclear Information System (INIS)
Lobregt, S.; Schaars, H.W.G.K.; OpdeBeek, J.C.A.; Zonneveld, F.W.
1988-01-01
A high-resolution surface display is produced from three-dimensional datasets (computed tomography or magnetic resonance imaging). Unlike other voxel-based methods, this algorithm does not show a cuberille surface structure, because the surface orientation is calculated from original gray values. The applied surface shading is a function of local orientation and position of the surface and of a virtual light source, giving a realistic impression of the surface of bone and soft tissue. The projection and shading are table driven, combining variable viewpoint and illumination conditions with speed. Other options are cutplane gray-level display and surface transparency. Combined with volume scanning, this algorithm offers powerful application possibilities
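The key point of the abstract above, estimating the surface orientation from the original gray values rather than from voxel faces, amounts to taking the local gray-level gradient as the surface normal and shading with it. A minimal sketch (nested lists stand in for a CT/MR volume; the Lambertian model is an illustrative choice, not necessarily the paper's shading function):

```python
def gradient_normal(vol, x, y, z):
    """Surface normal estimated by central differences of the gray
    values, avoiding the blocky cuberille appearance of face normals."""
    gx = vol[x + 1][y][z] - vol[x - 1][y][z]
    gy = vol[x][y + 1][z] - vol[x][y - 1][z]
    gz = vol[x][y][z + 1] - vol[x][y][z - 1]
    n = (gx * gx + gy * gy + gz * gz) ** 0.5 or 1.0
    return (gx / n, gy / n, gz / n)

def lambert_shade(normal, light):
    """Diffuse shading: cosine of the angle between normal and light."""
    dot = sum(a * b for a, b in zip(normal, light))
    return max(0.0, dot)
```

In a table-driven renderer these two results would be precomputed per voxel and looked up per view, which is what gives the method its speed.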
Energy conservation in Newmark based time integration algorithms
DEFF Research Database (Denmark)
Krenk, Steen
2006-01-01
Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations of motion, an extended form including structural damping, and finally the generalized form including structural as well as algorithmic damping. In all three cases the expression for energy, appearing in the balance equation, is the mechanical energy plus some additional terms generated by the discretization...
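The undamped case in the sequence above can be checked numerically: for the average-acceleration Newmark scheme (β = 1/4, γ = 1/2) the mechanical energy of a linear undamped oscillator is conserved exactly, so the extra discretization terms in the balance vanish. A single-degree-of-freedom sketch (parameter names are the standard Newmark ones, not taken from the paper):

```python
def newmark_sdof(m, k, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark integration of the undamped SDOF system m*u'' + k*u = 0.
    With beta = 1/4, gamma = 1/2 (average acceleration) the mechanical
    energy 0.5*(m*v**2 + k*u**2) is conserved exactly."""
    u, v = u0, v0
    a = -k * u / m                       # initial acceleration
    for _ in range(steps):
        # solve the equation of motion at t_{n+1} for u_{n+1}
        rhs = m * (u / (beta * dt**2) + v / (beta * dt)
                   + (0.5 / beta - 1.0) * a)
        u_next = rhs / (k + m / (beta * dt**2))
        # recover a_{n+1} from the Newmark displacement update
        a_next = ((u_next - u - dt * v) / (beta * dt**2)
                  - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_next)
        u, a = u_next, a_next
    return u, v
```

Choosing γ > 1/2 introduces the algorithmic damping discussed in the abstract, and the energy balance then picks up the corresponding dissipation terms.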
A New Algorithm for System of Integral Equations
Directory of Open Access Journals (Sweden)
Abdujabar Rasulov
2014-01-01
Full Text Available We develop a new algorithm to solve systems of integral equations. In this new method there is no need to use matrix weights; because of this, we reduce the computational complexity considerably. Using the new algorithm it is also possible to solve an initial boundary value problem for a system of parabolic equations. To verify its efficiency, the results of computational experiments are given.
Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course
Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio
2012-01-01
In this paper we describe the results that we have obtained by integrating algorithm visualization (AV) movies (tightly coupled with the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…
Bianchi surfaces: integrability in an arbitrary parametrization
International Nuclear Information System (INIS)
Nieszporski, Maciej; Sym, Antoni
2009-01-01
We discuss integrability of normal field equations of arbitrarily parametrized Bianchi surfaces. A geometric definition of the Bianchi surfaces is presented as well as the Baecklund transformation for the normal field equations in an arbitrarily chosen surface parametrization.
Integrated artificial intelligence algorithm for skin detection
Directory of Open Access Journals (Sweden)
Bush Idoko John
2018-01-01
Full Text Available The detection of skin colour has been a useful and renowned technique due to its wide range of applications in both diagnostic analyses and human-computer interaction. Various problems could be solved simply by providing an appropriate method for detecting skin-like pixel regions. Presented in this study is a colour segmentation algorithm that works directly in RGB colour space without converting the colour space. The genfis function used in this study forms a Sugeno-type fuzzy network; utilizing the fuzzy c-means (FCM) clustering rule, it clusters the data and generates a rule for each cluster/class. Finally, the corresponding output, a pseudo-polynomial mapping from the input dataset, is obtained from the adaptive neuro-fuzzy inference system (ANFIS).
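The MATLAB genfis/ANFIS pipeline is not reproduced here, but the FCM clustering step at its core can be sketched in pure Python on RGB triples (cluster count, fuzzifier m, and the toy data are illustrative assumptions):

```python
import random

def fcm(points, c=2, m=2.0, iters=50, rng=random.Random(1)):
    """Fuzzy c-means clustering (sketch) on RGB triples.
    Returns cluster centers and the membership matrix U."""
    n, dim = len(points), len(points[0])
    # random initial memberships, each row normalized to sum to 1
    U = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    for _ in range(iters):
        # centers: membership-weighted means of the points
        centers = []
        for j in range(c):
            w = [U[i][j] ** m for i in range(n)]
            tot = sum(w)
            centers.append([sum(w[i] * points[i][d] for i in range(n)) / tot
                            for d in range(dim)])
        # memberships: standard inverse-distance update
        for i in range(n):
            dists = [max(1e-12, sum((points[i][d] - centers[j][d]) ** 2
                                    for d in range(dim)) ** 0.5)
                     for j in range(c)]
            for j in range(c):
                U[i][j] = 1.0 / sum((dists[j] / dists[k]) ** (2 / (m - 1))
                                    for k in range(c))
    return centers, U
```

In the paper's setting each resulting cluster would seed one Sugeno rule; here the memberships simply separate skin-like reddish pixels from others.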
Canonical algorithms for numerical integration of charged particle motion equations
Efimov, I. N.; Morozov, E. A.; Morozova, A. R.
2017-02-01
A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on the canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain a minimum possible amount of arithmetics and can be used to design accelerators and devices of electron and ion optics.
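The paper's canonical-transformation integrators are not spelled out in the abstract; as an illustration of the stability property it claims (no accumulation of counting error), here is the well-known Boris rotation for a charged particle in a magnetic field, a standard structure-preserving pusher that conserves the speed exactly when E = 0 (this is a stand-in, not the paper's algorithm):

```python
def boris_push(v, qm, B, dt):
    """One Boris velocity step for charge-to-mass ratio qm in magnetic
    field B, with no electric field. The update is an exact rotation,
    so |v| (and hence the kinetic energy) is conserved to round-off."""
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    t = [0.5 * dt * qm * b for b in B]       # half-step rotation vector
    t2 = sum(x * x for x in t)
    s = [2.0 * x / (1.0 + t2) for x in t]
    vp = [v[i] + cross(v, t)[i] for i in range(3)]
    return [v[i] + cross(vp, s)[i] for i in range(3)]
```

Because the energy error does not grow with the number of steps, such pushers are the usual choice for the long particle-tracking runs needed in accelerator and electron-optics design.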
An Algorithm for Investigating the Structure of Material Surfaces
Directory of Open Access Journals (Sweden)
M. Toman
2003-01-01
Full Text Available The aim of this paper is to summarize the algorithm and the experience gained in the investigation of the grain structure of the surfaces of certain materials, particularly samples of gold. The main parts of the algorithm to be discussed are: 1. acquisition of input data, 2. localization of the grain region, 3. representation of grain size, 4. representation of outputs (postprocessing).
Fast algorithm for the rendering of three-dimensional surfaces
Pritt, Mark D.
1994-02-01
It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.
Firefly Algorithm for Polynomial Bézier Surface Parameterization
Directory of Open Access Journals (Sweden)
Akemi Gálvez
2013-01-01
reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
Global structural optimizations of surface systems with a genetic algorithm
International Nuclear Information System (INIS)
Chuang, Feng-Chuan
2005-01-01
Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to that seen in STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. Optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three parameter DSD can be modeled with just two parameters: Dm and Nw that determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
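The two-parameter DSD described above can be sketched numerically: for a normalized gamma DSD the mass-spectrum standard deviation satisfies Sm = Dm / sqrt(4 + mu), so imposing a Dm-Sm power law fixes the shape parameter mu and leaves only Dm and Nw free. The power-law coefficients below are purely illustrative, not the Working Group's fitted values.

```python
import math
import numpy as np

def gamma_dsd(D, Dm, Nw, mu):
    # Normalized gamma DSD: N(D) = Nw * f(mu) * (D/Dm)^mu * exp(-(4+mu) D/Dm)
    f_mu = (6.0 / 4**4) * (4 + mu)**(mu + 4) / math.gamma(mu + 4)
    return Nw * f_mu * (D / Dm)**mu * np.exp(-(4 + mu) * D / Dm)

# Hypothetical power law Sm = a * Dm**b (coefficients are illustrative)
a, b = 0.29, 1.5
Dm = 1.8                      # mass-weighted mean diameter, mm
Sm = a * Dm**b                # mass-spectrum standard deviation, mm
# For a gamma DSD, Sm = Dm / sqrt(4 + mu)  =>  mu is no longer free:
mu = (Dm / Sm)**2 - 4

D = np.linspace(0.01, 10.0, 5000)           # drop diameter grid, mm
N = gamma_dsd(D, Dm, Nw=8000.0, mu=mu)      # number concentration
mass = D**3 * N                             # mass spectrum ~ D^3 N(D)
Dm_check = (D * mass).sum() / mass.sum()    # recovered mass-weighted mean
```

Recomputing the mass-weighted mean from the constructed spectrum recovers the prescribed Dm, confirming that the three-parameter gamma DSD has been reduced to two free parameters.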
An Algorithm for Managing Aircraft Movement on an Airport Surface
Directory of Open Access Journals (Sweden)
Giuseppe Maresca
2013-08-01
Full Text Available The present paper focuses on the development of an algorithm for safely and optimally managing the routing of aircraft on an airport surface in future airport operations. This tool is intended to support air traffic controllers’ decision-making in selecting the paths of all aircraft and the engine startup approval time for departing ones. Optimal routes are sought to minimize the time both arriving and departing aircraft spend on the airport surface with engines on, with benefits in terms of safety, efficiency and costs. The proposed algorithm first computes a standalone, shortest-path solution from runway to apron or vice versa, depending on whether the aircraft is inbound or outbound. To take into account the constraints due to other traffic on the airport surface, this solution is amended by a conflict detection and resolution task that attempts to reduce and possibly nullify the number of conflicts generated in the first phase. An example application on a simple Italian airport illustrates how the algorithm can be applied to real-world cases. Emphasis is placed on how to model an airport surface as a weighted and directed graph with non-negative weights, as required for the input to the algorithm.
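The standalone shortest-path phase on such a weighted directed graph with non-negative weights can be sketched with Dijkstra's algorithm; the toy taxiway graph and taxi times below are purely illustrative, not the airport layout used in the paper.

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path on a directed graph with non-negative edge weights.
    graph: {node: [(neighbor, weight), ...]}"""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back to the source
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Toy taxiway graph: nodes are a runway exit, intersections and the apron;
# weights are nominal taxi times in seconds (all values illustrative).
taxiways = {
    "RWY": [("A", 30), ("B", 45)],
    "A": [("C", 20)],
    "B": [("C", 10), ("APRON", 60)],
    "C": [("APRON", 25)],
}
path, t = dijkstra(taxiways, "RWY", "APRON")
```

The conflict detection and resolution task would then amend such standalone routes; here only the first phase is sketched.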
Non-integrability of geodesic flow on certain algebraic surfaces
International Nuclear Information System (INIS)
Waters, T.J.
2012-01-01
This Letter addresses an open problem recently posed by V. Kozlov: a rigorous proof of the non-integrability of the geodesic flow on the cubic surface xyz=1. We prove this is the case using the Morales–Ramis theorem and the Kovacic algorithm. We also consider some consequences and extensions of this result. Highlights:
- The behaviour of geodesics on surfaces defined by algebraic expressions is studied.
- The non-integrability of the geodesic equations is rigorously proved using differential Galois theory.
- Morales–Ramis theory and Kovacic's algorithm are used, and the normal variational equation is of Fuchsian type.
- Some extensions and limitations are discussed.
Linear-time general decoding algorithm for the surface code
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
SINS/CNS Nonlinear Integrated Navigation Algorithm for Hypersonic Vehicle
Directory of Open Access Journals (Sweden)
Yong-jun Yu
2015-01-01
Full Text Available The Celestial Navigation System (CNS) has the characteristics of accurate orientation and strong autonomy and has been widely used in hypersonic vehicles. Since CNS location and orientation mainly depend upon an inertial reference that contains errors caused by gyro drifts and other factors, the traditional Strap-down Inertial Navigation System (SINS)/CNS positioning algorithm, which sets the position error between SINS and CNS as the measurement, is not effective. A model of the altitude azimuth, platform error angles, and horizontal position is designed, and a SINS/CNS tightly integrated algorithm is designed in which the CNS altitude azimuth is set as the measurement information. A Gaussian particle filter (GPF) is introduced to solve the nonlinear filtering problem. Simulation results show that the precision of the SINS/CNS algorithm, which reaches 130 m using three stars, is effectively improved.
Iterative algorithm for the volume integral method for magnetostatics problems
International Nuclear Information System (INIS)
Pasciak, J.E.
1980-11-01
Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived.
Advanced computer algebra algorithms for the expansion of Feynman integrals
International Nuclear Information System (INIS)
Ablinger, Jakob; Round, Mark; Schneider, Carsten
2012-10-01
Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+ε-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products.
Multifeature Fusion Vehicle Detection Algorithm Based on Choquet Integral
Directory of Open Access Journals (Sweden)
Wenhui Li
2014-01-01
Full Text Available Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of the vehicle detection method. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measurement and multifeature fusion. In the feature similarity measurement phase, we first propose a taillight-based vehicle detection method and then define a vehicle taillight feature similarity measure. Second, drawing on the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused by the Choquet integral. Evaluated on public test collections and our own test images, our method achieves effective and robust multivehicle detection in complicated environments. It not only improves the detection rate but also reduces the false alarm rate, meeting the engineering requirements of Advanced Driving Assistance Systems (ADAS).
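The fusion step rests on the discrete Choquet integral, which weights coalitions of features rather than individual features. A minimal sketch of that definition follows; the fuzzy-measure values and feature scores are illustrative, not those learned in the paper.

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of feature scores with respect to a
    fuzzy measure mu, given as {frozenset_of_indices: measure}."""
    idx = sorted(range(len(values)), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(idx):
        # Coalition of features whose score is at least values[i]
        coalition = frozenset(idx[k:])
        total += (values[i] - prev) * mu[coalition]
        prev = values[i]
    return total

# Three feature similarities (e.g. taillight, symmetry, HOG + AdaBoost)
# fused with an illustrative fuzzy measure over their coalitions
mu = {
    frozenset({0, 1, 2}): 1.0,
    frozenset({0, 1}): 0.7, frozenset({0, 2}): 0.8, frozenset({1, 2}): 0.6,
    frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.5,
}
score = choquet_integral([0.9, 0.6, 0.7], mu)
```

Because the measure is defined on subsets, the fusion can reward features that agree as a group, which a plain weighted average cannot express.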
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
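The overshoot-control idea can be sketched with a generic tension-controlled cubic Hermite (this is an illustrative parametrization, not the authors' exact curve): scaling finite-difference tangents by a single parameter c trades smoothness against overshoot, with c = 0 reducing to overshoot-free linear interpolation and c = 1 giving Catmull-Rom tangents.

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between p0 and p1 with endpoint
    tangents m0, m1, evaluated at t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def tangents(ys, c):
    # Finite-difference tangents scaled by a single tension parameter c
    n = len(ys)
    return [c * 0.5 * (ys[min(i + 1, n - 1)] - ys[max(i - 1, 0)])
            for i in range(n)]

ys = [0.0, 0.0, 1.0, 1.0]   # a step: prone to overshoot with cubic fits
m = tangents(ys, c=0.0)      # full tension: zero tangents, no overshoot
mid = hermite(ys[1], ys[2], m[1], m[2], 0.5)
```

With c = 0 the interpolant on the step segment stays within the data range, which is the behaviour the user-controlled parameter in the gridding algorithm is meant to provide.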
An analysis of 3D particle path integration algorithms
International Nuclear Information System (INIS)
Darmofal, D.L.; Haimes, R.
1996-01-01
Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields; however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms is determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow.
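The accuracy gap between a multistage scheme and a low-order one can be illustrated on a steady solid-body-rotation field, where exact particle paths are circles; the radius drift after one revolution measures the integration error. This is a generic illustration, not the paper's tapered-cylinder test case.

```python
import numpy as np

def velocity(p):
    # Steady solid-body rotation: particles trace perfect circles
    x, y = p
    return np.array([-y, x])

def rk4_step(p, h):
    # One step of the classical 4-stage Runge-Kutta scheme
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def euler_step(p, h):
    # First-order forward Euler, for comparison
    return p + h * velocity(p)

h = 0.05
n = int(round(2 * np.pi / h))        # roughly one revolution
p_rk4 = np.array([1.0, 0.0])
p_eul = np.array([1.0, 0.0])
for _ in range(n):
    p_rk4 = rk4_step(p_rk4, h)
    p_eul = euler_step(p_eul, h)

# The radius should stay 1; the drift measures the integration error
err_rk4 = abs(np.linalg.norm(p_rk4) - 1.0)
err_eul = abs(np.linalg.norm(p_eul) - 1.0)
```

The Euler path spirals outward noticeably after a single revolution while the multistage RK4 path stays on the circle to within round-off-scale error, mirroring the paper's point that scheme choice can change the apparent topology of the flow.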
ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems
Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee
2016-01-01
NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.
Energy Technology Data Exchange (ETDEWEB)
Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim
2014-07-01
The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen-space surface LIC algorithm for use in distributed-memory data-parallel sort-last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen-space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
Integrated biomechanical and topographical surface characterization (IBTSC)
Energy Technology Data Exchange (ETDEWEB)
Löberg, Johanna, E-mail: Johanna.Loberg@dentsply.com [Dentsply Implants, Box 14, SE-431 21 Mölndal (Sweden); Mattisson, Ingela [Dentsply Implants, Box 14, SE-431 21 Mölndal (Sweden); Ahlberg, Elisabet [Department of Chemistry and Molecular Biology, University of Gothenburg, SE-41296 Gothenburg (Sweden)
2014-01-30
In an attempt to reduce the need for animal studies in dental implant applications, a new model has been developed which combines well-known surface characterization methods with theoretical biomechanical calculations. The model has been named integrated biomechanical and topographical surface characterization (IBTSC), and gives a comprehensive description of the surface topography and the ability of the surface to induce retention strength with bone. IBTSC comprises determination of 3D-surface roughness parameters by using 3D-scanning electron microscopy (3D-SEM) and atomic force microscopy (AFM), and calculation of the ability of different surface topographies to induce retention strength in bone by using the local model. Inherent in this integrated approach is the use of a length scale analysis, which makes it possible to separate different size levels of surface features. The IBTSC concept is tested on surfaces with different level of hierarchy, induced by mechanical as well as chemical treatment. Sequential treatment with oxalic and hydrofluoric acid results in precipitated nano-sized features that increase the surface roughness and the surface slope on the sub-micro and nano levels. This surface shows the highest calculated shear strength using the local model. The validity, robustness and applicability of the IBTSC concept are demonstrated and discussed.
Theoretical algorithms for satellite-derived sea surface temperatures
Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.
1989-03-01
Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
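Linear SST algorithms of the kind derived above combine brightness temperatures from two infrared channels, using the 11-12 micron difference as a proxy for water-vapour attenuation. The sketch below uses illustrative coefficients, not the operational ATSR or AVHRR values.

```python
def split_window_sst(t11, t12, a0=-0.5, a1=1.0, a2=2.5):
    """Linear split-window SST retrieval.  t11, t12: brightness
    temperatures (K) in the 11 and 12 micron channels.  The default
    coefficients a0, a1, a2 are illustrative placeholders; operational
    values are fitted against radiative-transfer simulations."""
    # The channel difference grows with atmospheric water vapour, so the
    # a2 term corrects the attenuated 11-micron temperature upward.
    return a0 + a1 * t11 + a2 * (t11 - t12)

# Moist-atmosphere example: a large channel difference drives a larger
# water-vapour correction
sst = split_window_sst(t11=290.0, t12=288.5)
```

A dual-angle or 3.7-micron variant would add further terms of the same linear form, which is how the abstract's multiwavelength and multiangle algorithms are constructed.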
Integration by cell algorithm for Slater integrals in a spline basis
International Nuclear Information System (INIS)
Qiu, Y.; Fischer, C.F.
1999-01-01
An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems.
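The off-diagonal-cell trick can be sketched in isolation: when the integrand factors as f(r1) g(r2) over a rectangular cell, the 2D Gauss-Legendre quadrature collapses into a product of two 1D quadratures. The test functions below are illustrative polynomials, not B-spline orbitals.

```python
import numpy as np

def gauss_cell_integral(f, g, a, b, c, d, npts=5):
    """Integral of f(r1)*g(r2) over the rectangular cell [a,b] x [c,d].
    Because the integrand is separable (as over off-diagonal cells, where
    r1 < r2 throughout and r<^k / r>^(k+1) factors), the 2D integral is a
    product of two 1D Gauss-Legendre quadratures."""
    x, w = np.polynomial.legendre.leggauss(npts)
    # Map the nodes from [-1, 1] onto [a, b] and [c, d]
    r1 = 0.5 * (b - a) * x + 0.5 * (b + a)
    r2 = 0.5 * (d - c) * x + 0.5 * (d + c)
    I1 = 0.5 * (b - a) * np.dot(w, f(r1))
    I2 = 0.5 * (d - c) * np.dot(w, g(r2))
    return I1 * I2

# Polynomial check: integral of r1^2 over [0,1] times r2^3 over [1,2]
# is (1/3) * (15/4) = 1.25, reproduced exactly by 5-point quadrature
val = gauss_cell_integral(lambda r: r**2, lambda r: r**3,
                          0.0, 1.0, 1.0, 2.0)
```

Only the diagonal cells, where r< and r> swap roles inside the cell, need a genuinely two-dimensional quadrature; this is what makes the cell-by-cell assembly efficient.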
Numerical Algorithms for Acoustic Integrals - The Devil is in the Details
Brentner, Kenneth S.
1996-01-01
The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
Integrable mappings via rational elliptic surfaces
International Nuclear Information System (INIS)
Tsuda, Teruhisa
2004-01-01
We present a geometric description of the QRT map (which is an integrable mapping introduced by Quispel, Roberts and Thompson) in terms of the addition formula of a rational elliptic surface. By this formulation, we classify all the cases when the QRT map is periodic, and show that its period is 2, 3, 4, 5 or 6. A generalization of the QRT map which acts birationally on a pencil of K3 surfaces, or Calabi-Yau manifolds, is also presented.
Developing an integrated digitizing and display surface
Hipple, James D.; Wedding, Daniel K.; Wedding, Donald K., Sr.
1995-04-01
The development of an integrated digitizing and display surface, which utilizes touch entry and flat panel display (FPD) technology, is a significant hardware advance in the field of geographic information systems (GIS). Inherent qualities of the FPD, notably the ac gas plasma display, make such a marriage inevitable. Large diagonal sizes, high-resolution color, screen flatness, and monitor thinness are desirable features of an integrated digitizing and display surface. Recently, the GIS literature has addressed a need for such an innovation. The development of graphics displays based on sophisticated technologies includes 'photorealistic' (or high definition) imaging at resolutions of 2048 X 2048 or greater, palettes of 16.7 million colors, formats greater than 30 inches diagonal, and integrated touch entry. In this paper, there is an evaluation of FPDs and data input technologies in the development of such a product.
Okumura, Hisashi; Itoh, Satoru G; Okamoto, Yuko
2007-02-28
The authors propose explicit symplectic integrators of molecular dynamics (MD) algorithms for rigid-body molecules in the canonical and isobaric-isothermal ensembles. They also present a symplectic algorithm in the constant normal pressure and lateral surface area ensemble, and one combined with the Parrinello-Rahman algorithm. Employing the symplectic integrators for MD algorithms, there is a conserved quantity which is close to the Hamiltonian, so an MD simulation can be performed more stably than with conventional nonsymplectic algorithms. They applied this algorithm to a TIP3P pure water system at 300 K and compared the time evolution of the Hamiltonian with those obtained by the nonsymplectic algorithms. They found that the Hamiltonian was conserved well by the symplectic algorithm even for a time step of 4 fs. This time step is longer than the typical values of 0.5-2 fs used by the conventional nonsymplectic algorithms.
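The stability benefit of a symplectic integrator can be illustrated on a unit-mass harmonic oscillator (a generic example, not the paper's rigid-body TIP3P system): velocity Verlet conserves a quantity close to the Hamiltonian, so its energy error stays bounded, while the nonsymplectic explicit Euler scheme drifts without bound.

```python
def velocity_verlet(q, p, h, n):
    # Symplectic velocity Verlet for H = p^2/2 + q^2/2 (force F(q) = -q)
    for _ in range(n):
        p -= 0.5 * h * q   # half kick
        q += h * p         # drift
        p -= 0.5 * h * q   # half kick
    return q, p

def explicit_euler(q, p, h, n):
    # Nonsymplectic explicit Euler for the same oscillator
    for _ in range(n):
        q, p = q + h * p, p - h * q
    return q, p

q0, p0, h, n = 1.0, 0.0, 0.05, 10000
e0 = 0.5 * (p0**2 + q0**2)                  # initial energy
qv, pv = velocity_verlet(q0, p0, h, n)
qe, pe = explicit_euler(q0, p0, h, n)
drift_verlet = abs(0.5 * (pv**2 + qv**2) - e0)
drift_euler = abs(0.5 * (pe**2 + qe**2) - e0)
```

After 10,000 steps the Verlet energy error remains a small bounded oscillation while the Euler energy has grown by many orders of magnitude, which is the property that lets symplectic MD use longer time steps.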
Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit
2016-01-01
In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…
Integrated control algorithms for plant environment in greenhouse
Zhang, Kanyu; Deng, Lujuan; Gong, Youmin; Wang, Shengxue
2003-09-01
This paper surveys plant environment control in greenhouses and discusses its future development. Plant environment control started with closed-loop control of the air temperature in the greenhouse. With the emergence of more powerful computers, adaptive control algorithms and system identification were integrated into the control system. Because adaptive control depends on sensor observation of variables, yet many variables are unobservable or difficult to observe (especially crop growth status), model-based control algorithms were developed. To evade the difficulty of modeling, one approach is to simplify the models, and another is to use fuzzy logic and neural network technology, which realize the models through black-box and gray-box theory. Studies on controlling the plant environment in greenhouses by means of expert systems (ES) and artificial intelligence (AI) have been initiated and developed. Nowadays, research on greenhouse environment control focuses on energy saving, optimal economic profit, and environmental protection, and continues to develop.
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
An airport surface surveillance solution based on fusion algorithm
Liu, Jianliang; Xu, Yang; Liang, Xuelin; Yang, Yihuang
2017-01-01
In this paper, we propose an airport surface surveillance solution combining Multilateration (MLAT) and Automatic Dependent Surveillance Broadcast (ADS-B). The moving target to be monitored is regarded as a linear stochastic hybrid system moving freely, and each surveillance technology is simplified as a sensor with white Gaussian noise. The dynamic model of the target and the observation model of the sensor are established in this paper. The measurements of the sensors are filtered by estimators to obtain estimation results for the current time. We then analyze the characteristics of the two proposed fusion solutions and choose the scheme based on sensor estimation fusion for our surveillance solution. In the proposed fusion algorithm, the estimation error is quantified according to the output of the estimators, and the fusion weight of each sensor is calculated. The two estimation results are fused with these weights, and the position estimate of the target is computed accurately. Finally, the proposed solution and algorithm are validated by an illustrative target tracking simulation.
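The core of estimate-level fusion can be sketched in a few lines: two sensor estimates are combined with weights derived from their estimation-error variances. This is a generic inverse-variance weighting sketch, not the paper's exact weight formula; the variances and measurements are illustrative assumptions.

```python
def fuse_estimates(x1, var1, x2, var2):
    """Fuse two scalar position estimates by inverse-variance weighting."""
    w1 = 1.0 / var1          # a less uncertain estimate gets a larger weight
    w2 = 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)    # fused variance is smaller than either input
    return x, var

# Hypothetical MLAT estimate: 120.0 m, variance 9.0; ADS-B: 118.0 m, variance 4.0
x, var = fuse_estimates(120.0, 9.0, 118.0, 4.0)
print(round(x, 3), round(var, 3))
```

The fused position lies closer to the ADS-B estimate because its variance is smaller, and the fused variance (36/13 ≈ 2.77) is lower than either sensor alone, which is the point of the fusion step.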
Covariant path integrals on hyperbolic surfaces
Schaefer, Joe
1997-11-01
DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).
International Nuclear Information System (INIS)
Yu Guofu; Duan Qihua
2010-01-01
In this paper, based on the Hirota bilinear method, a reliable algorithm for generating the bilinear Bäcklund transformation (BT) of integrable hierarchies is described. With the help of Maple symbolic computation, the algorithm is very helpful and powerful for finding the bilinear BT of integrable systems, especially for high-order integrable hierarchies. The BTs of the bilinear Ramani hierarchy are deduced for the first time by using the algorithm.
An Algorithm for Integrated Subsystem Embodiment and System Synthesis
Lewis, Kemper
1997-01-01
Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.
Covariant path integrals on hyperbolic surfaces
International Nuclear Information System (INIS)
Schaefer, J.
1997-01-01
DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987). Copyright 1997 American Institute of Physics.
An algorithm to construct Groebner bases for solving integration by parts relations
International Nuclear Information System (INIS)
Smirnov, Alexander V.
2006-01-01
This paper is a detailed description of an algorithm based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. The algorithm is used to calculate Feynman integrals and has proved to be efficient in several complicated cases.
Signal Integrity Applications of an EBG Surface
Directory of Open Access Journals (Sweden)
MATEKOVITS, L.
2015-05-01
Electromagnetic band-gap (EBG) surfaces have found applications in the mitigation of parallel-plate noise that occurs in high speed circuits. A 2D periodic structure previously introduced by the same authors is dimensioned here for adjusting the EBG parameters to meet application requirements by decreasing the phase velocity of the propagating waves. This adjustment corresponds to decreasing the lower bound of the EBG spectra. The positions of the EBGs in frequency are determined through full-wave simulation, by solving the corresponding eigenmode equation and by imposing the appropriate boundary conditions on all faces of the unit cell. The operation of a device relying on a finite surface is also demonstrated. The obtained results show that the proposed structure is suitable for signal integrity applications, as verified also by comparing the transmission along a finite structure of an ideal signal line and one with an induced discontinuity.
Optimizing integrated airport surface and terminal airspace operations under uncertainty
Bosson, Christabelle S.
In airports and the surrounding terminal airspace, the integration of surface, arrival and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is
Modified Firefly Algorithm based controller design for integrating and unstable delay processes
Directory of Open Access Journals (Sweden)
A. Gupta
2016-03-01
In this paper, a Modified Firefly Algorithm has been used for optimizing the controller parameters of a Smith predictor structure. The proposed algorithm modifies the position formula of the standard Firefly Algorithm in order to achieve a faster convergence rate. The performance criterion Integral Square Error (ISE) is optimized using this technique. Simulation results show better performance for the Modified Firefly Algorithm compared to the conventional Firefly Algorithm in terms of convergence rate. Integrating and unstable delay processes are taken as examples to indicate the performance of the proposed method.
On a New Family of Kalman Filter Algorithms for Integrated Navigation
Mahboub, V.; Saadatseresht, M.; Ardalan, A. A.
2017-09-01
Here we present a review of a new family of Kalman filter algorithms which were recently developed for integrated navigation. In particular, they are useful for vision-based navigation due to the type of data. We mainly focus on three algorithms, namely the weighted Total Kalman filter (WTKF), the integrated Kalman filter (IKF) and the constrained integrated Kalman filter (CIKF). The common characteristic of these algorithms is that they can account for the neglected random observed quantities which may appear in the dynamic model. Moreover, our approach makes use of condition equations and straightforward variance propagation rules. The WTKF algorithm can deal with problems with arbitrary weight matrices. Both the observation equations and the system equations can be dynamic-errors-in-variables (DEIV) models in the IKF algorithm. In some problems a quadratic constraint may exist; these can be solved by the CIKF algorithm. Finally, we compare the four algorithms WTKF, IKF, CIKF and EKF in numerical examples.
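For reference, the baseline against which the WTKF/IKF/CIKF variants are compared is the standard linear Kalman filter predict-update cycle. The sketch below shows only that baseline for a 1D constant-velocity target; the extensions for random quantities in the dynamic model are not reproduced here, and all matrices are illustrative.

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One standard Kalman filter cycle: predict with (F, Q), update with (H, R)."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity dynamics
Q = 0.01 * np.eye(2)                        # process noise (assumed)
H = np.array([[1.0, 0.0]])                  # position-only measurement
R = np.array([[0.5]])                       # measurement noise (assumed)

x = np.array([0.0, 1.0])                    # initial position and velocity
P = np.eye(2)
for z in [1.1, 1.9, 3.2]:                   # illustrative position measurements
    x, P = kf_step(x, P, np.array([z]), F, Q, H, R)
print(x.shape)
```

The family of algorithms reviewed in the paper replaces parts of this cycle (e.g. the weighting of the observations and the treatment of errors in the system matrices), but the predict-update skeleton is the same.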
Energy Technology Data Exchange (ETDEWEB)
Lester, Brian T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-01-19
A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
Energy Technology Data Exchange (ETDEWEB)
Lester, Brian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
Near-Surface Engineered Environmental Barrier Integrity
International Nuclear Information System (INIS)
Piet, S.J.; Breckenridge, R.P.
2002-01-01
The INEEL Environmental Systems Research and Analysis (ESRA) program has launched a new R and D project on Near-Surface Engineered Environmental Barrier Integrity to increase knowledge and capabilities for using engineering and ecological components to improve the integrity of near-surface barriers used to confine contaminants from the public and the environment. The knowledge gained and the capabilities built will help verify the adequacy of past remedial decisions and enable improved solutions for future cleanup decisions. The research is planned to (a) improve the knowledge of degradation mechanisms (weathering, biological, geological, chemical, radiological, and catastrophic) in times shorter than service life, (b) improve modeling of barrier degradation dynamics, (c) develop sensor systems to identify degradation prior to failure, and (d) provide a better basis for developing and testing of new barrier systems to increase reliability and reduce the risk of failure. Our project combines selected exploratory studies (benchtop and field scale), coupled-effects accelerated aging testing at the meso-scale, testing of new monitoring concepts, and modeling of dynamic systems. The performance of evapo-transpiration, capillary, and grout-based barriers will be examined.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.
Multisensor satellite data integration for sea surface wind speed and direction determination
Glackin, D. L.; Pihos, G. G.; Wheelock, S. L.
1984-01-01
Techniques to integrate meteorological data from various satellite sensors to yield a global measure of sea surface wind speed and direction for input to the Navy's operational weather forecast models were investigated. The sensors considered, some already launched and others planned at the time, are the GOES visible and infrared imaging sensor, the Nimbus-7 SMMR, and the DMSP SSM/I instrument. An algorithm for the extrapolation to the sea surface of wind directions derived from successive GOES cloud images was developed. This wind veering algorithm is relatively simple, accounts for the major physical variables, and seems to represent the best solution that can be found with existing data. An algorithm for the interpolation of the scattered observed data to a common geographical grid was implemented. The algorithm is based on a combination of inverse distance weighting and trend surface fitting, and is suited to combining wind data from disparate sources.
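The inverse-distance-weighting half of the interpolation scheme can be sketched as follows; the trend-surface-fitting component is omitted, and the observation points and wind speeds are illustrative assumptions.

```python
import math

def idw(points, values, target, power=2.0):
    """Interpolate a value at `target` from scattered (x, y) observations,
    weighting each observation by 1 / distance**power."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v                    # target coincides with an observation
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

obs_pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
obs_wind = [5.0, 7.0, 6.0]              # illustrative wind speeds (m/s)
print(round(idw(obs_pts, obs_wind, (0.5, 0.5)), 3))
```

At a grid point equidistant from all three observations the weights are equal and the result is the plain average, 6.0 m/s; closer observations dominate otherwise, which is the behavior the abstract relies on when combining disparate sources.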
INTEGRATION OF HETEROGENOUS DIGITAL SURFACE MODELS
Directory of Open Access Journals (Sweden)
R. Boesch
2012-08-01
…distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100 m²) has been implemented using the Open Source project CGAL. The area of each triangle is a measure for the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1 m. If the local analysis of the FOM-map within the 100 m² area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution is sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as a NODATA area. In a future implementation ("anisotropic fusion"), an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM-map and the local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
Indian Academy of Sciences (India)
…polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming…
Design integration of liquid surface divertors
International Nuclear Information System (INIS)
Nygren, R.E.; Cowgill, D.F.; Ulrickson, M.A.; Nelson, B.E.; Fogarty, P.J.; Rognlien, T.D.; Rensink, M.E.; Hassanein, A.; Smolentsev, S.S.; Kotschenreuther, M.
2004-01-01
The US Enabling Technology Program in fusion is investigating the use of free flowing liquid surfaces facing the plasma. We have been studying the issues in integrating a liquid surface divertor into a configuration based upon an advanced tokamak, specifically the ARIES-RS configuration. The simplest form of such a divertor is to extend the flow of the liquid first wall into the divertor and thereby avoid introducing additional fluid streams. In this case, one can modify the flow above the divertor to enhance thermal mixing. For divertors with flowing liquid metals (or other electrically conductive fluids) MHD (magneto-hydrodynamics) effects are a major concern and can produce forces that redirect flow and suppress turbulence. An evaluation of Flibe (a molten salt) as a working fluid was done to assess a case in which the MHD forces could be largely neglected. Initial studies indicate that, for a tokamak with high power density, an integrated Flibe first wall and divertor does not seem workable. We have continued work with molten salts and replaced Flibe with Flinabe, a mixture of lithium, sodium and beryllium fluorides, that has some potential because of its lower melting temperature. Sn and Sn-Li have also been considered, and the initial evaluations on heat removal with minimal plasma contamination show promise, although the complicated 3D MHD flows cannot yet be fully modeled. Particle pumping in these design concepts is accomplished by conventional means (ports and pumps). However, trapping of hydrogen in these flowing liquids seems plausible and novel concepts for entrapping helium are also being studied
Voytishek, Anton V.; Shipilov, Nikolay M.
2017-11-01
In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving the Fredholm integral equation of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
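To make the mesh-type approach concrete, here is a deterministic Nyström sketch (not one of the randomized algorithms of the paper) for a Fredholm equation of the second kind, u(x) = f(x) + λ∫₀¹ K(x,t) u(t) dt, in the benign case of a smooth kernel where pointwise kernel evaluation is unproblematic. The kernel and right-hand side below are chosen so the exact solution is u(x) = x.

```python
import numpy as np

def nystrom(K, f, lam, n=64):
    """Mesh (Nystrom) method: replace the integral by midpoint quadrature on
    [0, 1] and solve the resulting n x n linear system for u at the nodes."""
    t = (np.arange(n) + 0.5) / n            # midpoint quadrature nodes
    w = np.full(n, 1.0 / n)                 # midpoint quadrature weights
    A = np.eye(n) - lam * K(t[:, None], t[None, :]) * w
    return t, np.linalg.solve(A, f(t))

# With K(x,t) = x*t and lam = 1, choosing f(x) = 2x/3 gives u(x) = x exactly:
# u(x) = 2x/3 + x * int_0^1 t * t dt = 2x/3 + x/3 = x.
t, u = nystrom(lambda x, tt: x * tt, lambda x: 2.0 * x / 3.0, 1.0)
print(float(np.max(np.abs(u - t))) < 1e-3)
```

The paper's objection to mesh methods is visible here: the matrix assembly requires K at every node pair, which breaks down when the kernel has integrable singularities; the projection and projection-mesh randomized algorithms avoid exactly that evaluation.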
A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale
Decherchi, Sergio; Rocchia, Walter
2013-01-01
We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073
Surface roughness optimization in machining of AZ31 magnesium alloy using ABC algorithm
Directory of Open Access Journals (Sweden)
Abhijith
2018-01-01
Magnesium alloys serve as excellent substitutes for materials traditionally used for engine block heads in automobiles and gear housings in aircraft industries. AZ31 is a magnesium alloy that finds applications in orthopedic implants and cardiovascular stents. Surface roughness is an important parameter in the present manufacturing sector. In this work, optimization techniques based on swarm intelligence, namely the firefly algorithm (FA), particle swarm optimization (PSO) and the artificial bee colony algorithm (ABC), have been implemented to optimize the machining parameters, namely cutting speed, feed rate and depth of cut, in order to achieve minimum surface roughness. The parameter Ra has been considered for evaluating the surface roughness. Comparing the performance of the ABC algorithm with FA and PSO, which are widely used optimization algorithms in machining studies, the results conclude that ABC produces better optimization than FA and PSO for the surface roughness of AZ31.
Alawadi, Fahad
2010-10-01
Quantifying ocean colour properties has evolved over the past two decades from being able to merely detect biological activity to the ability to estimate chlorophyll concentration using optical satellite sensors like MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for the tracing of oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HABs involved and the environmental conditions, if their concentration rises above a critical threshold, they can impact the flora and fauna of the aquatic habitat through the introduction of the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and the green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but is incapable of detecting certain macro-algal species that float near to or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil spill look-alikes in satellite imagery contributes towards the reduction of false-positive incidents arising from oil spill monitoring operations. Such algal formations that occur in relatively high concentrations may experience, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the highest reflectance slope between the maximum absorption in the red due to the surrounding ocean water and the maximum reflectance in the infra-red due to the photosynthetic pigments present in the surface algae. A new algorithm termed the surface algal bloom index (SABI) has been proposed to delineate the spatial distributions of floating micro-algal species such as cyanobacteria or exposed inter-tidal vegetation like seagrass. This algorithm was
Institute of Scientific and Technical Information of China (English)
YANG Guo-Sheng; WEN Cheng-Lin; TAN Min
2004-01-01
A new multisensor distributed track fusion algorithm is put forward based on combining feedback integration with the strong tracking Kalman filter. Firstly, an effective tracking gate is constructed by taking the intersection of the tracking gates formed before and after feedback. Secondly, on the basis of the constructed effective tracking gate, probabilistic data association and the strong tracking Kalman filter are combined to form the new multisensor distributed track fusion algorithm. Finally, simulations are performed on both the original algorithm and the algorithm presented.
Review of the convolution algorithm for evaluating service integrated systems
DEFF Research Database (Denmark)
Iversen, Villy Bæk
1997-01-01
In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with e.g. BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation...
Microfabricated Microwave-Integrated Surface Ion Trap
Revelle, Melissa C.; Blain, Matthew G.; Haltli, Raymond A.; Hollowell, Andrew E.; Nordquist, Christopher D.; Maunz, Peter
2017-04-01
Quantum information processing holds the key to solving computational problems that are intractable with classical computers. Trapped ions are a physical realization of a quantum information system in which qubits are encoded in hyperfine energy states. Coupling the qubit states to ion motion, as needed for two-qubit gates, is typically accomplished using Raman laser beams. Alternatively, this coupling can be achieved with strong microwave gradient fields. While microwave radiation is easier to control than a laser, it is challenging to precisely engineer the radiated microwave field. Taking advantage of Sandia's microfabrication techniques, we created a surface ion trap with integrated microwave electrodes with sub-wavelength dimensions. This multi-layered device permits co-location of the microwave antennae and the ion trap electrodes to create localized microwave gradient fields and necessary trapping fields. Here, we characterize the trap design and present simulated microwave performance with progress towards experimental results. This research was funded, in part, by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA).
International Nuclear Information System (INIS)
Mesgarani, H; Parmour, P; Aghazadeh, N
2010-01-01
In this paper, we apply Aitken extrapolation and the epsilon algorithm as acceleration techniques for the solution of a weakly singular nonlinear Volterra integral equation of the second kind. Following Tao and Yong (2006, J. Math. Anal. Appl. 324, 225-37), the integral equation is solved by Navot's quadrature formula. Tao and Yong (2006) were also the first to apply Richardson extrapolation to accelerate convergence for weakly singular nonlinear Volterra integral equations of the second kind. To our knowledge, this paper may be the first attempt to apply Aitken extrapolation and the epsilon algorithm to such equations.
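Both accelerators named in the abstract are short enough to state directly. The sketch below applies Aitken's delta-squared transform and Wynn's epsilon algorithm to a generic slowly convergent sequence; the quadrature side (Navot's formula) is not reproduced.

```python
def aitken(seq):
    # Aitken's delta-squared acceleration:
    # s_n'' = s_{n+2} - (s_{n+2} - s_{n+1})^2 / (s_{n+2} - 2 s_{n+1} + s_n)
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        d2 = s2 - 2.0 * s1 + s0
        out.append(s2 - (s2 - s1) ** 2 / d2 if d2 != 0 else s2)
    return out

def epsilon_algorithm(seq):
    # Wynn's epsilon algorithm: eps_{k+1}(i) = eps_{k-1}(i+1)
    # + 1 / (eps_k(i+1) - eps_k(i)); even columns hold the estimates.
    n = len(seq)
    prev = [0.0] * (n + 1)   # eps_{-1} column (all zeros)
    cur = list(seq)          # eps_0 column (the partial sums)
    best, col = cur[-1], 0
    while len(cur) > 1:
        nxt = []
        for i in range(len(cur) - 1):
            diff = cur[i + 1] - cur[i]
            if diff == 0:
                return best
            nxt.append(prev[i + 1] + 1.0 / diff)
        prev, cur = cur, nxt
        col += 1
        if col % 2 == 0:     # odd columns are only auxiliary quantities
            best = cur[-1]
    return best
```

On the partial sums of the alternating harmonic series (limit ln 2), nine terms give roughly three extra digits from Aitken and far more from the epsilon algorithm, which is why these transforms are attractive for slowly convergent quadrature sequences.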
Kuzishchin, V. F.; Merzlikina, E. I.; Van Va, Hoang
2017-11-01
The problem of tuning PID and PI algorithms by approximating, via the least squares method, the frequency response of a linear algorithm to that of a sub-optimal algorithm is considered. The advantage of the method is that the parameter values are obtained in a single calculation cycle. Recommendations are given on how to choose the parameters of the least squares method with the plant dynamics taken into account: the filter time constant, the approximation frequency range, and the correction coefficient for the time-delay parameter. The problem is considered for integrating plants in some practical cases (the level control system in a boiler drum). The transfer function of the suboptimal algorithm is determined with respect to a disturbance acting at the point of the control input, which is typical for thermal plants. The recommendations also take into account that overshoot during the transient response to a setpoint change is limited. For comparison, the systems under consideration are also tuned by the classical method with a limited frequency oscillation index. The results given in the paper can be used by specialists tuning systems with integrating plants.
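The core of the tuning idea, fitting a controller's frequency response to a target response by least squares, can be sketched as follows. The target here is an assumed known complex response sampled on a frequency grid (standing in for the sub-optimal algorithm), and a PI structure C(jw) = Kp + Ki/(jw) keeps the normal equations 2x2; this is an illustration of the fitting step only, not the paper's full procedure.

```python
def fit_pi_to_response(omegas, target):
    # Least-squares fit of C(jw) = Kp + Ki/(jw) to a target complex
    # frequency response, stacking real and imaginary parts into a
    # real least-squares problem solved via 2x2 normal equations.
    rows, rhs = [], []
    for w, t in zip(omegas, target):
        b1, b2 = 1 + 0j, 1.0 / (1j * w)   # basis responses for Kp and Ki
        rows.append((b1.real, b2.real)); rhs.append(t.real)
        rows.append((b1.imag, b2.imag)); rhs.append(t.imag)
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    c1 = sum(r[0] * y for r, y in zip(rows, rhs))
    c2 = sum(r[1] * y for r, y in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    kp = (c1 * a22 - a12 * c2) / det
    ki = (a11 * c2 - a12 * c1) / det
    return kp, ki
```

Because the problem is linear in Kp and Ki, the parameters indeed come out in one calculation cycle, which is the advantage the abstract emphasizes.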
Algorithms for singularities and real structures of weak Del Pezzo surfaces
Lubbes, Niels
2014-01-01
. Wall, Real forms of smooth del Pezzo surfaces, J. Reine Angew. Math. 1987(375/376) (1987) 47-66, ISSN 0075-4102] of weak Del Pezzo surfaces from an algorithmic point of view. It is well-known that the singularities of weak Del Pezzo surfaces correspond
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Yan, Kang K; Zhao, Hongyu; Pang, Herbert
2017-12-06
High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest to improve human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. To provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based, which depict relationships as graphs with subjects denoted by nodes and relationships denoted by edges, and kernel-based, which generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms; graph-based algorithms have the advantage of faster computation. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.
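As a concrete instance of the graph-based semi-supervised class compared in the paper, the following is a minimal label-propagation sketch: unlabeled nodes repeatedly take the average score of their neighbors while labeled seeds stay clamped, and the sign of the converged score gives the predicted binary trait. This illustrates the idea only; it is not one of the exact methods benchmarked.

```python
def label_propagation(adj, labels, iters=200):
    # adj: dict node -> list of neighbors; labels: dict node -> +1/-1 seeds.
    # Unlabeled scores start at 0 and relax to the harmonic solution.
    score = {v: float(labels.get(v, 0.0)) for v in adj}
    for _ in range(iters):
        new = {}
        for v, nbrs in adj.items():
            if v in labels:
                new[v] = float(labels[v])                  # clamp seeds
            else:
                new[v] = sum(score[u] for u in nbrs) / len(nbrs)
        score = new
    return {v: (1 if s >= 0 else -1) for v, s in score.items()}
```

On a graph with two densely connected communities joined by one weak edge, a single seed per community is enough for every node to inherit its community's label.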
Directory of Open Access Journals (Sweden)
Ying Qu
2015-01-01
Surface albedo is one of the key controlling geophysical parameters in surface energy budget studies, and its temporal and spatial variation is closely related to global climate change and regional weather systems through the albedo feedback mechanism. As an efficient tool for monitoring the surfaces of the Earth, remote sensing has been widely used in recent decades for deriving long-term surface broadband albedo from various geostationary and polar-orbit satellite platforms. Moreover, the algorithms for estimating surface broadband albedo from satellite observations, including narrow-to-broadband conversions, bidirectional reflectance distribution function (BRDF) angular modeling, the direct-estimation algorithm and algorithms for estimating albedo from geostationary satellite data, have been developed and improved. In this paper, we present a comprehensive literature review of algorithms and products for mapping surface broadband albedo with satellite observations and discuss the different algorithms and products in a historical perspective based on citation analysis of the published literature. This paper shows that observation technologies and the accuracy requirements of applications are important, and that long-term, globally fully-covered (including land, ocean, and sea-ice surfaces), gap-free surface broadband albedo products with higher spatial and temporal resolution are required for climate change, surface energy budget, and hydrological studies.
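The narrow-to-broadband conversion mentioned above is, in its simplest form, a regressed linear combination of narrowband reflectances. A sketch with placeholder coefficients follows; operational coefficients are fitted per sensor and band set, so the numbers here are purely illustrative.

```python
def narrow_to_broadband(narrowband, coeffs, intercept=0.0):
    # Broadband albedo as a linear combination of narrowband reflectances:
    # alpha = c0 + sum_i c_i * rho_i.
    # The coefficients are illustrative; real values come from regression
    # against radiative-transfer simulations or field measurements.
    if len(narrowband) != len(coeffs):
        raise ValueError("one coefficient per band required")
    return intercept + sum(c * r for c, r in zip(coeffs, narrowband))
```

For example, with two bands weighted equally at 0.5 and an intercept of 0.01, reflectances (0.1, 0.2) map to a broadband albedo of 0.16 under these made-up coefficients.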
Bulyha, Alena; Heitzinger, Clemens
2011-01-01
In this work, a Monte-Carlo algorithm in the constant-voltage ensemble for the calculation of 3d charge concentrations at charged surfaces functionalized with biomolecules is presented. The motivation for this work is the theoretical understanding
A new algorithm for the integration of exponential and logarithmic functions
Rothstein, M.
1977-01-01
An algorithm for symbolic integration of functions built up from the rational functions by repeatedly applying either the exponential or logarithm functions is discussed. This algorithm does not require polynomial factorization nor partial fraction decomposition and requires solutions of linear systems with only a small number of unknowns. It is proven that if this algorithm is applied to rational functions over the integers, a computing time bound for the algorithm can be obtained which is a polynomial in a bound on the integer length of the coefficients, and in the degrees of the numerator and denominator of the rational function involved.
Assessment of available integration algorithms for initial value ordinary differential equations
International Nuclear Information System (INIS)
Carver, M.B.; Stewart, D.G.
1979-11-01
There exists an extremely large number of algorithms designed for the ordinary differential equation initial value problem. The integration is normally done by a finite sum at time intervals which are chosen dynamically to satisfy an imposed error tolerance. This report describes the basic logistics of the integration process, identifies common areas of difficulty, and establishes a comprehensive test profile for integration algorithms. A number of algorithms are described, and selected published subroutines are evaluated using the test profile. The report concludes that an effective library for general use need contain only two such routines. The two selected are versions of the well-known Gear and Runge-Kutta-Fehlberg algorithms. Full documentation and listings are included. (auth)
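The dynamic choice of time intervals against an imposed error tolerance can be illustrated with the smallest embedded pair, Heun(2)/Euler(1): the gap between the two estimates serves as the local error and drives the step size, which is the same control idea used in Runge-Kutta-Fehlberg. This is a sketch of the mechanism, not one of the evaluated library routines.

```python
def integrate_adaptive(f, t0, y0, t_end, tol=1e-6):
    # Embedded Heun(2)/Euler(1) pair with error-based step-size control.
    t, y, h = t0, y0, 0.01
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                 # first-order (Euler) estimate
        y_high = y + 0.5 * h * (k1 + k2)   # second-order (Heun) estimate
        err = abs(y_high - y_low)          # local error estimate
        if err <= tol:
            t, y = t + h, y_high           # accept the step
        # grow or shrink the step from the error estimate (with safety 0.9)
        h *= min(4.0, max(0.1, 0.9 * (tol / err) ** 0.5)) if err > 0 else 2.0
    return y
```

On the test problem y' = -y, y(0) = 1, the routine tracks exp(-t) to well within the requested tolerance while choosing its own step sizes.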
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after optimal EOF mode determination, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm, the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed by using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as a measure of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
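The EOF-based reconstruction loop at the heart of all DINEOF variants can be illustrated in a stripped-down, single-EOF form: fill the gaps, extract the dominant EOF (here by plain power iteration), rebuild the matrix from it, overwrite only the gaps, and repeat until the filled values settle. This toy omits the cross-validated choice of the number of EOF modes that the real algorithms perform.

```python
def rank1_eof_fill(M, mask, iters=300):
    # DINEOF-style gap filling reduced to one EOF. M is a list-of-lists
    # matrix; mask[i][j] is True where M[i][j] is observed.
    rows, cols = len(M), len(M[0])
    X = [[M[i][j] if mask[i][j] else 0.0 for j in range(cols)]
         for i in range(rows)]
    v = [1.0] * cols
    for _ in range(iters):
        # one power-iteration step for the dominant singular triplet
        u = [sum(X[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(X[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        # rank-1 reconstruction u v^T overwrites only the gaps
        for i in range(rows):
            for j in range(cols):
                if not mask[i][j]:
                    X[i][j] = u[i] * v[j]
    return X
```

On an exactly rank-1 matrix with a few hidden entries, the loop recovers the hidden values, which is the idealized behavior the SST reconstructions approximate.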
AntStar: Enhancing Optimization Problems by Integrating an Ant System and A⁎ Algorithm
Directory of Open Access Journals (Sweden)
Mohammed Faisal
2016-01-01
Recently, nature-inspired techniques have become valuable to many intelligent systems in different fields of technology and science. Among these techniques, Ant Systems (AS) have become a particularly useful tool. AS is a computational system inspired by the foraging behavior of ants and intended to solve practical optimization problems. In this paper, we introduce the AntStar algorithm, which is swarm-intelligence based. AntStar enhances the optimization and performance of an AS by integrating the AS and A⁎ algorithm. The AntStar algorithm has been applied to the single-source shortest-path problem to demonstrate its efficiency. The experimental results illustrate the robustness and accuracy of the AntStar algorithm.
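The A⁎ half of AntStar is the classical best-first search with an admissible heuristic. A minimal grid version (Manhattan heuristic, unit step costs) is sketched below, independent of the ant-system component; the AntStar coupling itself is not reproduced.

```python
import heapq

def a_star(grid, start, goal):
    # A* on a 4-connected grid of 0 (free) / 1 (blocked) cells,
    # with the Manhattan distance as an admissible heuristic.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        f, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g                    # cost of the shortest path
        if g > best_g.get(cur, float('inf')):
            continue                    # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                         # goal unreachable
```

Because the Manhattan heuristic never overestimates the remaining cost on a unit-cost grid, the first pop of the goal is guaranteed to carry the optimal path length.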
From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth
International Nuclear Information System (INIS)
Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.
2000-01-01
We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
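The update rule studied here is easy to simulate directly: each site carries a local simulated time, and only sites that are local minima of the time horizon advance, by exponentially distributed (Poisson-arrival) increments. The measured fraction of advancing sites is the algorithm's utilization, which the abstract identifies with the density of local minima of the Edwards-Wilkinson-governed surface. A small one-dimensional sketch with periodic boundaries:

```python
import random

def utilization(n_sites=200, steps=1000, seed=1):
    # Conservative parallel discrete-event update rule in 1-D:
    # a site advances its local time only if it is a local minimum
    # of the time horizon (periodic boundaries, Poisson increments).
    rng = random.Random(seed)
    tau = [0.0] * n_sites
    active = 0
    for _ in range(steps):
        minima = [i for i in range(n_sites)
                  if tau[i] <= tau[i - 1] and tau[i] <= tau[(i + 1) % n_sites]]
        for i in minima:
            tau[i] += rng.expovariate(1.0)
        active += len(minima)
    return active / (n_sites * steps)
```

The measured utilization settles to a nonzero constant independent of system size, which is the simulation counterpart of the asymptotic scalability result.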
International Nuclear Information System (INIS)
Chang, P.; Lee, S.Y.; Yan, Y.T.
2006-01-01
A differential algebraic integration algorithm is developed for symplectic mapping through a three-dimensional (3-D) magnetic field. The self-consistent reference orbit in phase space is obtained by making a canonical transformation to eliminate the linear part of the Hamiltonian. Transfer maps from the entrance to the exit of any 3-D magnetic field are then obtained through slice-by-slice symplectic integration. The particle phase-space coordinates are advanced by using the integrable polynomial procedure. This algorithm is a powerful tool to attain nonlinear maps for insertion devices in synchrotron light source or complicated magnetic field in the interaction region in high energy colliders
International Nuclear Information System (INIS)
Chang, P
2004-01-01
A differential algebraic integration algorithm is developed for symplectic mapping through a three-dimensional (3-D) magnetic field. The self-consistent reference orbit in phase space is obtained by making a canonical transformation to eliminate the linear part of the Hamiltonian. Transfer maps from the entrance to the exit of any 3-D magnetic field are then obtained through slice-by-slice symplectic integration. The particle phase-space coordinates are advanced by using the integrable polynomial procedure. This algorithm is a powerful tool to attain nonlinear maps for insertion devices in synchrotron light source or complicated magnetic field in the interaction region in high energy colliders
Energy Technology Data Exchange (ETDEWEB)
Mărăscu, V.; Dinescu, G. [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest-Magurele (Romania); Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Chiţescu, I. [Faculty of Mathematics and Computer Science, University of Bucharest, 14 Academiei Street, Bucharest (Romania); Barna, V. [Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B., E-mail: mitub@infim.ro [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest-Magurele (Romania)
2016-03-25
In this paper we propose a statistical approach for describing the self-assembling of sub-micronic polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Further, an adaptive thresholding method was applied for obtaining binary images. The next step consisted in automatic identification of polystyrene beads dimensions, by using the Hough transform algorithm, according to beads radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) was applied. By combining these algorithms we obtain a powerful and fast statistical tool for analysis of micro and nanomaterials with features regularly distributed on the surface upon SEM examination.
International Nuclear Information System (INIS)
Mărăscu, V.; Dinescu, G.; Chiţescu, I.; Barna, V.; Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B.
2016-01-01
In this paper we propose a statistical approach for describing the self-assembling of sub-micronic polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Further, an adaptive thresholding method was applied for obtaining binary images. The next step consisted in automatic identification of polystyrene beads dimensions, by using the Hough transform algorithm, according to beads radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) was applied. By combining these algorithms we obtain a powerful and fast statistical tool for analysis of micro and nanomaterials with features regularly distributed on the surface upon SEM examination.
Braking distance algorithm for autonomous cars using road surface recognition
Kavitha, C.; Ashok, B.; Nanthagopal, K.; Desai, Rohan; Rastogi, Nisha; Shetty, Siddhanth
2017-11-01
India is yet to accept semi- or fully-autonomous cars, and one of the reasons is the loss of control on bad roads. For better handling on these roads, advanced braking is required, which can be achieved by introducing electronics into conventional braking. In recent years, automation in braking systems has brought benefits such as traction control and anti-lock braking systems. This research work describes and experimentally evaluates a method for recognizing the road surface profile and calculating the braking distance. An ultrasonic surface recognition sensor mounted underneath the car sends a high-frequency wave onto the road surface; a receiver within the sensor measures the time taken for the wave to rebound and thus calculates the distance from the point where the sensor is mounted. A displacement graph is plotted based on the sensor output. A relationship can then be derived between the displacement plot and the roughness index, from which the friction coefficient is computed in Matlab continuously over the distance travelled. Since this is a non-contact type of profiling, it is non-destructive. The friction coefficient values received in real time are used to calculate the optimum braking distance. This system, when installed on ordinary cars, can also be used to create a database of road surfaces, especially in cities, which can be shared with other cars. This will help in navigation as well as making cars more efficient.
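Once a friction coefficient has been estimated from the surface profile, the braking-distance step reduces to an energy balance: kinetic energy equals friction work, giving d = v^2 / (2 mu g), plus any reaction-time rollout. The sketch below covers only that final calculation; the ultrasonic profiling and the roughness-to-friction mapping are not reproduced.

```python
def braking_distance(speed_mps, mu, reaction_time=0.0, g=9.81):
    # Idealized stopping distance on a surface with friction coefficient mu:
    # reaction-time rollout plus kinetic-energy dissipation v^2 / (2 mu g).
    if mu <= 0:
        raise ValueError("friction coefficient must be positive")
    return speed_mps * reaction_time + speed_mps ** 2 / (2.0 * mu * g)
```

The model makes the safety argument of the paper quantitative: halving the friction coefficient (e.g. a wet or broken surface) doubles the braking component of the stopping distance.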
An algorithm to retrieve Land Surface Temperature using Landsat-8 ...
African Journals Online (AJOL)
Ayodeji Ogunode;Mulemwa Akombelwa
The results show temperature variation over a long period of time can be ... Remote sensing of LST using infrared radiation gives the average surface temperature of the scene ... advantage over previous Landsat series. ... Li, F., Jackson, T. J., Kustas, W. P., Schmugge, T. J., French, A. N., Cosh, M. H. & Bindlish, R. 2004.
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-01-01
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments were conducted with real data of Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
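For readers unfamiliar with the building block being extended, here is the scalar Kalman filter in its entirety: predict by inflating the variance with process noise, then correct with the innovation weighted by the Kalman gain. The adaptive and H-infinity layers of the proposed algorithm sit on top of exactly this loop and are not reproduced here.

```python
def kalman_1d(measurements, q=1e-5, r=0.1):
    # Scalar Kalman filter for a (nearly) constant state.
    # q: process-noise variance, r: measurement-noise variance.
    x, p = 0.0, 1.0            # state estimate and its variance
    for z in measurements:
        p += q                 # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the innovation z - x
        p *= (1.0 - k)         # posterior variance shrinks
    return x
```

Fed noisy observations of a constant, the estimate converges to the underlying value while the gain, and hence the sensitivity to each new measurement, decays.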
A Novel Integrated Algorithm for Wind Vector Retrieval from Conically Scanning Scatterometers
Directory of Open Access Journals (Sweden)
Xuetong Xie
2013-11-01
Due to the lower efficiency and larger wind direction error of traditional algorithms, a novel integrated wind retrieval algorithm is proposed for conically scanning scatterometers. The proposed algorithm has the dual advantages of lower computational cost and higher wind direction retrieval accuracy, achieved by integrating the wind speed standard deviation (WSSD) algorithm and the wind direction interval retrieval (DIR) algorithm. It adopts the wind speed standard deviation as a criterion for searching possible wind vector solutions and retrieves a potential wind direction interval based on the change rate of the wind speed standard deviation. Moreover, a modified three-step ambiguity removal method is designed to let more wind directions be considered in the process of nudging and filtering. The performance of the new algorithm is illustrated by retrieval experiments using 300 orbits of SeaWinds/QuikSCAT L2A data (backscatter coefficients at 25 km resolution) and co-located buoy data. Experimental results indicate that the new algorithm can evidently enhance wind direction retrieval accuracy, especially in the nadir region. In comparison with the SeaWinds L2B Version 2 25 km selected wind product (retrieved wind fields), an improvement of 5.1° in wind direction retrieval can be achieved by the new algorithm for that region.
Integrated Management of Residential Energy Resources: Models, Algorithms and Application
Soares, Ana Raquel Gonçalves
2016-01-01
Doctoral thesis in Sustainable Energy Systems, presented to the Department of Mechanical Engineering of the Faculty of Sciences and Technology of the University of Coimbra. The gradual development of electricity networks into smart(er) grids is expected to provide the technological infrastructure allowing the deployment of new tariff structures and creating the enabling environment for the integrated management of energy resources. The suitable stimuli, for example induced by dynamic tariffs...
Kitazono, Jun; Kanai, Ryota; Oizumi, Masafumi
2018-03-01
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ($\\Phi$) in the brain is related to the level of consciousness. IIT proposes that to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that if a measure of $\\Phi$ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of $\\Phi$ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of $\\Phi$ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure $\\Phi$ in large systems within a practical amount of time.
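The combinatorial obstacle described here, exhaustive search over partitions, can be made concrete with a toy stand-in: score each bipartition by its total cross-partition connection weight (a simple proxy for information loss, not an actual measure of integrated information) and scan all of them. The exponential growth of this loop with system size is precisely what the submodularity-based algorithm avoids.

```python
from itertools import combinations

def min_information_bipartition(weights):
    # Brute-force search for the bipartition minimizing the total
    # cross-partition connection weight -- a toy proxy for the loss
    # minimized by the Minimum Information Partition (MIP).
    n = len(weights)
    nodes = range(n)
    best = (float('inf'), None)
    for k in range(1, n // 2 + 1):          # one side up to half the nodes
        for part in combinations(nodes, k):
            side = set(part)
            loss = sum(weights[i][j]
                       for i in side for j in nodes if j not in side)
            if loss < best[0]:
                best = (loss, side)
    return best                              # (minimal loss, one side)
```

For a system of two tightly coupled modules with weak inter-module links, the minimizing cut falls between the modules, mirroring how the MIP isolates the most weakly integrated split.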
Bunai, Tasya; Rokhmatuloh; Wibowo, Adi
2018-05-01
In this paper, two methods to retrieve land surface temperature (LST) from the thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono-window algorithm developed by Qin et al. and the second is the split-window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate algorithm by calculating the root mean square error (RMSE). Finally, we compare the spatial distributions of land surface temperature obtained by both algorithms; the more accurate one is the split-window algorithm, with an RMSE of 7.69 °C.
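A generic split-window form illustrates the second method's structure: the band-10 brightness temperature is corrected with terms in the band-10/band-11 difference plus an emissivity term. The coefficients below are placeholders for illustration only; the cited algorithms derive sensor-specific values (and typically include atmospheric water vapour terms omitted here).

```python
def split_window_lst(t10, t11, emissivity=0.98,
                     c0=-0.268, c1=1.378, c2=0.183, c3=54.3):
    # Generic split-window form (illustrative coefficients only):
    # LST = T10 + c1*(T10 - T11) + c2*(T10 - T11)^2 + c0
    #       + c3*(1 - emissivity)
    # where T10, T11 are brightness temperatures (K) of TIRS bands 10, 11.
    diff = t10 - t11
    return t10 + c1 * diff + c2 * diff ** 2 + c0 + c3 * (1.0 - emissivity)
```

The band difference carries the atmospheric correction: the larger the band-10/band-11 spread, the more absorption is compensated, which is why split-window schemes can outperform single-band mono-window retrievals when their coefficients are well calibrated.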
Synthetic aperture integration (SAI) algorithm for SAR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W; Beer, N. Reginald
2013-07-09
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
Energy Technology Data Exchange (ETDEWEB)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A., E-mail: mcewen.24@osu.edu, E-mail: fang.307@osu.edu, E-mail: hirata.10@osu.edu, E-mail: blazek@berkeley.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
Hou, Ying-Yu; He, Yan-Bo; Wang, Jian-Lin; Tian, Guo-Liang
2009-10-01
Based on the time series 10-day composite NOAA Pathfinder AVHRR Land (PAL) dataset (8 km x 8 km), and by using the land surface energy balance equation and the "VI-Ts" (vegetation index-land surface temperature) method, a new algorithm for land surface evapotranspiration (ET) was constructed. This new algorithm does not need support from meteorological observation data; all of its parameters and variables are directly inverted or derived from remote sensing data. A widely accepted remote sensing ET model, the SEBS model, was chosen to validate the new algorithm. The validation test showed that the ET values and their seasonal variation trends estimated by the SEBS model and the new algorithm agreed well, suggesting that the ET estimated from the new algorithm is reliable and able to reflect the actual land surface ET. The new remote sensing ET algorithm is practical and operational, offering a new approach to studying the spatiotemporal variation of ET at continental and global scales based on long-term time series of satellite remote sensing images.
Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface
International Nuclear Information System (INIS)
Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho
2005-01-01
While the SIMPLE algorithm is most widely used for simulations of flow phenomena in industrial equipment and manufacturing processes, it is less often adopted for simulations of free surface flow. Though the SIMPLE algorithm is free from a time-step limitation, the free surface behavior imposes its own restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme must solve simultaneous equations in its procedure. If the computation time of the SIMPLE algorithm can be reduced for unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm (MoSIMPLE) is presented for free surface flow. The broken water column problem is adopted for validation of the modified algorithm and for comparison to the conventional SIMPLE algorithm.
Energy Technology Data Exchange (ETDEWEB)
Ngonkham, S. [Khonkaen Univ., Amphur Muang (Thailand). Dept. of Electrical Engineering; Buasri, P. [Khonkaen Univ., Amphur Muang (Thailand). Embed System Research Group
2009-03-11
A harmony search (HS) algorithm was used to optimize economic dispatch (ED) in a wind energy conversion system (WECS) for power system integration. The HS algorithm was based on a stochastic random search method. System costs for the WECS system were estimated in relation to average wind speeds. The HS algorithm was implemented to optimize the ED with a simple programming procedure. The study showed that the initial parameters must be carefully selected to ensure the accuracy of the HS algorithm. The algorithm demonstrated that total costs of the WECS system were higher than costs associated with energy efficiency procedures that reduced the same amount of greenhouse gas (GHG) emissions. 7 refs., 10 tabs., 16 figs.
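A generic harmony search loop, stripped of the economic-dispatch constraints of the study, shows where the initial parameters enter (memory size, memory-consideration rate hmcr, pitch-adjustment rate par, bandwidth bw) and why they must be selected carefully. This is a minimal sketch on a continuous test function, not the WECS cost model.

```python
import random

def harmony_search(f, bounds, hms=12, hmcr=0.9, par=0.3, bw=0.05,
                   iters=4000, seed=7):
    # Basic harmony search: build a new harmony by memory consideration
    # (prob. hmcr) with optional pitch adjustment (prob. par), or by a
    # fresh random value; replace the worst memory member when improved.
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = mem[rng.randrange(hms)][d]       # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)        # pitch adjustment
            else:
                x = rng.uniform(lo, hi)              # random selection
            new.append(min(hi, max(lo, x)))          # keep within bounds
        c = f(new)
        worst = max(range(hms), key=cost.__getitem__)
        if c < cost[worst]:
            mem[worst], cost[worst] = new, c
    best = min(range(hms), key=cost.__getitem__)
    return mem[best], cost[best]
```

The sensitivity noted in the abstract is visible here: too small a bandwidth stalls refinement, while too small an hmcr degenerates the search into random sampling.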
Velocity control of servo systems using an integral retarded algorithm.
Ramírez, Adrián; Garrido, Rubén; Mondié, Sabine
2015-09-01
This paper presents a design technique for the delay-based controller called Integral Retarded (IR) and its application to velocity control of servo systems. Using spectral analysis, the technique yields a tuning strategy for the IR by assigning a triple real dominant root to the closed-loop system. This result guarantees a desired exponential decay rate σ_d while expressing the IR tuning as an explicit function of σ_d and the system parameters. The intentional introduction of delay allows using noisy velocity measurements without additional filtering. The structure of the controller can also avoid velocity measurements altogether by using position information instead. The IR is compared to a classical PI controller, both tested on a laboratory prototype.
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.; Eivazy, H.
2009-01-01
This paper presents an integrated data envelopment analysis (DEA)-corrected ordinary least squares (COLS)-stochastic frontier analysis (SFA)-principal component analysis (PCA)-numerical taxonomy (NT) algorithm for performance assessment, optimization and policy making of electricity distribution units. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study proposes an integrated, flexible approach to rank the units and choose the best version of the DEA method for optimization and policy making purposes. It covers both static and dynamic aspects of the information environment owing to the involvement of SFA, which is finally compared with the best DEA model through the Spearman correlation technique. The integrated approach yields improved ranking and optimization of electricity distribution systems. To illustrate the usability and reliability of the proposed algorithm, 38 electricity distribution units in Iran have been ranked and optimized using the proposed algorithm.
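The final comparison step — ranking the units with two methods and measuring agreement via the Spearman rank correlation — can be illustrated with a small self-contained sketch. The unit scores below are hypothetical, not taken from the Iranian data set.

```python
def ranks(values):
    """1-based ranks, with average ranks assigned to ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average position of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

dea_scores = [0.91, 0.78, 0.85, 0.60, 0.73]   # hypothetical DEA efficiencies
sfa_scores = [0.88, 0.80, 0.75, 0.65, 0.70]   # hypothetical SFA efficiencies
rho = spearman(dea_scores, sfa_scores)
```

A rho close to 1 indicates the two methods rank the units consistently; here rho = 0.9 for the invented scores.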
An API for Integrating Spatial Context Models with Spatial Reasoning Algorithms
DEFF Research Database (Denmark)
Kjærgaard, Mikkel Baun
2006-01-01
The integration of context-aware applications with spatial context models is often done using a common query language. However, algorithms that estimate and reason about spatial context information can benefit from a tighter integration. An object-oriented API makes such integration possible and can help reduce the complexity of algorithms, making them easier to maintain and develop. This paper proposes an object-oriented API for context models of the physical environment, together with extensions to a location modeling approach called geometric space trees that provide adequate support for location modeling. The utility of the API is evaluated in several real-world cases from an indoor location system, spanning several types of spatial reasoning algorithms.
DEFF Research Database (Denmark)
Henriksen, Lars
1996-01-01
The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single sonar images or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code and thereby rapid assessment of the algorithms tested. Central to the development of the algorithms is the availability of sonar images. To accommodate this, the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper, the main components of the SSIE are described and examples of different processing steps are given.
Directory of Open Access Journals (Sweden)
Muhammad Farhan Ausaf
2015-12-01
Full Text Available Process planning and scheduling are two important components of a manufacturing setup. It is important to integrate them to achieve better global optimality and improved system performance. Numerous algorithm-based approaches exist for finding optimal solutions to the integrated process planning and scheduling (IPPS) problem. Most of these approaches apply existing meta-heuristic algorithms to the IPPS problem. Although they have been shown to be effective in optimizing the IPPS problem, there is still room for improvement in terms of solution quality and algorithm efficiency, especially for more complicated problems. Dispatching rules have been used successfully for solving complicated scheduling problems, but have not been considered extensively for the IPPS problem. The approach presented here incorporates dispatching rules with the concept of prioritizing jobs, in an algorithm called the priority-based heuristic algorithm (PBHA). PBHA establishes job and machine priorities for selecting operations. Priority assignment and a set of dispatching rules are used simultaneously to generate both the process plans and the schedules for all jobs and machines. The algorithm was tested on a series of benchmark problems. The proposed algorithm achieved superior results for most complex problems presented in recent literature while utilizing fewer computational resources.
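A dispatching rule reduces scheduling to a greedy, priority-driven choice at each decision point. The sketch below applies the classic shortest-processing-time (SPT) rule on identical parallel machines; it is a minimal illustration of rule-based dispatching, not the PBHA itself, and the job data are invented.

```python
import heapq

def spt_schedule(jobs, n_machines):
    """Greedy schedule: dispatch jobs in shortest-processing-time order,
    each to the machine that becomes free first.

    jobs: list of (name, processing_time).
    Returns (makespan, schedule) with schedule entries
    (job, machine, start, end).
    """
    machines = [(0.0, m) for m in range(n_machines)]   # (free_time, machine id)
    heapq.heapify(machines)
    schedule = []
    for name, p in sorted(jobs, key=lambda j: j[1]):   # the SPT rule
        free, m = heapq.heappop(machines)
        schedule.append((name, m, free, free + p))
        heapq.heappush(machines, (free + p, m))
    makespan = max(end for *_, end in schedule)
    return makespan, schedule

jobs = [("J1", 4), ("J2", 2), ("J3", 7), ("J4", 3), ("J5", 1)]
makespan, sched = spt_schedule(jobs, n_machines=2)   # makespan is 11 here
```

In an IPPS setting, such rules would be applied per operation, with job and machine priorities (as in PBHA) deciding which operation is dispatched next.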
Lining seam elimination algorithm and surface crack detection in concrete tunnel lining
Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling
2016-11-01
Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within the overlapping grid cells are checked to remove the background noise and generate the percolation seed map (PSM). Second, cracks are detected from the PSM by the accelerated percolation algorithm, so that the fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seams and performing percolation denoising. Experimental results show that the proposed algorithm can detect real surface cracks accurately, quickly, and effectively. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seam.
Integrated Surface/subsurface flow modeling in PFLOTRAN
Energy Technology Data Exchange (ETDEWEB)
Painter, Scott L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-10-01
Understanding soil water, groundwater, and shallow surface water dynamics as an integrated hydrological system is critical for understanding the Earth’s critical zone, the thin outer layer at our planet’s surface where vegetation, soil, rock, and gases interact to regulate the environment. Computational tools that take this view of soil moisture and shallow surface flows as a single integrated system are typically referred to as integrated surface/subsurface hydrology models. We extend the open-source, highly parallel, subsurface flow and reactive transport simulator PFLOTRAN to accommodate surface flows. In contrast to most previous implementations, we do not represent a distinct surface system. Instead, the vertical gradient in hydraulic head at the land surface is neglected, which allows the surface flow system to be eliminated and incorporated directly into the subsurface system. This tight coupling approach leads to a robust capability and also greatly simplifies implementation in existing subsurface simulators such as PFLOTRAN. Successful comparisons to independent numerical solutions build confidence in the approximation and implementation. Example simulations of the Walker Branch and East Fork Poplar Creek watersheds near Oak Ridge, Tennessee demonstrate the robustness of the approach in geometrically complex applications. The lack of a robust integrated surface/subsurface hydrology capability had been a barrier to PFLOTRAN’s use in critical zone studies. This work addresses that capability gap, thus enabling PFLOTRAN as a community platform for building integrated models of the critical zone.
Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA
Meyer, Christoph
2018-01-01
The integration of differential equations of Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automatize the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.
Evolutionary algorithms approach for integrated bioenergy supply chains optimization
International Nuclear Information System (INIS)
Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji
2009-01-01
In this paper, we propose an optimization model and solution approach for designing and evaluating an integrated system of bioenergy production supply chains (SC) at the local level. Designing SCs that simultaneously utilize a set of bio-resources is the complicated task considered here. The complication arises from the different natures and sources of the bio-resources used in bioenergy production, i.e., wet or dry, agricultural or industrial, etc. Moreover, the different concerns that decision makers should take into account in order to overcome the tradeoff concerns of society and investors, i.e., social, environmental and economic factors, were addressed through multi-criteria optimization. The first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to gBEDS is given, the optimization model is presented, and a case study follows on designing a supply chain of nine bio-resources at Iida city in central Japan.
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have addressed the problem of generating Poisson disks on surfaces, owing to the complicated nature of surfaces. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
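The core idea — give every candidate a random priority and accept it only if no earlier-processed (higher-priority) sample lies within the disk radius — can be sketched in the 2-D Euclidean plane. This sequential sketch keeps the priority logic but omits the intrinsic surface distances and the multithreaded conflict resolution described in the paper.

```python
import math, random

def priority_poisson_disk(width, height, r, n_candidates=3000, seed=1):
    """Generate a 2-D Poisson disk set: every candidate gets a random
    priority; a candidate is accepted only if no already-accepted sample
    lies within distance r (checked via a uniform grid)."""
    rng = random.Random(seed)
    cand = [(rng.random(), rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(n_candidates)]        # (priority, x, y)
    cand.sort()                                  # process in priority order
    cell = r / math.sqrt(2)                      # each cell holds <= 1 sample
    grid = {}
    accepted = []
    for _, x, y in cand:
        gx, gy = int(x / cell), int(y / cell)
        ok = True
        for i in range(gx - 2, gx + 3):          # +/-2 cells covers radius r
            for j in range(gy - 2, gy + 3):
                for px, py in grid.get((i, j), ()):
                    if (px - x) ** 2 + (py - y) ** 2 < r * r:
                        ok = False
        if ok:
            accepted.append((x, y))
            grid.setdefault((gx, gy), []).append((x, y))
    return accepted

samples = priority_poisson_disk(10, 10, r=1.0)
```

Because conflicts are resolved purely by comparing the fixed random priorities, candidates could equally be processed by multiple threads, which is the property the paper exploits.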
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)
2014-09-15
The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, a relatively recent heuristic search method, was used as the inverse solver. By comparing convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.
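As a rough illustration of the optimizer family involved, here is a basic global-best particle swarm with a simple repulsion heuristic added to the velocity update. The exact RPSO formulation used in the paper may differ; the toy objective, coefficient values, and repulsion term below are all assumptions.

```python
import random

def repulsive_pso(objective, bounds, n_particles=20, iterations=300, seed=2):
    """Global-best PSO with a repulsion heuristic: each particle is also
    pushed away from a randomly chosen other particle's personal best,
    which discourages premature collapse of the swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.5       # c3 scales the repulsion term
    for _ in range(iterations):
        for i in range(n_particles):
            other = pbest[rng.randrange(n_particles)]
            for d in range(dim):
                repulse = -rng.random() * (other[d] - pos[i][d])
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d])
                             + c3 * repulse)
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            cost = objective(pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

best, cost = repulsive_pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

In the inverse-analysis setting, `objective` would measure the mismatch between measured and computed radiative heat fluxes for a trial set of surface properties.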
Research on the target coverage algorithms for 3D curved surface
International Nuclear Information System (INIS)
Sun, Shunyuan; Sun, Li; Chen, Shu
2016-01-01
To solve target coverage problems in three-dimensional space, a deployment strategy for the target points is put forward, and the differential evolution (DE) algorithm is used to optimize the location coordinates of the sensor nodes so that all target points on a 3-D surface are covered with a minimal number of sensor nodes. First, a three-dimensional perception model of the sensor nodes is built, and the blind area that arises when sensor nodes sense target points on a 3-D surface is identified. The feasibility of solving the target coverage problem on a 3-D surface with the DE algorithm is then proved theoretically, and the fault tolerance of the algorithm is discussed.
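The DE scheme referenced above follows a fixed recipe: mutate with a scaled difference of two population members, cross over with the parent, and keep the trial only if it is no worse. A minimal box-constrained DE/rand/1/bin sketch, with a toy sphere objective standing in for the sensor-coverage objective:

```python
import random

def differential_evolution(objective, bounds, np_=30, f=0.8, cr=0.9,
                           generations=200, seed=3):
    """DE/rand/1/bin minimization over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [objective(p) for p in pop]
    for _ in range(generations):
        for i in range(np_):
            a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
            jrand = rng.randrange(dim)           # guarantees one mutated gene
            trial = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or d == jrand:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])  # mutation
                else:
                    v = pop[i][d]                # crossover keeps parent gene
                trial.append(min(max(v, lo), hi))
            tc = objective(trial)
            if tc <= cost[i]:                    # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]

# A real coverage objective would penalize uncovered targets and blind areas.
best, best_cost = differential_evolution(lambda x: sum(v * v for v in x),
                                         [(-10, 10)] * 4)
```

For the sensor placement problem, each individual would encode the concatenated coordinates of all sensor nodes, and the objective would count uncovered target points.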
Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.
Wang, Jiao; Deng, Zhiqiang
2017-06-01
A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and particularly shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters, where important coastal resources are located and existing algorithms either are not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
A free surface algorithm in the N3S finite element code for turbulent flows
International Nuclear Information System (INIS)
Nitrosso, B.; Pot, G.; Abbes, B.; Bidot, T.
1995-08-01
In this paper, we present a free surface algorithm which was implemented in the N3S code. Free surfaces are represented by marker particles which move through a mesh. It is assumed that the free surface is located inside each element that contains markers and is surrounded by at least one element with no markers inside. The mesh is then locally adjusted to coincide with the free surface, which is well defined by the forefront marker particles. After describing the governing equations and the N3S solution methods, we present the free surface algorithm. Results obtained for two-dimensional and three-dimensional industrial mould-filling problems are presented. (authors). 5 refs., 2 figs.
Algorithms for singularities and real structures of weak Del Pezzo surfaces
Lubbes, Niels
2014-08-01
In this paper, we consider the classification of singularities [P. Du Val, On isolated singularities of surfaces which do not affect the conditions of adjunction. I, II, III, Proc. Camb. Philos. Soc. 30 (1934) 453-491] and real structures [C. T. C. Wall, Real forms of smooth del Pezzo surfaces, J. Reine Angew. Math. 1987(375/376) (1987) 47-66, ISSN 0075-4102] of weak Del Pezzo surfaces from an algorithmic point of view. It is well-known that the singularities of weak Del Pezzo surfaces correspond to root subsystems. We present an algorithm which computes the classification of these root subsystems. We represent equivalence classes of root subsystems by unique labels. These labels allow us to construct examples of weak Del Pezzo surfaces with the corresponding singularity configuration. Equivalence classes of real structures of weak Del Pezzo surfaces are also represented by root subsystems. We present an algorithm which computes the classification of real structures. This leads to an alternative proof of the known classification for Del Pezzo surfaces and extends this classification to singular weak Del Pezzo surfaces. As an application we classify families of real conics on cyclides. © World Scientific Publishing Company.
Luo, G. Y.; Osypiw, D.; Irle, M.
2003-05-01
The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods using high-priced devices or sophisticated designs may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments was performed to extract the feature of interest: the correlation between the change of amplitude in the relevant vibration frequency band(s) and the surface quality. The graphs of the experimental results demonstrate that the change of amplitude in the selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and that the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed while maintaining a constant spindle motor speed during cutting. This will lead to higher level control and machining rates while keeping dimensional integrity and surface finish within specification.
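The proposed indicator — the root sum square of wavelet power in selected frequency bands — can be reproduced with any discrete wavelet. The sketch below uses a plain Haar transform (a stand-in, not the B-spline wavelets of the paper) on a synthetic vibration signal; the bands containing the added high-frequency "chatter" show a much larger root-sum-square value than for the smooth signal.

```python
import math

def haar_dwt(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def band_rss(signal, levels=4):
    """Root sum square of wavelet coefficients per frequency band,
    highest-frequency band first."""
    out = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        out.append(math.sqrt(sum(x * x for x in d)))
    return out

# Synthetic "vibration": a low-frequency tone, plus high-frequency chatter
# standing in for a degraded surface finish.
n = 1024
smooth = [math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
rough = [smooth[i] + 0.5 * math.sin(2 * math.pi * 200 * i / n) for i in range(n)]
rss_smooth = band_rss(smooth)
rss_rough = band_rss(rough)
```

In a process-control loop, a threshold on the band RSS values could then trigger a reduction of the feed speed.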
International Nuclear Information System (INIS)
Contreras-Astorga, Alonso; Schulze-Halberg, Axel
2015-01-01
We construct a relationship between integral and differential representation of second-order Jordan chains. Conditions to obtain regular potentials through the confluent supersymmetry algorithm when working with the differential representation are obtained using this relationship. Furthermore, it is used to find normalization constants of wave functions of quantum systems that feature energy-dependent potentials. Additionally, this relationship is used to express certain integrals involving functions that are solution of Schrödinger equations through derivatives. (paper)
Integral computer-generated hologram via a modified Gerchberg-Saxton algorithm
International Nuclear Information System (INIS)
Wu, Pei-Jung; Lin, Bor-Shyh; Chen, Chien-Yue; Huang, Guan-Syun; Deng, Qing-Long; Chang, Hsuan T
2015-01-01
An integral computer-generated hologram, which modulates the phase function of an object based on a modified Gerchberg–Saxton algorithm and compiles a digital cryptographic diagram with phase synthesis, is proposed in this study. When the diagram completes position demultiplexing decipherment, multi-angle elemental images can be reconstructed. Furthermore, an integral CGH with a depth of 225 mm and a visual angle of ±11° is projected through the lens array. (paper)
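A minimal 1-D Gerchberg–Saxton loop alternates between the object and Fourier planes, keeping the computed phase and re-imposing the known amplitude in each plane. This sketch uses a naive DFT and an invented target amplitude; the paper's modified GS variant and the integral-imaging multiplexing are not reproduced here.

```python
import cmath, math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (keeps the sketch stdlib-only)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(source_amp, target_amp, iterations=50):
    """Find a phase profile such that a field with amplitude `source_amp`
    transforms to one with (approximately) amplitude `target_amp`."""
    n = len(source_amp)
    field = [a * cmath.exp(2j * math.pi * k / n)         # arbitrary start phase
             for k, a in enumerate(source_amp)]
    for _ in range(iterations):
        far = dft(field)
        # Keep the phase, impose the target amplitude in the Fourier plane.
        far = [t * cmath.exp(1j * cmath.phase(v)) for t, v in zip(target_amp, far)]
        near = dft(far, inverse=True)
        # Keep the phase, impose the source amplitude in the object plane.
        field = [s * cmath.exp(1j * cmath.phase(v)) for s, v in zip(source_amp, near)]
    err = sum((abs(v) - t) ** 2 for v, t in zip(dft(field), target_amp))
    return field, err

n = 32
source = [1.0] * n                                # uniform illumination
target = [2.0 if n // 4 <= k < n // 2 else 0.05 for k in range(n)]
# Match target energy to source energy (Parseval) -- hypothetical setup.
scale = math.sqrt(sum(s * s for s in source) * n / sum(t * t for t in target))
target = [t * scale for t in target]
phase_field, err = gerchberg_saxton(source, target)
```

The classic error-reduction property guarantees the Fourier-plane amplitude error is non-increasing over iterations, which is what makes the loop usable for CGH phase synthesis.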
Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D
2015-07-10
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
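The recursive equations mentioned above are the standard summed-area-table recursion, after which any rectangular sum costs four array reads. A reference (serial) sketch — the row-parallel hardware decomposition of the paper is not reproduced:

```python
def integral_image(img):
    """Summed-area table via the usual recursion:
    ii[y][x] = img[y][x] + ii[y-1][x] + ii[y][x-1] - ii[y-1][x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0                                  # running row sum
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) in O(1)."""
    total = ii[y1][x1]
    if x0:
        total -= ii[y1][x0 - 1]
    if y0:
        total -= ii[y0 - 1][x1]
    if x0 and y0:
        total += ii[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)   # box_sum(ii, 1, 1, 2, 2) gives 5+6+8+9 = 28
```

It is this constant-cost `box_sum` lookup that lets SURF-style detectors evaluate rectangular filters independently of filter size.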
Energy Technology Data Exchange (ETDEWEB)
Krummenacher, P.; Renaud, B.; Marechal, F.; Favrat, D.
2001-07-01
This report presents a new methodological approach for the optimal design of energy-integrated batch processes. The main emphasis is put on indirect and, to some extent, on direct heat exchange networks with the possibility of introducing closed or open storage systems. The study demonstrates the feasibility of optimising with genetic algorithms while highlighting the pros and cons of this type of approach. The study shows that the resolution of such problems should preferably be done in several steps to better target the expected solutions. It is demonstrated that, in spite of relatively long computation times (on PCs), the use of genetic algorithms allows the consideration of both continuous decision variables (size, operational rating of equipment, etc.) and integer variables (related to the structure at design and during operation). A comparison of two optimisation strategies is shown, with a preference for a two-step optimisation scheme. One of the strengths of genetic algorithms is the capacity to accommodate heuristic rules, which can be introduced into the model. However, a rigorous modelling strategy is advocated to improve robustness, together with adequate coding of the decision variables. The practical aspects of the research work are converted into software developed with MATLAB to solve the energy integration of batch processes with a reasonable number of closed or open stores. This software includes the model of superstructures, including the heat exchangers and the storage alternatives, as well as the link to the Struggle algorithm developed at MIT via a dedicated new interface. The package also includes user-friendly pre-processing in EXCEL, which facilitates application to other similar industrial problems. These software developments have been validated on both an academic and an industrial type of problem. (author)
Optimization of Grillages Using Genetic Algorithms for Integrating Matlab and Fortran Environments
Directory of Open Access Journals (Sweden)
Darius Mačiūnas
2012-12-01
Full Text Available The purpose of the paper is to present technology applied for the global optimization of grillage-type pile foundations (hereinafter grillages). The goal of optimization is to obtain the optimal layout of pile placement in the grillages. The problem can be categorized as a topology optimization problem. The objective function is the maximum reactive force emerging in a pile. This reactive force is minimized during the optimization procedure, in which the variables encode the positions of piles beneath the connecting beams. Reactive forces in all piles are computed utilizing an original algorithm implemented in the Fortran programming language. The algorithm is integrated into the MatLab environment, where the optimization procedure is executed utilizing a genetic algorithm. The article also describes the technology enabling the integration of the MatLab and Fortran environments. The authors seek to evaluate the quality of a solution to the problem by analyzing experimental results obtained applying the proposed technology.
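The optimization loop itself is a conventional real-coded genetic algorithm. The sketch below shows one common formulation (tournament selection, blend crossover, Gaussian mutation, elitism) with a toy objective standing in for the Fortran pile-reaction computation; the operators and parameters are assumptions, not the paper's exact configuration.

```python
import random

def genetic_algorithm(objective, bounds, pop_size=40, generations=150,
                      p_mut=0.1, seed=4):
    """Real-coded GA: binary tournament selection, blend crossover,
    Gaussian mutation, and two-individual elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        new_pop = scored[:2]                      # elitism: keep the two best
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 2), key=objective)   # tournament
            p2 = min(rng.sample(pop, 2), key=objective)
            child = []
            for d, (lo, hi) in enumerate(bounds):
                a = rng.random()
                v = a * p1[d] + (1 - a) * p2[d]   # blend crossover
                if rng.random() < p_mut:
                    v += rng.gauss(0, 0.1 * (hi - lo))    # mutation
                child.append(min(max(v, lo), hi))
            new_pop.append(child)
        pop = new_pop
    best = min(pop, key=objective)
    return best, objective(best)

# Toy stand-in objective; the paper minimizes the maximum pile reaction,
# evaluated by the external Fortran routine.
best, best_cost = genetic_algorithm(lambda x: sum((v - 1) ** 2 for v in x),
                                    [(-5, 5)] * 3)
```

In the paper's setup, `objective` would call the compiled Fortran routine (e.g. via a MEX-style bridge from MatLab) with the candidate pile positions and return the maximum reactive force.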
Directory of Open Access Journals (Sweden)
Ravil’ Kudermetov
2018-02-01
Full Text Available Nowadays multi-core processors are installed in almost every modern workstation, but the effective utilization of these computational resources remains a topical question. In this paper the four-point block one-step integration method is considered, a parallel algorithm for this method is proposed, and its Java implementation is discussed. The effectiveness of the proposed algorithm is demonstrated by simulating spacecraft attitude motion. The results of this work can be used for the practical simulation of dynamic systems described by ordinary differential equations. The results are also applicable to the development and debugging of computer programs that integrate the dynamic and kinematic equations of the angular motion of a rigid body.
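The kinematic side of such a simulation — integrating the attitude quaternion from q̇ = ½ q ⊗ (0, ω) — can be sketched with a classic fourth-order Runge–Kutta step standing in for the paper's four-point block one-step method (which is not reproduced here):

```python
import math

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qdot(q, omega):
    """Kinematic equation: q' = 1/2 * q (x) (0, omega), body-frame rates."""
    return tuple(0.5 * c for c in quat_mul(q, (0.0,) + omega))

def rk4_step(q, omega, h):
    k1 = qdot(q, omega)
    k2 = qdot(tuple(a + 0.5 * h * b for a, b in zip(q, k1)), omega)
    k3 = qdot(tuple(a + 0.5 * h * b for a, b in zip(q, k2)), omega)
    k4 = qdot(tuple(a + h * b for a, b in zip(q, k3)), omega)
    return tuple(a + h / 6 * (b + 2*c + 2*d + e)
                 for a, b, c, d, e in zip(q, k1, k2, k3, k4))

# Constant spin about z at 0.1 rad/s for 10 s: expect a 1 rad rotation.
q = (1.0, 0.0, 0.0, 0.0)
omega = (0.0, 0.0, 0.1)
h, steps = 0.01, 1000
for _ in range(steps):
    q = rk4_step(q, omega, h)
angle = 2 * math.acos(max(-1.0, min(1.0, q[0])))
```

A block one-step method advances several of these solution points per step, which is what exposes the parallelism exploited by the Java implementation in the paper.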
Directory of Open Access Journals (Sweden)
Amir Masoud Rahimi
Full Text Available Abstract This paper proposes an integrated algorithm of neuro-fuzzy techniques to examine the complex impact of socio-technical influencing factors on road fatalities. The proposed algorithm can handle complexity, non-linearity and fuzziness in the modeling environment owing to its mechanism. The neuro-fuzzy algorithm for determining the potential influencing factors on road fatalities consists of two phases. In the first phase, intelligent techniques are compared for their accuracy in predicting the fatality rate with respect to socio-technical influencing factors. In the second phase, sensitivity analysis is performed to calculate the pure effect of the potential influencing factors on the fatality rate. The applicability and usefulness of the proposed algorithm are illustrated using data from Iran's provincial road transportation systems for the period 2012-2014. Results show that road design improvement, number of trips, and number of passengers are the factors most influencing the provincial road fatality rate.
DEFF Research Database (Denmark)
Nielsen, Martin Bjerre; Krenk, Steen
2012-01-01
A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternions components and the four conjugate momentum variables via Hamilton’s equations. The introduction of an extended mass matrix leads to a symmetric set of eight...
A simplicial algorithm for testing the integral properties of polytopes : A revision
Yang, Z.F.
1994-01-01
Given an arbitrary polytope P in the n-dimensional Euclidean space R^n, the question is to determine whether P contains an integral point or not. We propose a simplicial algorithm to answer this question based on a specific integer labeling rule and a specific triangulation of R^n. Starting from an …
A Hierarchical Algorithm for Integrated Scheduling and Control With Applications to Power Systems
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dinesen, Peter Juhler; Jørgensen, John Bagterp
2016-01-01
The contribution of this paper is a hierarchical algorithm for integrated scheduling and control via model predictive control of hybrid systems. The controlled system is a linear system composed of continuous control, state, and output variables. Binary variables occur as scheduling decisions in ...
Surface free energy for systems with integrable boundary conditions
International Nuclear Information System (INIS)
Goehmann, Frank; Bortz, Michael; Frahm, Holger
2005-01-01
The surface free energy is the difference between the free energies for a system with open boundary conditions and the same system with periodic boundary conditions. We use the quantum transfer matrix formalism to express the surface free energy in the thermodynamic limit of systems with integrable boundary conditions as a matrix element of certain projection operators. Specializing to the XXZ spin-1/2 chain, we introduce a novel 'finite temperature boundary operator' which characterizes the thermodynamic properties of surfaces related to integrable boundary conditions.
Clements, Logan W.; Chapman, William C.; Dawant, Benoit M.; Galloway, Robert L.; Miga, Michael I.
2008-01-01
A successful surface-based image-to-physical space registration in image-guided liver surgery (IGLS) is critical to provide reliable guidance information to surgeons and pertinent surface displacement data for use in deformation correction algorithms. The current protocol used to perform the image-to-physical space registration involves an initial pose estimation provided by a point based registration of anatomical landmarks identifiable in both the preoperative tomograms and the intraoperati...
An integrated algorithm for hypersonic fluid-thermal-structural numerical simulation
Li, Jia-Wei; Wang, Jiang-Feng
2018-05-01
In this paper, a fluid-structural-thermal integrated method is presented based on the finite volume method. A unified system of integral equations is developed as the governing equations for the physical processes of aero-heating and structural heat transfer. The whole physical field is discretized using an upwind finite volume method. To demonstrate its capability, a numerical simulation of Mach 6.47 flow over a stainless steel cylinder shows good agreement with measured values, and the method dynamically simulates the underlying physical processes. The integrated algorithm thus proves to be efficient and reliable.
A comparison of two open source LiDAR surface classification algorithms
Wade T. Tinkham; Hongyu Huang; Alistair M.S. Smith; Rupesh Shrestha; Michael J. Falkowski; Andrew T. Hudak; Timothy E. Link; Nancy F. Glenn; Danny G. Marks
2011-01-01
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are openly available, free to use, and are supported by published results....
Springback Simulation and Tool Surface Compensation Algorithm for Sheet Metal Forming
International Nuclear Information System (INIS)
Shen Guozhe; Hu Ping; Zhang Xiangkui; Chen Xiaobin; Li Xiaoda
2005-01-01
Springback is an unavoidable defect in the sheet metal forming process. Calculating springback accurately is a major challenge for FEA software. Springback compensation makes the stamped final part conform to the designed part shape by modifying the tool surface, which depends on an accurate springback amount. However, the meshing data produced by numerical simulation are expressed as nodes and elements, and such data cannot be supplied directly as tool surface CAD data. In this paper, a tool surface compensation algorithm based on numerical simulation of the springback process is proposed, in which the independently developed dynamic explicit springback algorithm (DESA) is used to simulate the springback amount. During tool surface compensation, the springback amount of a projected point is obtained by interpolating the springback amounts of the projected element nodes, so the modified values of the tool surface can be calculated in reverse. After repeating the springback and compensation calculations 1-3 times, a reasonable tool surface mesh is obtained. Finally, the FEM data on the compensated tool surface are fitted into a surface by CAD modeling software. The examination of a real industrial part shows the validity of the present method.
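The iterate-simulate-compensate loop described above can be sketched in a few lines. Here `simulate_springback` is only a placeholder for the DESA simulation, and the proportional springback in the usage note is purely illustrative:

```python
def compensate_tool(target, simulate_springback, iters=3):
    """Iteratively modify the tool surface so that, after springback,
    the stamped part matches the designed shape.

    target              -- designed part heights at the mesh nodes
    simulate_springback -- stand-in for a springback simulation that
                           maps a tool surface to the sprung part shape
    """
    tool = list(target)  # start from the nominal design
    for _ in range(iters):
        part = simulate_springback(tool)
        # push each tool node opposite to its springback error
        tool = [t - (p - d) for t, p, d in zip(tool, part, target)]
    return tool
```

For example, with a toy springback that inflates every height by 10%, a few iterations bring the sprung part to within a fraction of a percent of the design shape.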
Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting
Directory of Open Access Journals (Sweden)
ZHU Xiaoxiao
2018-02-01
Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical-threshold adaptive point cloud filter algorithm based on moving surface fitting is proposed. First, noisy points are removed using a statistical histogram method. Second, a grid index is established by grid segmentation, and the surface equation is set up from the lowest points among the neighborhood grids. The real height and the fitted height are calculated, and the difference between them is compared against the elevation threshold. Finally, to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34% respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well adapted and produces highly accurate filtering results.
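A minimal sketch of the grid step, under the simplifying assumption that ground points sit within a fixed threshold of the lowest point in their grid cell (the paper additionally fits a surface through neighbouring minima and adapts the threshold hierarchically):

```python
import numpy as np

def grid_min_filter(points, cell=2.0, threshold=0.5):
    """Label a point as ground when its elevation is within `threshold`
    of the lowest elevation in its grid cell. `points` is an (N, 3)
    array of x, y, z coordinates."""
    points = np.asarray(points, dtype=float)
    # assign each point to a grid cell by its x, y coordinates
    cells = [tuple(c) for c in np.floor(points[:, :2] / cell).astype(int)]
    lowest = {}
    for c, z in zip(cells, points[:, 2]):
        lowest[c] = min(z, lowest.get(c, z))
    # ground mask: elevation close to the cell minimum
    return np.array([z - lowest[c] <= threshold
                     for c, z in zip(cells, points[:, 2])])
```

A point 3 m above its cell's minimum would be rejected as off-terrain, while points near the minimum are kept as ground candidates.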
Laser microtexturing of implant surfaces for enhanced tissue integration
Energy Technology Data Exchange (ETDEWEB)
Ricci, J.L. [Univ. of Medicine and Dentistry of New Jersey, Newark, NJ (United States). Dept. of Orthodontics; Alexander, H. [Orthogen Corp., Springfield, NJ (United States)
2001-07-01
The success or failure of bone and soft tissue-fixed medical devices, such as dental and orthopaedic implants, depends on a complex combination of biological and mechanical factors. These factors are intimately associated with the interface between the implant surface and the surrounding tissue, and are largely determined by the composition, surface chemistry, and surface microgeometry of the implant. The relative contributions of these factors are difficult to assess. This study addresses the contribution of surface microtexture, on a controlled level, to tissue integration. (orig.)
A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements
International Nuclear Information System (INIS)
Yuan, Y B; Piao, W Y; Xu, J B
2007-01-01
The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by the cascaded Butterworth filters. The approximation accuracy will be improved with the increase of the number of the cascaded filters. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so that it has considerable computational efficiency and it is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements
A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements
Yuan, Y. B.; Piao, W. Y.; Xu, J. B.
2007-07-01
The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by the cascaded Butterworth filters. The approximation accuracy will be improved with the increase of the number of the cascaded filters. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so that it has considerable computational efficiency and it is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements.
Inversion of Land Surface Temperature (LST) Using Terra ASTER Data: A Comparison of Three Algorithms
Directory of Open Access Journals (Sweden)
Milton Isaya Ndossi
2016-12-01
Full Text Available Land Surface Temperature (LST) is an important measurement in studies of Earth surface processes. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study compares LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA) and the Planck function. The study used National Oceanic and Atmospheric Administration (NOAA) data to model and compare the results from the three algorithms. The data from the sensor were processed in the Python programming language within a free and open source software package (QGIS) to enable users to make use of the algorithms. The study revealed that all three algorithms are suitable for LST inversion: the Planck function showed the highest accuracy, the SWA a moderate level of accuracy, and the SCA the least. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K and 2.88 K for the Planck function, the SCA and the SWA respectively.
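In its simplest emissivity-free form, the Planck-function branch of such a comparison reduces to inverting the Planck law for brightness temperature. The constants below are the standard radiation constants in micrometre units; atmospheric and emissivity corrections are deliberately omitted:

```python
import numpy as np

C1 = 1.19104e8   # first radiation constant, W um^4 m^-2 sr^-1 (2hc^2)
C2 = 1.43877e4   # second radiation constant, um K (hc/k)

def planck_radiance(T, wav):
    """Spectral radiance of a blackbody at temperature T (K) and
    wavelength wav (micrometres)."""
    return C1 / (wav**5 * (np.exp(C2 / (wav * T)) - 1.0))

def brightness_temperature(L, wav):
    """Invert the Planck law: the temperature that reproduces the
    observed radiance L at wavelength wav (ignoring emissivity and
    atmospheric effects)."""
    return C2 / (wav * np.log(C1 / (wav**5 * L) + 1.0))
```

The inversion is exact algebraically, so a radiance computed at 300 K in a TIR band near 11 um round-trips back to 300 K.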
Çaydaş, Ulaş; Çelik, Mahmut
The present work focuses on the optimization of process parameters in cylindrical surface grinding of AISI 1050 steel with grooved wheels. Response surface methodology (RSM) and genetic algorithm (GA) techniques were merged to optimize the input variables of grinding. The revolution speed of the workpiece, the depth of cut and the number of grooves on the wheel were varied to explore their experimental effects on the surface roughness of the machined bars. Mathematical models relating the input parameters to the response were established using RSM. The developed RSM model was then used as the objective function in the GA to optimize the process parameters.
International Nuclear Information System (INIS)
Snyder, Abigail C.; Jiao, Yu
2010-01-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise in modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
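The nesting of 1-D solvers into a 4-D integrator can be sketched under the simplifying assumption of a fixed Gauss-Legendre rule per axis (the GSL solvers in the abstract are adaptive, so this only illustrates the cascading structure):

```python
import numpy as np

def integrate_4d(f, bounds, n=8):
    """Integrate f(x1, x2, x3, x4) over a 4-D box by nesting 1-D
    Gauss-Legendre rules, one per axis."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    axes = []
    for (a, b) in bounds:
        # map the reference nodes/weights from [-1, 1] to [a, b]
        x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
        w = 0.5 * (b - a) * weights
        axes.append((x, w))
    total = 0.0
    for x1, w1 in zip(*axes[0]):
        for x2, w2 in zip(*axes[1]):
            for x3, w3 in zip(*axes[2]):
                for x4, w4 in zip(*axes[3]):
                    total += w1 * w2 * w3 * w4 * f(x1, x2, x3, x4)
    return total
```

The innermost loop is what a parallelized version would distribute: the n^3 outer node combinations are independent, so each worker can sum its own slice of the tensor-product grid.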
International Nuclear Information System (INIS)
Azadeh, A.; Tarverdian, S.
2007-01-01
This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on a genetic algorithm (GA), computer simulation and design of experiments using stochastic procedures. First, a time-series model is developed as a benchmark for the GA and simulation. Computer simulation is developed to generate random variables for monthly electricity consumption, in order to foresee the effects of probabilistic distributions on monthly electricity consumption. The GA and simulation-based GA models are then developed from the selected time-series model. There are therefore four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, GA and simulation-based GA. ANOVA is used to test the null hypothesis that the four alternatives are equal. If the null hypothesis is accepted, the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise the Duncan multiple range test (DMRT) method of paired comparison is used to select the optimum model, which could be the time series, GA or simulation-based GA. In case of ties, the lowest MAPE value is used as the benchmark. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies consider the best-fit GA model based only on MAPE or relative error results. Second, the proposed algorithm may identify a conventional time series as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assume that GA always provides the best solutions and estimates. To show the applicability and superiority of the proposed algorithm, the monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is applied to the proposed algorithm.
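The MAPE criterion used above to rank the candidate forecasting models is simply:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (in percent) between an observed
    series and a model's forecasts; assumes no actual value is zero."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)
```

For instance, forecasts of 110 and 180 against actuals of 100 and 200 are each 10% off, giving a MAPE of 10%.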
Integrated system of production information processing for surface mines
Energy Technology Data Exchange (ETDEWEB)
Li, K.; Wang, S.; Zeng, Z.; Wei, J.; Ren, Z. [China University of Mining and Technology, Xuzhou (China). Dept of Mining Engineering
2000-09-01
Based on the concepts of geological statistics, mathematical programming, conditional simulation and system engineering, and on the features and duties of each main department in surface mine production, an integrated system for surface mine production information was systematically studied and developed using data warehousing, CAD, object-oriented technology and system integration, which leads to the systematization and automation of information management, data processing, optimization computing and plotting. In this paper, its overall objective, system design, structure, functions and some key techniques are described. 2 refs., 3 figs.
Surfaces immersed in Lie algebras associated with elliptic integrals
International Nuclear Information System (INIS)
Grundland, A M; Post, S
2012-01-01
The objective of this work is to adapt the Fokas–Gel’fand immersion formula to ordinary differential equations written in the Lax representation. The formalism of generalized vector fields and their prolongation structure is employed to establish necessary and sufficient conditions for the existence and integration of immersion functions for surfaces in Lie algebras. As an example, a class of second-order, integrable, ordinary differential equations is considered and the most general solutions for the wavefunctions of the linear spectral problem are found. Several explicit examples of surfaces associated with Jacobian and P-Weierstrass elliptic functions are presented. (paper)
An Extended Genetic Algorithm for Distributed Integration of Fuzzy Process Planning and Scheduling
Directory of Open Access Journals (Sweden)
Shuai Zhang
2016-01-01
Full Text Available The distributed integration of process planning and scheduling (DIPPS) aims to simultaneously arrange the two most important manufacturing stages, process planning and scheduling, in a distributed manufacturing environment. Because it corresponds well to actual conditions, the triangular fuzzy number (TFN) is adopted in DIPPS to represent machine processing and transportation times. In order to solve this problem and obtain the optimal or near-optimal solution, an extended genetic algorithm (EGA) with an innovative three-class encoding method and improved crossover and mutation strategies is proposed. Furthermore, a local enhancement strategy featuring machine replacement and order exchange is added to strengthen the local search capability of the basic genetic algorithm. Experimental verification shows that the EGA achieves satisfactory results in a very short time and demonstrates powerful performance in dealing with the distributed integration of fuzzy process planning and scheduling (DIFPPS).
Directory of Open Access Journals (Sweden)
Hossein Erfani
2009-07-01
Full Text Available Imagine you have traveled to an unfamiliar city. Before you start your daily tour around the city, you need to know a good route. In network theory (NT), this is the traveling salesman problem (TSP). A dynamic programming algorithm is often used to solve this problem. However, when the road network of the city is very complicated and dense, which is usually the case, the algorithm takes too long to find the shortest path. Furthermore, in reality, things are not as simple as those stated in NT. For instance, the cost of travel through the same part of the city may differ at different times. In this project, we have integrated the TSP algorithm with an AI knowledge-based approach and case-based reasoning. With this integration, knowledge about the geographical information and past cases is used to help the TSP algorithm find a solution. This approach dramatically reduces the computation time required for minimum tour finding.
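The dynamic programming algorithm referred to above is usually Held-Karp, which is exact but exponential in the number of cities, which is why dense road networks make it impractical without the kind of knowledge-based pruning the project proposes:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour cost by dynamic programming (Held-Karp).
    dp[(S, j)] is the cost of the cheapest path that starts at city 0,
    visits every city in the bitmask S, and ends at city j."""
    n = len(dist)
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)  # same subset without the endpoint j
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = sum(1 << j for j in range(1, n))
    # close the tour back at city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The state space has O(n * 2^n) entries, a large improvement over the (n-1)! naive enumeration but still far too big for city-scale road graphs.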
A general rough-surface inversion algorithm: Theory and application to SAR data
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed: what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which is discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are compared with respect to their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over bare soil and agricultural fields. Results are shown and compared to ground-truth measurements from these areas. The strength of this general approach to inversion of SAR data is that it can easily be modified for use with any scattering model without changing any of the inversion steps. For the same reason, it is not limited to inversion of rough surfaces and can be applied to any parameterized scattering process.
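The Newton-type least-squares machinery can be sketched generically. The linear toy model in the usage below stands in for the SPM forward model, which is not reproduced here, and the unweighted normal equations omit the stochastic priors the paper incorporates:

```python
import numpy as np

def gauss_newton(forward, jacobian, y_obs, p0, iters=20):
    """Generic Gauss-Newton least-squares fit: at each step solve the
    linearized normal equations J^T J dp = J^T r for the parameter
    update dp, where r is the current residual vector."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y_obs - forward(p)
        J = jacobian(p)
        dp = np.linalg.solve(J.T @ J, J.T @ r)
        p = p + dp
    return p
```

For a linear forward model the method converges in a single step; for the nonlinear scattering models of the paper, convergence depends on the starting point, which is why several Newton-type variants are compared.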
International Nuclear Information System (INIS)
Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling
2011-01-01
To simplify the algorithm for determining the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method for determining the SPR angle, called the fixed-boundary centroid algorithm, is proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method is narrower, the other performance indices are all better than those of the other two centroid methods. The method offers outstanding performance, high speed, good conformity, low error and a high SNR and resolution. It thus has the potential to be widely adopted.
Integrable systems twistors, loop groups, and Riemann surfaces
Hitchin, NJ; Ward, RS
2013-01-01
This textbook is designed to give graduate students an understanding of integrable systems via the study of Riemann surfaces, loop groups, and twistors. The book has its origins in a series of lecture courses given by the authors, all of whom are internationally known mathematicians and renowned expositors. It is written in an accessible and informal style, and fills a gap in the existing literature. The introduction by Nigel Hitchin addresses the meaning of integrability: how do we recognize an integrable system? His own contribution then develops connections with algebraic geometry, and inclu
International Nuclear Information System (INIS)
Peirce, A; Rochinha, F
2012-01-01
We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)
SARDA: An Integrated Concept for Airport Surface Operations Management
Gupta, Gautam; Hoang, Ty; Jung, Yoon Chul
2013-01-01
The Spot and Runway Departure Advisor (SARDA) is an integrated decision support tool for airlines and air traffic control towers, enabling surface collaborative decision making (CDM) and departure metering in order to enhance the efficiency of surface operations at congested airports. The presentation describes the concept and architecture of SARDA as a CDM tool, and the results from a human-in-the-loop simulation of the tool conducted in 2012 at FutureFlight Central, the tower simulation facility. Also presented are current activities and future plans for SARDA development. The presentation was given at a meeting with the FAA senior advisor of the Surface Operations Office.
Real-time intelligent pattern recognition algorithm for surface EMG signals
Directory of Open Access Journals (Sweden)
Jahed Mehran
2007-12-01
Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (surface electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural-based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems, we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. Using this scheme, the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean squares (LMS), is utilized. In order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of EMG pattern recognition. To design this system we propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). In order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. Features chosen for EMG signal
Computation of Surface Integrals of Curl Vector Fields
Hu, Chenglie
2007-01-01
This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests
Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars
2015-09-01
The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability motivates strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress-free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal
An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.
Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H
2018-06-01
We introduce a strategy for creating virtual control groups: cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past-30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized controlled trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past-30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched each treatment case's observed psychosocial score at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. Increases in prevalence were observed, although there were discrepancies between live
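The percentile-holding step can be sketched as follows, where the reference distributions stand in for the integrated, age- and gender-specific data set; the function name and the simple empirical-percentile choice are illustrative, not the authors' implementation:

```python
import numpy as np

def virtual_control(pretest_score, pretest_dist, posttest_dist):
    """Hold a case's percentile constant across waves: find the
    percentile of the observed pretest score within the reference
    pretest distribution, then read off the score at that same
    percentile in the reference posttest distribution."""
    pct = (np.sort(pretest_dist) <= pretest_score).mean() * 100.0
    return np.percentile(posttest_dist, pct)
```

A case at the median of the pretest reference distribution is thus "matured" to the median of the posttest reference distribution, capturing the normative age-related shift without using any posttest data from the case itself.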
Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza
2012-06-01
It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. Fusion is arguably just as beneficial at the decision level when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of decision-fusion methodologies is a crucial step for the successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods and their performance tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, i.e. the cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms. For verification purposes
Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets
Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.
2018-04-01
Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB). It is usually considered a controlling factor of the heat exchange between the atmosphere and the ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water, and influence oceanic currents, atmospheric circulation, and the transport of material and energy in the hydrosphere. Therefore, various parameterizations and models have been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived with the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) of OSA are discussed separately.
Nonlinear Filtering with IMM Algorithm for Ultra-Tight GPS/INS Integration
Directory of Open Access Journals (Sweden)
Dah-Jing Jwo
2013-05-01
This paper conducts a performance evaluation of the ultra-tight integration of a Global Positioning System (GPS) and an inertial navigation system (INS), using nonlinear filtering approaches with an interacting multiple model (IMM) algorithm. An ultra-tight GPS/INS architecture involves the integration of in-phase and quadrature components from the correlator of a GPS receiver with INS data. An unscented Kalman filter (UKF), which employs a set of sigma points obtained by deterministic sampling, avoids the error caused by linearization in an extended Kalman filter (EKF). Based on filter structural adaptation for describing various dynamic behaviours, IMM nonlinear filtering provides an alternative for designing the adaptive filter in the ultra-tight GPS/INS integration. The use of IMM enables tuning of an appropriate value for the process noise covariance so as to maintain good estimation accuracy and tracking capability. Two examples are provided to illustrate the effectiveness of the design and demonstrate an effective improvement in navigation estimation accuracy. A performance comparison among various filtering methods for ultra-tight integration of GPS and INS is also presented. The IMM-based nonlinear filtering approach demonstrates the effectiveness of the algorithm for improved positioning performance.
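The core of any IMM scheme, including the one in this abstract, is the Bayesian update of the mode probabilities from each model filter's measurement likelihood. The sketch below shows only that step (not the full UKF/EKF filter bank); the transition matrix and likelihood values are assumed for demonstration.

```python
# Illustrative sketch of the IMM mode-probability update step.
# The Markov transition matrix and the likelihoods are assumed values.
import numpy as np

def imm_mode_update(mu, Pi, likelihoods):
    """One IMM cycle for mode probabilities.
    mu: prior mode probabilities; Pi: Markov transition matrix (rows sum to 1);
    likelihoods: measurement likelihood reported by each model's filter."""
    c = Pi.T @ mu                # predicted mode probabilities (mixing normalizers)
    mu_new = likelihoods * c     # Bayes update with each filter's likelihood
    return mu_new / mu_new.sum()

Pi = np.array([[0.95, 0.05],     # "benign dynamics" model is sticky
               [0.10, 0.90]])    # "maneuver" model is also sticky
mu = np.array([0.5, 0.5])
# Suppose the maneuver model explains the current GPS/INS residual better:
mu = imm_mode_update(mu, Pi, np.array([0.2, 0.8]))
```

The updated probabilities then weight each model's state estimate, which is how the IMM adapts the effective process noise to the current dynamics.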
Integral equation models for image restoration: high accuracy methods and fast algorithms
International Nuclear Information System (INIS)
Lu, Yao; Shen, Lixin; Xu, Yuesheng
2010-01-01
Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models and hence inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration, aiming at suppressing the model errors caused by discrete models. A systematic study is conducted in this paper for continuous out-of-focus image models, which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms with high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images.
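To make the regularization step concrete, here is a sketch of Tikhonov regularization for a discretized first-kind integral equation K f = g; the Gaussian blur kernel, grid size, and noise level are assumptions standing in for the paper's out-of-focus model, and this uses a plain dense solve rather than the fast multiscale algorithms the paper develops.

```python
# Sketch: Tikhonov regularization of a discretized first-kind integral
# equation K f = g (a 1-D blur); kernel and sizes are assumed.
import numpy as np

n = 64
x = np.linspace(0, 1, n)
h = x[1] - x[0]
# Gaussian kernel as a stand-in for the out-of-focus point spread function
K = h * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03 ** 2))
f_true = np.sin(2 * np.pi * x) ** 2
g = K @ f_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

alpha = 1e-6
# Tikhonov: solve (K^T K + alpha I) f = K^T g, a well-posed second-kind system
f_rec = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)
```

The regularized normal equations trade a little bias (controlled by alpha) for stability against the noise that makes the raw first-kind problem ill-posed.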
Directory of Open Access Journals (Sweden)
Christley Scott
2010-08-01
Background: Simulation of sophisticated biological models requires considerable computational power. These models typically integrate numerous biological phenomena such as spatially explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPUs) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost of simulating those models. Results: We construct a 3D model of epidermal development and provide a set of GPU algorithms that execute significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction via Notch signaling together with environmental interaction via basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including the memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive for avoiding common pitfalls as well as for extracting performance from the GPU architecture. Conclusions: We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a
Directory of Open Access Journals (Sweden)
Zunjian Bian
2017-07-01
The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and for the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduce a simple inversion procedure that uses gap frequency with a clumping index (GCI) to retrieve leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicate that the GCI algorithm performs well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. Based on all evaluations, we recommend the GCI algorithm as a foundation for future satellite-based applications due to its straightforward form and robust performance for both crop and forest canopies.
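The retrieval idea in this abstract can be sketched as a linear mixing inversion: each view angle sees a different soil/leaf mix, so component temperatures fall out of a least-squares fit. The gap-frequency values and component temperatures below are assumed stand-ins for the paper's GCI quantities, and the 4th-power radiance law is simplified to a linear mix for brevity.

```python
# Minimal sketch of retrieving leaf and soil temperatures from multi-angle
# observations via a linear mixing model; all numbers are assumed.
import numpy as np

# Fraction of soil visible (gap frequency) at each view angle -- assumed values
gap = np.array([0.45, 0.35, 0.25, 0.15])
T_leaf_true, T_soil_true = 25.0, 40.0   # degrees C

# Simulated brightness temperatures (linear mixing for brevity)
T_obs = gap * T_soil_true + (1 - gap) * T_leaf_true

# Least-squares inversion for the two component temperatures
A = np.column_stack([1 - gap, gap])     # columns: leaf weight, soil weight
(T_leaf, T_soil), *_ = np.linalg.lstsq(A, T_obs, rcond=None)
```

With noise-free synthetic observations the inversion recovers the component temperatures exactly; the clumping index in the paper improves how the per-angle gap fractions are computed for real canopies.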
Schmalzl, JöRg; Loddoch, Alexander
2003-09-01
We present a new method for investigating the transport of an active chemical component in a convective flow. We apply a three-dimensional front tracking method using a triangular mesh. For the refinement of the mesh we use subdivision surfaces, which have been developed over the last decade primarily in the field of computer graphics. We present two different subdivision schemes and discuss their applicability to problems related to fluid dynamics. For adaptive refinement we propose a weight function based on the length of the triangle edges and the sum of the angles the triangle forms with neighboring triangles. In order to remove excess triangles we apply an adaptive surface simplification method based on quadric error metrics. We test these schemes by advecting a blob of passive material in a steady-state flow, in which the total volume is well preserved over a long time. Since for time-dependent flows the number of triangles may increase exponentially in time, we propose the use of a subdivision scheme with diffusive properties in order to remove the small-scale features of the chemical field. By doing so we are able to follow the evolution of a heavy chemical component in a vigorously convecting field. This calculation is aimed at the fate of a heavy layer at the Earth's core-mantle boundary. Since the viscosity variation with temperature is of key importance, we also present a calculation with a strongly temperature-dependent viscosity.
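A refinement weight of the kind described (edge length plus angles against neighbors) might look like the hypothetical sketch below; the weighting coefficients and the exact functional form are assumptions for illustration, and the paper's actual formula may differ.

```python
# Hypothetical sketch of an adaptive-refinement weight: a triangle scores
# high when its longest edge is large or it bends strongly against its
# neighbors. Coefficients and form are assumed, not from the paper.
import math

def refinement_weight(edge_lengths, dihedral_angles, w_len=1.0, w_ang=0.5):
    """edge_lengths: the three edge lengths; dihedral_angles: angles (radians)
    formed with the neighboring triangles across each edge (pi = flat)."""
    bending = sum(abs(math.pi - a) for a in dihedral_angles)  # 0 for a flat patch
    return w_len * max(edge_lengths) + w_ang * bending

flat = refinement_weight([0.1, 0.1, 0.1], [math.pi] * 3)
bent = refinement_weight([0.1, 0.1, 0.1], [2.0, 2.5, 3.0])
```

Triangles whose weight exceeds a threshold would be subdivided, concentrating resolution where the advected front is curved.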
Parareal algorithms with local time-integrators for time fractional differential equations
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, auxiliary variables are introduced to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
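For context, the classical parareal iteration that this work extends can be sketched on an ordinary (non-fractional) test problem dy/dt = -y; the coarse/fine Euler propagators and step counts below are illustrative assumptions, and the time-fractional variant additionally needs the auxiliary-variable localization described in the abstract.

```python
# Sketch of the classical parareal iteration on dy/dt = -y.
# Step sizes and iteration counts are illustrative.
import numpy as np

lam, T, N = -1.0, 1.0, 10          # decay rate, horizon, coarse intervals
dT = T / N

def coarse(y, dt):                  # one cheap explicit Euler step
    return y + dt * lam * y

def fine(y, dt, m=20):              # m small Euler steps per coarse interval
    for _ in range(m):
        y = y + (dt / m) * lam * y
    return y

U = np.zeros(N + 1); U[0] = 1.0
for n in range(N):                  # initial serial coarse sweep
    U[n + 1] = coarse(U[n], dT)

for _ in range(5):                  # parareal corrections
    F = np.array([fine(U[n], dT) for n in range(N)])  # parallel-in-time part
    Unew = np.zeros_like(U); Unew[0] = 1.0
    for n in range(N):              # serial coarse correction sweep
        Unew[n + 1] = coarse(Unew[n], dT) + F[n] - coarse(U[n], dT)
    U = Unew

exact = np.exp(lam * T)
```

The expensive fine sweeps are independent per interval (hence parallel); only the cheap coarse correction is serial, which is exactly the balance the fractional-operator history term disrupts.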
A parallel algorithm for solving the integral form of the discrete ordinates equations
International Nuclear Information System (INIS)
Zerr, R. J.; Azmy, Y. Y.
2009-01-01
The integral form of the discrete ordinates equations involves a system of equations with a large, dense coefficient matrix. The serial construction methodology is presented, and properties that affect the execution times to construct and solve the system are evaluated. Two approaches for massively parallel implementation of the solution algorithm are proposed, and current results of one of these are presented. The system of equations may be solved using two parallel solvers: block Jacobi and conjugate gradient. Results indicate that both methods can reduce the overall wall-clock time for execution. The conjugate gradient solver exhibits better performance and competes with the traditional source iteration technique in terms of execution time and scalability. The parallel conjugate gradient method is synchronous, hence it does not increase the number of iterations required for convergence compared to serial execution, and the efficiency of the algorithm shows an apparent asymptotic decline. (authors)
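A bare-bones conjugate gradient solver of the kind compared above is sketched below; the small random symmetric positive definite test system is an assumption standing in for the large dense matrices of the integral discrete ordinates form.

```python
# Textbook conjugate gradient for a symmetric positive definite system.
# The test matrix is a small assumed SPD stand-in for the dense systems
# described in the abstract.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                   # initial residual
    p = r.copy()                    # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)       # symmetric positive definite
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
```

Each iteration needs one matrix-vector product plus a few dot products, all of which parallelize naturally over a distributed dense matrix, which is why CG is attractive in this setting.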
An algorithm for detecting Trichodesmium surface blooms in the South Western Tropical Pacific
Directory of Open Access Journals (Sweden)
Y. Dandonneau
2011-12-01
Trichodesmium, a major colonial cyanobacterial nitrogen fixer, forms large blooms in NO3-depleted tropical oceans and enhances CO2 sequestration by the ocean due to its ability to fix dissolved dinitrogen. Thus, its importance in the C and N cycles requires better estimates of its distribution at basin to global scales. However, existing algorithms to detect it from satellite have not yet been successful in the South Western Tropical Pacific (SP). Here, a novel algorithm (TRICHOSAT, for TRICHOdesmium SATellite), based on radiance anomaly spectra (RAS) observed in SeaWiFS imagery, is used to detect Trichodesmium during the austral summertime in the SP (5° S–25° S, 160° E–170° W). Selected pixels are characterized by a restricted range of parameters quantifying the RAS spectra (e.g. slope, intercept, curvature). The fraction of valid (non-cloudy) pixels identified as Trichodesmium surface blooms in the region is low (between 0.01 and 0.2 %), but is about 100 times higher than deduced from previous algorithms. At daily scales in the SP, this fraction represents a total ocean surface area varying from 16 to 48 km2 in winter and from 200 to 1000 km2 in summer (and at monthly scale, from 500 to 1000 km2 in winter and from 3100 to 10 890 km2 in summer, with a maximum of 26 432 km2 in January 1999). The daily distribution of Trichodesmium surface accumulations in the SP detected by TRICHOSAT is presented for the period 1998–2010, which demonstrates that the number of selected pixels peaks in November–February each year, consistent with field observations. This approach was validated with in situ observations of Trichodesmium surface accumulations in the Melanesian archipelago around New Caledonia, Vanuatu and the Fiji Islands for the same period.
Explanation of the surface peak in charge integrated LEIS spectra
Draxler, M; Taglauer, E; Schmid, K; Gruber, R; Ermolov, S N; Bauer, P
2003-01-01
Low energy ion scattering is very surface sensitive if scattered ions are analyzed. By time-of-flight (TOF) techniques, neutral and charge integrated spectra (ions plus neutrals) can also be obtained, which yield information about deeper layers. In the literature, the observation of a more or less pronounced surface peak has been reported for charge integrated spectra, the intensity of the surface peak being higher at low energies and for heavy projectiles. Aiming at a more profound physical understanding of this surface peak, we performed TOF experiments and computer simulations for He projectiles and a copper target. Experiments were done in the range 1-9 keV for a scattering angle of 129 deg. The simulation was performed using the MARLOWE code for the given experimental parameters and a polycrystalline target. At low energies, a pronounced surface peak was observed, which fades away at higher energies. This peak is quantitatively reproduced by the simulation, and corresponds to scattering from approx. 2 atomic...
Jeon, Namju; Lee, Hyeongcheol
2016-01-01
An integrated fault-diagnosis algorithm for a motor sensor of in-wheel independent drive electric vehicles is presented. This paper proposes a method that integrates the high- and low-level fault diagnoses to improve the robustness and performance of the system. For the high-level fault diagnosis of vehicle dynamics, a planar two-track non-linear model is first selected, and the longitudinal and lateral forces are calculated. To ensure redundancy of the system, correlation between the sensor and residual in the vehicle dynamics is analyzed to detect and separate the fault of the drive motor system of each wheel. To diagnose the motor system for low-level faults, the state equation of an interior permanent magnet synchronous motor is developed, and a parity equation is used to diagnose the fault of the electric current and position sensors. The validity of the high-level fault-diagnosis algorithm is verified using Carsim and Matlab/Simulink co-simulation. The low-level fault diagnosis is verified through Matlab/Simulink simulation and experiments. Finally, according to the residuals of the high- and low-level fault diagnoses, fault-detection flags are defined. On the basis of this information, an integrated fault-diagnosis strategy is proposed. PMID:27973431
Cost-Based Vertical Handover Decision Algorithm for WWAN/WLAN Integrated Networks
Directory of Open Access Journals (Sweden)
Kim LaeYoung
2009-01-01
Next generation wireless communications are expected to rely on integrated networks consisting of multiple wireless technologies. Heterogeneous networks based on Wireless Local Area Networks (WLANs) and Wireless Wide Area Networks (WWANs) can combine their respective advantages in coverage and data rates, offering a high Quality of Service (QoS) to mobile users. In such an environment, multi-interface terminals should seamlessly switch from one network to another in order to obtain improved performance or at least to maintain a continuous wireless connection. Therefore, the network selection algorithm is important in providing better performance to the multi-interface terminals in the integrated networks. In this paper, we propose a cost-based vertical handover decision algorithm that triggers the Vertical Handover (VHO) based on a cost function for WWAN/WLAN integrated networks. For the cost function, we focus on developing an analytical model of the expected cost of the WLAN for mobile users that enter the double-coverage area while having a connection in the WWAN. Our simulation results show that the proposed scheme achieves better performance in terms of power consumption and throughput than the typical approach where WLANs are always preferred whenever WLAN access is available.
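A cost-based handover trigger of the general shape described here can be sketched as a weighted sum over normalized network metrics; the metric names, weights, and hysteresis margin below are hypothetical illustrations, not the paper's analytical cost model.

```python
# Hypothetical sketch of a cost-based vertical handover decision.
# Metric names, weights, and the hysteresis margin are assumed.
def network_cost(bandwidth, power, monetary, w_b=0.5, w_p=0.3, w_m=0.2):
    """Lower is better; inputs are normalized to [0, 1], bandwidth inverted
    so that higher available bandwidth lowers the cost."""
    return w_b * (1.0 - bandwidth) + w_p * power + w_m * monetary

def should_handover_to_wlan(wlan_metrics, wwan_metrics, hysteresis=0.05):
    """Hand over only if WLAN is cheaper by a margin (avoids ping-pong handovers)."""
    return network_cost(**wlan_metrics) + hysteresis < network_cost(**wwan_metrics)

wlan = dict(bandwidth=0.9, power=0.4, monetary=0.1)
wwan = dict(bandwidth=0.3, power=0.5, monetary=0.6)
decision = should_handover_to_wlan(wlan, wwan)
```

The hysteresis term plays the role of the "expected cost" modeling in the paper: it prevents a terminal at the WLAN coverage edge from switching on a marginal advantage.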
Investigation of ALEGRA shock hydrocode algorithms using an exact free surface jet flow solution.
Energy Technology Data Exchange (ETDEWEB)
Hanks, Bradley Wright; Robinson, Allen C.
2014-01-01
Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSMs) can be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e. nadir view versus ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. The airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
Surface-Enhanced Raman Spectroscopy Integrated Centrifugal Microfluidics Platform
DEFF Research Database (Denmark)
Durucan, Onur
This PhD thesis demonstrates (i) a centrifugal microfluidics disc platform integrated with Au-capped nanopillar (NP) substrates for surface-enhanced Raman spectroscopy (SERS) based sensing, and (ii) novel sample analysis concepts achieved by a synergistic combination of sensing techniques and … dense array of NP structures. Furthermore, the wicking-assisted nanofiltration procedure was accomplished in the centrifugal microfluidics platform, and as a result additional sample purification was achieved through the centrifugation process. In this way, the Au-coated NP substrate was utilized …
Certain integrable system on a space associated with a quantum search algorithm
International Nuclear Information System (INIS)
Uwano, Y.; Hino, H.; Ishiwatari, Y.
2007-01-01
In devising a Grover-type quantum search algorithm for an ordered tuple of multiqubit states, a gradient system associated with the negative von Neumann entropy is studied on the space of regular relative configurations of multiqubit states (SR2CMQ). The SR2CMQ emerges, through a geometric procedure, from the space of ordered tuples of multiqubit states for the quantum search. The aim of this paper is to give a brief report on the integrability of the gradient dynamical system, together with the quantum information geometry of the underlying space, SR2CMQ, of that system.
Open-source algorithm for detecting sea ice surface features in high-resolution optical imagery
Directory of Open Access Journals (Sweden)
N. C. Wright
2018-04-01
Snow, ice, and melt ponds cover the surface of the Arctic Ocean in fractions that change throughout the seasons. These surfaces control albedo and exert tremendous influence over the energy balance in the Arctic. Increasingly available meter- to decimeter-scale resolution optical imagery captures the evolution of the ice and ocean surface state visually, but methods for quantifying coverage of key surface types from raw imagery are not yet well established. Here we present an open-source system designed to provide a standardized, automated, and reproducible technique for processing optical imagery of sea ice. The method classifies surface coverage into three main categories: snow and bare ice, melt ponds and submerged ice, and open water. The method is demonstrated on imagery from four sensor platforms and on imagery spanning from spring thaw to fall freeze-up. Tests show the classification accuracy of this method typically exceeds 96 %. To facilitate scientific use, we evaluate the minimum observation area required for reporting a representative sample of surface coverage. We provide an open-source distribution of this algorithm and associated training datasets and suggest the community consider this a step towards standardizing optical sea ice imagery processing. We hope to encourage future collaborative efforts to improve the code base and to analyze large datasets of optical sea ice imagery.
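The three-category surface classification can be illustrated with a toy brightness-threshold classifier; the real open-source system uses trained classifiers on multi-band imagery, so the thresholds and pixel values here are purely illustrative assumptions.

```python
# Toy threshold classifier for the three surface categories described
# above. The brightness thresholds are assumed, not the paper's method.
import numpy as np

def classify_surface(brightness):
    """Map normalized pixel brightness to 0=open water, 1=melt pond, 2=snow/ice."""
    labels = np.full(brightness.shape, 1, dtype=int)     # default: melt pond
    labels[brightness < 0.2] = 0                         # dark: open water
    labels[brightness > 0.6] = 2                         # bright: snow and bare ice
    return labels

def area_fractions(labels):
    """Per-class coverage fractions, the quantity reported for energy-balance work."""
    counts = np.bincount(labels, minlength=3)
    return counts / labels.size

pixels = np.array([0.05, 0.35, 0.80, 0.15, 0.95])
labels = classify_surface(pixels)
```

The downstream product is exactly the `area_fractions` vector: per-class coverage over a large enough observation area to be representative.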
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.
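The priority-based conflict resolution can be sketched in 2-D Euclidean space (the paper works intrinsically on surfaces with geodesic distances); processing candidates in priority order below is a serial simulation of the parallel rule that each candidate defers only to higher-priority neighbors, and all parameter values are assumed.

```python
# Serial sketch of priority-based dart throwing for Poisson disk sampling
# in the unit square; the paper's version runs on surfaces, in parallel.
import random
import math

def poisson_disk_by_priority(n_candidates, radius, seed=42):
    rng = random.Random(seed)
    # Each candidate: (priority, x, y); random priorities are unique w.h.p.
    candidates = [(rng.random(), rng.random(), rng.random())
                  for _ in range(n_candidates)]
    accepted = []
    # Highest priority first: equivalent to the parallel rule where a
    # candidate is rejected iff a higher-priority sample is within radius.
    for _, x, y in sorted(candidates, reverse=True):
        if all(math.hypot(x - ax, y - ay) >= radius for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

samples = poisson_disk_by_priority(2000, radius=0.1)
min_gap = min(math.hypot(a[0] - b[0], a[1] - b[1])
              for i, a in enumerate(samples) for b in samples[:i])
```

Because priorities are independent of position, the accepted set has the same distribution as sequential dart throwing, which is the unbiasedness claim in the abstract.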
International Nuclear Information System (INIS)
Quirk, Thomas J. IV
2004-01-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
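The free-electron Klein-Nishina baseline that ITS corrects can be sampled with simple rejection, as sketched below; this omits both the incoherent scattering function and the Doppler broadening discussed above, and the photon energy is an assumed example value.

```python
# Rejection sampling of the free-electron Klein-Nishina angular
# distribution (binding effects and Doppler broadening not modeled).
import math
import random

def klein_nishina_weight(cos_t, alpha):
    """Unnormalized dsigma/dOmega for reduced photon energy alpha = E/(m_e c^2)."""
    ratio = 1.0 / (1.0 + alpha * (1.0 - cos_t))   # E'/E from Compton kinematics
    sin2 = 1.0 - cos_t * cos_t
    return ratio * ratio * (ratio + 1.0 / ratio - sin2)

def sample_cos_theta(alpha, rng):
    w_max = klein_nishina_weight(1.0, alpha)       # forward scattering maximizes it
    while True:
        cos_t = rng.uniform(-1.0, 1.0)
        if rng.random() * w_max <= klein_nishina_weight(cos_t, alpha):
            return cos_t

rng = random.Random(0)
alpha = 1.0                                        # photon of ~511 keV (assumed)
samples = [sample_cos_theta(alpha, rng) for _ in range(5000)]
mean_cos = sum(samples) / len(samples)
```

At this energy the distribution is forward peaked, so the sampled mean of cos(theta) is positive; production codes replace rejection with tuned composition methods for efficiency.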
Bulyha, Alena
2011-01-01
In this work, a Monte-Carlo algorithm in the constant-voltage ensemble for the calculation of 3D charge concentrations at charged surfaces functionalized with biomolecules is presented. The motivation for this work is the theoretical understanding of biofunctionalized surfaces in nanowire field-effect biosensors (BioFETs). This work provides the simulation capability for the boundary layer that is crucial in the detection mechanism of these sensors; slight changes in the charge concentration in the boundary layer upon binding of analyte molecules modulate the conductance of the nanowire transducers. The simulation of biofunctionalized surfaces poses special requirements on Monte-Carlo simulations, and these are addressed by the algorithm. The constant-voltage ensemble enables us to include the right boundary conditions; the DNA strands can be rotated with respect to the surface; and several molecules can be placed in a single simulation box to achieve good statistics in the case of the low ionic concentrations relevant in experiments. Simulation results are presented for the leading example of surfaces functionalized with PNA and with single- and double-stranded DNA in a sodium-chloride electrolyte. These quantitative results make it possible to quantify the screening of the biomolecule charge due to the counter-ions around the biomolecules and the electrical double layer. The resulting concentration profiles show a three-layer structure and non-trivial interactions between the electric double layer and the counter-ions. The numerical results are also important as a reference for the development of simpler screening models. © 2011 The Royal Society of Chemistry.
A Fair Resource Allocation Algorithm for Data and Energy Integrated Communication Networks
Directory of Open Access Journals (Sweden)
Qin Yu
2016-01-01
With the rapid advancement of wireless network technologies and the rapid increase in the number of mobile devices, mobile users (MUs) have an increasingly high demand to access the Internet with guaranteed quality-of-service (QoS). Data and energy integrated communication networks (DEINs) are emerging as a new type of wireless network that has the potential to simultaneously transfer wireless energy and information via the same base station (BS). This means that a physical BS is virtualized into two parts: one transferring energy and the other transferring information. The former is called the virtual energy base station (eBS) and the latter the data base station (dBS). One important issue in such a setting is dynamic resource allocation, where the resource concerned includes both power and time. In this paper, we propose a fair data-and-energy resource allocation algorithm for DEINs by jointly designing the downlink energy beamforming and a power-and-time allocation scheme, taking into consideration the finite-capacity batteries at the MUs and the power sensitivity of radio frequency (RF)-to-direct current (DC) conversion circuits. Simulation results demonstrate that our proposed algorithm outperforms the existing algorithms in terms of fairness, beamforming design, sensitivity, and average throughput.
Directory of Open Access Journals (Sweden)
Balgaisha Mukanova
2017-01-01
Full Text Available The problem of electrical sounding of a medium with ground surface relief is modelled using the integral equations method. This numerical method is based on the triangulation of the computational domain, which is adapted to the shape of the relief and the measuring line. The numerical algorithm is tested by comparing the results with the known solution for horizontally layered media with two layers. Calculations are also performed to verify the fulfilment of the “reciprocity principle” for the 4-electrode installations in our numerical model. Simulations are then performed for a two-layered medium with a surface relief. The quantitative influences of the relief, the resistivity ratios of the contacting media, and the depth of the second layer on the apparent resistivity curves are established.
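For the two-layer benchmark mentioned, the classical image-charge series for a point current electrode on the surface of a two-layer half-space provides the known reference solution. The sketch below assumes that textbook formula, not the paper's integral equation method; geometry and values are illustrative.

```python
import math

def surface_potential(r, I, rho1, rho2, h, n_terms=200):
    """Surface potential at distance r from a point current source on a
    two-layer earth (top layer: resistivity rho1, thickness h; basement: rho2),
    via the classical image-charge series with reflection coefficient k."""
    k = (rho2 - rho1) / (rho2 + rho1)
    series = sum(k ** n / math.sqrt(r * r + (2 * n * h) ** 2)
                 for n in range(1, n_terms + 1))
    return rho1 * I / (2 * math.pi) * (1.0 / r + 2.0 * series)

# Homogeneous check: rho2 == rho1 gives k = 0 and V = rho1*I/(2*pi*r).
v_hom = surface_potential(10.0, 1.0, 100.0, 100.0, 5.0)
# A resistive basement raises the measured potential (apparent resistivity).
v_res = surface_potential(10.0, 1.0, 100.0, 1000.0, 5.0)
```

Checking the homogeneous limit against the analytic half-space potential is the same kind of verification the authors perform against the horizontally layered solution.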
Feshchenko, R. M.; Vinogradov, A. V.; Artyukov, I. A.
2018-04-01
Using the method of the Laplace transform, the field amplitude in the paraxial approximation is found in two-dimensional free space from initial values of the amplitude specified on an arbitrarily shaped monotonic curve. The obtained amplitude depends on one a priori unknown function, which can be found from a Volterra integral equation of the first kind. In the special case of a field amplitude specified on a concave parabolic curve, an exact solution is derived. Both solutions can be used to study light propagation from arbitrary surfaces, including grazing-incidence X-ray mirrors. They can find applications in the analysis of coherent imaging problems of X-ray optics, in phase retrieval algorithms, as well as in inverse problems in cases when the initial field amplitude is sought on a curved surface.
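For orientation, the underlying 2D paraxial (parabolic) equation for the slowly varying amplitude can be written in its standard form; the notation below is assumed for illustration and is not taken from the paper.

```latex
% Paraxial approximation in 2D free space, u(x,z) the slowly varying amplitude:
2ik\,\frac{\partial u}{\partial z} + \frac{\partial^{2} u}{\partial x^{2}} = 0 .
% A Laplace transform in the propagation coordinate z reduces this PDE to an
% ordinary differential equation in x, with the data on the initial curve
% entering through the transformed amplitude.
```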
Directory of Open Access Journals (Sweden)
Andreas König
2009-11-01
Full Text Available The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is taken up, and a methodology and algorithms for the automated design of intelligent, integrated and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid prototyping of such systems under realization constraints and, additionally, includes features of system-instance-specific self-correction for sustained operation at large volume and in a dynamically changing environment. The extension of these concepts to a reconfigurable hardware platform renders so-called self-x sensor systems, which stands, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing systems. Selected experimental results prove the applicability and effectiveness of the proposed methodology and the emerging tool. With our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Directory of Open Access Journals (Sweden)
E. Dall'Asta
2014-06-01
Full Text Available Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
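The per-path cost aggregation at the heart of SGM can be sketched in a few lines. The version below runs along a single scanline with illustrative smoothness penalties P1 and P2; it is a didactic sketch, not any of the implementations compared in the paper.

```python
def sgm_aggregate_1d(cost, P1=1.0, P2=4.0):
    """Semi-Global Matching cost aggregation along one scanline.
    cost[p][d] is the matching cost at pixel p for disparity d.
    P1 penalizes 1-pixel disparity changes, P2 larger jumps."""
    n, D = len(cost), len(cost[0])
    L = [row[:] for row in cost]
    for p in range(1, n):
        prev_min = min(L[p - 1])
        for d in range(D):
            candidates = [L[p - 1][d]]                  # same disparity
            if d > 0:
                candidates.append(L[p - 1][d - 1] + P1)  # small change
            if d < D - 1:
                candidates.append(L[p - 1][d + 1] + P1)
            candidates.append(prev_min + P2)             # large jump
            L[p][d] = cost[p][d] + min(candidates) - prev_min
    return L

# Constant-disparity scene: aggregation keeps the true disparity (d = 2) cheapest.
costs = [[0.0 if d == 2 else 5.0 for d in range(5)] for _ in range(4)]
agg = sgm_aggregate_1d(costs)
best = min(range(5), key=lambda d: agg[-1][d])
```

Subtracting `prev_min` at each step, as in the original SGM formulation, keeps the aggregated costs bounded without changing the winning disparity.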
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
Pasik, Tomasz; van der Meij, Raymond
2017-12-01
This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are still uncommon in engineering practice, and few publications on truly free slip planes exist. The proposed method is an effective procedure resulting from the right combination of initial population type, selection, crossover and mutation method. The procedure needs little computational effort to find the optimal, unconstrained slip plane. The methodology described in this paper is implemented in Mathematica. The implementation, along with further explanations, is presented in full so that the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed interpretation of the results. Two cases are compared with analyses described in earlier publications. The remaining two are practical cases of slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing the possibilities and limitations of using the genetic algorithm in the context of the slope stability problem.
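The interplay of initial population, selection, crossover and mutation that the authors tune can be illustrated with a minimal real-coded genetic algorithm. The objective below is a simple quadratic standing in for a factor-of-safety computation; all operators, settings and the seed are illustrative, not the paper's tailored procedure.

```python
import random

def run_ga(fitness, bounds, pop_size=30, generations=80, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (the best individual always survives)."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = 2
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return min(a, b, key=fitness)
        nxt = [min(pop, key=fitness)]                # elitism
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            w = rng.random()                         # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.2:                   # Gaussian mutation
                i = rng.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.3)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Stand-in objective with minimum at (1, 2), e.g. slip-circle center coordinates.
best = run_ga(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2, (-5.0, 5.0))
```

Replacing the quadratic with a slice-method factor-of-safety evaluation turns this skeleton into the kind of slip-surface search the paper describes.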
International Nuclear Information System (INIS)
Hoffstadt, Thorben; Griese, Martin; Maas, Jürgen
2014-01-01
Transducers based on dielectric electroactive polymers (DEAP) use electrostatic pressure to convert electric energy into strain energy or vice versa. Besides this, they are also designed for sensor applications, monitoring the actual stretch state on the basis of the deformation-dependent capacitive–resistive behavior of the DEAP. In order to enable efficient and proper closed-loop control of these transducers, e.g. in positioning or energy harvesting applications, sensors based on DEAP material can, on the one hand, be integrated into the transducers and evaluated externally; on the other hand, the transducer itself can be used as a sensor, in terms of self-sensing. For this purpose the characteristic electrical behavior of the transducer has to be evaluated in order to determine the mechanical state. Independent of the sensor concept utilized, adequate online identification algorithms with sufficient accuracy and dynamics are required to determine the electrical DEAP parameters in real time. Therefore, in this contribution, algorithms are developed in the frequency domain for identification of the capacitance as well as the electrode and polymer resistance of a DEAP, and these are validated by measurements. The algorithms are designed for self-sensing applications, especially where the power electronics utilized is operated at a constant switching frequency and parasitic harmonic oscillations are induced besides the desired DC value. These oscillations can be used for the online identification, so an additional superimposed excitation is no longer necessary. For this purpose a dual active bridge (DAB) is introduced to drive the DEAP transducer. Finally, the capabilities of the real-time identification algorithm in combination with the DAB are presented in detail and discussed. (paper)
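The frequency-domain idea of recovering capacitance and resistance from harmonic content can be sketched with a series-RC stand-in model and synchronous demodulation at a known excitation frequency. Component values, the frequency and the series-RC model itself are illustrative assumptions, not the paper's DEAP model.

```python
import math

def identify_rc(v_samples, i_amp, omega, dt):
    """Recover series R and C from the sampled voltage response to
    i(t) = i_amp*sin(omega*t) via synchronous demodulation at omega:
    the in-phase component gives R, the quadrature component gives C."""
    n = len(v_samples)
    a = 2.0 / n * sum(v * math.sin(omega * k * dt) for k, v in enumerate(v_samples))
    b = 2.0 / n * sum(v * math.cos(omega * k * dt) for k, v in enumerate(v_samples))
    return a / i_amp, -i_amp / (omega * b)

# Synthesize 20 full periods of the series-RC response for R = 50 ohm, C = 2 uF at 1 kHz.
R_true, C_true, f, i_amp = 50.0, 2e-6, 1000.0, 0.1
omega, dt = 2 * math.pi * f, 1.0 / (f * 200)   # 200 samples per period
v = [i_amp * (R_true * math.sin(omega * k * dt)
              - math.cos(omega * k * dt) / (omega * C_true))
     for k in range(200 * 20)]
R_est, C_est = identify_rc(v, i_amp, omega, dt)
```

Averaging over an integer number of periods makes the sine/cosine correlations orthogonal, which is why the switching-frequency harmonics alone suffice for identification.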
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen. Program summary: Program title: STOMO version 1.0; Catalogue identifier: AEFS_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 2988; No. of bytes in distributed program, including test data, etc.: 191 605; Distribution format: tar.gz; Programming language: C/C++; Computer: PC; Operating system: Windows XP; RAM: depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb; Supplementary material: sample output files, for the test run provided, are available; Classification: 7.4, 14; External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html); Nature of problem: electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular
Zhang, Qing; Beard, Daniel A; Schlick, Tamar
2003-12-01
Salt-mediated electrostatic interactions play an essential role in biomolecular structures and dynamics. Because macromolecular systems modeled at atomic resolution contain thousands of solute atoms, the electrostatic computations constitute an expensive part of the force and energy calculations. Implicit solvent models are one way to simplify the model and the associated calculations, but they are generally used in combination with standard atomic models for the solute. To approximate electrostatic interactions in models on the polymer level (e.g., supercoiled DNA) that are simulated over long times (e.g., milliseconds) using Brownian dynamics, Beard and Schlick developed the DiSCO (Discrete Surface Charge Optimization) algorithm. DiSCO represents a macromolecular complex by a few hundred discrete charges on a surface enclosing the system, modeled by the Debye-Hückel (screened Coulombic) approximation to the Poisson-Boltzmann equation, and treats the salt solution as continuum solvation. DiSCO can represent the nucleosome core particle (>12,000 atoms), for example, by 353 discrete surface charges distributed on the surfaces of a large disk for the nucleosome core particle and a slender cylinder for the histone tail; the charges are optimized with respect to the Poisson-Boltzmann solution for the electric field, yielding an approximately 5.5% residual. Because regular surfaces enclosing macromolecules are not sufficiently general and may be suboptimal for certain systems, we develop a general method to construct irregular models tailored to the geometry of macromolecules. We also compare charge optimization based on both the electric field and electrostatic potential refinement. Results indicate that irregular surfaces can lead to a more accurate approximation (lower residuals), and that refinement in terms of the electric field is more robust. We also show that surface smoothing for irregular models is important, and that the charge optimization (by the TNPACK
Derivation of Land Surface Temperature for Landsat-8 TIRS Using a Split Window Algorithm
Directory of Open Access Journals (Sweden)
Offer Rozenstein
2014-03-01
Full Text Available Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimation of the input parameters is performed. The accuracy of the current development was assessed using simulated MODTRAN data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development represents progress in the determination of LST from Landsat-8 TIRS.
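A generic split-window formulation of this type combines the two TIRS brightness temperatures with emissivity- and water-vapour-dependent terms. The sketch below uses a widely published generic form; the coefficient values are illustrative placeholders, not the calibrated values derived in the paper.

```python
def split_window_lst(t11, t12, eps_mean, d_eps, w,
                     c=(-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40)):
    """Generic split-window form (coefficients c0..c6 are illustrative):
    LST = T11 + c1*dT + c2*dT^2 + c0 + (c3 + c4*W)*(1 - eps) + (c5 + c6*W)*d_eps
    with T11, T12 brightness temperatures [K] of the two thermal bands,
    eps_mean / d_eps the mean and difference of the band emissivities,
    and W the column water vapour [g/cm^2]."""
    c0, c1, c2, c3, c4, c5, c6 = c
    dt = t11 - t12
    return (t11 + c1 * dt + c2 * dt * dt + c0
            + (c3 + c4 * w) * (1.0 - eps_mean) + (c5 + c6 * w) * d_eps)

lst = split_window_lst(t11=300.0, t12=299.0, eps_mean=0.98, d_eps=0.005, w=2.0)
```

The sensitivity analysis described in the abstract amounts to perturbing `eps_mean`, `d_eps` and `w` in such a formula and recording the change in the returned LST.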
Use of surface electromyography in phonation studies: an integrative review
Balata, Patricia Maria Mendes; Silva, Hilton Justino da; Moraes, Kyvia Juliana Rocha de; Pernambuco, Leandro de Araújo; Moraes, Sílvia Regina Arruda de
2013-01-01
Summary Introduction: Surface electromyography has been used to assess the extrinsic laryngeal muscles during chewing and swallowing, but there have been few studies assessing these muscles during phonation. Objective: To investigate the current state of knowledge regarding the use of surface electromyography for evaluation of the electrical activity of the extrinsic muscles of the larynx during phonation by means of an integrative review. Method: We searched for articles and other papers in the PubMed, Medline/Bireme, and Scielo databases that were published between 1980 and 2012, using the following descriptors: surface electromyography and voice, surface electromyography and phonation, and surface electromyography and dysphonia. The articles were selected on the basis of inclusion and exclusion criteria. Data Synthesis: This was carried out with a cross critical matrix. We selected 27 papers, i.e., 24 articles and 3 theses. The studies differed methodologically with regard to sample size and investigation techniques, making it difficult to compare them, but showed differences in electrical activity between the studied groups (dysphonic subjects, non-dysphonic subjects, singers, and others). Conclusion: Electromyography has clinical applicability when technical precautions with respect to application and analysis are observed. However, it is necessary to adopt a universal system of assessment tasks and related measurement techniques to allow comparisons between studies. PMID:25992030
Use of surface electromyography in phonation studies: an integrative review
Directory of Open Access Journals (Sweden)
Balata, Patricia Maria Mendes
2014-01-01
Full Text Available Introduction: Surface electromyography has been used to assess the extrinsic laryngeal muscles during chewing and swallowing, but there have been few studies assessing these muscles during phonation. Objective: To investigate the current state of knowledge regarding the use of surface electromyography for evaluation of the electrical activity of the extrinsic muscles of the larynx during phonation by means of an integrative review. Method: We searched for articles and other papers in the PubMed, Medline/Bireme, and Scielo databases that were published between 1980 and 2012, using the following descriptors: surface electromyography and voice, surface electromyography and phonation, and surface electromyography and dysphonia. The articles were selected on the basis of inclusion and exclusion criteria. Data Synthesis: This was carried out with a cross critical matrix. We selected 27 papers, i.e., 24 articles and 3 theses. The studies differed methodologically with regard to sample size and investigation techniques, making it difficult to compare them, but showed differences in electrical activity between the studied groups (dysphonic subjects, non-dysphonic subjects, singers, and others). Conclusion: Electromyography has clinical applicability when technical precautions with respect to application and analysis are observed. However, it is necessary to adopt a universal system of assessment tasks and related measurement techniques to allow comparisons between studies.
Soliton surfaces associated with generalized symmetries of integrable equations
International Nuclear Information System (INIS)
Grundland, A M; Post, S
2011-01-01
In this paper, based on the Fokas et al approach (Fokas and Gel'fand 1996 Commun. Math. Phys. 177 203-20; Fokas et al 2000 Sel. Math. 6 347-75), we provide a symmetry characterization of continuous deformations of soliton surfaces immersed in a Lie algebra using the formalism of generalized vector fields, their prolongation structure and links with the Frechet derivatives. We express the necessary and sufficient condition for the existence of such surfaces in terms of the invariance criterion for generalized symmetries and identify additional sufficient conditions which admit an explicit integration of the immersion functions of 2D surfaces in Lie algebras. We discuss in detail the su(N)-valued immersion functions generated by conformal symmetries of the CP N-1 sigma model defined on either the Minkowski or Euclidean space. We further show that the sufficient conditions for explicit integration of such immersion functions impose additional restrictions on the admissible conformal symmetries of the model defined on Minkowski space. On the other hand, the sufficient conditions are identically satisfied for arbitrary conformal symmetries of finite action solutions of the CP N-1 sigma model defined on Euclidean space.
Directory of Open Access Journals (Sweden)
Yang Bai
2015-04-01
Full Text Available Land Surface Temperature (LST) retrieved from Thermal Infra-Red (TIR) images at both high temporal and high spatial resolution is urgently needed, as a critical variable characterizing biophysical processes in the ecological environment and a key indicator in the surface energy balance, evapotranspiration and urban heat island studies. However, due to the limitations of existing satellite sensors, no earth observation can provide TIR data at detailed spatial and temporal resolution simultaneously. Thus, several attempts at image fusion, blending TIR data from a high-temporal-resolution sensor with data from a high-spatial-resolution sensor, have been studied. This paper presents a novel data fusion method that integrates image fusion and spatio-temporal fusion techniques for deriving LST datasets at 30 m spatial resolution from daily MODIS images and Landsat ETM+ images. The Landsat ETM+ TIR data were first enhanced from 60 m to 30 m resolution based on an extreme learning machine (ELM) neural network regression model. Then, the MODIS LST and the enhanced Landsat ETM+ TIR data were fused by the Spatio-temporal Adaptive Data Fusion Algorithm for Temperature mapping (SADFAT) in order to derive high-resolution synthetic data. The synthetic images were evaluated for both real and simulated satellite images. The average difference (AD) and absolute average difference (AAD) are smaller than 1.7 K, while the correlation coefficient (CC) and root-mean-square error (RMSE) are 0.755 and 1.824, respectively, showing that the proposed method enhances the spatial resolution of the predicted LST images while preserving the spectral information.
An Assessment of Surface Water Detection Algorithms for the Tahoua Region, Niger
Herndon, K. E.; Muench, R.; Cherrington, E. A.; Griffin, R.
2017-12-01
The recent release of several global surface water datasets derived from remotely sensed data has allowed for unprecedented analysis of the Earth's hydrologic processes at a global scale. However, some of these datasets fail to identify important sources of surface water, especially small ponds, in the Sahel, an arid region of Africa that forms a border zone between the Sahara Desert to the north and the savannah to the south. These ponds may seem insignificant in the context of wider, global-scale hydrologic processes, but smaller sources of water are important for local and regional assessments. In particular, these smaller water bodies are significant sources of hydration and irrigation for nomadic pastoralists and smallholder farmers throughout the Sahel. For this study, several methods of identifying surface water from Landsat 8 OLI and Sentinel-1 SAR data were compared to determine the most effective means of delineating these features in the Tahoua Region of Niger. The Modified Normalized Difference Water Index (MNDWI) had the best performance when validated against very high resolution WorldView-3 imagery, with an overall accuracy of 99.48%. This study reiterates the importance of region-specific algorithms and suggests that the MNDWI method may be the best for delineating surface water in the Sahelian ecozone, likely due to the nature of the exposed geology and lack of dense green vegetation.
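MNDWI itself is a one-line band ratio (Xu's modification of NDWI, replacing NIR with SWIR). A minimal per-pixel sketch follows; the band values are illustrative surface reflectances, and the zero threshold is the common default rather than a value tuned for the Tahoua Region.

```python
def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (Green - SWIR)/(Green + SWIR).
    Water is relatively bright in green and dark in SWIR, so water pixels
    tend toward positive values while soil and vegetation go negative."""
    return (green - swir) / (green + swir)

def is_water(green, swir, threshold=0.0):
    return mndwi(green, swir) > threshold

water_pixel = mndwi(0.12, 0.03)   # small pond: moderate green, very dark SWIR
soil_pixel = mndwi(0.10, 0.25)    # dry soil: bright SWIR
```

In practice the threshold is often adjusted per scene, which is one reason region-specific tuning matters in the Sahelian ecozone.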
Accurate fluid force measurement based on control surface integration
Lentink, David
2018-01-01
Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques address this: 3D particle image velocimetry (PIV) and a newer technique, the aerodynamic force platform (AFP). Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force depends only on the fluid density, ρf, and the body density, ρb. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non
Integrated Optical Components Utilizing Long-Range Surface Plasmon Polaritons
DEFF Research Database (Denmark)
Boltasseva, Alexandra; Nikolajsen, Thomas; Leosson, Kristjan
2005-01-01
New optical waveguide technology for integrated optics, based on propagation of long-range surface plasmon polaritons (LR-SPPs) along metal stripes embedded in dielectric, is presented. Guiding and routing of electromagnetic radiation along nanometer-thin and micrometer-wide gold stripes embedded......), and a bend loss of ~5 dB for a bend radius of 15 mm are evaluated for 15-nm-thick and 8-mm-wide stripes at the wavelength of 1550 nm. LR-SPP-based 3-dB power Y-splitters, multimode interference waveguides, and directional couplers are demonstrated and investigated. At 1570 nm, coupling lengths of 1.9 and 0...
Surface charge algebra in gauge theories and thermodynamic integrability
International Nuclear Information System (INIS)
Barnich, Glenn; Compere, Geoffrey
2008-01-01
Surface charges and their algebra in interacting Lagrangian gauge field theories are constructed out of the underlying linearized theory using techniques from the variational calculus. In the case of exact solutions and symmetries, the surface charges are interpreted as a Pfaff system. Integrability is governed by Frobenius' theorem and the charges associated with the derived symmetry algebra are shown to vanish. In the asymptotic context, we provide a generalized covariant derivation of the result that the representation of the asymptotic symmetry algebra through charges may be centrally extended. Comparison with Hamiltonian and covariant phase space methods is made. All approaches are shown to agree for exact solutions and symmetries while there are differences in the asymptotic context
Integrated immunoassay using tuneable surface acoustic waves and lensfree detection.
Bourquin, Yannyk; Reboud, Julien; Wilson, Rab; Zhang, Yi; Cooper, Jonathan M
2011-08-21
The diagnosis of infectious diseases in the Developing World is technologically challenging, requiring complex biological assays with high analytical performance at minimal cost. By using an opto-acoustic immunoassay technology integrating components commonly used in mobile phone technologies, including surface acoustic wave (SAW) transducers to provide pressure-driven flow and a CMOS camera to enable a lensfree detection technique, we demonstrate the potential to produce such an assay. To achieve this, antibody-functionalised microparticles were manipulated on a low-cost disposable cartridge using the surface acoustic waves and were then detected optically. Our results show that the biomarker interferon-γ, used for the diagnosis of diseases such as latent tuberculosis, can be detected at pM concentrations within a few minutes (giving high sensitivity at minimal cost). This journal is © The Royal Society of Chemistry 2011
Energy Technology Data Exchange (ETDEWEB)
Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)
2014-04-15
Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart
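The volumetric overlap metrics used for validation are straightforward to compute from voxel sets. The small sketch below uses common definitions of overlap and false-positive/false-negative ratios relative to the union, which may differ in detail from the exact definitions used in the paper; the voxel grids are toy data.

```python
def overlap_metrics(auto, manual):
    """auto, manual: sets of (x, y, z) voxel coordinates.
    Returns overlap, false-positive and false-negative ratios in percent,
    each normalized by the union of the two segmentations."""
    union = auto | manual
    vor = 100.0 * len(auto & manual) / len(union)
    fprv = 100.0 * len(auto - manual) / len(union)   # over-segmented voxels
    fnrv = 100.0 * len(manual - auto) / len(union)   # missed voxels
    return vor, fprv, fnrv

# Toy example: automatic result misses one 10x10 slice of the manual volume.
manual = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
auto = {(x, y, z) for x in range(10) for y in range(10) for z in range(1, 10)}
vor, fprv, fnrv = overlap_metrics(auto, manual)
```

With these definitions the three percentages always sum to 100, which is a convenient sanity check when reporting VOR, FPRV and FNRV together.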
Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao
2014-01-01
A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imager Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter, transmittance, is generally computed from water vapor content, but a water vapor channel is absent in VIIRS data. In order to overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. The analyses of the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; when the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation of 0.575 K. Comparison with ground station LST data indicates a mean retrieval accuracy of −0.395 K and a standard deviation of 1.490 K in regions with vegetation and water cover. In addition, the retrieval results of the test data have been compared with the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products, and the results indicate that 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% are within the range of ±1 to ±2 K. In conclusion, with the advantages of multiple sensors fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919
Therapeutic eyelids hygiene in the algorithms of prevention and treatment of ocular surface diseases
Directory of Open Access Journals (Sweden)
V. N. Trubilin
2016-01-01
Full Text Available Once acute inflammation in the anterior segment of the eye has been brought under control, ophthalmologists often face a situation in which the signs of acute inflammation are gone while the patient still complains of persistent discomfort, which causes dissatisfaction with the treatment. These complaints are typically caused by disturbed tear production. It is no accident that a new group of diseases has been singled out: diseases of the ocular surface. The ocular surface is a complex biological system comprising the epithelium of the conjunctiva, cornea and limbus, as well as the eyelid margin and the meibomian gland ducts. Pathological processes in the conjunctiva, cornea and eyelids are linked with tear production. Ophthalmologists prescribe tear substitutes, which provide short-term relief to patients. However, because the lipid component of the tear film plays the key role in preserving its stability, eyelid hygiene is the basis of treatment for dry eye associated with ocular surface diseases. Eyelid hygiene supports the normal functioning of the glands, restores metabolic processes in the skin and ensures the formation of a complete tear film. Protecting the eyelids, especially the marginal edge, from aggressive environmental agents, infections and parasites is the basis for the prevention and treatment of blepharitis and dry eye syndrome. The article discusses the most common clinical situations and the corresponding algorithms for the treatment and prevention of meibomian gland dysfunction; demodectic, seborrheic, staphylococcal and allergic blepharitis; and hordeolum (stye) and chalazion. The prevention of keratoconjunctival xerosis (in the pre- and postoperative period, caused by contact lenses, computer vision syndrome, or during remission after acute inflammation of the conjunctiva and cornea) is also presented. The first part of the article presents the treatment and prevention algorithms for dysfunction of the meibomian glands, as well as
Integrating R and Java for Enhancing Interactivity of Algorithmic Data Analysis Software Solutions
Directory of Open Access Journals (Sweden)
Titus Felix FURTUNĂ
2016-06-01
Full Text Available Conceiving software solutions for statistical processing and algorithmic data analysis involves handling diverse data, fetched from various sources and in different formats, and presenting the results in a suggestive, tailorable manner. Our ongoing research aims to design programming techniques for integrating the R development environment with the Java programming language for interoperability at the source-code level. The goal is to combine the intensive data-processing capabilities of the R programming language, along with its multitude of statistical function libraries, with the flexibility offered by the Java programming language and platform in terms of graphical user interfaces and mathematical function libraries. Both development environments are multiplatform oriented and can complement each other through interoperability. R is a comprehensive and concise programming language, benefiting from a continuously expanding and evolving set of packages for statistical analysis developed by the open-source community. While R is a very efficient environment for statistical data processing, the platform lacks support for developing user-friendly, interactive graphical user interfaces (GUIs). Java, on the other hand, is a high-level object-oriented programming language which supports designing and developing performant and interactive frameworks for general-purpose software solutions, through the Java Foundation Classes, JavaFX and various graphical libraries. In this paper we treat both directions of integration and interoperability: integrating Java code into R applications, and bringing R processing sequences into Java-driven software solutions. Our research has been conducted focusing on case studies concerning pattern recognition and cluster analysis.
A robust, efficient and accurate β- pdf integration algorithm in nonpremixed turbulent combustion
International Nuclear Information System (INIS)
Liu, H.; Lien, F.S.; Chui, E.
2005-01-01
Among the many presumed-shape pdf approaches, the presumed β-function pdf is widely used in nonpremixed turbulent combustion models in the literature. However, singularity difficulties at Z = 0 and 1, Z being the mixture fraction, may be encountered in the numerical integration of the β-function pdf, and there are few publications addressing this issue to date. The present study proposes an efficient, robust and accurate algorithm to overcome these numerical difficulties. The present treatment of the β-pdf integration is first used in the Burke-Schumann solution in conjunction with the k-ε turbulence model in the case of CH4/H2 bluff-body jets and flames. Afterward it is extended to a more complex model, the laminar flamelet model, for the same flow. Numerical results obtained using the proposed β-pdf integration method are compared to experimental values of the velocity field, temperature and constituent mass fraction to illustrate the efficiency and accuracy of the present method. (author)
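The abstract does not reproduce the authors' integration scheme, but a standard way to tame the β-pdf endpoint singularities (the pdf diverges at Z = 0 or Z = 1 whenever a shape parameter is below one) is Gauss-Jacobi quadrature, whose weight function absorbs the singular factors exactly so the integrand is never evaluated at the endpoints. A minimal sketch under that assumption, with illustrative shape parameters:

```python
import numpy as np
from scipy.special import roots_jacobi, beta as beta_fn

def beta_pdf_expectation(f, a, b, n=32):
    """Compute E[f(Z)] for Z ~ Beta(a, b) via Gauss-Jacobi quadrature.

    The Jacobi weight (1-x)^(b-1) (1+x)^(a-1) absorbs the singular factors
    of the beta pdf at Z = 0 and Z = 1 that appear when a < 1 or b < 1.
    """
    # Nodes/weights for weight (1-x)^alpha (1+x)^beta on [-1, 1]
    x, w = roots_jacobi(n, b - 1.0, a - 1.0)
    z = 0.5 * (x + 1.0)                       # map [-1, 1] -> [0, 1]
    scale = 2.0 ** (1.0 - a - b) / beta_fn(a, b)
    return scale * np.sum(w * f(z))

# Mixture-fraction pdf singular at BOTH endpoints (a, b < 1):
mean = beta_pdf_expectation(lambda z: z, a=0.3, b=0.5)
# Analytic mean of Beta(a, b) is a / (a + b) = 0.375
```

Because the quadrature is exact for polynomial integrands against the Jacobi weight, low moments of the β-pdf come out to machine precision even in the singular regime.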
International Nuclear Information System (INIS)
Chen, Wang Chih; Chen Jahau Lewis
2014-01-01
The work proposes a new design tool that integrates design-around concepts with the algorithm for inventive problem solving (Russian acronym: ARIZ). ARIZ includes a complete procedure for analyzing problems and related resources, resolving conflicts and generating solutions. Combining ARIZ with design-around concepts and an understanding of the identified principles that govern patent infringement can prevent infringement whenever designers innovate, greatly reducing the cost and time associated with the product design stage. The presented tool is developed from an engineering perspective rather than a legal one, and so can easily help designers prevent patent infringements and succeed in innovating by designing around. An example is used to demonstrate the proposed method.
Directory of Open Access Journals (Sweden)
Wei Xiaozhao
2016-03-01
Full Text Available With the development of the construction industry, the era of construction data is approaching. BIM (building information modelling) has been widely used to meet the practical needs of the construction industry as a building-information system, but different software tools differ in their maturity in practical applications. In this study, the expert scoring method was used to mark the maturity indices of BIM technology applications and to establish an evaluation index system. A PCA-Q clustering algorithm was used to classify the evaluation index system, and a comprehensive evaluation based on the Choquet integral was combined with this classification to achieve a reasonable assessment of the maturity of BIM technology applications. This lays a foundation for the future development of BIM technology in the various fields of construction and provides direction for its comprehensive application.
Optimal multigrid algorithms for the massive Gaussian model and path integrals
International Nuclear Information System (INIS)
Brandt, A.; Galun, M.
1996-01-01
Multigrid algorithms are presented which, in addition to eliminating the critical slowing down, can also eliminate the "volume factor". The elimination of the volume factor removes the need to produce many independent fine-grid configurations for averaging out their statistical deviations, by averaging over the many samples produced on coarse grids during the multigrid cycle. Thermodynamic limits of observables can be calculated to relative accuracy ε_r in just O(ε_r⁻²) computer operations, where ε_r is the error relative to the standard deviation of the observable. In this paper, we describe in detail the calculation of the susceptibility in the one-dimensional massive Gaussian model, which is also a simple example of path integrals. Numerical experiments show that the susceptibility can be calculated to relative accuracy ε_r in about 8 ε_r⁻² random number generations, independent of the mass size.
Parallel algorithms for quantum chemistry. I. Integral transformations on a hypercube multiprocessor
International Nuclear Information System (INIS)
Whiteside, R.A.; Binkley, J.S.; Colvin, M.E.; Schaefer, H.F. III
1987-01-01
For many years it has been recognized that fundamental physical constraints such as the speed of light will limit the ultimate speed of single processor computers to less than about three billion floating point operations per second (3 GFLOPS). This limitation is becoming increasingly restrictive as commercially available machines are now within an order of magnitude of this asymptotic limit. A natural way to avoid this limit is to harness together many processors to work on a single computational problem. In principle, these parallel processing computers have speeds limited only by the number of processors one chooses to acquire. The usefulness of potentially unlimited processing speed to a computationally intensive field such as quantum chemistry is obvious. If these methods are to be applied to significantly larger chemical systems, parallel schemes will have to be employed. For this reason we have developed distributed-memory algorithms for a number of standard quantum chemical methods. We are currently implementing these on a 32-processor Intel hypercube. In this paper we present our algorithm and benchmark results for one of the bottleneck steps in quantum chemical calculations: the four-index integral transformation.
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, to precisely obtain the spatial distribution characteristics of regional soil quality, some main factors affecting soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were used; mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables; for the former model, whether based on decision trees or decision rules, the prediction accuracy was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
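As a rough illustration of the paper's two-step idea, ranking discrete environmental factors by mutual information before feeding the selected ones to a decision tree, the sketch below computes mutual information by hand on synthetic categorical data. The factor names, class counts and noise level are invented, and the See5.0 classifier itself is not reproduced, only the generic MI-ranking step:

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in nats for two discrete (categorical) variables."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * np.mean(y == yv)))
    return mi

rng = np.random.default_rng(0)
grade = rng.integers(0, 3, 500)           # soil-quality grade (3 classes)
soil_type = grade.copy()                  # informative factor, 10% noise
noise = rng.random(500) < 0.1
soil_type[noise] = rng.integers(0, 3, noise.sum())
distance_band = rng.integers(0, 3, 500)   # irrelevant factor

mi_soil = mutual_information(soil_type, grade)
mi_dist = mutual_information(distance_band, grade)
# The informative factor carries far more information about the grade, so
# it would be kept as a decision-tree input; the irrelevant one dropped.
```

Ranking factors this way shrinks the tree's input set before training, which is exactly the effect the abstract reports: fewer inputs with higher prediction accuracy.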
AATSR land surface temperature product algorithm verification over a WATERMED site
Noyes, E. J.; Sòria, G.; Sobrino, J. A.; Remedios, J. J.; Llewellyn-Jones, D. T.; Corlett, G. K.
A new operational Land Surface Temperature (LST) product generated from data acquired by the Advanced Along-Track Scanning Radiometer (AATSR) provides the opportunity to measure LST on a global scale with a spatial resolution of 1 km². The target accuracy of the product, which utilises nadir data from the AATSR thermal channels at 11 and 12 μm, is 2.5 K for daytime retrievals and 1.0 K at night. We present the results of an experiment where the performance of the algorithm has been assessed for one daytime and one night time overpass occurring over the WATERMED field site near Marrakech, Morocco, on 05 March 2003. Top of atmosphere (TOA) brightness temperatures (BTs) are simulated for 12 pixels from each overpass using a radiative transfer model, with the LST product and independent emissivity values and atmospheric data as inputs. We have estimated the error in the LST product over this biome for this set of conditions by applying the operational AATSR LST retrieval algorithm to the modelled BTs and comparing the results with the original AATSR LSTs input into the model. An average bias of -1.00 K (standard deviation 0.07 K) for the daytime data, and -1.74 K (standard deviation 0.02 K) for the night time data is obtained, which indicates that the algorithm is yielding an LST that is too cold under these conditions. While these results are within specification for daytime retrievals, this suggests that the target accuracy of 1.0 K at night is not being met within this biome.
The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm
Noh, Myoung-Jong; Howat, Ian M.
2017-07-01
Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
Directory of Open Access Journals (Sweden)
Bisakha Ray
2017-08-01
Full Text Available The amounts and types of available multimodal tumor data are rapidly increasing, and their integration is critical for fully understanding the underlying cancer biology and personalizing treatment. However, the development of methods for effectively integrating multimodal data in a principled manner is lagging behind our ability to generate the data. In this article, we introduce an extension to a multiview nonnegative matrix factorization (NNMF) algorithm for dimensionality reduction and integration of heterogeneous data types and compare the predictive modeling performance of the method on unimodal and multimodal data. We also present a comparative evaluation of our novel multiview approach and current data integration methods. Our work provides an efficient method to extend an existing dimensionality reduction method. We report rigorous evaluation of the method on large-scale quantitative protein and phosphoprotein tumor data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), acquired using state-of-the-art liquid chromatography mass spectrometry. Exome sequencing and RNA-Seq data were also available from The Cancer Genome Atlas for the same tumors. For unimodal data, in the case of breast cancer, transcript levels were most predictive of estrogen and progesterone receptor status, and copy number variation of human epidermal growth factor receptor 2 status. For ovarian and colon cancers, phosphoprotein and protein levels were most predictive of tumor grade and stage and of residual tumor, respectively. When multiview NNMF was applied to multimodal data to predict outcomes, the improvement in performance was not statistically significant overall beyond unimodal data, suggesting that proteomics data may contain more predictive information regarding tumor phenotypes than transcript levels, probably due to the fact that proteins are the functional gene products and therefore a more direct measurement of the functional state of the tumor. Here, we
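The authors' multiview NNMF extension is not detailed in the abstract. As a hedged baseline only, a joint factorization can be sketched by norm-scaling each modality and concatenating features before a standard NMF (scikit-learn's implementation), which yields one shared sample embedding W across views. All dimensions and data below are synthetic:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_samples = 40
# Two nonnegative "views" of the same tumors (e.g. protein and transcript
# levels); shapes and values here are invented for illustration.
view_protein = rng.random((n_samples, 60))
view_rna = rng.random((n_samples, 100))

# Scale each view by its Frobenius norm so neither dominates the loss,
# then concatenate features: the simplest joint-factorization baseline.
views = [v / np.linalg.norm(v) for v in (view_protein, view_rna)]
X = np.hstack(views)

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # shared low-dimensional sample embedding
H = model.components_        # per-feature loadings (split back per view)
```

The shared W can then feed a downstream classifier for outcomes such as receptor status, mirroring the predictive-modeling comparison described in the abstract.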
Integrated algorithms for RFID-based multi-sensor indoor/outdoor positioning solutions
Zhu, Mi.; Retscher, G.; Zhang, K.
2011-12-01
Position information is very important, as people need it almost everywhere, all the time. However, it is a challenging task to provide precise positions indoors and outdoors seamlessly. Outdoor positioning has been widely studied, and accurate positions can usually be achieved by well-developed GPS techniques, but these techniques are difficult to use indoors since GPS signal reception is limited. Alternative techniques that can be used for indoor positioning include, to name a few, Wireless Local Area Network (WLAN), Bluetooth and Ultra-Wideband (UWB). However, all of these have limitations. The main objectives of this paper are to investigate and develop algorithms for a low-cost and portable indoor personal positioning system using Radio Frequency Identification (RFID) and its integration with other positioning systems. An RFID system consists of three components, namely a control unit, an interrogator and a transponder that transmits data and communicates with the reader. An RFID tag can be incorporated into a product, animal or person for the purpose of identification and tracking using radio waves. In general, for RFID positioning in urban and indoor environments three different methods can be used: cellular positioning, trilateration and location fingerprinting. In addition, the integration of RFID with other technologies is also discussed in this paper. A typical combination is to integrate RFID with relative positioning technologies such as MEMS INS to bridge the gaps between RFID tags for continuous positioning applications. Experiments are shown to demonstrate the improvements of integrating multiple sensors with RFID, which can be employed successfully for personal positioning.
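Of the three RFID positioning methods named above, trilateration has a compact closed form: subtracting the range equation of a reference anchor linearizes the problem, which a least-squares solve then handles. The reader layout and tag position below are invented for illustration:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position from ranges to known anchor (reader) positions.

    Subtracting the first range equation from the others linearizes it:
    2 (a_i - a_0) . p = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2.
    """
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2.0 * (a[1:] - a[0])
    b = (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Four hypothetical RFID readers at room corners and one tag:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
tag = np.array([3.0, 4.0])
dists = [np.linalg.norm(tag - np.array(a)) for a in anchors]
est = trilaterate(anchors, dists)  # recovers (3, 4) for noise-free ranges
```

With real RFID ranges the same least-squares form simply averages out noise across redundant readers; fingerprinting and cellular positioning need no geometry at all, only a signal-strength database.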
Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm
Directory of Open Access Journals (Sweden)
Daeho Jang
2015-09-01
Full Text Available The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped-beam angular interrogation SPR spectroscopy technique. Previous fitting approaches, such as asymmetric and polynomial equations, are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including the regions of the critical angle and the resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with the full SPR curve, whereas the asymmetric and polynomial curve-fitting methods did not. Because the present sigmoid-asymmetric curve-fitting equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.
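The paper's exact sigmoid-asymmetric equation is not reproduced in the abstract; the sketch below fits an assumed model of the same flavor, a sigmoid step at the critical angle plus a dip of asymmetric width at the resonance angle, using scipy's curve_fit, and recovers both angles from synthetic data. Every parameter value here is invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def spr_curve(theta, A, k, theta_c, D, theta_r, w, s):
    """Hypothetical sigmoid-asymmetric SPR model (NOT the paper's equation):
    a sigmoid step at the critical angle theta_c, minus a Lorentzian-like
    dip at the resonance angle theta_r whose width varies by side (s)."""
    step = A / (1.0 + np.exp(-k * (theta - theta_c)))
    width = w * (1.0 + s * np.tanh((theta - theta_r) / w))
    dip = D / (1.0 + ((theta - theta_r) / width) ** 2)
    return step - dip

theta = np.linspace(40.0, 50.0, 400)
true = (1.0, 4.0, 42.0, 0.8, 46.0, 0.6, 0.3)
rng = np.random.default_rng(2)
refl = spr_curve(theta, *true) + rng.normal(0.0, 0.002, theta.size)

p0 = (0.9, 3.0, 42.5, 0.7, 45.5, 0.8, 0.1)   # rough initial guess
popt, _ = curve_fit(spr_curve, theta, refl, p0=p0, maxfev=20000)
theta_c_fit, theta_r_fit = popt[2], popt[4]
```

Because both characteristic angles come from one fit, the bulk-index compensation described in the abstract (subtracting the critical angle from the resonance angle) becomes a single line of post-processing.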
Li, Shaoxin; Li, Linfang; Zeng, Qiuyao; Zhang, Yanjiao; Guo, Zhouyi; Liu, Zhiming; Jin, Mei; Su, Chengkang; Lin, Lin; Xu, Junfa; Liu, Songhao
2015-05-01
This study aims to characterize and classify serum surface-enhanced Raman spectroscopy (SERS) spectra between bladder cancer patients and normal volunteers by genetic algorithms (GAs) combined with linear discriminant analysis (LDA). Two groups of serum SERS spectra excited with nanoparticles were collected from healthy volunteers (n = 36) and bladder cancer patients (n = 55). Six diagnostic Raman bands in the regions of 481-486, 682-687, 1018-1034, 1313-1323, 1450-1459 and 1582-1587 cm⁻¹, related to proteins, nucleic acids and lipids, were picked out with the GAs and LDA. With the diagnostic models built on the identified six Raman bands, an improved diagnostic sensitivity of 90.9% and specificity of 100% were acquired for distinguishing bladder cancer patients from normal serum SERS spectra. These results are superior to the sensitivity of 74.6% and specificity of 97.2% obtained with principal component analysis on the same serum SERS spectra dataset. Receiver operating characteristic (ROC) curves further confirmed the efficiency of the diagnostic algorithm based on the GA-LDA technique. This exploratory work demonstrates that serum SERS combined with the GA-LDA technique has enormous potential to characterize and non-invasively detect bladder cancer through peripheral blood.
Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping
Directory of Open Access Journals (Sweden)
Antero Kukko
2008-09-01
Full Text Available Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and the point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
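Modelling a road surface as a triangulated irregular network (TIN) can be sketched with a 2-D Delaunay triangulation of the classified ground points, after which the TIN doubles as a piecewise-linear height interpolator. The four sample points below are invented:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Points classified as road surface (x, y, z); a 2-D Delaunay
# triangulation of the horizontal coordinates yields the TIN facets.
points = np.array([
    [0.0, 0.0, 10.02], [4.0, 0.0, 10.05],
    [0.0, 3.0, 10.01], [4.0, 3.0, 10.04],
])
tin = Delaunay(points[:, :2])
triangles = tin.simplices              # vertex indices of each TIN facet

# Interpolate the road height anywhere inside the network:
height = LinearNDInterpolator(points[:, :2], points[:, 2])
z = float(height(2.0, 1.5))            # height at an interior query point
```

Four corner points give exactly two triangles; on real mobile-mapping data the same two calls scale to millions of points per road section.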
Raimee, N. A.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.
2017-09-01
The plastic injection moulding process quickly produces large numbers of high-quality parts with great accuracy. It is widely used for the production of plastic parts with various shapes and geometries. The side arm is one such product manufactured by injection moulding. However, there are difficulties in adjusting the parameter variables, which are mould temperature, melt temperature, packing pressure, packing time and cooling time, as warpage occurs at the tip of the side arm. Therefore, the work reported herein is about minimizing warpage on the side arm product by optimizing the process parameters using Response Surface Methodology (RSM) together with an artificial intelligence (AI) method, the Genetic Algorithm (GA).
Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images
Diner, D.; Paradise, S.; Martonchik, J.
1994-01-01
In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.
Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques
2011-08-01
Summary: Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as if not better than the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h⁻¹ and ZPHI is the best one above that threshold. A perturbation analysis has been conducted to assess the sensitivity of the various estimators with respect to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars these days (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23
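The benchmark conventional estimator Z = 282R^1.66 inverts directly to a rain rate, and its reported sensitivity to a +1 dB calibration bias follows from the same formula; a one-line sketch (units as in the abstract):

```python
import math

def rain_rate_mm_per_h(dbz, a=282.0, b=1.66):
    """Invert the conventional Z = a * R**b relation (here Z = 282 R^1.66,
    the study's benchmark estimator); dbz is reflectivity in dBZ."""
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

r = rain_rate_mm_per_h(10.0 * math.log10(282.0))  # Z = 282 <=> R = 1 mm/h
# A +1 dB "hot" radar multiplies R by 10**(0.1/1.66) ~= 1.15, i.e. close
# to the ~+14% overestimation quoted for the conventional estimator:
r_biased = rain_rate_mm_per_h(10.0 * math.log10(282.0) + 1.0)
```

The polarimetric estimators respond differently to the same bias (ZPHI underestimates) because they do not invert ZH alone but use the differential-phase or ZDR constraint.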
Directory of Open Access Journals (Sweden)
Mahmood Mahmoodi-Eshkaftaki
2013-01-01
Full Text Available In this study, the effect of seed moisture content, probe diameter and loading velocity (the puncture conditions) on some mechanical properties of almond kernel and peeled almond kernel is considered in order to model a relationship between the puncture conditions and rupture energy. Furthermore, the distribution of the mechanical properties is determined. The main objective is to determine the critical values of the mechanical properties significant for peeling machines. The response surface methodology was used to find the relationship between the input parameters and the output responses, and a fitness function was applied to measure the optimal values using the genetic algorithm. A two-parameter Weibull function was used to describe the distribution of the mechanical properties. Based on the Weibull parameter values, i.e. the shape parameter (β) and scale parameter (η), calculated for each property, the variations of the mechanical distributions were completely described, and it was confirmed that the mechanical properties are rule-governed, which makes the Weibull function suitable for estimating their distributions. The energy model estimated using response surface methodology shows that the mechanical properties relate exponentially to the moisture, and polynomially to the loading velocity and probe diameter, which enabled successful estimation of the rupture energy (R² = 0.94). The genetic algorithm calculated the critical values of seed moisture, probe diameter, and loading velocity to be 18.11% on a dry mass basis, 0.79 mm, and 0.15 mm/min, respectively, and an optimum rupture energy of 1.97·10⁻³ J. These conditions were used for comparison with new samples, where the rupture energy was experimentally measured to be 2.68·10⁻³ and 2.21·10⁻³ J for the kernel and peeled kernel, respectively, which was nearly in agreement with our model results.
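The two-parameter Weibull fit used above to describe the property distributions can be reproduced with scipy by fixing the location parameter at zero; the shape and scale values below are illustrative, not the paper's measurements:

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic "rupture energy" sample; shape beta = 2.1 and scale eta = 2.0
# are invented illustrative values, not the almond-kernel data.
sample = weibull_min.rvs(2.1, loc=0.0, scale=2.0, size=3000, random_state=7)

# Two-parameter Weibull: maximum-likelihood fit with location fixed at 0
beta_hat, _, eta_hat = weibull_min.fit(sample, floc=0.0)
```

With the location pinned at zero, `weibull_min.fit` returns exactly the (β, η) pair the abstract refers to, which can then characterize each mechanical property's scatter.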
Surface profile measurement by using the integrated Linnik WLSI and confocal microscope system
Wang, Wei-Chung; Shen, Ming-Hsing; Hwang, Chi-Hung; Yu, Yun-Ting; Wang, Tzu-Fong
2017-06-01
The white-light scanning interferometer (WLSI) and the confocal microscope (CM) are the two major optical inspection systems for measuring the three-dimensional (3D) surface profile (SP) of micro specimens. Nevertheless, in practical applications, WLSI is more suitable for measuring smooth and low-slope surfaces, while CM is more suitable for measuring unevenly reflective and low-reflectivity surfaces. The characteristics of WLSI and CM also differ with respect to the surface profiles to be measured: WLSI is generally used in the semiconductor industry, while CM is more popular in the printed circuit board industry. In this paper, a self-assembled multi-function optical system was integrated to perform Linnik white-light scanning interferometry (Linnik WLSI) and CM. A connecting part composed of tubes, lenses and an interferometer was used to join the finite and infinite optical systems for Linnik WLSI and CM in the self-assembled optical system. By exploiting the flexibility of the tubes and lenses, switching between the two different optical measurements can be easily achieved. Furthermore, based on the shape-from-focus method with an energy-of-Laplacian filter, the CM was developed to enhance the in-focus information of each pixel so that the CM can provide an all-in-focus image for performing 3D SP measurement and analysis simultaneously. As for Linnik WLSI, an eleven-step phase-shifting algorithm was used to analyze the vertical scanning signals and determine the 3D SP.
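The shape-from-focus step scores each focal slice with the energy of the Laplacian, a standard sharpness measure; a minimal sketch of that measure on synthetic "in focus" and "defocused" images (the image contents are invented):

```python
import numpy as np
from scipy.ndimage import laplace

def focus_energy(img):
    """Energy of Laplacian: sum of squared Laplacian responses, a common
    shape-from-focus sharpness measure."""
    return float(np.sum(laplace(img.astype(float)) ** 2))

rng = np.random.default_rng(4)
sharp = rng.random((64, 64))               # high-frequency texture: in focus
blurred = np.full((64, 64), sharp.mean())  # featureless: out of focus

# In a focal stack, each pixel takes its value (and its height) from the
# slice whose local neighborhood maximizes this energy.
e_sharp, e_blur = focus_energy(sharp), focus_energy(blurred)
```

In practice the energy is computed per pixel over a small window rather than over the whole frame, and the per-slice argmax gives both the all-in-focus image and the 3D profile.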
System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation
Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.
2016-01-01
The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the Terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures in real-world operations.
Improved Genetic Algorithm-Based Unit Commitment Considering Uncertainty Integration Method
Directory of Open Access Journals (Sweden)
Kyu-Hyung Jo
2018-05-01
Full Text Available In light of the dissemination of renewable energy connected to the power grid, it has become necessary to consider the uncertainty in the generation of renewable energy in the unit commitment (UC) problem. A methodology for solving the UC problem is presented that considers various uncertainties, which are assumed to have normal distributions, by using a Monte Carlo simulation. Based on the constructed scenarios for load, wind, solar, and generator outages, a combination of scenarios is found that meets the reserve requirement to secure the power balance of the power grid. In those scenarios, the uncertainty integration method (UIM) identifies the best combination by minimizing the additional reserve requirements caused by the uncertainty of power sources. An integration process for uncertainties is formulated for the stochastic unit commitment (SUC) problem and optimized by an improved genetic algorithm (IGA). The IGA is composed of five procedures and finds the optimal combination of unit statuses at the scheduled time, based on the determined source data. According to the number of units in the system, the IGA demonstrates better performance than the other optimization methods by applying reserve repairing and an approximation process. To assess the results of the proposed method, various UC strategies are tested with a modified 24-h UC test system and compared.
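Sizing the additional reserve requirement from normally distributed uncertainty scenarios can be sketched with a plain Monte Carlo draw; the standard deviations and coverage level below are assumptions for illustration, not the paper's settings, and generator outages are omitted:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
# Forecast-error scenarios (MW), each assumed normally distributed:
load_err = rng.normal(0.0, 30.0, n)    # load uncertainty
wind_err = rng.normal(0.0, 40.0, n)    # wind-power uncertainty
solar_err = rng.normal(0.0, 20.0, n)   # solar-power uncertainty

# Net shortfall when load overshoots and renewables undershoot forecast:
net_err = load_err - wind_err - solar_err

# Reserve sized to cover 97.5% of combined-uncertainty scenarios:
reserve = float(np.quantile(net_err, 0.975))
```

For independent normal errors the combined standard deviation is sqrt(30² + 40² + 20²) ≈ 53.9 MW, so the 97.5% reserve converges to about 1.96 × 53.9 ≈ 106 MW; the UC optimizer then commits enough units to carry that reserve in every hour.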
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and on early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, little information exists about the integration within a comprehensive real-time event detection and management system. In the following, the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP7 of the European Commission.
International Nuclear Information System (INIS)
Galán, J; Verleysen, P; Lebensohn, R A
2014-01-01
A new algorithm for the solution of the deformation of a polycrystalline material using a self-consistent scheme, and its integration as part of the finite element software Abaqus/Standard, are presented. The method is based on the original VPSC formulation by Lebensohn and Tomé and its integration with Abaqus/Standard by Segurado et al. The new algorithm has been implemented as a set of Fortran 90 modules, to be used either from a standalone program or from Abaqus subroutines. The new implementation yields the same results as VPSC7, but with significantly better performance, especially on multicore computers. (paper)
Tool life and surface integrity aspects when drilling nickel alloy
Kannan, S.; Pervaiz, S.; Vincent, S.; Karthikeyan, R.
2018-04-01
Overall, the results indicate that the effect of drilling and milling parameters is most marked in terms of surface quality in the circumferential direction. Material removal rates and tool flank wear must be maintained within the control limits to maintain hole integrity.
Soliton surfaces and generalized symmetries of integrable systems
International Nuclear Information System (INIS)
Grundland, A M; Riglioni, D; Post, S
2014-01-01
In this paper, we discuss some specific features of symmetries of integrable systems which can be used to construct the Fokas–Gel’fand formula for the immersion of 2D-soliton surfaces, associated with such systems, in Lie algebras. We establish a sufficient condition for the applicability of this formula. This condition requires the existence of two vector fields which generate a common symmetry of the initial system and its corresponding linear spectral problem. This means that these two fields have to be group-related and we determine an explicit form of this relation. It provides a criterion for the selection of symmetries suitable for use in the Fokas–Gel’fand formula. We include some examples illustrating its application. (paper)
A physics-based algorithm for retrieving land-surface emissivity and temperature from EOS/MODIS data
International Nuclear Information System (INIS)
Wan, Z.; Li, Z.L.
1997-01-01
The authors have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4 and 0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10–12.5 μm IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2 and 3 K.
Hybrid of Natural Element Method (NEM) with Genetic Algorithm (GA) to find critical slip surface
Directory of Open Access Journals (Sweden)
Shahriar Shahrokhabadi
2014-06-01
Full Text Available One of the most important issues in geotechnical engineering is slope stability analysis for determination of the factor of safety and the probable slip surface. The Finite Element Method (FEM) is well suited for numerical study of advanced geotechnical problems. However, the mesh requirements of FEM create difficulties for solution processing in certain problems. Recently, motivated by these limitations, several new meshfree methods such as the Natural Element Method (NEM) have been used to analyze engineering problems. This paper presents the advantages of using NEM in 2D slope stability analysis together with Genetic Algorithm (GA) optimization to determine the probable slip surface and the related factor of safety. The stress field is computed under plane strain conditions using the natural element formulation and is used in conjunction with a conventional limit equilibrium method. In order to verify the accuracy and convergence of the proposed method, two kinds of examples, homogeneous and non-homogeneous, are conducted and the results are compared with FEM and conventional limit equilibrium methods. The results show the robustness of the NEM in slope stability analysis.
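The GA search for the critical slip surface can be sketched as below. The `factor_of_safety` function is a smooth stand-in for the NEM/limit-equilibrium evaluation of a trial circle, and all GA settings (population size, elitism, mutation scale) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def factor_of_safety(p):
    """Smooth stand-in for the NEM/limit-equilibrium FoS of a trial slip
    circle with centre abscissa xc and radius r; the assumed critical
    surface sits at xc = 3, r = 8 with FoS = 1.2."""
    xc, r = p
    return 1.2 + 0.5 * (xc - 3.0) ** 2 + 0.3 * (r - 8.0) ** 2

bounds = np.array([[0.0, 10.0], [2.0, 15.0]])        # (xc, r) search box
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for _ in range(60):
    fit = np.array([factor_of_safety(p) for p in pop])
    elite = pop[np.argsort(fit)[:10]]                # elitist selection
    pa = elite[rng.integers(0, 10, 30)]              # random elite parents
    pb = elite[rng.integers(0, 10, 30)]
    w = rng.uniform(0.0, 1.0, (30, 1))
    children = w * pa + (1.0 - w) * pb               # blend crossover
    children += rng.normal(0.0, 0.2, (30, 2))        # Gaussian mutation
    pop = np.vstack([elite, np.clip(children, bounds[:, 0], bounds[:, 1])])

best = pop[np.argmin([factor_of_safety(p) for p in pop])]
```

In the actual method, each fitness evaluation would run the NEM stress analysis along the trial circle, so the GA's modest population size matters for run time.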
Thermal weapon sights with integrated fire control computers: algorithms and experiences
Rothe, Hendrik; Graswald, Markus; Breiter, Rainer
2008-04-01
The HuntIR long range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40 mm GMG requires highly accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to subtle effects like levelling or firing uphill/downhill. This weapon was therefore selected to validate the quality of the FCU hard- and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically, and the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typically in the range of 0.1-0.5 seconds. RangIR provided outstanding hit accuracy, including ABM fuze timing, in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
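The two-step numerical scheme described above (integrate the equations of motion, then solve the two-point boundary value problem iteratively) can be sketched for a simplified planar point mass with quadratic drag. The muzzle velocity, drag constant, and bisection solver are illustrative assumptions, not the STANAG 4355 modified point mass model.

```python
import math

# Illustrative data only; the RangIR FCU uses the full STANAG 4355
# modified point mass model with meteorological inputs.
v0 = 240.0   # muzzle velocity (m/s), assumed
k = 0.0003   # lumped quadratic drag constant (1/m), assumed
g = 9.81

def terminal_range(elev):
    """Fixed-step Euler integration of the planar point-mass equations
    with quadratic drag, until the round returns to launch height."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(elev), v0 * math.sin(elev)
    dt = 0.01
    while not (y < 0.0 and vy < 0.0):
        v = math.hypot(vx, vy)
        ax, ay = -k * v * vx, -g - k * v * vy
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
    return x

def elevation_for(target_range, lo=0.01, hi=math.radians(45.0)):
    """The two-point boundary value problem solved iteratively:
    bisect on elevation until the computed range matches the target."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if terminal_range(mid) < target_range:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

elev = elevation_for(1500.0)  # low-angle solution for a 1500 m target
```

The real FCU additionally iterates on fuze time for ABM and corrects for cant and slope, which is why its solve time (0.1-0.5 s) depends on the precision requested.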
Departure Queue Prediction for Strategic and Tactical Surface Scheduler Integration
Zelinski, Shannon; Windhorst, Robert
2016-01-01
A departure metering concept to be demonstrated at Charlotte Douglas International Airport (CLT) will integrate strategic and tactical surface scheduling components to enable the respective collaborative decision making and improved efficiency benefits these two methods of scheduling provide. This study analyzes the effect of tactical scheduling on strategic scheduler predictability. Strategic queue predictions and target gate pushback times to achieve a desired queue length are compared between fast time simulations of CLT surface operations with and without tactical scheduling. The use of variable departure rates as a strategic scheduler input was shown to substantially improve queue predictions over static departure rates. With target queue length calibration, the strategic scheduler can be tuned to produce average delays within one minute of the tactical scheduler. However, root mean square differences between strategic and tactical delays were between 12 and 15 minutes due to the different methods the strategic and tactical schedulers use to predict takeoff times and generate gate pushback clearances. This demonstrates how difficult it is for the strategic scheduler to predict tactical scheduler assigned gate delays on an individual flight basis as the tactical scheduler adjusts departure sequence to accommodate arrival interactions. Strategic/tactical scheduler compatibility may be improved by providing more arrival information to the strategic scheduler and stabilizing tactical scheduler changes to runway sequence in response to arrivals.
Adaptive Fuzzy Integral Sliding-Mode Regulator for Induction Motor Using Nonlinear Sliding Surface
Yong-Kun Lu
2015-01-01
An adaptive fuzzy integral sliding-mode controller using nonlinear sliding surface is designed for the speed regulator of a field-oriented induction motor drive in this paper. Combining the conventional integral sliding surface with fractional-order integral, a nonlinear sliding surface is proposed for the integral sliding-mode speed control, which can overcome the windup problem and the convergence speed problem. An adaptive fuzzy control term is utilized to approximate the uncertainty. The ...
Directory of Open Access Journals (Sweden)
Sanjiv Kumar
Full Text Available Pathogenic bacteria interacting with eukaryotic hosts express adhesins on their surface. These adhesins aid in bacterial attachment to host cell receptors during colonization. A few adhesins, such as heparin-binding hemagglutinin adhesin (HBHA), Apa, and malate synthase of M. tuberculosis, have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach using SPAAN for predicting adhesins, PSORTb, SubLoc and LocTree for extracellular localization, and BLAST for verifying non-similarity to human proteins. These steps are among the first steps of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309 and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.
Directory of Open Access Journals (Sweden)
V. N. Trubilin
2016-01-01
problem of modern ophthalmology. Part 1 — Trubilin VN, Polunina EG, Kurenkov VV, Kapkova SG, Markova EY. Therapeutic eyelids hygiene in the algorithms of prevention and treatment of ocular surface diseases. Ophthalmology in Russia. 2016;13(2):122–127. doi: 10.18008/1816-5095-2016-2-122-127
Wang, W.; Wang, Y.; Hashimoto, H.; Li, S.; Takenaka, H.; Higuchi, A.; Lyapustin, A.; Nemani, R. R.
2017-12-01
The latest generation of geostationary satellite sensors, including GOES-16/ABI and Himawari-8/AHI, provide exciting capabilities to monitor the land surface at very high temporal resolution (5-15 minute intervals) and with spatial and spectral characteristics that mimic the Earth Observing System flagship MODIS. However, geostationary data feature changing sun angles at constant view geometry, which is almost reciprocal to sun-synchronous observations. This challenge needs to be carefully addressed before one can exploit the full potential of the new sources of data. Here we take on this challenge with the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, recently developed for accurate and globally robust applications such as the MODIS Collection 6 re-processing. MAIAC first grids the top-of-atmosphere measurements to a fixed grid so that the spectral and physical signatures of each grid cell are stacked ("remembered") over time and used to dramatically improve cloud/shadow/snow detection, which is by far the dominant error source in remote sensing. It also exploits the changing sun-view geometry of the geostationary sensor to characterize surface BRDF with augmented angular resolution for accurate aerosol retrievals and atmospheric correction. The high temporal resolution of the geostationary data indeed makes the BRDF retrieval much simpler and more robust compared with sun-synchronous sensors such as MODIS. As a prototype test for the geostationary-data processing pipeline on NASA Earth Exchange (GEONEX), we apply MAIAC to process 18 months of data from Himawari-8/AHI over Australia. We generate a suite of test results, including the input TOA reflectance and the output cloud mask, aerosol optical depth (AOD), and the atmospherically-corrected surface reflectance for a variety of geographic locations, terrain, and land cover types. Comparison with MODIS data indicates a general agreement between the retrieved surface reflectance
Energy Technology Data Exchange (ETDEWEB)
Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.; Hirata, Christopher M., E-mail: fang.307@osu.edu, E-mail: blazek@berkeley.edu, E-mail: mcewen.24@osu.edu, E-mail: hirata.10@osu.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)
2017-02-01
Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.
A New Algorithm for ABS/GPS Integration Based on Fuzzy-Logic in Vehicle Navigation System
Directory of Open Access Journals (Sweden)
Ali Amin Zadeh
2011-10-01
Full Text Available GPS-based vehicle navigation systems have difficulties tracking vehicles in urban canyons due to poor satellite availability. An ABS (Antilock Brake System) navigation system consists of self-contained optical encoders mounted on the vehicle wheels that can continuously provide accurate short-term positioning information. In this paper, a new concept for GPS/ABS integration based on fuzzy logic is presented. The proposed algorithm is used to assess GPS position accuracy based on knowledge of the environment and vehicle dynamics. The GPS is used as the reference while it is in good condition and is replaced by the ABS positioning system when the GPS information is unreliable. We compare the proposed algorithm with other common algorithms in a real environment. Our results show that the proposed algorithm can significantly improve the stability and reliability of the ABS/GPS navigation system.
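A minimal sketch of the fuzzy weighting idea follows, assuming HDOP and visible satellite count as the reliability inputs; the membership shapes, rule base, and defuzzification are illustrative, not the paper's.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def gps_weight(hdop, n_sats):
    """Fuzzy estimate of GPS reliability in [0, 1]; assumed inputs are
    horizontal dilution of precision and number of tracked satellites."""
    good = min(tri(hdop, -1.0, 0.0, 3.0), tri(n_sats, 4.0, 12.0, 20.0))
    poor = max(tri(hdop, 2.0, 8.0, 100.0), tri(n_sats, -1.0, 0.0, 5.0))
    if good + poor == 0.0:
        return 0.5
    # Weighted-average defuzzification of the two rules
    # ("good GPS -> weight 1", "poor GPS -> weight 0").
    return (1.0 * good + 0.0 * poor) / (good + poor)

def fuse(gps_pos, abs_pos, w):
    """Blend the GPS fix with the ABS dead-reckoned position."""
    return tuple(w * g + (1 - w) * a for g, a in zip(gps_pos, abs_pos))
```

In open sky (low HDOP, many satellites) the weight saturates near 1 and the GPS dominates; in an urban canyon it drops toward 0 and the ABS dead reckoning takes over, mirroring the switching behaviour described above.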
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS) represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at
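The core of a non-intrusive chaos expansion can be sketched in a few lines for a single standard-normal input. The toy response `f` stands in for a model run; orthogonality of the probabilists' Hermite polynomials under N(0,1) then gives the mean and variance directly from the fitted coefficients.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def f(xi):
    """Toy model response; a HydroGeoSphere run would take its place."""
    return xi ** 2

rng = np.random.default_rng(0)
deg = 4
xi = rng.normal(size=200)            # non-intrusive: only model evaluations
A = He.hermevander(xi, deg)          # probabilists' Hermite basis He_k(xi)
c, *_ = np.linalg.lstsq(A, f(xi), rcond=None)

# Orthogonality of He_k under N(0,1) gives the statistics directly:
# E[y] = c_0 and Var[y] = sum_{k>=1} c_k^2 * k!
mean = c[0]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, deg + 1))
```

Here y = xi^2 = He_2(xi) + 1 exactly, so the regression recovers c_0 = 1 and c_2 = 1, giving mean 1 and variance 2, matching the analytical moments of a chi-squared variable with one degree of freedom. Multi-input problems replace `hermevander` with a tensorized multivariate basis.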
Analysis of Leaky Modes in Photonic Crystal Fibers Using the Surface Integral Equation Method
Directory of Open Access Journals (Sweden)
Jung-Sheng Chiang
2018-04-01
Full Text Available A fully vectorial algorithm based on the surface integral equation method for the modelling of leaky modes in photonic crystal fibers (PCFs), by solely solving for the complex propagation constants of the characteristic equations, is presented. It can be used to calculate the complex effective index and confinement losses of photonic crystal fibers. Since locating complex roots is the key technique in the solution, the new algorithm, which incorporates this technique, can be used to solve for the leaky modes of photonic crystal fibers. The leaky modes of solid-core PCFs with a hexagonal lattice of circular air-holes are reported and discussed. The simulation results indicate how the confinement loss, given by the imaginary part of the effective index, changes with air-hole size, the number of rings of air-holes, and wavelength. Confinement loss reductions can be realized by increasing the air-hole size and the number of air-holes. The results show that the confinement loss rises with wavelength, implying that light leaks more easily at longer wavelengths; meanwhile, the losses decrease significantly as the air-hole size d/Λ is increased.
The Novel Artificial Intelligence Based Sub-Surface Inclusion Detection Device and Algorithm
Directory of Open Access Journals (Sweden)
Jong-Ha LEE
2017-05-01
Full Text Available We design, implement, and test a novel tactile elasticity imaging sensor to detect the elastic modulus of a contacted object. Emulating a human finger, a multi-layer polydimethylsiloxane waveguide has been fabricated as the sensing probe. The light is illuminated under the critical angle to totally reflect within the flexible and transparent waveguide. When a waveguide is compressed by an object, the contact area of the waveguide deforms and causes the light to scatter. The scattered light is captured by a high resolution camera. Multiple images are taken from slightly different loading values. The distributed forces have been estimated using the integrated pixel values of diffused lights. The displacements of the contacted object deformation have been estimated by matching the series of tactile images. For this purpose, a novel pattern matching algorithm is developed. The salient feature of this sensor is that it is capable of measuring the absolute elastic modulus value of soft materials without additional measurement units. The measurements were validated by comparing the measured elasticity of the commercial rubber samples with the known elasticity. The evaluation results showed that this type of sensor can measure elasticity within ±5.38 %.
Clarke, R.; Lintereur, L.; Bahm, C.
2016-01-01
A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes are much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.
Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method
Directory of Open Access Journals (Sweden)
Muneki Yasuda
2018-04-01
Full Text Available The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and the Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author's previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the microseismic events location from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for 2D and 3D usual scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
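A sketch of annealing-based epicenter location in a homogeneous medium follows. The constant velocity, station layout, noise-free picks, and linear cooling schedule are simplifying assumptions (VFSA uses an exponentially fast temperature schedule, and real data include S-wave and backazimuth terms in the misfit).

```python
import math
import random

random.seed(0)
V = 3000.0                        # assumed homogeneous P velocity (m/s)
stations = [(0.0, 0.0), (800.0, 0.0), (0.0, 800.0), (800.0, 800.0)]
true_epicenter = (430.0, 260.0)
obs = [math.dist(true_epicenter, s) / V for s in stations]  # noise-free picks

def misfit(p):
    """Sum of squared P traveltime residuals over the surface array."""
    return sum((math.dist(p, s) / V - t) ** 2 for s, t in zip(stations, obs))

def anneal(n_iter=5000, t0=1e-6, step0=300.0):
    p = (400.0, 400.0)            # starting guess inside the search region
    e = misfit(p)
    best, e_best = p, e
    for i in range(n_iter):
        frac = i / n_iter
        temp = t0 * (1.0 - frac)           # linear cooling; VFSA cools
        step = step0 * (1.0 - frac) + 1.0  # exponentially fast instead
        q = (p[0] + random.gauss(0.0, step), p[1] + random.gauss(0.0, step))
        e_q = misfit(q)
        if e_q < e or random.random() < math.exp(-(e_q - e) / temp):
            p, e = q, e_q                  # accept better (or lucky worse) move
            if e < e_best:
                best, e_best = p, e
    return best

loc = anneal()
```

Compared with an exhaustive grid search over the same region, the annealer evaluates the misfit orders of magnitude fewer times, which is the speed-up the abstract reports for VFSA and PSO.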
Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics
Farhat, Charbel
1997-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.
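The trade-off described above can be illustrated with the explicit central-difference scheme on a small spring-mass chain: each step needs only a (diagonal) mass solve and nearest-neighbour data, but the step size is capped by the Courant-type limit dt <= 2/omega_max. All stiffness and mass values are illustrative.

```python
import numpy as np

n = 20
k, m = 1.0e4, 1.0                 # spring stiffness and lumped mass (assumed)
K = (np.diag([2.0 * k] * n) - np.diag([k] * (n - 1), 1)
     - np.diag([k] * (n - 1), -1))          # fixed-fixed spring chain
omega_max = np.sqrt(np.linalg.eigvalsh(K).max() / m)
dt = 1.8 / omega_max              # inside the stability limit dt <= 2/omega_max

u = np.zeros(n)
u[n // 2] = 1e-3                  # initial displacement bump, zero velocity
u_prev = u.copy()
for _ in range(2000):
    # Central difference: no factorization of K is ever needed, only a
    # matrix-vector product -- the property that makes explicit schemes
    # cheap per step and easy to parallelize.
    a = -K @ u / m
    u, u_prev = 2.0 * u - u_prev + dt * dt * a, u
```

Doubling the stiffness raises omega_max and forces a smaller dt, while an unconditionally stable implicit scheme could keep its step size at the cost of a factorization, which is exactly the tension the paragraph above describes.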
Integrated optical isolators using magnetic surface plasmon (Presentation Recording)
Shimizu, Hiromasa; Kaihara, Terunori; Umetsu, Saori; Hosoda, Masashi
2015-09-01
Optical isolators are one of the essential components to protect semiconductor laser diodes (LDs) from backward reflected light in integrated optics. In order to realize optical isolators, nonreciprocal propagation of light is necessary, which can be realized with magnetic materials. Semiconductor optical isolators have been strongly desired on Si and III/V waveguides. We have developed semiconductor optical isolators based on nonreciprocal loss owing to the transverse magneto-optic Kerr effect, where ferromagnetic metals are deposited on semiconductor optical waveguides1). Use of surface plasmon polaritons at the interface of a ferromagnetic metal and an insulator leads to stronger optical confinement and a stronger magneto-optic effect. It is possible to modulate the optical confinement by changing the magnetic field direction, thus optical isolator operation is proposed2, 3). We have investigated surface plasmons at the interfaces between ferrimagnetic garnet/gold film, and applications to waveguide optical isolators. We assumed waveguides composed of Au/Si(38.63nm)/Ce:YIG(1700nm)/Si(220nm)/Si, and calculated the coupling lengths between the Au/Si(38.63nm)/Ce:YIG plasmonic waveguide and the Ce:YIG/Si(220nm)/Si waveguide for transversely magnetized Ce:YIG in the forward and backward directions. The coupling length was calculated to be 232.1 μm for backward propagating light. For forward propagating light, on the other hand, the coupling was not complete, and the length was calculated to be 175.5 μm. The optical isolation using the nonreciprocal coupling and propagation loss was calculated to be 43.7 dB when the length of the plasmonic waveguide is 700 μm. 1) H. Shimizu et al., J. Lightwave Technol. 24, 38 (2006). 2) V. Zayets et al., Materials, 5, 857-871 (2012). 3) J. Montoya, et al., J. Appl. Phys. 106, 023108 (2009).
Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G
2012-09-01
This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O3) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O3 concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO2), and O3 (recorded during the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were obtained and only the best model of each methodology was analysed. In the threshold models, the variables selected by GAs to define the O3 regimes were temperature, CO and NO2 concentrations, due to their importance in O3 chemistry in an urban atmosphere. In the prediction of O3 concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
Tang, Hong Yu; Ye, Huai Yu; Chen, Xian Ping; Qian, Cheng; Fan, Xue Jun; Zhang, G.Q.
2017-01-01
In this paper, the heat transfer performance of the multi-chip (MC) LED module is investigated numerically by using a general analytical solution. The configuration of the module is optimized with a genetic algorithm (GA) combined with a response surface methodology. The space between chips, the
Conformal fields. From Riemann surfaces to integrable hierarchies
International Nuclear Information System (INIS)
Semikhatov, A.M.
1991-01-01
I discuss the idea of translating ingredients of conformal field theory into the language of hierarchies of integrable differential equations. Primary conformal fields are mapped into (differential or matrix) operators living on the phase space of the hierarchy, whereas operator insertions of, e.g., a current or the energy-momentum tensor, become certain vector fields on the phase space and thus acquire a meaning independent of a given Riemann surface. A number of similarities are observed between the structures arising on the hierarchy and those of the theory on the world-sheet. In particular, there is an analogue of the operator product algebra with the Cauchy kernel replaced by its 'off-shell' hierarchy version. Also, hierarchy analogues of certain operator insertions admit two (equivalent, but distinct) forms, resembling the 'bosonized' and 'fermionized' versions respectively. As an application, I obtain a useful reformulation of the Virasoro constraints of the type that arise in matrix models, as a system of equations on dressing (or Lax) operators (rather than correlation functions, i.e., residues or traces). This also suggests an interpretation in terms of a 2D topological field theory, which might be extended to a correspondence between Virasoro-constrained hierarchies and topological theories. (orig.)
Process for integrating surface drainage constraints on mine planning
Energy Technology Data Exchange (ETDEWEB)
Sawatsky, L.F; Ade, F.L.; McDonald, D.M.; Pullman, B.J. [Golder Associates Ltd., Calgary, AB (Canada)
2009-07-01
Surface drainage for mine closures must be considered during all phases of mine planning and design in order to minimize environmental impacts and reduce costs. This paper discussed methods of integrating mine drainage criteria and associated mine planning constraints into the mine planning process. Drainage constraints included stream diversions; fish compensation channels; collection receptacles for the re-use of process water; separation of closed circuit water from fresh water; and the provision of storage ponds. The geomorphic approach replicated the ability of natural channels to respond to local and regional changes in hydrology as well as channel disturbances from extreme flood events, sedimentation, debris, ice jams, and beaver activity. The approach was designed to enable a sustainable system and provide conveyance capacity for extreme floods without spillage to adjacent watersheds. Channel dimensions, bank and bed materials, sediment loads, bed material supplies and the hydrologic conditions of the analogue stream were considered. Hydrologic analyses were conducted to determine design flood flow. Channel routes, valley slopes, sinuosity, width, and depth were established. It was concluded that by incorporating the geomorphic technique, mine operators and designers can construct self-sustaining drainage systems that require little or no maintenance in the long-term. 7 refs.
Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.
2012-01-01
An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP is to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant for urbanization studies but also desired by global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classifications. Here we developed the global scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2), and then aggregated the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone there are 174 training sites. The size of the sites ranges from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the total number of training samples is over six million. Therefore, we developed this automated statistics-based algorithm to screen the training samples at two levels: site and scene. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. Then the screening process escalates to the scene level. A similar screening process, but with a looser threshold, is applied at the scene level to account for the possible variance due to site differences. We do not perform the screening process across scenes because the scenes might vary due to
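The site-level univariate screening step described above might look like the following sketch. The 10%-interval binning follows the description; the z-score rule and its threshold are illustrative, and the actual project additionally removes multivariate outliers and repeats the process at scene level with a looser threshold.

```python
import numpy as np

def screen_site_outliers(impervious_pct, band_values, z_max=3.0):
    """Site-level screening: group samples into 10% impervious-cover bins,
    then drop univariate outliers within each bin by z-score."""
    impervious_pct = np.asarray(impervious_pct, dtype=float)
    band_values = np.asarray(band_values, dtype=float)
    keep = np.ones(band_values.shape[0], dtype=bool)
    bins = np.minimum(impervious_pct // 10, 9).astype(int)   # the 10 groups
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        mu, sd = band_values[idx].mean(), band_values[idx].std()
        if sd > 0:
            keep[idx] = np.abs(band_values[idx] - mu) <= z_max * sd
    return keep
```

The returned boolean mask marks which training samples survive the screen; scene-level screening would apply the same idea with a larger `z_max`.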
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
In part 1 architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. Performance of algorithms when they are mapped on one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results. It is observed that the analysis is very accurate. Performance analysis of parallel algorithms when mapped across clusters is presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.
Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone
Energy Technology Data Exchange (ETDEWEB)
Alfonsi, Andrea [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Sen, Ramazan Sonat [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Laboratory (INL), Idaho Falls, ID (United States)
2015-07-01
The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g. core damage probability, etc.). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the presently used legacy codes that were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that it is very promising.
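For a pure-jump driving process the Marcus construction can be made concrete: each jump is applied by flowing along the noise vector field for unit time, which is exactly what preserves the Newton-Leibniz chain rule. The multiplicative noise sigma(x) = x below is a toy choice for illustration, not a model from the paper.

```python
import math

def marcus_jump_path(x0, jump_sizes):
    """Pathwise Marcus simulation of dx = x ∘ dJ for a pure-jump process J.
    Each jump dj maps x through the Marcus flow phi'(s) = phi(s) * dj on
    s in [0, 1], i.e. x -> x * exp(dj), so log(x) obeys the chain rule."""
    x = x0
    for dj in jump_sizes:
        x *= math.exp(dj)   # Marcus map for sigma(x) = x
    return x
```

Because the chain rule holds, log x(t) minus log x(0) equals the sum of the jumps exactly, which an Ito-style treatment of the same jumps would not give.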
Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd
2015-09-15
This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP) based scan matching method. The algorithm can operate in, and transition between, the two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system maintains sub-meter navigation accuracy over the whole trajectory.
Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore
2017-12-01
In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite Systems (GNSS) and Multiple Aperture Interferometry (MAI) to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, since the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and thereby improve the GNSS data interpolation. This approach makes the method independent of other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied this method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
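The core data-integration step — estimating the 3D displacement vector from several scalar projections (InSAR line-of-sight, MAI along-track, GNSS components) — reduces to a weighted least-squares problem. The look vectors, observations and weights below are illustrative, and this simple WLS step stands in for the paper's full Bayesian minimization.

```python
import numpy as np

def invert_3d(look_vectors, observations, weights):
    """Weighted least-squares estimate of the 3D displacement d from
    scalar projections obs_i = l_i · d, where each l_i is a unit
    look/direction vector (one row per InSAR/MAI/GNSS observation)."""
    A = np.asarray(look_vectors, dtype=float)      # shape (n, 3)
    W = np.diag(np.asarray(weights, dtype=float))  # observation weights
    b = np.asarray(observations, dtype=float)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

With at least three independent look directions the normal equations are well-posed; the weights would come from the relative accuracy of each data source.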
Integrated CLOS and PN Guidance for Increased Effectiveness of Surface to Air Missiles
Directory of Open Access Journals (Sweden)
Binte Fatima Tuz ZAHRA
2017-06-01
Full Text Available In this paper, a novel approach is presented to integrate command to line-of-sight (CLOS) guidance and proportional navigation (PN) guidance in order to reduce miss distance and to increase the effectiveness of surface to air missiles. Initially a comparison of command to line-of-sight guidance and proportional navigation is presented. Miss distance, variation of angle-of-attack, normal and lateral accelerations and error of the missile flight path from the direct line-of-sight are used as the criteria for comparing the two guidance laws. Following this comparison, a new approach is proposed for determining the most suitable guidance gains in order to minimize miss distance and improve the accuracy of the missile in delivering the warhead while using CLOS guidance. The proposed technique is based on constrained nonlinear minimization to optimize the guidance gains. CLOS guidance has a further limitation of a significant increase in normal and lateral acceleration demands during the terminal phase of missile flight. Furthermore, at large elevation angles, the required angle-of-attack during the terminal phase increases beyond design specifications. Consequently, a missile with optical sensors only, following CLOS guidance alone, is less likely to hit high speed targets beyond 45º in the elevation plane. A novel approach is thus proposed to overcome these limitations of CLOS-only guidance for surface to air missiles: an integrated guidance algorithm whereby the guidance law during the rocket motor burnout phase remains CLOS, and immediately after this phase the guidance law is automatically switched to PN guidance. This integrated approach not only results in a slight increase in the range of the missile but also significantly improves its likelihood of hitting targets beyond 30 degrees in the elevation plane, thus successfully overcoming various limitations of CLOS
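The switching logic of such an integrated law can be sketched as below. The CLOS gain, the PN navigation constant and the boost-phase criterion are placeholder values, not the optimized gains from the paper.

```python
def guidance_command(motor_burning, clos_error, k_clos, los_rate,
                     closing_speed, n_pn=4.0):
    """Integrated guidance sketch: CLOS while the rocket motor burns,
    then an automatic switch to proportional navigation (PN)."""
    if motor_burning:
        return k_clos * clos_error               # steer back onto the line of sight
    return n_pn * closing_speed * los_rate       # classic PN: a = N * Vc * LOS rate
```

The switch is driven purely by the flight phase, so no operator intervention or extra sensing is needed at handover.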
Uysal, Ismail Enes
2016-08-09
Transient electromagnetic interactions on plasmonic nanostructures are analyzed by solving the Poggio-Miller-Chan-Harrington-Wu-Tsai (PMCHWT) surface integral equation (SIE). Equivalent (unknown) electric and magnetic current densities, which are introduced on the surfaces of the nanostructures, are expanded using Rao-Wilton-Glisson and polynomial basis functions in space and time, respectively. Inserting this expansion into the PMCHWT-SIE and Galerkin testing the resulting equation at discrete times yield a system of equations that is solved for the current expansion coefficients by a marching-on-in-time (MOT) scheme. The resulting MOT-PMCHWT-SIE solver calls for the computation of additional convolutions between the temporal basis function and the plasmonic medium's permittivity and Green function. This computation is carried out at almost no additional cost and without changing the computational complexity of the solver. Time-domain samples of the permittivity and the Green function required by these convolutions are obtained from their frequency-domain samples using a fast relaxed vector fitting algorithm. Numerical results demonstrate the accuracy and applicability of the proposed MOT-PMCHWT solver. © 2016 Optical Society of America.
An efficient and robust algorithm for parallel groupwise registration of bone surfaces
van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.
2012-01-01
In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm
Energy Technology Data Exchange (ETDEWEB)
Tumuluru, Jaya
2013-01-10
Aims: The present case study is on maximizing aqua feed properties using response surface methodology and a genetic algorithm. Study Design: The effects of extrusion process variables like screw speed, L/D ratio, barrel temperature, and feed moisture content were analyzed to maximize aqua feed properties like water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable length single screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties like water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data was collected based on a central composite design. The experimental data was further analyzed using response surface methodology (RSM) and a genetic algorithm (GA) to maximize the feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated that WS, ER, and TD were maximized at an L/D ratio of 12-13, screw speed of 60-80 rpm, feed moisture content of 30-40%, and barrel temperature of 80 degrees C for ER and TD and > 90 degrees C for WS. Based on the GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, L/D ratio of 13.67, barrel temperature of 96.26 degrees C, and feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%. Conclusion: The present data analysis indicated
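The RSM-plus-GA workflow amounts to fitting a regression model to the designed-experiment data and then letting a GA search the fitted surface for the optimum. The toy one-dimensional objective and all GA settings below are illustrative, not the study's fitted equations or tuning.

```python
import random

def ga_maximize(f, bounds, pop_size=40, generations=60, seed=1):
    """Minimal real-coded genetic algorithm (tournament selection,
    uniform crossover, Gaussian mutation) maximizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = max(pop, key=f)
    for _ in range(generations):
        new = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=f)          # tournament parents
            b = max(rng.sample(pop, 3), key=f)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.3:                      # Gaussian mutation
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            new.append(child)
        pop = new
        best = max(pop + [best], key=f)                 # keep the elite
    return best
```

In the study's setting, `f` would be the fitted regression equation for WS, ER or TD, and `bounds` the ranges of screw speed, L/D ratio, barrel temperature and feed moisture.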
A parallel row-based algorithm for standard cell placement with integrated error control
Sargent, Jeff S.; Banerjee, Prith
1989-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.
Dynamic Water Surface Detection Algorithm Applied on PROBA-V Multispectral Data
Directory of Open Access Journals (Sweden)
Luc Bertels
2016-12-01
Full Text Available Water body detection worldwide using spaceborne remote sensing is a challenging task. A global scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global syntheses at spatial resolutions of 1 km and 333 m. The Red, Near-InfRared (NIR) and Short Wave InfRared (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision tree classification for water body detection. To minimize commission errors four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), Water Body Potential Mask (WBPM), Permanent Glacier Mask (PGM) and Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Besides the water bodies layer, a quality layer, based on the water body occurrences, is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained, while a mean Omission Error (OE) of 15.4% was obtained for a minimum Water Surface Ratio (WSR) of 0.5, dropping to 9.8% for a minimum WSR of 0.6. Here, WSR is defined as the fraction of the PROBA-V pixel covered by water as derived from high spatial resolution images, e.g., Landsat 8. Both the CE = 1.5% and OE = 9.8% (WSR = 0.6) fall within the user requirement of 15%. The WBDA is fully operational in the Copernicus Global Land Service and products are freely available.
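The parabolic decision rule on the HSV-transformed bands can be illustrated as follows. The parabola coefficients here are invented for illustration; the operational WBDA derives its own thresholds and additionally applies the NDVI/WBPM/PGM/VSM layers to suppress commission errors.

```python
def is_water(hue, value, a=0.0004, b=-0.2, c=40.0):
    """Classify a pixel as water when its HSV 'value' falls below a
    parabolic threshold in 'hue' (placeholder coefficients a, b, c)."""
    return value < a * hue ** 2 + b * hue + c
```

Dark pixels (low value) at water-like hues pass the test, while bright pixels at the same hue are rejected.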
Substrate integrated ferrite phase shifters and active frequency selective surfaces
International Nuclear Information System (INIS)
Cahill, B.M.
2002-01-01
There are two distinct parts to this thesis; the first investigates the use of ferrite tiles in the construction of printed phase shifting transmission lines, culminating in the design of two compact electromagnetically controlled beam steered patch and slot antenna arrays. The second part investigates the use of active frequency selective surfaces (AFSS), which are later used to cover a uPVC constructed enclosure. Field intensity measurements are taken from within the enclosure to determine the dynamic screening effectiveness. Trans Tech G-350 ferrite is investigated to determine its application in printed microstrip and stripline phase shifting transmission lines. 50-Ohm transmission lines are constructed using the ferrite tile and interfaced to Rogers RT Duroid 5870 substrate. Scattering parameter measurements are made under the application of variable magnetic fields to the ferrite. Later, two types of planar microwave beam steering antennas are constructed. The first uses the ferrites integrated into the Duroid as microstrip lines with 3 patch antennas as the radiating elements. The second uses stripline transmission lines, with slot antennas as the radiating sources etched into the ground plane of the triplate. Beam steering is achieved by the application of an external electromagnet. An AFSS is constructed by the interposition of PIN diodes into a dipole FSS array. Transmission response measurements are then made for various angles of electromagnetic wave incidence. Two states of operation exist: when a current is passed through the diodes and when the diodes are switched off. These two states form a high pass and a band stop space filter respectively. An enclosure covered with the AFSS is constructed and externally illuminated in the range 2.0 - 2.8 GHz. A probe antenna inside the enclosure, positioned at various locations throughout the volume, is used to establish the effective screening action of the AFSS in three-dimensional space. (author)
Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong
2018-03-01
With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection has become an extremely difficult task. This paper presents an application of an improved particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of the whole power system.
Directory of Open Access Journals (Sweden)
Fei Wang
2015-04-01
Full Text Available The successful launch of the Landsat 8 satellite with two thermal infrared bands on February 11, 2013, for continuous Earth observation provided another opportunity for remote sensing of land surface temperature (LST). However, calibration notices issued by the United States Geological Survey (USGS) indicated that data from the Landsat 8 Thermal Infrared Sensor (TIRS) Band 11 have large uncertainty and suggested using TIRS Band 10 data as a single spectral band for LST estimation. In this study, we present an improved mono-window (IMW) algorithm for LST retrieval from the Landsat 8 TIRS Band 10 data. Three essential parameters (ground emissivity, atmospheric transmittance and effective mean atmospheric temperature) are required for the IMW algorithm to retrieve LST. A new method was proposed to estimate the effective mean atmospheric temperature from local meteorological data. The other two essential parameters can both be estimated through the so-called land cover approach. Sensitivity analysis conducted for the IMW algorithm revealed that the possible error in estimating the required atmospheric water vapor content has the most significant impact on the probable LST estimation error. Under moderate errors in both water vapor content and ground emissivity, the algorithm had an accuracy of ~1.4 K for LST retrieval. Validation of the IMW algorithm using simulated datasets for various situations indicated that the difference between the retrieved and simulated LSTs was 0.67 K on average, with an RMSE of 0.43 K. Comparison of our IMW algorithm with the single-channel (SC) algorithm for three main atmosphere profiles indicated that the average error and RMSE of the IMW algorithm were −0.05 K and 0.84 K, respectively, which were less than the −2.86 K and 1.05 K of the SC algorithm. Application of the IMW algorithm to Nanjing and its vicinity in east China resulted in a reasonable LST estimation for the region. Spatial
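Mono-window algorithms of this family share the structure below, driven by the three essential parameters named in the abstract. The linearization coefficients `a` and `b` are placeholders, not the calibrated IMW constants for TIRS Band 10.

```python
def mono_window_lst(t_sensor, t_atm_mean, emissivity, transmittance,
                    a=-62.7, b=0.434):
    """Qin-style mono-window LST retrieval from a single thermal band:
    t_sensor is the at-sensor brightness temperature (K), t_atm_mean the
    effective mean atmospheric temperature (K); a, b are placeholder
    linearization coefficients for the Planck function."""
    C = emissivity * transmittance
    D = (1 - transmittance) * (1 + (1 - emissivity) * transmittance)
    return (a * (1 - C - D) + (b * (1 - C - D) + C + D) * t_sensor
            - D * t_atm_mean) / C
```

As a sanity check, with a perfectly transparent atmosphere and unit emissivity (C = 1, D = 0) the retrieved LST reduces to the at-sensor brightness temperature.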
Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu
2015-12-01
For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, and total tardiness. A fixed sized external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) function is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve such a system, namely, the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), where the number of samples cannot adequately represent the real probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
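The fault detection and isolation step compares the main filter (using all satellites) against auxiliary filters that each exclude one satellite. The scalar state estimates and the fixed threshold below are a simplified stand-in for the paper's log-likelihood-ratio statistic.

```python
def detect_faulty_satellites(main_estimate, aux_estimates, threshold):
    """Flag satellite i when the auxiliary filter that excludes it is
    inconsistent with the main estimate (simplified consistency check;
    the paper uses a log likelihood ratio test statistic instead)."""
    return [i for i, est in enumerate(aux_estimates)
            if abs(est - main_estimate) > threshold]
```

A returned index identifies the excluded-satellite filter that disagrees with the all-in-view solution, pointing to the candidate faulty measurement.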
International Nuclear Information System (INIS)
Dutta, Rajdeep; Ganguli, Ranjan; Mani, V
2011-01-01
Swarm intelligence algorithms are applied for optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and the feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that this system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. Finding the optimal locations of actuators/sensors and the feedback gain represents a constrained non-linear optimization problem. This problem is converted to an unconstrained optimization problem by using penalty functions. Two swarm intelligence algorithms, namely, artificial bee colony (ABC) and glowworm swarm optimization (GSO), are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one and two collocated actuator(s)/sensor(s) was considered and the numerical results were obtained by using a genetic algorithm and gradient based optimization methods. We consider the same problem and present the results obtained by using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem with five collocated actuators/sensors is considered and the numerical results obtained by using the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors and gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
Smartphone-Based Indoor Integrated WiFi/MEMS Positioning Algorithm in a Multi-Floor Environment
Directory of Open Access Journals (Sweden)
Zengshan Tian
2015-03-01
Full Text Available Indoor positioning in a multi-floor environment by using a smartphone is considered in this paper. The positioning accuracy and robustness of WiFi fingerprinting-based positioning are limited due to the unexpected variation of WiFi measurements between floors. On this basis, we propose a novel smartphone-based integrated WiFi/MEMS positioning algorithm based on the robust extended Kalman filter (EKF). The proposed algorithm first relies on the gait detection approach and a quaternion algorithm to estimate the velocity and heading angles of the target. Second, the velocity and heading angles, together with the results of WiFi fingerprinting-based positioning, are taken as the input of the robust EKF to conduct two-dimensional (2D) positioning. Third, the proposed algorithm calculates the height of the target by using the real-time recorded barometer and geographic data. Finally, the experimental results show that the proposed algorithm achieves positioning accuracy with root mean square errors (RMSEs) less than 1 m in an actual multi-floor environment.
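At its core, the fusion step of such a WiFi/MEMS filter is a Kalman measurement update blending the dead-reckoned prediction with the WiFi fix according to their variances. A scalar version, ignoring the robustification and the separate 2D/height handling, might look like this:

```python
def kalman_update(x_pred, p_pred, z, r):
    """Scalar Kalman measurement update: x_pred/p_pred come from the
    MEMS (pedestrian dead reckoning) prediction, z/r are the WiFi
    fingerprinting position fix and its variance."""
    k = p_pred / (p_pred + r)            # Kalman gain
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred
```

With equal prediction and measurement variances the update lands halfway between the two sources and halves the uncertainty, as one would expect.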
Directory of Open Access Journals (Sweden)
Khalid Qaraqe
2008-10-01
Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS). To reduce the likelihood of unnecessary false handoffs, the distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion which uses both the RSS and distance information, and a network selection method which uses context information such as the dropping probability, blocking probability, GoS (grade of service), and number of handoff attempts. As a decision making criterion, a velocity threshold is determined to optimize the system performance. The optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.
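The triggering part of such a vertical handoff scheme can be sketched as follows. The specific thresholds, and the rule that fast-moving users remain on the wide-area CDMA network, are illustrative choices consistent with the description, not the paper's exact algorithm.

```python
def select_network(rss_wlan_dbm, rss_threshold_dbm, distance_m, wlan_radius_m,
                   speed_mps, velocity_threshold_mps):
    """RSS + distance handoff trigger with a velocity threshold: fast
    mobiles stay on CDMA to avoid ping-pong handoffs into small WLAN cells."""
    if speed_mps > velocity_threshold_mps:
        return "CDMA"
    if rss_wlan_dbm >= rss_threshold_dbm and distance_m <= wlan_radius_m:
        return "WLAN"
    return "CDMA"
```

The distance condition filters out cases where a momentarily strong RSS would otherwise trigger a false handoff at the WLAN cell edge.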
Directory of Open Access Journals (Sweden)
Kim Jang-Sub
2008-01-01
Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS). To reduce the likelihood of unnecessary false handoffs, the distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion which uses both the RSS and distance information, and a network selection method which uses context information such as the dropping probability, blocking probability, GoS (grade of service), and number of handoff attempts. As a decision making criterion, the velocity threshold is determined to optimize the system performance. The optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.
Testing and tuning symplectic integrators for the hybrid Monte Carlo algorithm in lattice QCD
International Nuclear Information System (INIS)
Takaishi, Tetsuya; Forcrand, Philippe de
2006-01-01
We examine a new second-order integrator recently found by Omelyan et al. The integration error of the new integrator, measured in the root mean square of the energy difference, ⟨ΔH^2⟩^(1/2), is about 10 times smaller than that of the standard second-order leapfrog (2LF) integrator. As a result, the step size of the new integrator can be made about three times larger. Taking into account a factor 2 increase in cost, the new integrator is about 50% more efficient than the 2LF integrator. Integrating over positions first, then momenta, is slightly more advantageous than the reverse. Further parameter tuning is possible. We find that the optimal parameter for the new integrator is slightly different from the value obtained by Omelyan et al., and depends on the simulation parameters. This integrator could also be advantageous for the Trotter-Suzuki decomposition in quantum Monte Carlo.
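A sketch of the integrator family under discussion, applied to a toy harmonic oscillator: the position-first second-order minimum-norm scheme with a tunable parameter λ. As the abstract notes, the optimal λ depends on the simulation parameters; the value below is the commonly quoted one and is an assumption here:

```python
LAMBDA = 0.1931833  # commonly quoted Omelyan parameter (assumed value)

def force(x):
    return -x  # toy harmonic potential V(x) = x**2 / 2

def omelyan_step(x, p, eps):
    """One position-first 2nd-order minimum-norm step:
    T(L*e) V(e/2) T((1-2L)*e) V(e/2) T(L*e) -- two force evaluations,
    i.e. twice the cost of a leapfrog step, as noted in the abstract."""
    x += LAMBDA * eps * p
    p += 0.5 * eps * force(x)
    x += (1.0 - 2.0 * LAMBDA) * eps * p
    p += 0.5 * eps * force(x)
    x += LAMBDA * eps * p
    return x, p

def energy(x, p):
    return 0.5 * (p * p + x * x)

x, p = 1.0, 0.0
h0 = energy(x, p)
for _ in range(1000):
    x, p = omelyan_step(x, p, 0.1)
drift = abs(energy(x, p) - h0)  # bounded and small: the scheme is symplectic
```

In hybrid Monte Carlo the quantity that matters is exactly this energy difference ΔH at the end of a trajectory, since it enters the accept/reject step; a smaller ⟨ΔH²⟩^(1/2) at fixed cost is what makes the integrator more efficient.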
International Nuclear Information System (INIS)
Saidullah, S.; Shah, B.
2016-01-01
Background: To ablate an accessory pathway successfully and conveniently, accurate localization of the pathway is needed. Electrophysiologists use different algorithms before taking patients to the electrophysiology (EP) laboratory in order to plan the intervention accordingly. In this study, we used the Arruda algorithm to locate the accessory pathway. The objective of the study was to determine the accuracy of the Arruda algorithm for locating the pathway on surface ECG. Methods: It was a cross-sectional observational study conducted from January 2014 to January 2016 in the electrophysiology department of Hayat Abad Medical Complex, Peshawar, Pakistan. A total of fifty-nine (n=59) consecutive patients of both genders, aged 14-60 years, who presented with WPW syndrome (symptomatic tachycardia with a delta wave on surface ECG) were included in the study. Each patient's electrocardiogram (ECG) was analysed with the Arruda algorithm before the patient was taken to the laboratory. A standard four-wire protocol was used for the EP study before ablation. Once the findings were confirmed, the pathway was ablated as per standard guidelines. Results: A total of fifty-nine (n=59) patients between the ages of 14 and 60 years were included in the study. The cumulative mean age was 31.5 years ± 12.5 SD. There were 56.4% (n=31) males with mean age 28.2 years ± 10.2 SD and 43.6% (n=24) females with mean age 35.9 years ± 14.0 SD. The Arruda algorithm was found to be accurate in predicting the exact accessory pathway (AP) in 83.6% (n=46) of cases. Among all inaccurate predictions (n=9), the Arruda algorithm inaccurately predicted two thirds (n=6; 66.7%) of the pathways towards the right side (right posteroseptal, right posterolateral and right anterolateral). Conclusion: The Arruda algorithm was found highly accurate in predicting the accessory pathway before ablation. (author)
An Integrated Start-Up Method for Pumped Storage Units Based on a Novel Artificial Sheep Algorithm
Directory of Open Access Journals (Sweden)
Zanbin Wang
2018-01-01
Full Text Available Pumped storage units (PSUs) are an important storage tool for power systems containing large-scale renewable energy, and their merit of rapid start-up enables PSUs to modulate and stabilize the power system. In this paper, PSU start-up strategies have been studied and a new integrated start-up method is proposed for the purpose of achieving swift and smooth start-up. A two-phase closed-loop start-up strategy, composed of switching between Proportion Integration (PI) and Proportion Integration Differentiation (PID) controllers, is designed, and an integrated optimization scheme is proposed for a synchronous optimization of the parameters in the strategy. To enhance the optimization performance, a novel meta-heuristic called the Artificial Sheep Algorithm (ASA) is proposed and applied to solve the optimization task, after sufficient verification against seven popular meta-heuristic algorithms on 13 typical benchmark functions. A simulation model has been built for a Chinese PSU and comparative experiments are conducted to evaluate the proposed integrated method. Results show that the start-up performance is significantly improved on both the overshoot and start-up time indices, with up to a 34% reduction in time consumption under different working conditions. The significant improvement in PSU start-up is interesting and meaningful for further application on real units.
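The two-phase switching controller described above can be sketched as follows; all gains, the switching ratio and the first-order unit model are invented placeholders, not the paper's tuned values:

```python
# Invented gains and a crude first-order unit model; phase 1 uses PI to raise
# speed, phase 2 switches to PID near rated speed for a smooth landing.
def start_up(target=1.0, switch_ratio=0.9, dt=0.05, steps=400):
    speed, integ, prev_err = 0.0, 0.0, target
    for _ in range(steps):
        err = target - speed
        integ += err * dt
        if speed < switch_ratio * target:          # phase 1: PI control
            opening = 0.8 * err + 0.5 * integ
        else:                                      # phase 2: PID adds damping
            deriv = (err - prev_err) / dt
            opening = 0.8 * err + 0.5 * integ + 0.05 * deriv
        prev_err = err
        speed += dt * (opening - speed)            # toy first-order dynamics
    return speed

final_speed = start_up()   # settles near the rated (normalized) speed of 1.0
```

The integrated optimization in the paper tunes the gains of both phases together (here they would be the five coefficients above) rather than each phase in isolation, which is what "synchronous optimization of the parameters" refers to.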
Directory of Open Access Journals (Sweden)
Wojciechowski Szymon
2017-01-01
Full Text Available The paper is focused on the evaluation of surface integrity formed during turning of Inconel 718 with the application of various laser assistance strategies. The primary objective of the work was to determine the relations between the applied machining strategy and the obtained surface integrity, in order to select the effective cutting conditions allowing high surface quality to be obtained. The experiment carried out included the machining of Inconel 718 under conventional turning conditions, as well as during continuous laser assisted machining and sequential laser assistance. The surface integrity was evaluated by measurements of the machined surface topographies, microstructures and microhardness. Results revealed that the surface integrity of Inconel 718 is strongly affected by the selected machining strategy. A significant improvement of the surface roughness formed during machining of Inconel 718 can be reached by the application of simultaneous laser heating and cutting (LAM).
Wavelet based edge detection algorithm for web surface inspection of coated board web
Energy Technology Data Exchange (ETDEWEB)
Barjaktarovic, M; Petricevic, S, E-mail: slobodan@etf.bg.ac.r [School of Electrical Engineering, Bulevar Kralja Aleksandra 73, 11000 Belgrade (Serbia)
2010-07-15
This paper presents a significant improvement of an already installed vision system, designed for real-time coated board inspection. The improvement is achieved with the development of a new algorithm for edge detection. The algorithm is based on the redundant (undecimated) wavelet transform. Compared to the existing algorithm, better delineation of edges is achieved. This yields a better defect detection probability and more accurate geometrical classification, which will provide an additional reduction of waste. The algorithm will also provide more detailed classification and more reliable tracking of defects. This improvement requires minimal changes in the processing hardware; only a replacement of the graphics card would be needed, adding only negligibly to the system cost. Other changes are accomplished entirely in the image processing software.
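A minimal sketch of the idea behind such an algorithm: the undecimated (à trous) Haar wavelet transform keeps full resolution at every scale, so large detail coefficients mark edges directly. The 1-D scan-line version below, with an illustrative threshold, is an assumption about the approach, not the installed system's code:

```python
# Undecimated (a trous) Haar transform on a 1-D scan line; a 2-D web image
# would apply the same filters row- and column-wise. Threshold is invented.
def undecimated_haar_details(signal, levels=2):
    """Per-level Haar detail coefficients without downsampling: at level j
    the filter taps are spaced 2**j samples apart ('holes', a trous)."""
    details, approx = [], list(signal)
    for j in range(levels):
        step = 2 ** j
        n = len(approx)
        detail = [(approx[i] - approx[min(i + step, n - 1)]) / 2
                  for i in range(n)]
        approx = [(approx[i] + approx[min(i + step, n - 1)]) / 2
                  for i in range(n)]
        details.append(detail)
    return details

def detect_edges(signal, thresh=0.2):
    """Flag positions whose level-1 detail coefficient exceeds the threshold."""
    d1 = undecimated_haar_details(signal, levels=1)[0]
    return [i for i, c in enumerate(d1) if abs(c) > thresh]

scan = [0.0] * 8 + [1.0] * 8          # a single step edge between index 7 and 8
edges = detect_edges(scan)
```

Because no downsampling occurs, edge positions are localized at full resolution at every scale, which is what gives the better delineation the abstract reports over a decimated transform.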
Effect of different machining processes on the tool surface integrity and fatigue life
Energy Technology Data Exchange (ETDEWEB)
Cao, Chuan Liang [College of Mechanical and Electrical Engineering, Nanchang University, Nanchang (China); Zhang, Xianglin [School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan (China)
2016-08-15
Ultra-precision grinding, wire-cut electro discharge machining and lapping are often used to machine the tools in the fine blanking industry, and the surface integrity resulting from these machining processes is of great concern in the research field. To study the effect of processing surface integrity on fine blanking tool life, the surface integrity of different tool materials under different processing conditions and its influence on fatigue life were thoroughly analyzed in the present study. The results show that the surface integrity of different materials differed considerably under the same processing condition. For the same tool material, the surface integrity under varying processing conditions also differed considerably and deeply influenced the fatigue life.
Du, Jia; Younes, Laurent; Qiu, Anqi
2011-01-01
This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722
Directory of Open Access Journals (Sweden)
Daniel Wallner
2010-10-01
Full Text Available The present paper investigates an approach to integrate active and passive safety systems of passenger cars. Worldwide, the introduction of Integrated Safety Systems and Advanced Driver Assistance Systems (ADAS) is considered to continue the today
Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S
2017-11-01
Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. There are several techniques to extract information about the force exerted by muscles during any activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current distribution, volume conductor relations, the feedback control algorithm for rate coding and generation of firing pattern. The result shows that synthetic surface electromyography signals are highly complex in both non-fatigue and fatigue conditions. Furthermore, surface electromyography signals have higher amplitude and lower frequency under fatigue condition. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.
Assessment of Wind Turbine Structural Integrity using Response Surface Methodology
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Svenningsen, Lasse; Moser, Wolfgang
2016-01-01
Highlights •A new approach to assessment of site specific wind turbine loads is proposed. •The approach can be applied in both fatigue and ultimate limit state. •Two different response surface methodologies have been investigated. •The model uncertainty introduced by the response surfaces...
International Nuclear Information System (INIS)
Haug, E.; Rouvray, A.L. de; Nguyen, Q.S.
1977-01-01
This study proposes a general nonlinear algorithm stability criterion; it introduces a nonlinear algorithm, easily implemented in existing incremental/iterative codes, and it applies the new scheme beneficially to problems of linear elastic dynamic snap buckling. Based on the concept of energy conservation, the paper outlines an algorithm which degenerates into the trapezoidal rule if applied to linear systems. The new algorithm conserves energy in systems having elastic potentials up to the fourth order in the displacements. This is true in the important case of nonlinear total Lagrange formulations where linear elastic material properties are substituted. The scheme is easily implemented in existing incremental-iterative codes with provisions for stiffness reformation and containing the basic Newmark scheme. Numerical analyses of dynamic stability can be dramatically sensitive to amplitude errors, because damping algorithms may mask, and overestimating schemes may numerically trigger, the physical instability. The newly proposed scheme has been applied with larger time steps and less cost to the dynamic snap buckling of simple one and multi degree-of-freedom structures for various initial conditions.
Li, Xiaofan; Nie, Qing
2009-01-01
Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusions. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratu...
Testing and tuning new symplectic integrators for Hybrid Monte Carlo algorithm in lattice QCD
Takaishi, T; Takaishi, Tetsuya; Forcrand, Philippe de
2006-01-01
We examine a new 2nd order integrator recently found by Omelyan et al. The integration error of the new integrator measured in the root mean square of the energy difference, $\langle\Delta H^2\rangle^{1/2}$, is about 10 times smaller than that of the standard 2nd order leapfrog (2LF) integrator. As a result, the step size of the new integrator can be made about three times larger. Taking into account a factor 2 increase in cost, the new integrator is about 50% more efficient than the 2LF integrator. Integrating over positions first, then momenta, is slightly more advantageous than the reverse. Further parameter tuning is possible. We find that the optimal parameter for the new integrator is slightly different from the value obtained by Omelyan et al., and depends on the simulation parameters. This integrator, together with a new 4th order integrator, could also be advantageous for the Trotter-Suzuki decomposition in Quantum Monte Carlo.
International Nuclear Information System (INIS)
Kurt, Ünal
2014-01-01
The location selection for a nuclear power plant (NPP) is a strategic decision, which has significant impact on the economic operation of the plant and the sustainable development of the region. This paper proposes fuzzy TOPSIS and a generalized Choquet fuzzy integral algorithm for the evaluation and selection of optimal locations for an NPP in Turkey. Many sub-criteria such as geological conditions, social and touristic factors, transportation abilities, cooling water capacity and nearness to consumption markets are taken into account. Among the evaluated locations, Inceburun–Sinop was selected as the study site by the generalized Choquet fuzzy integral method due to its highest performance and meeting most of the investigated criteria, whereas with fuzzy TOPSIS, Iğneada–Kırklareli ranked first. Mersin–Akkuyu was not selected by either method. (author)
Energy Technology Data Exchange (ETDEWEB)
Azadeh, A; Seraj, O [Department of Industrial Engineering and Research Institute of Energy Management and Planning, Center of Excellence for Intelligent-Based Experimental Mechanics, College of Engineering, University of Tehran, P.O. Box 11365-4563 (Iran); Saberi, M [Department of Industrial Engineering, University of Tafresh (Iran); Institute for Digital Ecosystems and Business Intelligence, Curtin University of Technology, Perth (Australia)
2010-06-15
This study presents an integrated fuzzy regression and time series framework to estimate and predict electricity demand for seasonal and monthly changes in electricity consumption, especially in developing countries such as China and Iran with non-stationary data. Furthermore, it is difficult to model the uncertain behavior of energy consumption with only conventional fuzzy regression (FR) or time series, and the integrated algorithm could be an ideal substitute for such cases. First, the preferred time series model is selected from linear or nonlinear models. For this, after selecting the preferred Autoregressive Moving Average (ARMA) model, the McLeod-Li test is applied to determine the nonlinearity condition. When the nonlinearity condition is satisfied, the preferred nonlinear model is selected and defined as the preferred time series model. Finally, the preferred model from the fuzzy regression and time series models is selected by the Granger-Newbold test. Also, the impact of data preprocessing on the fuzzy regression performance is considered. Monthly electricity consumption of Iran from March 1994 to January 2005 is considered as the case of this study. The superiority of the proposed algorithm is shown by comparing its results with other intelligent tools such as the Genetic Algorithm (GA) and Artificial Neural Network (ANN). (author)
Energy Technology Data Exchange (ETDEWEB)
Jenq, B.C.
1986-01-01
The performance evaluation of integrated concurrency-control and recovery mechanisms for distributed data base systems is studied using a distributed testbed system. In addition, a queueing network model was developed to analyze the two phase locking scheme in the distributed testbed system. The combination of testbed measurement and analytical modeling provides an effective tool for understanding the performance of integrated concurrency control and recovery algorithms in distributed database systems. The design and implementation of the distributed testbed system, CARAT, are presented. The concurrency control and recovery algorithms implemented in CARAT include: a two phase locking scheme with distributed deadlock detection, a distributed version of optimistic approach, before-image and after-image journaling mechanisms for transaction recovery, and a two-phase commit protocol. Many performance measurements were conducted using a variety of workloads. A queueing network model is developed to analyze the performance of the CARAT system using the two-phase locking scheme with before-image journaling. The combination of testbed measurements and analytical modeling provides significant improvements in understanding the performance impacts of the concurrency control and recovery algorithms in distributed database systems.
Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms
Wheaton, Ira M.
2011-01-01
The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine which does have cross platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-01-01
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
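The KNN fingerprinting step can be sketched as below; the reference-point database and RSS vectors are invented, and the floor-map topology constraint is omitted for brevity:

```python
# Sketch of WiFi fingerprinting with K nearest neighbors: each reference
# point (RP) stores a mean RSS per access point; the position estimate is
# the centroid of the K RPs closest to the observed RSS vector in signal
# space. The paper additionally constrains candidate RPs by the floor map.
def knn_position(fingerprint_db, observed, k=2):
    def dist(rss_a, rss_b):
        return sum((a - b) ** 2 for a, b in zip(rss_a, rss_b)) ** 0.5
    ranked = sorted(fingerprint_db, key=lambda rp: dist(rp["rss"], observed))
    nearest = ranked[:k]
    x = sum(rp["xy"][0] for rp in nearest) / k
    y = sum(rp["xy"][1] for rp in nearest) / k
    return (x, y)

db = [  # invented survey database: position and mean RSS from 3 access points
    {"xy": (0.0, 0.0), "rss": [-40, -70, -80]},
    {"xy": (5.0, 0.0), "rss": [-70, -40, -80]},
    {"xy": (0.0, 5.0), "rss": [-70, -70, -40]},
]
pos = knn_position(db, [-45, -50, -82], k=2)
```

The topology constraint in the paper removes RPs that are unreachable from the previous position (e.g. through a wall), which is what suppresses the "go and back" mismatches before the EKF fusion.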
Integrability of Liouville system on high genus Riemann surface: Pt. 1
International Nuclear Information System (INIS)
Chen Yixin; Gao Hongbo
1992-01-01
By using the theory of uniformization of Riemann surfaces, we study properties of the Liouville equation and its general solution on a Riemann surface of genus g>1. After obtaining the Hamiltonian formalism in terms of free fields and calculating the classical exchange matrices, we prove the classical integrability of the Liouville system on a high-genus Riemann surface.
Flamelet Surface Density and Burning Rate Integral in Premixed Combustion
National Research Council Canada - National Science Library
Gouldin, F
1999-01-01
We have developed, tested and applied in V-flames and a spark ignition engine a new experimental method, crossed-plane laser imaging, for measuring flamelet surface normals in premixed turbulent flames...
Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov
2015-08-01
Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
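Of the optimization protocols mentioned, simulated annealing is the easiest to sketch; the toy 1-D cost below stands in for the platform's sensor-driven objective, and all schedule parameters are illustrative:

```python
import math
import random

# Sketch of one optimization strategy the platform realizes: simulated
# annealing, minimizing a toy 1-D cost in place of the sensor objective.
def simulated_annealing(cost, x0, temp=1.0, cooling=0.95, steps=500, seed=7):
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)        # local perturbation
        delta = cost(cand) - cost(x)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            x = cand
        if cost(x) < cost(best):
            best = x
        temp *= cooling                           # geometric cooling schedule
    return best

best = simulated_annealing(lambda v: (v - 3.0) ** 2, x0=0.0)
```

The "real-time realization of probabilistic models" the abstract mentions corresponds to the Boltzmann acceptance step: early on (high temperature) uphill moves are accepted to escape local minima, and the cooling schedule gradually turns the search into hill climbing.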
International Nuclear Information System (INIS)
Gao Min; Zhong Xia; Huang Shutao
2008-01-01
A multi-source database for high-level radioactive waste geological disposal, aims to promote the information process of the geological of HLW. In the periods of the multi-dimensional and multi-source and the integration of information and applications, it also relates to computer software and hardware, the paper preliminary analysises the data resources Beishan area, Gansu Province. The paper introduces a theory based on GIS technology and methods and open source code GDAL application, at the same time, it discusses the technical methods how to finish the application of the Quadtree algorithm in the area of information resources management system, fully sharing, rapid retrieval and so on. A more detailed description of the characteristics of existing data resources, space-related data retrieval algorithm theory, programming design and implementation of ideas are showed in the paper. (authors)
Chen, Yung-Yue
2018-05-08
Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded due to the environmental noises surrounding mobile device users. Regretfully, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that this proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.
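The whitening process in the elimination chain can be illustrated with a first-order prediction-error filter; this is a common choice, and since the paper's exact whitening method is not detailed in the abstract, the sketch below is an assumption:

```python
# Hedged sketch of a whitening step: a first-order prediction-error
# (pre-emphasis) filter that flattens the spectrum of a correlated signal
# before speech modelling and estimation. The prediction coefficient is
# estimated from the lag-0 and lag-1 autocorrelations of the frame.
def whiten(samples):
    r0 = sum(s * s for s in samples)
    r1 = sum(samples[i] * samples[i - 1] for i in range(1, len(samples)))
    a = r1 / r0 if r0 else 0.0              # lag-1 prediction coefficient
    return [samples[0]] + [samples[i] - a * samples[i - 1]
                           for i in range(1, len(samples))]

sig = [1.0, 0.9, 0.81, 0.729]               # strongly correlated toy frame
white = whiten(sig)                          # residual is much less correlated
```

After whitening, successive samples are nearly uncorrelated, which is the usual precondition for the estimator stage that follows in such chains.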
Directory of Open Access Journals (Sweden)
Yung-Yue Chen
2018-05-01
Full Text Available Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded due to the environmental noises surrounding mobile device users. Regretfully, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. Due to these depicted reasons, a methodology is systematically proposed to eliminate the effects of background noises for the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H2 estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that this proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoise process.
Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...
Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.
Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma
2014-12-08
In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.
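A drastically simplified sketch of lexical ontology matching, the family of techniques OntoPhil belongs to; the trigram Jaccard similarity and the threshold below are illustrative stand-ins for OntoPhil's actual initial-binding computation:

```python
# Toy lexical matcher: pair concepts from two ontologies by a character
# trigram Jaccard similarity over their labels. Real matchers (OntoPhil
# included) combine richer lexical, structural and semantic evidence.
def trigrams(label):
    s = "  " + label.lower() + " "          # pad so short labels still match
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def match(onto_a, onto_b, thresh=0.4):
    pairs = []
    for ca in onto_a:
        best = max(onto_b, key=lambda cb: similarity(ca, cb))
        if similarity(ca, best) >= thresh:
            pairs.append((ca, best))
    return pairs

# invented agent-side and city-side concept labels
aligned = match(["TemperatureSensor", "BusStop"],
                ["temperature_sensor", "bus_stop", "CityMap"])
```

Even this crude similarity bridges naming-convention differences (CamelCase vs. snake_case), which is the kind of heterogeneity between agent ontologies and the city ontology that the matcher must absorb.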
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
Aiming to address the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
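The offline-derivation idea can be illustrated as follows: when the observation matrix H merely selects a subset of states (as for position/velocity observations in SINS/GPS), the products P·Hᵀ and H·P·Hᵀ reduce to index gathers from P, so the general matrix multiplies can be removed ahead of time. The selection structure assumed below is an illustration, not the paper's actual derivation:

```python
# H selects states idx (one 1 per row); products with H become index gathers,
# so the O(n^3) general multiplies disappear after offline derivation.
def pht(P, idx):
    """P @ H.T when row i of H is all zeros except a 1 at column idx[i]."""
    return [[row[j] for j in idx] for row in P]

def hpht_plus_r(P, idx, R):
    """H @ P @ H.T + R under the same selection structure: index, don't multiply."""
    n = len(idx)
    return [[P[idx[i]][idx[j]] + R[i][j] for j in range(n)] for i in range(n)]

P = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 2.0]]                      # symmetric state covariance
G = pht(P, idx=[0, 2])                     # gather instead of multiply
S = hpht_plus_r(P, idx=[0, 2], R=[[0.1, 0.0], [0.0, 0.1]])
```

Symmetry of P gives a further saving the paper also exploits: only the upper (or lower) triangle needs to be stored and updated, roughly halving both the arithmetic and the memory traffic.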
Directory of Open Access Journals (Sweden)
Raul Correal
2016-11-01
Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.
Xu, Z N; Wang, S Y
2015-02-01
To improve the accuracy of dynamic contact angle calculations for drops on an inclined surface, a large number of numerical drop profiles on inclined surfaces with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After extensive computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can keep the dynamic contact angle error of the inclined-plane method below a given value, even for different types of liquids.
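The fitting step can be sketched as follows (an assumed general-conic formulation with a synthetic circular profile, not the authors' code): fit the conic a·x² + b·x·y + c·y² + d·x + e·y = 1 to the profile points by linear least squares, then obtain the contact angle from the tangent slope at a contact point via implicit differentiation.

```python
import numpy as np

def fit_conic(x, y):
    # least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coef  # (a, b, c, d, e)

def contact_angle_deg(coef, x0, y0):
    a, b, c, d, e = coef
    # dy/dx at (x0, y0) from differentiating the conic implicitly
    slope = -(2 * a * x0 + b * y0 + d) / (b * x0 + 2 * c * y0 + e)
    # contact angle measured inside the drop at the right-hand contact point
    return (-np.degrees(np.arctan(slope))) % 180

# synthetic profile: circle of radius 1 centered 0.5 above the baseline,
# which meets y = 0 with a geometric contact angle of 120 degrees
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
profile_x, profile_y = np.cos(t), 0.5 + np.sin(t)
coef = fit_conic(profile_x, profile_y)
angle = contact_angle_deg(coef, np.sqrt(0.75), 0.0)
```

For real drop images the profile points would come from edge detection, and the left contact point is handled symmetrically.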
Cox, Stephen J.; Stackhouse, Paul W., Jr.; Gupta, Shashi K.; Mikovitz, J. Colleen; Zhang, Taiping
2016-01-01
The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the period from 1983 to near-present. Spatial resolution is 1 degree. The current release 3.0 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel-level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number will allow SRB to produce a higher-resolution gridded product (e.g., 0.5 degree), as well as pixel-level fluxes. In addition to the input data improvements, several important algorithm improvements have been made. Most notable has been the adaptation of Angular Distribution Models (ADMs) from CERES to improve the initial calculation of shortwave TOA fluxes, from which the surface flux calculations follow. Other key input improvements include a detailed aerosol history using the Max Planck Institut Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data, the various other improved input data sets and the incorporation of many additional internal SRB model improvements. As of the time of abstract submission, results from 2007 have been produced, with ISCCP H availability the limiting factor. More SRB data will be produced as ISCCP reprocessing continues. The SRB data produced will be released as part of the Release 4.0 Integrated Product, recognizing the interdependence of the radiative fluxes with other GEWEX products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).
Surface light scattering: integrated technology and signal processing
DEFF Research Database (Denmark)
Lading, L.; Dam-Hansen, C.; Rasmussen, E.
1997-01-01
systems representing increasing levels of integration are considered. It is demonstrated that efficient signal and data processing can be achieved by evaluation of the statistics of the derivative of the instantaneous phase of the detector signal. (C) 1997 Optical Society of America....
An Integrated Software Suite for Surface-based Analyses of Cerebral Cortex
Van Essen, David C.; Drury, Heather A.; Dickson, James; Harwell, John; Hanlon, Donna; Anderson, Charles H.
2001-01-01
The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database. PMID:11522765
Meandered-line antenna with integrated high-impedance surface.
Energy Technology Data Exchange (ETDEWEB)
Forman, Michael A.
2010-09-01
A reduced-volume antenna composed of a meandered-line dipole antenna over a finite-width, high-impedance surface is presented. The structure is novel in that the high-impedance surface is implemented with four Sievenpiper via-mushroom unit cells, whose area is optimized to match the meandered-line dipole antenna. The result is an antenna similar in performance to a patch antenna but with one fourth the area, which can be deployed directly on the surface of a conductor. Simulations demonstrate a 3.5 cm (λ/4) square antenna with a bandwidth of 4% and a gain of 4.8 dBi at 2.5 GHz.
Directory of Open Access Journals (Sweden)
Chang-Seok Park
2017-09-01
This paper presents a torque error compensation algorithm for a surface-mounted permanent magnet synchronous machine (SPMSM) through real-time permanent magnet (PM) flux linkage estimation at various temperature conditions, from medium to rated speed. As is known, the PM flux linkage in SPMSMs varies with thermal conditions. Since the maximum torque per ampere look-up table, a control method used for copper loss minimization, is developed based on the estimated PM flux linkage, variation of the PM flux linkage results in undesired torque development in SPMSM drives. In this paper, the PM flux linkage is estimated through a stator flux linkage observer, and the torque error is compensated in real time using the estimated PM flux linkage. The proposed torque error compensation algorithm is verified in simulation and experiment.
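The compensation idea can be sketched numerically (assuming the standard SPMSM dq-frame torque expression T = 1.5·p·ψ_pm·i_q; the flux observer itself is not reproduced here): a temperature-induced drop in ψ_pm produces a proportional torque error, which a flux-ratio scaling of the q-axis current reference undoes.

```python
def torque(pole_pairs, psi_pm, iq):
    """SPMSM torque (Ld = Lq, so no reluctance term): T = 1.5 * p * psi * iq."""
    return 1.5 * pole_pairs * psi_pm * iq

def compensated_iq(iq_ref, psi_nominal, psi_estimated):
    """Scale the current reference by the nominal-to-estimated flux ratio."""
    return iq_ref * psi_nominal / psi_estimated
```

With nominal flux 0.1 Wb and an estimated hot-machine flux of 0.09 Wb, an uncompensated drive loses 10% of commanded torque; the scaled reference restores it.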
Frequency selective surfaces integrated with phased array antennas
Monni, S.
2005-01-01
Frequency Selective Surfaces (FSS's) are periodic arrays of patches and/or slots etched on a metal plate, having frequency and angular filtering properties. The FSS response to an excitation (for example a plane wave) is characterized in terms of its reflection and transmission coefficient, and
Integrating Surface Modeling into the Engineering Design Graphics Curriculum
Hartman, Nathan W.
2006-01-01
It has been suggested there is a knowledge base that surrounds the use of 3D modeling within the engineering design process and correspondingly within engineering design graphics education. While solid modeling receives a great deal of attention and discussion relative to curriculum efforts, and rightly so, surface modeling is an equally viable 3D…
Optimization of surface integrity in dry hard turning using RSM
Indian Academy of Sciences (India)
This paper investigates the effect of different cutting parameters on surface integrity in dry hard turning with a coated carbide tool under different settings of cutting parameters, following the procedure of response surface methodology (RSM) to determine optimal conditions.
DEFF Research Database (Denmark)
Baira Ojeda, Ismael; Tolu, Silvia; Lund, Henrik Hautop
2017-01-01
Combining the Fable robot, a modular robot, with a neuroinspired controller, we present the proof of principle of a system that can scale to several neurally controlled compliant modules. The motor control and learning of a robot module are carried out by a Unit Learning Machine (ULM) that embeds the Locally Weighted Projection Regression algorithm (LWPR) and a spiking cerebellar-like microcircuit. The LWPR guarantees both an optimized representation of the input space and the learning of the dynamic internal model (IM) of the robot. However, the cerebellar-like sub-circuit integrates LWPR input...
Seltzer, S. M.
1976-01-01
The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
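The control structure described (position, integral, and derivative feedback in a sampled-data loop) can be illustrated with a toy discrete PID controlling a double integrator, i.e. a rigid body in a plane; the gains and sample period below are illustrative placeholders, not values selected by the parameter plane method.

```python
def simulate(kp, ki, kd, T, steps, setpoint=1.0):
    """Sampled-data PID loop on a double-integrator plant theta'' = u."""
    theta, omega = 0.0, 0.0          # plant states: angle and rate
    integ, prev_err = 0.0, setpoint  # integrator state, previous error
    for _ in range(steps):
        err = setpoint - theta
        integ += err * T                      # integral (rectangular rule)
        deriv = (err - prev_err) / T          # backward-difference derivative
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        # Euler-discretized rigid-body plant
        omega += u * T
        theta += omega * T
    return theta
```

With stable illustrative gains the loop settles at the setpoint; in the paper the gains and sample rate would instead be chosen from the parameter plane.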
Directory of Open Access Journals (Sweden)
Ambarish Panda
2016-09-01
A new evolutionary hybrid algorithm (HA) is proposed in this work for the environmental optimal power flow (EOPF) problem. The EOPF problem is formulated in a nonlinear constrained multi-objective optimization framework. Considering the intermittency of available wind power, a cost model of the wind and thermal generation system is developed. A suitably formed objective function considering the operational cost, cost of emission, real power loss, and cost of installation of FACTS devices for maintaining a stable voltage in the system is optimized with the HA and compared with a particle swarm optimization algorithm (PSOA) to prove its effectiveness. All the simulations are carried out in the MATLAB/SIMULINK environment using the IEEE 30-bus test system.
Eppler, D. B.
2015-01-01
Lunar surface geological exploration should be founded on a number of key elements that are seemingly disparate but can form an integrated operational concept when properly conceived and deployed. If lunar surface geological exploration is to be useful, this integration of key elements needs to be undertaken throughout the development of mission hardware, training, and operational concepts. These elements include the concept of mission class, crew makeup and training, surface mobility assets matched with mission class, and field tools and IT assets that make data collection, sharing, and archiving transparent to the surface crew.
Impacts of model initialization on an integrated surface water - groundwater model
Ajami, Hoori; McCabe, Matthew; Evans, Jason P.
2015-01-01
Integrated hydrologic models characterize catchment responses by coupling the subsurface flow with land surface processes. One of the major areas of uncertainty in such models is the specification of the initial condition and its influence
On the initial condition problem of the time domain PMCHWT surface integral equation
Uysal, Ismail Enes; Bagci, Hakan; Ergin, A. Arif; Ulku, H. Arda
2017-01-01
Non-physical, linearly increasing and constant current components are induced in the marching-on-in-time solution of time domain surface integral equations when initial conditions on the time derivatives of the (unknown) equivalent currents are not enforced
CSIR Research Space (South Africa)
Matthews, MW
2012-09-01
A novel algorithm is presented for detecting trophic status (chlorophyll-a), cyanobacterial blooms (cyano-blooms), surface scum and floating vegetation in coastal and inland waters using top-of-atmosphere data from the Medium Resolution Imaging...
Integrating remotely sensed surface water extent into continental scale hydrology.
Revilla-Romero, Beatriz; Wanders, Niko; Burek, Peter; Salamon, Peter; de Roo, Ad
2016-12-01
In hydrological forecasting, data assimilation techniques are employed to improve estimates of initial conditions and to update incorrect model states with observational data. However, the limited availability of continuous and up-to-date ground streamflow data is one of the main constraints for large-scale flood forecasting models. This is the first study that assesses the impact of assimilating daily remotely sensed surface water extent, at a 0.1° × 0.1° spatial resolution and derived from the Global Flood Detection System (GFDS), into a global rainfall-runoff model covering large ungauged areas at the continental scale in Africa and South America. Surface water extent is observed using a range of passive microwave remote sensors. The methodology uses the brightness temperature, as water bodies have a lower emissivity. In a time series, the satellite signal is expected to vary with changes in water surface, and anomalies can be correlated with flood events. The Ensemble Kalman Filter (EnKF), a Monte-Carlo implementation of data assimilation, is used here by applying random sampling perturbations to the precipitation inputs to account for uncertainty, obtaining ensemble streamflow simulations from the LISFLOOD model. Results of the updated streamflow simulation are compared to baseline simulations without assimilation of the satellite-derived surface water extent. Validation is done at over 100 in situ river gauges using daily streamflow observations on the African and South American continents over a one-year period. Some of the more commonly used metrics in hydrology were calculated: KGE', NSE, PBIAS%, R², RMSE, and VE. Results show that, for example, the NSE score improved at 61 out of 101 stations, with significant improvements in both the timing and volume of the flow peaks, whereas validation at gauges located in lowland jungle showed the poorest performance, mainly due to the influence of closed forest on the satellite signal retrieval. The conclusion is that
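The EnKF analysis step used above can be sketched in its generic stochastic (perturbed-observation) form; this toy one-state system stands in for the precipitation-perturbed LISFLOOD ensemble of the study.

```python
import numpy as np

def enkf_update(X, y, H, obs_std, rng):
    """Stochastic EnKF analysis step.

    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: observation operator; obs_std: observation error std. dev.
    Each member is updated with its own perturbed copy of the observation.
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    HA = H @ A
    Pyy = HA @ HA.T / (n_ens - 1) + obs_std**2 * np.eye(len(y))
    Pxy = A @ HA.T / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                     # ensemble Kalman gain
    Y = y[:, None] + obs_std * rng.standard_normal((len(y), n_ens))
    return X + K @ (Y - H @ X)                       # analysis ensemble
```

The analysis ensemble mean moves toward the observation and its spread contracts, which is the mechanism by which satellite-derived water extent corrects the model states.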
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for different pieces before TWI measurements. For the purpose of forming a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, an experiment testing an off-axis ellipsoidal surface proved the validity of the proposed interference system.
Directory of Open Access Journals (Sweden)
Xin Li
2016-02-01
Wireless signal strength is susceptible to interference, jumping, and instability, which often appear in positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform the fusion calculation by adaptively determining the dynamic noise of the filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically imposes a rather high computational burden: to reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of the respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m with the Wi-Fi positioning method; however, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive-noise extended Kalman filter (EKF).
Energy Technology Data Exchange (ETDEWEB)
Parsakhoo, P.
2016-07-01
Aim of study: Corrected Backmund and Surface Distribution Algorithms (SDA) for the analysis of forest road networks are introduced and presented in this study. Research was carried out to compare road network performance between two districts in a hardwood forest. Area of study: Shast Kalateh forests, Iran. Materials and methods: In the uncorrected Backmund algorithm, skidding distance was determined by calculating road density and spacing, and the result was mapped as the Potential Area for Skidding Operations (PASO) in ArcGIS software. To correct this procedure, the skidding constraint areas were recorded using GPS and then removed from the PASO. In the SDA, the shortest perpendicular distance from the geometrical center of the timber compartments to the road was measured in both districts. Main results: In the corrected Backmund, forest openness in districts I and II was 70.3% and 69.5%, respectively; there was little difference in forest openness between the districts based on the uncorrected Backmund. In the SDA, the mean distance from the geometrical center of the timber compartments to the roads of districts I and II was 199.45 and 149.31 meters, respectively. Forest road network distribution in district II was better than that of district I according to the SDA. Research highlights: It was concluded that the uncorrected Backmund was not precise enough to assess the forest road network, while the corrected Backmund could exhibit the real PASO by removing skidding constraints. According to the presented algorithms, forest road network performance in district II was better than in district I.
Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang
2017-10-21
The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT. This point is called the barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has a zero eigenvector which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate the optimal pulling direction and the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positive definite function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.
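The Gauss-Newton iteration underlying such a scheme can be sketched generically (an illustrative 2D residual with a known root, not the paper's σ-function built from the BBP conditions): each step solves the normal equations of the linearized least-squares problem.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimize 0.5*||r(x)||^2 by Gauss-Newton: solve (J^T J) s = J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res, jac = r(x), J(x)
        step = np.linalg.solve(jac.T @ jac, jac.T @ res)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# illustrative 2D test residuals (Rosenbrock-type) with the root at (1, 1)
def residual(v):
    x, y = v
    return np.array([1.0 - x, 10.0 * (y - x**2)])

def jacobian(v):
    x, y = v
    return np.array([[-1.0, 0.0], [-20.0 * x, 10.0]])
```

On this zero-residual test problem the iteration reaches the root exactly after two steps.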
Directory of Open Access Journals (Sweden)
Yu-Ze Zhang
2017-01-01
The Cross-track Infrared Sounder (CrIS) is one of the most advanced hyperspectral instruments and has been used for various atmospheric applications such as atmospheric retrievals and weather forecast modeling. However, because of the specific design purpose of CrIS, little attention has been paid to retrieving land surface parameters from CrIS data. To take full advantage of the rich spectral information in CrIS data to improve land surface retrievals, particularly the acquisition of a continuous Land Surface Emissivity (LSE) spectrum, this paper attempts to simultaneously retrieve a continuous LSE spectrum and the Land Surface Temperature (LST) from CrIS data using atmospheric reanalysis data and the Iterative Spectrally Smooth Temperature and Emissivity Separation (ISSTES) algorithm. The results show that the accuracy of the retrieved LSEs and LST is comparable with current land products. The overall differences of the LST and LSE retrievals are approximately 1.3 K and 1.48%, respectively. However, the LSEs in our study can be provided as a continuous spectrum instead of the single-channel values in traditional products. The retrieved LST and LSEs can now be better used to further analyze surface properties or improve the retrieval of atmospheric parameters.
International Nuclear Information System (INIS)
Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu; Yeates, Anthony R.
2010-01-01
The emergence of tilted bipolar active regions (ARs) and the dispersal of their flux, mediated via processes such as diffusion, differential rotation, and meridional circulation, is believed to be responsible for the reversal of the Sun's polar field. This process (commonly known as the Babcock-Leighton mechanism) is usually modeled as a near-surface, spatially distributed α-effect in kinematic mean-field dynamo models. However, this formulation leads to a relationship between polar field strength and meridional flow speed which is opposite to that suggested by physical insight and predicted by surface flux-transport simulations. With this in mind, we present an improved double-ring algorithm for modeling the Babcock-Leighton mechanism based on AR eruption, within the framework of an axisymmetric dynamo model. Using surface flux-transport simulations, we first show that an axisymmetric formulation, which is usually invoked in kinematic dynamo models, can reasonably approximate the surface flux dynamics. Finally, we demonstrate that our treatment of the Babcock-Leighton mechanism through double-ring eruption leads to an inverse relationship between polar field strength and meridional flow speed as expected, reconciling the discrepancy between surface flux-transport simulations and kinematic dynamo models.
Integrating Surface Water Management in Urban and Regional Planning, Case Study of Wuhan in China
Du, N.
2010-01-01
The main goal of the study is to examine and develop a spatial planning methodology that would enhance the sustainability of urban development by integrating the surface water system in the urban and regional planning process. Theoretically, this study proposes that proactive-integrated policy and
DEFF Research Database (Denmark)
Zhdanov, Michael; Cai, Hongzhu
2014-01-01
We introduce a new method of modeling and inversion of potential field data generated by a density contrast surface. Our method is based on 3D Cauchy-type integral representation of the potential fields. Traditionally, potential fields are calculated using volume integrals of the domains occupied...
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, new data sets are continually run through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm
International Nuclear Information System (INIS)
Kamleh, Waseem
2011-01-01
Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
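The baseline Sexton-Weingarten scheme that the paper generalises can be sketched for a toy split Hamiltonian (an ordinary harmonic oscillator with slow and fast force terms, not an HMC fermion action): the slow force is kicked at the outer step, while the fast force is integrated with several inner leapfrog steps per outer step.

```python
def nested_leapfrog(q, p, f_slow, f_fast, dt, n_outer, n_sub):
    """Two-scale nested leapfrog: n_sub inner steps per outer step of size dt."""
    for _ in range(n_outer):
        p += 0.5 * dt * f_slow(q)       # opening half kick from the slow force
        h = dt / n_sub                  # inner step, an exact divisor of dt
        for _ in range(n_sub):          # inner leapfrog for the fast force
            p += 0.5 * h * f_fast(q)
            q += h * p
            p += 0.5 * h * f_fast(q)
        p += 0.5 * dt * f_slow(q)       # closing half kick
    return q, p
```

The restriction the paper relaxes is visible in `h = dt / n_sub`: each scale must be an exact multiple of the next smaller one. As a symplectic integrator the scheme conserves energy to high accuracy over the trajectory.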
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
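The search described above can be sketched with a toy genetic algorithm over process orderings (a hypothetical 5-process coupling list; DeMAID itself is not reproduced): the fitness to minimize is the number of feedback couplings, i.e. data dependencies that point backwards in the chosen sequence.

```python
import random

# hypothetical couplings: (src, dst) means process src's output feeds dst
COUPLINGS = [(1, 0), (2, 1), (0, 2), (3, 2), (4, 3), (0, 4)]

def feedbacks(order):
    """Count couplings whose source executes after its destination."""
    pos = {proc: i for i, proc in enumerate(order)}
    return sum(1 for src, dst in COUPLINGS if pos[src] > pos[dst])

def ga_order(n=5, pop_size=30, generations=60, seed=1):
    """Elitist GA with swap mutation over permutations of the processes."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=feedbacks)
        survivors = pop[:pop_size // 2]           # keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)        # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=feedbacks)
```

Here the cycle among processes 0, 1 and 2 forces at least one feedback in any ordering, so the best achievable fitness is 1.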
Directory of Open Access Journals (Sweden)
S. B. Mansor
2012-08-01
In this study, a geospatial model for land use allocation was developed from the view of simulating biological autonomous adaptability to the environment and infrastructural preference. The model was developed based on a multi-agent genetic algorithm and was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of land. The second task was to determine the fitness function for the genetic algorithm. The third objective was to optimize the land use map using economic benefits. The results indicated that the proposed model has much better performance for solving complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.
Unscented Kalman Filter Algorithm for WiFi-PDR Integrated Indoor Positioning
Directory of Open Access Journals (Sweden)
CHEN GuoLiang
2015-12-01
Full Text Available Indoor positioning still faces many fundamental technical problems although it has been widely applied. A novel indoor positioning technology using a smart phone assisted by the widely available and economical WiFi signals is proposed, together with its principles and characteristics. First, the system's accuracy is improved by fusing WiFi fingerprinting positioning and PDR (pedestrian dead reckoning) positioning with a UKF (unscented Kalman filter). Second, real-time performance is improved by clustering the WiFi fingerprints with the k-means clustering algorithm. An investigation test was conducted in an indoor environment on a HUAWEI P6-U06 smart phone. The results show that, compared to the pattern-matching system without clustering, an average reduction of 51% in time cost can be obtained without degrading the positioning accuracy. For a walking user, the average positioning error of WiFi is 7.76 m and that of PDR is 4.57 m; after UKF fusion, the system's average positioning error drops to 1.24 m. These results show that the algorithm greatly improves the system's real-time performance and positioning accuracy.
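The clustering step that speeds up fingerprint matching can be illustrated with a plain k-means sketch: a query fingerprint is compared only against the fingerprints in its nearest cluster, cutting the number of distance evaluations. This is an illustrative reconstruction, not the paper's code; the data layout (one RSS vector per row) is an assumption:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on RSS fingerprint vectors (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each fingerprint to its nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def match(query, X, centers, labels):
    """Match only within the nearest cluster -> fewer distance evaluations."""
    c = np.linalg.norm(centers - query, axis=1).argmin()
    idx = np.where(labels == c)[0]
    return idx[np.linalg.norm(X[idx] - query, axis=1).argmin()]
```

With well-separated clusters of access-point signal strengths, the reduced search still returns the nearest fingerprint.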
Korkin, S.; Lyapustin, A.
2012-12-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields an analytical evaluation of the Jacobian matrix. Assuming the MISR observation geometry [5] as an example, the average scattering cosines, the ratio of coarse and fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7 - 29. [4]. Mishchenko MI, Travis LD
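The damped Gauss-Newton iteration at the core of Levenberg-Marquardt can be sketched generically: solve (JᵀJ + λ·diag(JᵀJ)) δ = −Jᵀr, accept the step if the residual norm drops (shrinking λ toward Gauss-Newton), otherwise grow λ toward gradient descent. This is a minimal generic sketch, not the authors' Fortran retrieval code; the test function and damping constants are assumptions:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, max_iter=100, lam=1e-3):
    """Minimize ||residual(p)||^2 by damping the Gauss-Newton step (LM)."""
    p = np.asarray(p0, float)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ J
        g = J.T @ r
        # Marquardt scaling: damp with the diagonal of the normal matrix
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.linalg.norm(residual(p + step)) < np.linalg.norm(r):
            p = p + step
            lam *= 0.5        # accept: move toward Gauss-Newton
        else:
            lam *= 2.0        # reject: more gradient-descent-like
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

Fitting a two-parameter exponential model recovers the true parameters from a rough initial guess.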
Effect of surface integrity of hard turned AISI 52100 steel on fatigue performance
International Nuclear Information System (INIS)
Smith, Stephen; Melkote, Shreyes N.; Lara-Curzio, Edgar; Watkins, Thomas R.; Allard, Larry; Riester, Laura
2007-01-01
This paper addresses the relationship between surface integrity and fatigue life of hard turned AISI 52100 steel (60-62 HRC), with grinding as a benchmark. The impact of superfinishing on the fatigue performance of hard turned and ground surfaces is also discussed. Specifically, the surface integrity and fatigue life of the following five distinct surface conditions are examined: hard turned with continuous white layer, hard turned with no white layer, ground, and superfinished hard turned and ground specimens. Surface integrity of the specimens is characterized via surface topography measurement, metallography, residual stress measurements, transmission electron microscopy (TEM), and nano-indentation tests. High cycle tension-tension fatigue tests show that the presence of white layer does not adversely affect fatigue life and that, on average, the hard turned surface performs as well or better than the ground surface. The effect of superfinishing is to exaggerate these differences in performance. The results obtained from this study suggest that the effect of residual stress on fatigue life is more significant than the effect of white layer. For the hard turned surfaces, the fatigue life is found to be directly proportional to both the surface compressive residual stress and the maximum compressive residual stress. Possible explanations for the observed effects are discussed
Smith, W. L., Jr.; Minnis, P.; Bedka, K. M.; Sun-Mack, S.; Chen, Y.; Doelling, D. R.; Kato, S.; Rutan, D. A.
2017-12-01
Recent studies analyzing long-term measurements of surface insolation at ground sites suggest that decadal-scale trends of increasing (brightening) and decreasing (dimming) downward solar flux have occurred at various times over the last century. Regional variations have been reported that range from near 0 Wm-2/decade to as large as 9 Wm-2/decade depending on the location and time period analyzed. The more significant trends have been attributed to changes in overhead clouds and aerosols, although quantifying their relative impacts using independent observations has been difficult, owing in part to a lack of consistent long-term measurements of cloud properties. This paper examines new satellite-based records of cloud properties derived from MODIS (2000-present) and AVHRR (1981-present) data to infer cloud property trends over a number of surface radiation sites across the globe. The MODIS cloud algorithm was developed for the NASA Clouds and the Earth's Radiant Energy System (CERES) project to provide a consistent record of cloud properties to help improve broadband radiation measurements and to better understand cloud radiative effects. The CERES-MODIS cloud algorithm has been modified to analyze other satellites including the AVHRR on the NOAA satellites. Compared to MODIS, obtaining consistent cloud properties over a long period from AVHRR is a much more significant challenge owing to the number of different satellites, instrument calibration uncertainties, orbital drift and other factors. Nevertheless, both the MODIS and AVHRR cloud properties will be analyzed to determine trends, and their level of consistency and correspondence with surface radiation trends derived from the ground-based radiometer data. It is anticipated that this initial study will contribute to an improved understanding of surface solar radiation trends and their relationship to clouds.
Maria L. Sonett
1999-01-01
Integrated surface management techniques for pipeline construction through arid and semi-arid rangeland ecosystems are presented in a case history of a 412-mile pipeline construction project in New Mexico. Planning, implementation and monitoring for restoration of surface hydrology, soil stabilization, soil cover, and plant species succession are discussed. Planning...
Investigation of Selected Surface Integrity Features of Duplex Stainless Steel (DSS) after Turning
Czech Academy of Sciences Publication Activity Database
Krolczyk, G.; Nieslony, P.; Legutko, S.; Hloch, Sergej; Samardžić, I.
2015-01-01
Vol. 54, No. 1 (2015), pp. 91-94. ISSN 0543-5846. Institutional support: RVO:68145535. Keywords: duplex stainless steel; machining; turning; surface integrity; surface roughness. Subject RIV: JQ - Machines; Tools. Impact factor: 0.959, year: 2014. http://hrcak.srce.hr/126702
Miyajima, Hiroyuki; Ozer, Fusun; Imazato, Satoshi; Mante, Francis K
2017-09-01
Artificial hip joints are generally expected to fail due to wear after approximately 15 years and then have to be replaced by revision surgery. If articular cartilage can be integrated onto the articular surfaces of artificial joints in the same way as osseo-integration of titanium dental implants, the wear of joint implants may be reduced or prevented. However, very few studies have focused on the relationship between Ti surfaces and cartilage. To explore the possibility of cartilaginous-integration, we fabricated chemically treated Ti surfaces using H2O2/HCl, collagen type II and SBF. We then evaluated the surface characteristics of the prepared Ti samples and assessed cartilage formation by culturing chondrocytes on them. When oxidized Ti was immersed in SBF for 7 days, apatite formed on the Ti surface. The surface characterization indicated that wettability was increased by all chemical treatments compared to untreated Ti, and that the H2O2/HCl-treated surface had significantly higher roughness compared to the other three groups. Chondrocytes produced significantly more cartilage matrix on all chemically treated Ti surfaces compared to untreated Ti. Thus, to realize cartilaginous-integration and to prevent wear of the implants in joints, application of bioactive Ti formed by chemical treatment would be a promising and effective strategy to improve the durability of joint replacements. Copyright © 2017 Elsevier B.V. All rights reserved.
Integral methods for shallow free-surface flows with separation
DEFF Research Database (Denmark)
Watanabe, S.; Putkaradze, V.; Bohr, Tomas
2003-01-01
eddy and separated flow. Assuming a variable radial velocity profile as in Karman-Pohlhausen's method, we obtain a system of two ordinary differential equations for stationary states that can smoothly go through the jump. Solutions of the system are in good agreement with experiments. For the flow down...... an inclined plane we take a similar approach and derive a simple model in which the velocity profile is not restricted to a parabolic or self-similar form. Two types of solutions with large surface distortions are found: solitary, kink-like propagating fronts, obtained when the flow rate is suddenly changed......, and stationary jumps, obtained, for instance, behind a sluice gate. We then include time dependence in the model to study the stability of these waves. This allows us to distinguish between sub- and supercritical flows by calculating dispersion relations for wavelengths of the order of the width of the layer....
Establishment of integrated information displays in aluminium surfaces using nanomanufacturing
DEFF Research Database (Denmark)
Prichystal, Jan; Hansen, Hans Nørgaard; Bladt, Henrik H.
2006-01-01
Bang & Olufsen has been working with a method for manufacturing ultra-thin structures in aluminium that can be penetrated by light. This work has resulted in a patent describing how to obtain this effect by material removal in local areas in a solid material. The idea behind an invisible display...... in aluminium concerns the processing of a metal workpiece in such a way that microcavities are formed from the backside of the workpiece. The microcavities must not penetrate the metal front side, but an ultra-thin layer of metal is left. It is possible to shine light through this layer. By ordering...... microcavities in a matrix, different symbols can be obtained by shining light from the backside of the workpiece. When there is no light from the backside, the front surface seems totally untouched. Three different manufacturing processes were investigated to achieve the desired functionality: laser...
Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel
2018-04-20
In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven of the 13 PAR products agreed well with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%), followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%), at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF0.7); however, all methods showed larger RMSD differences (biases) ranging between 32% and 80.6% (-54.5% to 8.7%). Finally, three methods tested for low sun elevations revealed
Directory of Open Access Journals (Sweden)
Jianhua Wang
2014-10-01
Full Text Available Purpose: The stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship in the current market, and efficient scheduling techniques are important tools for establishing such dynamic supply chain relationships. This paper studies the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, aiming to minimize the supply chain operating cost, where the manufacturers differ in production capacity, holding and production cost rates, and transportation costs to retailers. Design/methodology/approach: Treating it as a complex task allocation and scheduling problem, this paper sets up an INLP model and designs a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supplying quantities step by step according to their unit costs. Findings: A comparative analysis between the UCA heuristic and the Lingo solver over many numerical experiments shows that the INLP model and the UCA algorithm can obtain a near-optimal solution to the two-stage supply chain planning and scheduling problem within very short CPU time. Research limitations/implications: The proposed UCA heuristic readily helps managers optimize two-stage supply chain scheduling problems that do not include order delivery times and batches. Since two-stage supply chains are the most common form of actual commercial relationships, modifying and extending the UCA heuristic should make it possible to optimize the integrated planning and scheduling problems of supply chains with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for optimizing the integrated planning and scheduling problem of two-stage supply chains with constraints on the suppliers' production capacity and the orders' delivery times, and has a great
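The unit-cost idea behind the UCA heuristic can be illustrated with a simple greedy allocation sketch: each retailer's demand is filled from the manufacturer with the lowest unit cost (production plus transport) that still has capacity. This is a hypothetical simplification for illustration; the paper's UCA heuristic additionally adjusts quantities iteratively and handles scheduling:

```python
def allocate(demands, capacities, unit_costs):
    """Greedy unit-cost allocation of retailer demands to manufacturers.

    demands: demand per retailer; capacities: capacity per manufacturer;
    unit_costs[m][r]: unit cost of serving retailer r from manufacturer m.
    Returns ({(manufacturer, retailer): quantity}, total cost)."""
    cap = list(capacities)
    plan = {}
    total_cost = 0.0
    for r, demand in enumerate(demands):
        # try manufacturers in ascending order of this retailer's unit cost
        for m in sorted(range(len(cap)), key=lambda i: unit_costs[i][r]):
            if demand == 0:
                break
            q = min(demand, cap[m])
            if q > 0:
                plan[(m, r)] = q
                cap[m] -= q
                demand -= q
                total_cost += q * unit_costs[m][r]
        if demand > 0:
            raise ValueError("insufficient total capacity")
    return plan, total_cost
```

For a small two-manufacturer, two-retailer instance, the greedy plan spills over to the second-cheapest manufacturer only when capacity runs out.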
An Algorithm for Surface Current Retrieval from X-band Marine Radar Images
Directory of Open Access Journals (Sweden)
Chengxi Shen
2015-06-01
Full Text Available In this paper, a novel current inversion algorithm from X-band marine radar images is proposed. The routine, for which deep water is assumed, begins with 3-D FFT of the radar image sequence, followed by the extraction of the dispersion shell from the 3-D image spectrum. Next, the dispersion shell is converted to a polar current shell (PCS using a polar coordinate transformation. After removing outliers along each radial direction of the PCS, a robust sinusoidal curve fitting is applied to the data points along each circumferential direction of the PCS. The angle corresponding to the maximum of the estimated sinusoid function is determined to be the current direction, and the amplitude of this sinusoidal function is the current speed. For validation, the algorithm is tested against both simulated radar images and field data collected by a vertically-polarized X-band system and ground-truthed with measurements from an acoustic Doppler current profiler (ADCP. From the field data, it is observed that when the current speed is less than 0.5 m/s, the root mean square differences between the radar-derived and the ADCP-measured current speed and direction are 7.3 cm/s and 32.7°, respectively. The results indicate that the proposed procedure, unlike most existing current inversion schemes, is not susceptible to high current speeds and circumvents the need to consider aliasing. Meanwhile, the relatively low computational cost makes it an excellent choice in practical marine applications.
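The circumferential sinusoid-fitting step of the polar current shell can be sketched as a linear least-squares problem: expanding U·cos(θ − θc) as a·cosθ + b·sinθ makes the fit linear in (a, b), with the current speed as the amplitude and the current direction as the phase. A minimal sketch with dispersion-shell extraction and outlier removal omitted (the input arrays are assumed to be the cleaned PCS samples):

```python
import numpy as np

def fit_current(theta, shift):
    """Fit shift(theta) ~ U * cos(theta - theta_c) by linear least squares.

    Returns (speed U, direction theta_c in radians): the amplitude of the
    fitted sinusoid is the current speed and its phase the current direction."""
    M = np.column_stack([np.cos(theta), np.sin(theta)])
    (a, b), *_ = np.linalg.lstsq(M, shift, rcond=None)
    return float(np.hypot(a, b)), float(np.arctan2(b, a))
```

On noise-free synthetic data the amplitude and phase are recovered exactly.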
Directory of Open Access Journals (Sweden)
Tine L. Vandoorn
2015-06-01
Full Text Available The increasing share of distributed energy resources poses a challenge to the distribution network operator (DNO to maintain the current availability of the system while limiting the investment costs. Related to this, there is a clear trend in DNOs trying to better monitor their grid by installing a distribution management system (DMS. This DMS enables the DNOs to remotely switch their network or better localize and solve faults. Moreover, the DMS can be used to centrally control the grid assets. Therefore, in this paper, a control strategy is discussed that can be implemented in the DMS for solving current congestion problems posed by the increasing share of renewables in the grid. This control strategy controls wind turbines in order to avoid congestion while mitigating the required investment costs in order to achieve a global cost-efficient solution. Next to the application and objective of the control, the parameter tuning of the control algorithm is discussed.
On the initial condition problem of the time domain PMCHWT surface integral equation
Uysal, Ismail Enes
2017-05-13
Non-physical, linearly increasing and constant current components are induced in the marching-on-in-time solution of time domain surface integral equations when initial conditions on the time derivatives of the (unknown) equivalent currents are not enforced properly. This problem can be remedied by solving the time integral of the surface integral equation for auxiliary currents that are defined as the time derivatives of the equivalent currents. The equivalent currents are then obtained by numerically differentiating the auxiliary ones. In this work, this approach is applied to the marching-on-in-time solution of the time domain Poggio-Miller-Chang-Harrington-Wu-Tsai surface integral equation enforced on dispersive/plasmonic scatterers. The accuracy of the proposed method is demonstrated by a numerical example.
Framework for Integrating Science Data Processing Algorithms Into Process Control Systems
Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.
2011-01-01
A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.
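The lifecycle the PCS Task Wrapper manages (stage an isolated working area, execute the PGE, marshal its products and metadata back for ingestion) can be caricatured in a few lines. All names below are illustrative stand-ins, not the actual PCS API, and the PGE is modeled as a plain callable:

```python
import os
import tempfile
import time

class TaskWrapper:
    """Hypothetical sketch of a PGE task wrapper: stage a working directory,
    run the science algorithm, then collect products and metadata."""
    def __init__(self, pge, inputs):
        self.pge = pge          # the science algorithm: a callable here
        self.inputs = inputs

    def run(self):
        workdir = tempfile.mkdtemp(prefix="pge_")   # isolated working area
        start = time.time()
        products = self.pge(workdir, self.inputs)   # execute the PGE
        return {                                    # metadata for ingestion
            "workdir": workdir,
            "runtime_s": time.time() - start,
            "products": sorted(products),
        }

def toy_pge(workdir, inputs):
    """A stand-in PGE that writes one product file per input value."""
    out = []
    for name, value in inputs.items():
        path = os.path.join(workdir, name + ".out")
        with open(path, "w") as f:
            f.write(str(value))
        out.append(path)
    return out
```

The wrapper isolates the PGE from file management concerns: the algorithm only reads its staged inputs and writes into the working directory it is handed.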
Directory of Open Access Journals (Sweden)
John Patrick Mpindi
Full Text Available BACKGROUND: Meta-analysis of gene expression microarray datasets presents significant challenges for statistical analysis. We developed and validated a new bioinformatic method for the identification of genes upregulated in subsets of samples of a given tumour type ('outlier genes', a hallmark of potential oncogenes. METHODOLOGY: A new statistical method (the gene tissue index, GTI was developed by modifying and adapting algorithms originally developed for statistical problems in economics. We compared the potential of the GTI to detect outlier genes in meta-datasets with four previously defined statistical methods, COPA, the OS statistic, the t-test and ORT, using simulated data. We demonstrated that the GTI performed equally well to existing methods in a single study simulation. Next, we evaluated the performance of the GTI in the analysis of combined Affymetrix gene expression data from several published studies covering 392 normal samples of tissue from the central nervous system, 74 astrocytomas, and 353 glioblastomas. According to the results, the GTI was better able than most of the previous methods to identify known oncogenic outlier genes. In addition, the GTI identified 29 novel outlier genes in glioblastomas, including TYMS and CDKN2A. The over-expression of these genes was validated in vivo by immunohistochemical staining data from clinical glioblastoma samples. Immunohistochemical data were available for 65% (19 of 29 of these genes, and 17 of these 19 genes (90% showed a typical outlier staining pattern. Furthermore, raltitrexed, a specific inhibitor of TYMS used in the therapy of tumour types other than glioblastoma, also effectively blocked cell proliferation in glioblastoma cell lines, thus highlighting this outlier gene candidate as a potential therapeutic target. CONCLUSIONS/SIGNIFICANCE: Taken together, these results support the GTI as a novel approach to identify potential oncogene outliers and drug targets. The algorithm is
Two-sheet surface rebinning algorithm for real time cone beam tomography
Energy Technology Data Exchange (ETDEWEB)
Betcke, Marta M. [University College London (United Kingdom). Dept. of Computer Science; Lionheart, William R.B. [Manchester Univ. (United Kingdom). School of Mathematics
2011-07-01
The Rapiscan RTT80 is an example of a fast cone beam CT scanner in which the X-ray sources are fixed on a circle while the detector rows are offset axially on one side of the sources. Reconstruction for this offset truncation presents a new challenge and we propose a method using rebinning to an optimal two-sheet surface. (orig.)
Directory of Open Access Journals (Sweden)
Z. Q. Peng
2016-11-01
Full Text Available Evapotranspiration (ET plays an important role in surface–atmosphere interactions and can be monitored using remote sensing data. However, surface heterogeneity, including the inhomogeneity of landscapes and surface variables, significantly affects the accuracy of ET estimated from satellite data. The objective of this study is to assess and reduce the uncertainties resulting from surface heterogeneity in remotely sensed ET using Chinese HJ-1B satellite data, which is of 30 m spatial resolution in VIS/NIR bands and 300 m spatial resolution in the thermal-infrared (TIR band. A temperature-sharpening and flux aggregation scheme (TSFA was developed to obtain accurate heat fluxes from the HJ-1B satellite data. The IPUS (input parameter upscaling and TRFA (temperature resampling and flux aggregation methods were used to compare with the TSFA in this study. The three methods represent three typical schemes used to handle mixed pixels from the simplest to the most complex. IPUS handles all surface variables at coarse resolution of 300 m in this study, TSFA handles them at 30 m resolution, and TRFA handles them at 30 and 300 m resolution, which depends on the actual spatial resolution. Analyzing and comparing the three methods can help us to get a better understanding of spatial-scale errors in remote sensing of surface heat fluxes. In situ data collected during HiWATER-MUSOEXE (Multi-Scale Observation Experiment on Evapotranspiration over heterogeneous land surfaces of the Heihe Watershed Allied Telemetry Experimental Research were used to validate and analyze the methods. ET estimated by TSFA exhibited the best agreement with in situ observations, and the footprint validation results showed that the R2, MBE, and RMSE values of the sensible heat flux (H were 0.61, 0.90, and 50.99 W m−2, respectively, and those for the latent heat flux (LE were 0.82, −20.54, and 71.24 W m−2, respectively. IPUS yielded the largest errors
Directory of Open Access Journals (Sweden)
Xiao Ling
2016-08-01
Full Text Available This paper presents a novel image matching method for multi-source satellite images, which integrates global Shuttle Radar Topography Mission (SRTM data and image segmentation to achieve robust and numerous correspondences. This method first generates the epipolar lines as a geometric constraint assisted by global SRTM data, after which the seed points are selected and matched. To produce more reliable matching results, a region segmentation-based matching propagation is proposed in this paper, whereby the region segmentations are extracted by image segmentation and are considered to be a spatial constraint. Moreover, a similarity measure integrating Distance, Angle and Normalized Cross-Correlation (DANCC, which considers geometric similarity and radiometric similarity, is introduced to find the optimal correspondences. Experiments using typical satellite images acquired from Resources Satellite-3 (ZY-3, Mapping Satellite-1, SPOT-5 and Google Earth demonstrated that the proposed method is able to produce reliable and accurate matching results.
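The radiometric component of the DANCC similarity measure is the normalized cross-correlation of candidate patches; a minimal sketch of that component follows (the distance and angle terms of DANCC, and the SRTM-assisted epipolar constraint, are omitted):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-sized image patches.
    Returns a value in [-1, 1]; 1 means identical up to brightness/contrast."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

Because the patches are mean-centred and variance-normalized, a linear brightness/contrast change leaves the score at 1, which is why NCC suits multi-source imagery.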
Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam
2018-03-11
To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.
Fisz, Jacek J
2006-12-07
An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi
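The separability that GA-MLR exploits can be shown in a short sketch: for fixed nonlinear parameters (here, decay rates of a biexponential model), the linear amplitudes follow exactly from ordinary linear least squares, so the outer optimizer only searches the nonlinear space. For brevity a plain random search stands in for the GA below; that substitution, and all numerical values, are assumptions:

```python
import numpy as np

def chi2_for_rates(rates, t, y):
    """Separable least squares: for fixed nonlinear decay rates, the linear
    amplitudes follow from ordinary linear regression (the 'MLR' step)."""
    M = np.exp(-np.outer(t, rates))            # model: sum_k a_k exp(-b_k t)
    amps, *_ = np.linalg.lstsq(M, y, rcond=None)
    resid = y - M @ amps
    return float(resid @ resid), amps

def fit_rates(t, y, n_rates=2, trials=2000, seed=0):
    """Random search stands in for the GA over the nonlinear rates only."""
    rng = np.random.default_rng(seed)
    best = (np.inf, None, None)
    for _ in range(trials):
        rates = np.sort(rng.uniform(0.05, 5.0, n_rates))
        c2, amps = chi2_for_rates(rates, t, y)
        if c2 < best[0]:
            best = (c2, rates, amps)
    return best
```

Because the amplitudes are solved exactly at every trial, the search space collapses from four parameters to two, which is precisely the acceleration the abstract describes.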
Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas
2017-06-15
Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied on data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). For detection modeling, exponential regression was used based on the assumption that the beginning
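The exponential-regression detection idea can be sketched as a log-linear fit to the most recent daily counts, alerting when the estimated growth rate turns clearly positive; the window length and threshold below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def detect_onset(counts, window=7, growth_threshold=0.05):
    """Fit an exponential (log-linear) trend to the last `window` daily counts
    and alert when the estimated daily growth rate exceeds a threshold.
    A sketch of the detection idea only."""
    recent = counts[-window:]
    xs = list(range(len(recent)))
    ys = [math.log(c + 1) for c in recent]      # +1 guards against zero counts
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    # ordinary least-squares slope of log-counts vs. day = daily growth rate
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope > growth_threshold, slope
```

A doubling epidemic curve triggers the alert while a flat baseline does not.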
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. We illustrate the approach by restricting attention to the perturbations due to the zonal harmonics J2 through J6. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70× over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the
Directory of Open Access Journals (Sweden)
Christian Lester D. Gimeno
2017-11-01
Full Text Available – This research study focused on the development of a software that helps students design, write, validate and run their pseudocode in a semi Integrated Development Environment (IDE) instead of manually writing it on a piece of paper. Specifically, the study aimed to develop a lexical analyzer (lexer), a syntax analyzer (parser) using a recursive descent parsing algorithm, and an interpreter. The lexical analyzer reads the pseudocode source as a sequence of symbols or characters, grouped into lexemes. The lexer matches the lexemes against patterns for valid tokens and passes the tokens to the syntax analyzer. The syntax analyzer takes those valid tokens and builds meaningful commands, in the form of an abstract syntax tree, using the recursive descent parsing algorithm. The generation of the abstract syntax tree is based on the grammar rules created by the researcher, expressed in Extended Backus-Naur Form. The interpreter takes the generated abstract syntax tree and evaluates it to produce the pseudocode output. The software was evaluated using white-box testing by several ICT professionals and black-box testing by several computer science students, based on the International Organization for Standardization (ISO) 9126 software quality standards. The overall results of the evaluation, both white-box and black-box, were described as “Excellent” in terms of functionality, reliability, usability, efficiency, maintainability and portability.
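As an illustration of the lexer-parser-interpreter pipeline described above, here is a minimal recursive descent parser for a tiny hypothetical expression grammar in EBNF (not the researcher's actual pseudocode grammar):

```python
import re

# Tiny illustrative grammar (EBNF):  expr   = term {("+"|"-") term} ;
#                                    term   = factor {("*"|"/") factor} ;
#                                    factor = NUMBER | "(" expr ")" ;
TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    """Lexer: split the source into number and operator tokens."""
    return [num or op for num, op in TOKEN.findall(src)]

class Parser:
    """Recursive descent parser: one method per grammar rule."""
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0
    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None
    def eat(self):
        tok = self.tokens[self.pos]; self.pos += 1; return tok
    def expr(self):                       # expr = term {("+"|"-") term}
        node = self.term()
        while self.peek() in ("+", "-"):
            node = (self.eat(), node, self.term())
        return node
    def term(self):                       # term = factor {("*"|"/") factor}
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node
    def factor(self):                     # factor = NUMBER | "(" expr ")"
        if self.peek() == "(":
            self.eat(); node = self.expr(); self.eat()  # consume ")"
            return node
        return int(self.eat())

def interpret(node):
    """Interpreter: evaluate the abstract syntax tree (integer division)."""
    if isinstance(node, int):
        return node
    op, left, right = node
    a, b = interpret(left), interpret(right)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a // b}[op]

ast = Parser(tokenize("2+3*(4-1)")).expr()
```

The tuple-based AST stands in for the tree the study builds; error handling is omitted for brevity.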
Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung
2017-05-19
Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for comprehensive downstream analysis. In the present study, we first proposed three novel algorithms: sequence gap-filled gene feature annotation, bit-block encoded genotypes, and sectional fast access to text lines, each addressing a fundamental problem. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SNPEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles and calculated genotypic correlations over 1000 times faster. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
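The bit-block idea, packing each genotype into a few bits, can be sketched as follows for the simplified case of unphased biallelic genotypes (2 bits each, four genotypes per byte); KGGSeq's actual format additionally handles phased and multi-allelic calls.

```python
import numpy as np

def pack_genotypes(genotypes):
    """Pack genotype codes 0/1/2 (alt-allele counts; 3 = missing)
    into 2 bits each, four genotypes per byte."""
    g = np.asarray(genotypes, dtype=np.uint8)
    pad = (-len(g)) % 4
    g = np.concatenate([g, np.full(pad, 3, np.uint8)])   # pad with 'missing'
    g = g.reshape(-1, 4)
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    packed = (g << shifts).sum(axis=1).astype(np.uint8)  # one byte per row
    return packed, len(genotypes)

def unpack_genotypes(packed, n):
    """Recover the original genotype codes from the packed bytes."""
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    g = (packed[:, None] >> shifts) & 0b11
    return g.reshape(-1)[:n]

codes = np.array([0, 1, 2, 0, 3, 2, 1])
packed, n = pack_genotypes(codes)
restored = unpack_genotypes(packed, n)
```

Two bits per call is what makes genotype matrices small enough to scan tens of millions of variants quickly; bitwise operations on the packed form also enable fast genotype correlation.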
Directory of Open Access Journals (Sweden)
Bangzhu Zhu
2012-02-01
Full Text Available Due to the dynamics and complexity of the carbon market, traditional monoscale forecasting approaches often fail to capture its nonstationary and nonlinear properties and to accurately describe its moving tendencies. In this study, a multiscale ensemble forecasting model integrating empirical mode decomposition (EMD), genetic algorithm (GA) and artificial neural network (ANN) is proposed to forecast carbon price. Firstly, the proposed model uses EMD to decompose carbon price data into several intrinsic mode functions (IMFs) and one residue. Then, using the fine-to-coarse reconstruction algorithm, the IMFs and residue are composed into a high frequency component, a low frequency component and a trend component, each with similar frequency characteristics, simple structure and strong regularity. Finally, those three components are predicted using an ANN trained by GA, i.e., a GAANN model, and the final forecasting results are obtained by summing the three component forecasts. For verification and testing, two main carbon future prices with different maturities on the European Climate Exchange (ECX) are used to test the effectiveness of the proposed multiscale ensemble forecasting model. Empirical results demonstrate that the proposed multiscale ensemble forecasting model outperforms the single random walk (RW), ARIMA, ANN and GAANN models without EMD preprocessing, as well as the ensemble ARIMA model with EMD preprocessing.
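The fine-to-coarse reconstruction step can be sketched as follows, assuming the IMFs have already been obtained from EMD (the EMD step itself is omitted). The split criterion shown, a one-sample t statistic on the partial sum's mean, is one common choice; the synthetic "IMFs" and the 1.96 critical value are illustrative assumptions.

```python
import numpy as np

def fine_to_coarse_split(imfs, t_crit=1.96):
    """Given IMFs ordered fine-to-coarse, accumulate partial sums and return
    the index at which the partial sum's mean first departs significantly
    from zero (one-sample t statistic vs. a normal critical value)."""
    partial = np.zeros_like(imfs[0])
    for i, imf in enumerate(imfs):
        partial = partial + imf
        sem = partial.std(ddof=1) / np.sqrt(len(partial))
        if abs(partial.mean() / sem) > t_crit:
            return i   # IMFs [0, i) form the high-frequency component
    return len(imfs)

# synthetic "IMFs": two zero-mean oscillations and a drifting trend
t = np.linspace(0, 1, 500)
imfs = [np.sin(60 * np.pi * t), 0.5 * np.sin(12 * np.pi * t), 2.0 + t]
split = fine_to_coarse_split(imfs)
high = sum(imfs[:split])            # high-frequency component
low_plus_trend = sum(imfs[split:])  # low-frequency + trend components
```

The zero-mean oscillations stay in the high-frequency group; the drifting component, whose partial sum departs from zero, starts the low-frequency/trend group.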
International Nuclear Information System (INIS)
Franke, B.C.; Kensek, R.P.; Prinja, A.K.
2013-01-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy in the presence of numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative 'condensed transport' formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations. (authors)
Remote sensing algorithm for sea surface CO2 in the Baltic Sea
Parard, G.; Charantonis, A. A.; Rutgerson, A.
2014-08-01
Studies of coastal seas in Europe have brought forth the high variability in the CO2 system. This high variability, generated by the complex mechanisms driving the CO2 fluxes, makes their accurate estimation an arduous task. This is more pronounced in the Baltic Sea, where the mechanisms driving the fluxes have not been detailed as thoroughly as in the open oceans. In addition, the joint availability of in-situ measurements of CO2 and of sea-surface satellite data is limited in the area. In this paper, a combination of two existing methods (Self-Organizing Maps and multiple linear regression) is used to estimate ocean surface pCO2 in the Baltic Sea from remotely sensed surface temperature, chlorophyll, coloured dissolved organic matter, net primary production and mixed layer depth. The outputs of this research have a horizontal resolution of 4 km and cover the period from 1998 to 2011. The reconstructed pCO2 values over the validation data set have a correlation of 0.93 with the in-situ measurements and a root mean square error of 38 μatm. The removal of any of the satellite parameters degraded the reconstruction of the CO2 flux, and we therefore chose to complete any missing data through statistical imputation. The CO2 maps produced by this method also provide a confidence level of the reconstruction at each grid point. The results obtained are encouraging given the sparsity of available data, and we expect to be able to produce even more accurate reconstructions in the coming years, in view of the predicted acquisitions of new data.
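The regression half of the method can be illustrated with a minimal multiple-linear-regression sketch on synthetic data (the Self-Organizing Map classification step is omitted). All predictor distributions and coefficients below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# synthetic satellite predictors (illustrative units and ranges)
sst = rng.uniform(2, 18, n)            # sea-surface temperature, deg C
chl = rng.lognormal(0.5, 0.6, n)       # chlorophyll-a, mg m^-3
mld = rng.uniform(5, 60, n)            # mixed-layer depth, m

# hypothetical "true" relationship used to generate pCO2 observations
pco2 = 350 - 4.0 * sst - 8.0 * np.log(chl) + 0.3 * mld + rng.normal(0, 5, n)

# ordinary least squares fit of pCO2 on the predictors
X = np.column_stack([np.ones(n), sst, np.log(chl), mld])
beta, *_ = np.linalg.lstsq(X, pco2, rcond=None)
pred = X @ beta
rmse = np.sqrt(np.mean((pred - pco2) ** 2))
r = np.corrcoef(pred, pco2)[0, 1]
```

In the paper, a regression of this form is fitted within each SOM class rather than globally, which is what lets the method capture the Baltic's regional variability.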
Directory of Open Access Journals (Sweden)
Stefano Bernardinetti
2017-06-01
Full Text Available The need to obtain a detailed hydrogeological characterization of the subsurface and its interpretation for groundwater resources management often requires the application of several complementary geophysical methods. The goal of the approach in this paper is to provide a unique model of the aquifer by synthesizing and optimizing the information provided by several geophysical methods. This approach greatly reduces the degree of uncertainty and subjectivity of the interpretation by exploiting the different physical and mechanical characteristics of the aquifer. The studied area, in the municipality of Laterina (Arezzo, Italy), is a shallow basin filled by lacustrine and alluvial deposits (Pleistocene and Holocene epochs, Quaternary period), with alternated silt, sand with variable content of gravel, and clay, where the bottom is represented by arenaceous-pelitic rocks (Mt. Cervarola Unit, Tuscan Domain, Miocene epoch). This shallow basin constitutes the unconfined superficial aquifer to be exploited in the near future. To improve the geological model obtained from a detailed geological survey, we performed electrical resistivity and P-wave refraction tomographies along the same line in order to obtain different, independent and integrable data sets. For the seismic data the reflected events have also been processed, a remarkable contribution to drawing the geologic setting. Through the k-means algorithm, we perform a cluster analysis on the bivariate data set to identify relationships between the two sets of variables. This algorithm identifies clusters that minimize the dissimilarity within each cluster and maximize it among different clusters of the bivariate data set. The optimal number of clusters “K”, corresponding to the identified geophysical facies, depends on the multivariate data set distribution and in this work is estimated with the silhouette method. The result is an integrated tomography that shows a finite
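A minimal sketch of this clustering step, k-means on a standardized bivariate data set with the silhouette criterion used to choose K, might look as follows; the synthetic "facies" values and the deterministic farthest-point seeding are illustrative assumptions, not the authors' actual data or initialization.

```python
import numpy as np

def init_centers(X, k):
    """Deterministic farthest-point seeding: start from X[0], then
    repeatedly add the point farthest from all chosen centers."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=100):
    centers = init_centers(X, k)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def mean_silhouette(X, labels):
    """Average silhouette s(i) = (b - a) / max(a, b) over all points."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n, s = len(X), []
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# synthetic bivariate "geophysical facies": (resistivity, P-wave velocity)
rng = np.random.default_rng(1)
centers_true = [(50, 1500), (200, 2500), (800, 4000)]
X = np.vstack([rng.normal(mu, 5.0, (40, 2)) for mu in centers_true])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize both variables

scores = {k: mean_silhouette(X, kmeans(X, k)) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)       # silhouette-optimal K
```

With well-separated facies the silhouette score peaks at the true number of clusters, which is the role it plays in selecting "K" above.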
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been confirmed with real experimental results.
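As a generic illustration of the idea, SOR as an over-relaxed Gauss-Seidel iteration controlled by a single relaxation factor, here is a sketch on a small linear system (the paper applies the scheme to surface-deviation reconstruction; this demo only shows the iteration itself):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b. omega = 1 reduces to
    Gauss-Seidel; 1 < omega < 2 over-relaxes, which converges faster
    for a well-chosen relaxation factor on suitable matrices."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# small symmetric positive-definite test system (1D Laplacian)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_sor, iters_sor = sor_solve(A, b, omega=1.8)
x_gs, iters_gs = sor_solve(A, b, omega=1.0)
```

For this matrix an over-relaxed factor near the optimum cuts the iteration count dramatically relative to Gauss-Seidel, which is the acceleration the abstract describes.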
Feskov, Serguei V.; Ivanov, Anatoly I.
2018-03-01
An approach to the construction of diabatic free energy surfaces (FESs) for ultrafast electron transfer (ET) in a supramolecule with an arbitrary number of electron localization centers (redox sites) is developed, supposing that the reorganization energies for the charge transfers and shifts between all these centers are known. The dimensionality of the coordinate space required for the description of multistage ET in this supramolecular system is shown to be equal to N - 1, where N is the number of the molecular centers involved in the reaction. The proposed algorithm of FES construction employs metric properties of the coordinate space, namely, the relation between the solvent reorganization energy and the distance between the two FES minima. In this space, the ET reaction coordinate z_nn' associated with electron transfer between the nth and n'th centers is calculated through the projection onto the direction connecting the FES minima. The energy-gap reaction coordinates z_nn' corresponding to different ET processes are in general not orthogonal, so that ET between two molecular centers can create a nonequilibrium distribution, not only along its own reaction coordinate but along other reaction coordinates too. This results in the influence of the preceding ET steps on the kinetics of the ensuing ET, which is important when the ensuing reaction is ultrafast enough to proceed in parallel with relaxation along the ET reaction coordinates. Efficient algorithms for numerical simulation of multistage ET within the stochastic point-transition model are developed. The algorithms are based on the Brownian simulation technique with a recrossing-event detection procedure. The main advantages of the numerical method are (i) its computational complexity is linear with respect to the number of electronic states involved and (ii) calculations can be naturally parallelized up to the level of individual trajectories. The efficiency of the proposed approach is demonstrated for a model
Investigation of selected surface integrity features of duplex stainless steel (DSS) after turning
Directory of Open Access Journals (Sweden)
G. Krolczyk
2015-01-01
Full Text Available The article presents surface roughness profiles and Abbott-Firestone curves, with vertical and amplitude parameters of surface roughness, after turning by means of a sintered carbide wedge with a coating that includes a ceramic intermediate layer. The investigation comprised the influence of cutting speed on the selected features of surface integrity in dry machining. The material under investigation was duplex stainless steel with a two-phase ferritic-austenitic structure. The tests have been performed under production conditions during machining of parts for electric motors and deep-well pumps. The obtained results allow conclusions to be drawn about the characteristics of the surface properties of the machined parts.
Onana, Vincent De Paul; Koenig, Lora Suzanne; Ruth, Julia; Studinger, Michael; Harbeck, Jeremy P.
2014-01-01
Snow accumulation over an ice sheet is the sole mass input, making it a primary measurement for understanding the past, present, and future mass balance. Near-surface frequency-modulated continuous-wave (FMCW) radars image isochronous firn layers, recording accumulation histories. The Semiautomated Multilayer Picking Algorithm (SAMPA) was designed and developed to trace annual accumulation layers in polar firn from both airborne and ground-based radars. The SAMPA algorithm is based on the Radon transform (RT), computed by blocks and angular orientations over a radar echogram. For each echogram block, the RT maps segmented firn-layer features into peaks, which are picked using amplitude and width thresholds. A backward RT is then computed for each corresponding block, mapping the peaks back into picked segmented layers. The segmented layers are then connected and smoothed to achieve a final layer pick across the echogram. Once input parameters are trained, SAMPA operates autonomously and can process hundreds of kilometers of radar data, picking more than 40 layers. SAMPA final pick results and layer numbering still require a cursory manual adjustment to correct noncontinuous picks, which are likely not annual, and to correct inconsistencies in layer numbering. Despite the manual effort to train and check SAMPA results, it is an efficient tool for picking multiple accumulation layers in polar firn, reducing the time required by manual digitizing. The trackability of well-detected layers is greater than 90%.
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This study conducts a simulation-based optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. The study considers four process parameters, melt temperature, mould temperature, packing pressure and cooling time, in order to analyse the warpage value of the part. The part selected for the study is made of polypropylene (PP). The combination of the process parameters is analysed using Analysis of Variance (ANOVA) and the optimised values are obtained using Response Surface Methodology (RSM). RSM as well as a Genetic Algorithm (GA) are applied in the Design Expert software in order to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.
Integrated nanohole array surface plasmon resonance sensing device using a dual-wavelength source
International Nuclear Information System (INIS)
Escobedo, C; Vincent, S; Choudhury, A I K; Campbell, J; Gordon, R; Brolo, A G; Sinton, D
2011-01-01
In this paper, we demonstrate a compact integrated nanohole array-based surface plasmon resonance sensing device. The unit includes an LED light source, driving circuitry, a CCD detector, a microfluidic network and a computer interface, all assembled from readily available commercial components. A dual-wavelength LED scheme was implemented to increase spectral diversity and isolate the intensity variations to be expected in the field. The prototype shows a bulk sensitivity of 266 pixel intensity units/RIU and a limit of detection of 6 × 10⁻⁴ RIU. Surface binding tests were performed, demonstrating functionality as a surface-based sensing system. This work is particularly relevant for low-cost point-of-care applications, especially those involving multiple tests and field studies. While nanohole arrays have been applied to many sensing applications, and their suitability for device integration is well established, this is the first demonstration of a fully integrated nanohole array-based sensing device.
Accelerated sampling by infinite swapping of path integral molecular dynamics with surface hopping
Lu, Jianfeng; Zhou, Zhennan
2018-02-01
To accelerate the thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to a Hamiltonian averaged with respect to all possible surface index configurations of the ring polymer, and thus connects the surface hopping approach to mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate the large improvement in sampling efficiency of infinite swapping compared with direct simulation of path-integral molecular dynamics with surface hopping.
Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng
2018-04-01
This paper presents a new method for wood defect detection that can solve the over-segmentation problem of local threshold segmentation methods. The method effectively combines the advantages of visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted to position and segment the wood surface defects precisely around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces noise and removes small spurious objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and plain threshold segmentation.
Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.
2018-03-01
Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.
Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao
2015-09-23
Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone's acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.
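The autocorrelation-based step counting idea can be sketched as follows, assuming a stream of accelerometer-magnitude samples; the plausible step-period range and the synthetic walking signal are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np

def count_steps(acc_mag, fs, min_period=0.4, max_period=1.0):
    """Estimate the step count from accelerometer magnitude via
    autocorrelation: find the dominant lag within a plausible step-period
    range, then divide the record length by that period."""
    x = acc_mag - acc_mag.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                                  # normalized autocorrelation
    lo, hi = int(min_period * fs), int(max_period * fs)
    lag = lo + np.argmax(ac[lo:hi])              # dominant step period (samples)
    return int(round(len(x) / lag)), lag / fs

# synthetic 10 s walk sampled at 50 Hz, 2 steps per second, plus noise
fs, dur, step_hz = 50, 10.0, 2.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)
acc = 9.81 + 1.5 * np.sin(2 * np.pi * step_hz * t) + rng.normal(0, 0.3, len(t))
steps, period = count_steps(acc, fs)
```

Searching only a physiologically plausible lag window is what makes the period estimate robust to noise; in the full system this PDR estimate is then fused with WiFi fixes by the Unscented Kalman Filter.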
Directory of Open Access Journals (Sweden)
A. Becker
2003-01-01
Full Text Available In this paper a hybrid method combining the FDTD/FIT with a Time Domain Boundary-Integral Marching-on-in-Time Algorithm (TD-BIM) is presented. Inhomogeneous regions are modelled with the FIT method, an alternative formulation of the FDTD. Homogeneous regions (which, in the presented numerical example, means the open space) are modelled using a TD-BIM with equivalent electric and magnetic currents flowing on the boundary between the inhomogeneous and the homogeneous regions. The regions are coupled by the tangential magnetic fields just outside the inhomogeneous regions. These fields are calculated by making use of a mixed potential integral formulation for the magnetic field, which consists of equivalent electric and magnetic currents on the boundary plane between the homogeneous and the inhomogeneous region. The magnetic currents result directly from the electric fields of the Yee lattice. The electric currents in the same plane are calculated by making use of the TD-BIM and using the electric field of the Yee lattice as the boundary condition. The presented hybrid method only needs the interpolations inherent in FIT and no additional interpolation. A numerical result is compared to a calculation that models both regions with FDTD.
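As background for the FDTD/FIT half of the hybrid, a minimal 1D Yee-scheme leapfrog update (not the hybrid coupling itself) can be sketched as:

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in free space, normalized units with the
# "magic" time step c*dt = dx, so pulses travel exactly one cell per step.
nx, nt = 400, 200
imp0 = 377.0                  # free-space wave impedance
ez = np.zeros(nx)             # E_z at integer grid points
hy = np.zeros(nx)             # H_y staggered half a cell to the right

for n in range(nt):
    hy[:-1] += (ez[1:] - ez[:-1]) / imp0          # update H from the curl of E
    ez[1:] += (hy[1:] - hy[:-1]) * imp0           # update E from the curl of H
    ez[50] += np.exp(-((n - 40.0) ** 2) / 100.0)  # soft Gaussian source
```

In the hybrid method, fields on such a lattice supply the equivalent magnetic currents on the interface, while the TD-BIM handles propagation through the homogeneous exterior.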
Eco-hydrological process simulations within an integrated surface water-groundwater model
DEFF Research Database (Denmark)
Butts, Michael; Loinaz, Maria Christina; Bauer-Gottwein, Peter
2014-01-01
Integrated water resources management requires tools that can quantify changes in groundwater, surface water, water quality and ecosystem health as a result of changes in catchment management. To address these requirements we have developed an integrated eco-hydrological modelling framework that allows hydrologists and ecologists to represent the complex and dynamic interactions occurring between surface water, ground water, water quality and freshwater ecosystems within a catchment. We demonstrate here the practical application of this tool to two case studies where the interaction of surface water and ground water is important for the ecosystem. In the first, simulations are performed to understand the importance of surface water-groundwater interactions for a restored riparian wetland on the Odense River in Denmark as part of a larger investigation of water quality and nitrate retention...
Numerical simulation of liquid film flow on revolution surfaces with momentum integral method
International Nuclear Information System (INIS)
Bottoni Maurizio
2005-01-01
The momentum integral method is applied in the frame of safety analysis of pressurized water reactors under hypothetical loss of coolant accident (LOCA) conditions to simulate numerically film condensation, rewetting and vaporization on the inner surface of the reactor containment. From the conservation equations of mass and momentum of a liquid film arising from condensation of steam upon the inner surface of the containment during a LOCA in a pressurized water reactor plant, an integro-differential equation is derived, referring to an arbitrary axisymmetric surface of revolution. This equation describes the velocity distribution of the liquid film along a meridian of a surface of revolution. From the integro-differential equation, an ordinary differential equation of first order for the film velocity is derived and integrated numerically. From the velocity distribution the film thickness distribution is obtained. The solution of the enthalpy equation for the liquid film yields the temperature distribution on the inner surface of the containment. (authors)
Li, Xiaofan; Nie, Qing
2009-07-01
Many applications in materials science involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axisymmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axisymmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on an explicit representation of the mean curvature, is used to reduce the stability constraint on the time step. To apply this method to a periodic (in the axial direction) and axisymmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinching is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
Directory of Open Access Journals (Sweden)
Andrés Iglesias
2018-03-01
Full Text Available This paper concerns several important topics of the Symmetry journal, namely, computer-aided design, computational geometry, computer graphics, visualization, and pattern recognition. We also take advantage of the symmetric structure of tensor-product surfaces, where the parametric variables u and v play a symmetric role in shape reconstruction. In this paper we address the general problem of global-support parametric surface approximation from clouds of data points for reverse engineering applications. Given a set of measured data points, the approximation is formulated as a nonlinear continuous least-squares optimization problem. Then, a recent metaheuristic called the Cuckoo Search Algorithm (CSA) is applied to compute all relevant free variables of this minimization problem (namely, the data parameters and the surface poles). The method includes the iterative generation of new solutions by using Lévy flights to promote the diversity of solutions and prevent stagnation. A critical advantage of this method is its simplicity: the CSA requires only two parameters, many fewer than other metaheuristic approaches, so parameter tuning becomes a very easy task. The method is also simple to understand and easy to implement. Our approach has been applied to a benchmark of three illustrative sets of noisy data points corresponding to surfaces exhibiting several challenging features. Our experimental results show that the method performs very well even for noisy and unorganized data points. Therefore, the method can be directly used for real-world reverse engineering applications without further pre/post-processing. Comparative work with the most classical mathematical techniques for this problem, as well as a recent modification of the CSA called Improved CSA (ICSA), is also reported. Two nonparametric statistical tests show that our method outperforms the classical mathematical techniques and provides results equivalent to ICSA.
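A minimal Cuckoo Search sketch, Lévy-flight moves via Mantegna's algorithm plus abandonment of a fraction of nests, illustrates the two-parameter character of the method on a simple sphere function; the step-size constant, iteration budget, and elitism rule below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from math import gamma

def levy_step(beta, size, rng):
    """Mantegna's algorithm for heavy-tailed Levy-flight step lengths."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=25, pa=0.25, iters=500, seed=0):
    """Minimal Cuckoo Search: only two algorithm parameters (n_nests, pa)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # Levy flights scaled by each nest's distance to the current best
        step = 0.01 * levy_step(1.5, (n_nests, dim), rng) * (nests - best)
        trial = np.clip(nests + step, lo, hi)
        tfit = np.apply_along_axis(f, 1, trial)
        better = tfit < fit
        nests[better], fit[better] = trial[better], tfit[better]
        # abandon a fraction pa of nests (elitism: never the best one)
        abandon = rng.random(n_nests) < pa
        abandon[fit.argmin()] = False
        if abandon.any():
            nests[abandon] = rng.uniform(lo, hi, (int(abandon.sum()), dim))
            fit[abandon] = np.apply_along_axis(f, 1, nests[abandon])
    return nests[fit.argmin()], float(fit.min())

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x * x)),
                               dim=3, bounds=(-5.0, 5.0))
```

In the paper the objective is the least-squares surface-fitting error over data parameters and surface poles rather than this toy function; the search loop itself is unchanged.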
Zinno, Ivana; De Luca, Claudio; Elefante, Stefano; Imperatore, Pasquale; Manunta, Michele; Casu, Francesco
2014-05-01
been carried out on real data acquired by ENVISAT and COSMO-SkyMed sensors. Moreover, the P-SBAS performances with respect to the size of the input dataset will also be investigated. This kind of analysis is essential for assessing the goodness of the P-SBAS algorithm and gaining insight into its applicability to different scenarios. Besides, such results will also become crucial to identify and evaluate how to appropriately exploit P-SBAS to process the forthcoming large Sentinel-1 data stream. References [1] Massonnet, D., Briole, P., Arnaud, A., "Deflation of Mount Etna monitored by Spaceborne Radar Interferometry", Nature, vol. 375, pp. 567-570, 1995. [2] Berardino, P., G. Fornaro, R. Lanari, and E. Sansosti, "A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms", IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002. [3] Elefante, S., Imperatore, P., Zinno, I., M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, F. Casu, "SBAS-DINSAR Time series generation on cloud computing platforms", IEEE IGARSS 2013, July 2013, Melbourne (AU). [4] I. Zinno, P. Imperatore, S. Elefante, F. Casu, M. Manunta, E. Mathot, F. Brito, J. Farres, W. Lengert, R. Lanari, "A Novel Parallel Computational Framework for Processing Large INSAR Data Sets", Living Planet Symposium 2013, Sept. 9-13, 2013.
Directory of Open Access Journals (Sweden)
Hua-Ping YU
2014-07-01
Full Text Available Oil and gas pipelines are part of the infrastructure of national economic development, and the deployment problem of wireless underground sensor networks (WUSN) for oil and gas pipeline systems is a fundamental one. This paper first analyzed the wireless channel characteristics and the energy consumption model in near-surface underground soil, then studied the spatial structure of oil and gas pipelines and introduced the three-layer system structure of a WUSN for oil and gas pipeline monitoring. Second, the optimal deployment strategies in the XY plane and XZ plane, projected from the three-dimensional oil and gas pipeline structure, were analyzed. Third, a technical framework for using the kinetic energy of the fluid in pipelines to recharge sensor nodes, and a partition strategy for energy consumption balance based on magnetic induction waveguide wireless communication, were proposed. These can effectively improve the energy performance and connectivity of the network, and provide theoretical guidance and a practical basis for the monitoring of long oil and gas pipeline networks, city tap water networks and sewage pipe networks.
Surface integrity analysis of abrasive water jet-cut surfaces of friction stir welded joints
Czech Academy of Sciences Publication Activity Database
Kumar, R.; Chattopadhyaya, S.; Dixit, A. R.; Bora, B.; Zeleňák, Michal; Foldyna, Josef; Hloch, Sergej; Hlaváček, Petr; Ščučka, Jiří; Klich, Jiří; Sitek, Libor; Vilaca, P.
2017-01-01
Roč. 88, č. 5 (2017), s. 1687-1701 ISSN 0268-3768 R&D Projects: GA MŠk(CZ) LO1406; GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : friction stir welding (FSW) * abrasive water jet (AWJ) * optical profilometer * topography * surface roughness Subject RIV: JQ - Machines ; Tools OBOR OECD: Mechanical engineering Impact factor: 2.209, year: 2016 http://link.springer.com/article/10.1007/s00170-016-8776-0
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, i.e. the low- and high-frequency matrices, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data using a table of data, then uses a binary search to locate decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, it is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to reconstruct surface patches in 3D more accurately.
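A one-dimensional toy of the DWT-then-DCT cascade in step (1) can be sketched as below, using a Haar wavelet and an explicit orthonormal DCT-II. The full method operates on 2D images and adds the Minimize-Matrix-Size and arithmetic-coding stages, which are omitted here.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: pairwise averages (low band) and
    differences (high band), both scaled to preserve energy."""
    pairs = x.reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return low, high

def dct_ii(x):
    """Orthonormal DCT-II built from an explicit cosine basis matrix."""
    n = len(x)
    k = np.arange(n)[:, None]
    basis = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    return np.sqrt(2 / n) * basis @ x

signal = np.array([4.0, 4.0, 5.0, 5.0, 8.0, 8.0, 1.0, 1.0])
low1, high1 = haar_dwt(signal)   # level-1 DWT
low2, high2 = haar_dwt(low1)     # level-2 DWT on the low band
dc_coeffs = dct_ii(low2)         # DCT applied to the low-frequency band
```

Because both transforms are orthonormal, energy is preserved at each stage, which is what makes the low-band (DC) coefficients a compact summary of the signal.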
A surface-integral-equation approach to the propagation of waves in EBG-based devices
Lancellotti, V.; Tijhuis, A.G.
2012-01-01
We combine surface integral equations with domain decomposition to formulate and (numerically) solve the problem of electromagnetic (EM) wave propagation inside finite-sized structures. The approach is of interest for (but not limited to) the analysis of devices based on the phenomenon of
DEFF Research Database (Denmark)
Buschard, Karsten; Bracey, Austin W.; McElroy, Daniel L.
2016-01-01
Background. Sulfatide is known to chaperone insulin crystallization within the pancreatic beta cell, but it is not known if this results from sulfatide being integrated inside the crystal structure or by binding the surface of the crystal. With this study, we aimed to characterize the molecular m...
Design and production of a new surface mount charge-integrating amplifier for CDF
Energy Technology Data Exchange (ETDEWEB)
Nelson, C.; Drake, G.
1991-12-31
We present our experiences in designing and producing 26,000 new charge-integrating amplifiers for CDF, using surface-mount components. The new amplifiers were needed to instrument 920 new 24-channel CDF RABBIT boards, which are replacing an older design rendered obsolete by increases in the collision rate. Important design considerations were frequency response, physical size and cost. 5 refs.
Bagci, Hakan; Andriulli, Francesco P.; Cools, Kristof; Olyslager, Femke; Michielssen, Eric
2010-01-01
A well-conditioned coupled set of surface (S) and volume (V) electric field integral equations (S-EFIE and V-EFIE) for analyzing wave interactions with densely discretized composite structures is presented. Whereas the V-EFIE operator is well
Photonic integrated single-sideband modulator / frequency shifter based on surface acoustic waves
DEFF Research Database (Denmark)
Barretto, Elaine Cristina Saraiva; Hvam, Jørn Märcher
2010-01-01
Optical frequency shifters are essential components of many systems. In this paper, a compact integrated optical frequency shifter is designed making use of the combination of surface acoustic waves and Mach-Zehnder interferometers. It has a very simple operation setup and can be fabricated...
Directory of Open Access Journals (Sweden)
Z. Q. Gao
2011-01-01
Full Text Available Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was used to produce robust estimates of 24 h solar radiation over time, leading to a smooth simulation of ET over the seasons in northern China, where the regional climate and seasonal vegetation cover compound the ET calculations. The SEBTA was validated against measured data at the ground level; the validation shows that the consistency index reached 0.92 and the correlation coefficient was 0.87.
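Models in this SEBAL/SEBS family obtain the latent heat flux, and hence ET, as the residual of the surface energy balance once net radiation, soil heat flux and sensible heat flux are known. A minimal sketch with illustrative flux values (not data from the study):

```python
LAMBDA = 2.45e6  # latent heat of vaporization of water, J/kg (approx. at 20 C)

def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat_flux):
    """Residual of the surface energy balance: LE = Rn - G - H (all in W/m^2)."""
    return net_radiation - soil_heat_flux - sensible_heat_flux

def et_rate_mm_per_hour(le):
    """Convert a latent heat flux (W/m^2) to an evapotranspiration rate.
    LE / lambda gives kg/m^2/s; 1 kg/m^2 of water is 1 mm depth."""
    return le / LAMBDA * 3600.0

le = latent_heat_flux(600.0, 80.0, 150.0)  # illustrative midday fluxes
rate = et_rate_mm_per_hour(le)
```

The sensible heat flux H is the term the SEBTA refines (via dry/wet pixel selection and terrain-adjusted roughness), so improvements there propagate directly into the ET residual.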
Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.
2015-12-01
Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and the present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover data in raster format, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after the data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.
Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S
2018-02-01
Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis kernels, random forests and rotation forests, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The largest reduction in the number of features (66%) is achieved by GA and BPSO for the EMBD and BD TFDs, respectively. The combination of the EMBD and a polynomial-kernel SVM is found to be the most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of the EMBD and a polynomial-kernel SVM could be used to
Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi
2007-08-01
To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.
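A generic form of surface-coil intensity correction divides the image by a smoothed estimate of the coil sensitivity, with a floor that keeps low-signal regions (noise, flow artifacts) from being amplified by the division. The sketch below is a plain homomorphic-style correction with an assumed window size and floor fraction, not the cardiac-tailored algorithm of the paper.

```python
import numpy as np

def box_blur(img, k=15):
    """Separable moving-average blur, used as a crude sensitivity estimate."""
    kernel = np.ones(k) / k
    pad = k // 2
    blur_1d = lambda v: np.convolve(np.pad(v, pad, mode='edge'), kernel, 'valid')
    out = np.apply_along_axis(blur_1d, 1, img)   # blur rows
    return np.apply_along_axis(blur_1d, 0, out)  # then columns

def intensity_correct(img, k=15, floor_frac=0.1):
    """Divide the image by its smoothed copy; the floor keeps low-signal
    regions from being boosted into visible noise."""
    sens = box_blur(img, k)
    floor = floor_frac * sens.max()
    return img * sens.max() / np.maximum(sens, floor)

# toy example: a uniform object shaded by a linear coil-sensitivity falloff
coil = np.linspace(1.0, 0.3, 32)[None, :] * np.ones((32, 1))
img = 100.0 * coil
corrected = intensity_correct(img)
```

The cardiac-tailored part of the published method lies in how the sensitivity estimate and the floor handle flow artifacts; here both are simple global assumptions.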
DEFF Research Database (Denmark)
Andersen, Jesper H.; Aroviita, Jukka; Carstensen, Jacob
2016-01-01
We review approaches and tools currently used in Nordic countries (Denmark, Finland, Norway and Sweden) for integrated assessment of ‘ecological status’ sensu the EU Water Framework Directive as well as assessment of ‘eutrophication status’ in coastal and marine waters. Integration principles applied within BQEs are critical and in need of harmonisation if we want a better understanding of potential transitions in ecological status between surface water types, e.g. when riverine water enters a downstream lake or coastal water body.
Directory of Open Access Journals (Sweden)
Jianfu Zhang
2015-09-01
Full Text Available Potassium dihydrogen phosphate is an important optical crystal. However, high-precision processing of large potassium dihydrogen phosphate crystal workpieces is difficult. In this article, surface roughness and subsurface damage characteristics of a (001) potassium dihydrogen phosphate crystal surface produced by traditional and rotary ultrasonic machining are studied. The influence of process parameters, including spindle speed, feed speed, type and size of sintered diamond wheel, ultrasonic power, and selection of cutting fluid on potassium dihydrogen phosphate crystal surface integrity, was analyzed. The surface integrity, especially the subsurface damage depth, was affected significantly by the ultrasonic power. Metal-sintered diamond tools with high granularity were most suitable for machining potassium dihydrogen phosphate crystal. Cutting fluid played a key role in potassium dihydrogen phosphate crystal machining. A more precise surface can be obtained in machining with a higher spindle speed, lower feed speed, and using kerosene as cutting fluid. Based on the provided optimized process parameters for machining potassium dihydrogen phosphate crystal, a processed surface quality with Ra value of 33 nm and subsurface damage depth value of 6.38 μm was achieved.
Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan
2018-05-01
Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) as well as lower pollution compared to conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced that is divided into two parts: the first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to prove the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique shows compatibility with the thermal demand for district heating.
Directory of Open Access Journals (Sweden)
Chuong Cheng-Ming
2006-10-01
Full Text Available Abstract Background Understanding research activity within any given biomedical field is important. Search outputs generated by MEDLINE/PubMed are not well classified and require lengthy manual citation analysis. Automation of citation analytics can be very useful and timesaving for both novices and experts. Results The PubFocus web server automates analysis of MEDLINE/PubMed search queries by enriching them with two widely used human factor-based bibliometric indicators of publication quality: journal impact factor and volume of forward references. In addition to providing basic volumetric statistics, PubFocus also prioritizes citations and evaluates authors' impact on the field of search. PubFocus also analyses the presence and occurrence of biomedical key terms within citations by utilizing controlled vocabularies. Conclusion We have developed a citation prioritisation algorithm based on journal impact factor, forward referencing volume, referencing dynamics, and author's contribution level. It can be applied either to the primary set of PubMed search results or to subsets of these results identified through key terms from controlled biomedical vocabularies and ontologies. The NCI (National Cancer Institute) thesaurus and MGD (Mouse Genome Database) mammalian gene orthology have been implemented for key term analytics. PubFocus provides a scalable platform for the integration of multiple available ontology databases. PubFocus analytics can be adapted for input sources of biomedical citations other than PubMed.
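A composite prioritisation score of the kind described (journal impact factor, forward-referencing volume, and referencing dynamics) might look like the following; the weights, field names and normalizations are hypothetical, not those of PubFocus.

```python
import math

def priority_score(citation, w_jif=0.5, w_fwd=0.3, w_dyn=0.2):
    """Toy composite score: journal impact factor, log-damped forward
    references, and references accrued per year (referencing dynamics)."""
    fwd = citation["forward_refs"]
    per_year = fwd / max(1, citation["years_since_pub"])
    return (w_jif * citation["impact_factor"]
            + w_fwd * math.log1p(fwd)
            + w_dyn * per_year)

papers = [
    {"id": "A", "impact_factor": 30.0, "forward_refs": 15, "years_since_pub": 1},
    {"id": "B", "impact_factor": 3.0, "forward_refs": 400, "years_since_pub": 10},
]
ranked = [p["id"] for p in sorted(papers, key=priority_score, reverse=True)]
```

The log damping keeps a single heavily cited older paper from swamping recent work, while the per-year term rewards citations that are still accruing quickly.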
Energy Technology Data Exchange (ETDEWEB)
Lundberg, Mattias, E-mail: mattias.lundberg@liu.se; Saarimäki, Jonas; Moverare, Johan J.; Calmunger, Mattias
2017-02-15
Machining of austenitic stainless steels can result in different surface integrities, and different machining process parameters will have a great impact on component fatigue life. Understanding how machining processes affect the cyclic behaviour and microstructure is of utmost importance in order to improve existing and new life estimation models. Milling and electrical discharge machining (EDM) have been used to manufacture rectangular four-point-bend fatigue test samples subjected to high cycle fatigue. Before fatigue testing, surface integrity characterisation of the two surface conditions was conducted using scanning electron microscopy, surface roughness measurements, residual stress profiles, and hardness profiles. Differences in cyclic behaviour between the two surface conditions were observed in the fatigue testing. The milled samples exhibited a fatigue limit; the EDM samples did not show the same behaviour due to ratcheting. Recrystallized nano-sized grains were identified at the severely plastically deformed surface of the milled samples. Large amounts of bent mechanical twins were observed ~ 5 μm below the surface; grain shearing and subsequent grain rotation from milling bent the mechanical twins. The EDM samples showed much less plastic deformation at the surface. Surface tensile residual stresses of ~ 500 MPa and ~ 200 MPa were measured for the milled and EDM samples, respectively. - Highlights: •Milled samples exhibit a fatigue limit, but EDM samples do not. •Four-point bending is not suitable for materials exhibiting pronounced ratcheting. •LAGB density can be used to quantitatively measure plastic deformation. •Grain shearing and rotation result in bent mechanical twins. •Nano-sized grains evolve due to the heat of the operation.
Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu
2017-08-01
Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
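The channel-selection idea can be sketched with a plain multiplicative-update NMF: factor the envelope matrix, take the activation pattern with the highest activation intensity (sum of absolute time-varying coefficients), and keep its highest-weighted channels. The rank, update rule, normalization and synthetic data below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def nmf(V, rank, iters=400, seed=0):
    """Plain multiplicative-update NMF: V (channels x time) ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def dominant_channels(W, H, top=4):
    """Normalize away the W/H scaling ambiguity, pick the pattern with the
    highest activation intensity, and return its strongest channels."""
    scale = W.max(axis=0) + 1e-12
    Wn, Hn = W / scale, H * scale[:, None]
    major = int(np.argmax(np.abs(Hn).sum(axis=1)))
    return set(int(c) for c in np.argsort(Wn[:, major])[-top:]), major

# synthetic 8-channel grid: channels 0-3 carry the strong activation pattern
t = np.linspace(0.0, 1.0, 200)
W_true = np.zeros((8, 2)); W_true[:4, 0] = 1.0; W_true[4:, 1] = 1.0
H_true = np.vstack([5.0 * np.abs(np.sin(8 * t)) + 0.1,
                    np.abs(np.cos(8 * t)) + 0.1])
V = W_true @ H_true
W, H = nmf(V, rank=2)
channels, major = dominant_channels(W, H)
```

The selected channel subset would then feed the polynomial force-fitting stage in place of the full 128-channel grid.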
Directory of Open Access Journals (Sweden)
E. KARACABEY
2016-03-01
Full Text Available The objective of the present study was to investigate microwave-assisted drying of Jerusalem artichoke tubers to determine the effects of the processing conditions. Drying time (DT) and effective moisture diffusivity (EMD) were determined to evaluate the drying process in terms of dehydration performance, whereas the rehydration ratio (RhR) was considered as a significant quality index. A pretreatment of soaking in a NaCl solution was applied before all trials. The output power of the microwave oven, slice thickness and NaCl concentration of the pretreatment solution were the three investigated parameters. The drying process was accelerated by altering the conditions while obtaining a higher quality product. For optimization of the drying process, response surface methodology (RSM) and genetic algorithms (GA) were used. Model adequacy was evaluated for each corresponding mathematical expression developed for the responses of interest by RSM. The residual of the model obtained by GA was compared to that of the RSM model. The GA was successful in high-performance prediction and produced results similar to those of RSM. The analysis and results of the present study show that the RSM and GA models can be used together to gain insight into the bioprocessing system.
Effects of titanium surface topography on bone integration: a systematic review.
Wennerberg, Ann; Albrektsson, Tomas
2009-09-01
To analyse possible effects of titanium surface topography on bone integration. Our analyses were centred on a PubMed search that identified 1184 publications of assumed relevance; of those, 1064 had to be disregarded because they did not accurately present in vivo data on bone response to surface topography. The remaining 120 papers were read and analysed; after removal of an additional 20 papers that mainly dealt with CaP-coated and Zr implants, 100 papers remained and formed the basis for this paper. The bone response to differently configured surfaces was mainly evaluated by histomorphometry (bone-to-implant contact), removal torque and pushout/pullout tests. A huge number of the experimental investigations have demonstrated that the bone response was influenced by the implant surface topography; smooth (S(a) 1-2 μm) surfaces showed stronger bone responses than rough (S(a) > 2 μm) ones in some studies. One limitation was that it was difficult to compare many studies because of the varying quality of surface evaluations; a surface termed 'rough' in one study was not uncommonly referred to as 'smooth' in another; many investigators falsely assumed that surface preparation per se identified the roughness of the implant; and many other studies used only qualitative techniques such as SEM. Furthermore, filtering techniques differed or only height parameters (S(a), R(a)) were reported. * Surface topography influences bone response at the micrometre level. * Some indications exist that surface topography influences bone response at the nanometre level. * The majority of published papers present an inadequate surface characterization. * Measurement and evaluation techniques need to be standardized. * Not only height-descriptive parameters but also spatial and hybrid ones should be used.
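The height parameter S(a) discussed above is simply the arithmetical mean deviation of the measured heights from the mean plane. Over an areal height map it can be computed as:

```python
import numpy as np

def sa_roughness(height_map):
    """Arithmetical mean height S(a): mean absolute deviation of the
    measured heights from the mean plane (here taken as the mean height)."""
    z = np.asarray(height_map, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))

# a surface alternating +/- 1 um about its mean plane has S(a) = 1 um
sa = sa_roughness([[1.0, -1.0], [1.0, -1.0]])
```

Being a pure height parameter, S(a) ignores the lateral arrangement of the peaks, which is exactly why the review calls for spatial and hybrid parameters as well.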
Bashir, K.; Alkali, A. U.; Elmunafi, M. H. S.; Yusof, N. M.
2018-04-01
The recent trend of turning hardened materials has gained popularity because of its immense machinability benefits. Several machining processes, such as thermal-assisted machining and cryogenic machining, have revealed superior machinability benefits over conventional dry turning of hardened materials. Various engineering materials have been studied; however, investigations on AISI O1 tool steel have not been widely reported. In this paper, the surface finish and surface integrity obtained when hard turning AISI O1 tool steel are analysed. The study is focused on the performance of a wiper coated ceramic tool with respect to surface roughness and surface integrity of the hardened tool steel. The hardened tool steel was machined at varying cutting speeds of 100, 155 and 210 m/min and feed rates of 0.05, 0.125 and 0.20 mm/rev. The depth of cut of 0.2 mm was maintained constant throughout the machining trials. Machining was conducted dry on a 200E-axis CNC lathe. The experimental study revealed that the surface finish is relatively superior at the higher cutting speed of 210 m/min: surface finish improves as cutting speed increases, and is generally better at the lower feed rate of 0.05 mm/rev. The study also revealed that phenomena such as workpiece vibration, due to poor or improper mounting on the spindle, contributed to the higher surface roughness value of 0.66 Ra during turning at the feed rate of 0.2 mm/rev. Traces of white layer, evidence of the cutting effects on the turned work material at the feed rate of 0.2 mm/rev, were observed under the optical microscope.
Energy Technology Data Exchange (ETDEWEB)
Xiong, Yan [School of Chemistry and Chemical Engineering, Southwest Petroleum University, Chengdu 610500 (China); Oil and Gas Field Applied Chemistry Key Laboratory of Sichuan Province, Southwest Petroleum University, Chengdu, 610500 (China); Tan, Jun; Wang, Chengjie; Zhu, Ying [School of Chemistry and Chemical Engineering, Southwest Petroleum University, Chengdu 610500 (China); Fang, Shenwen [School of Chemistry and Chemical Engineering, Southwest Petroleum University, Chengdu 610500 (China); Oil and Gas Field Applied Chemistry Key Laboratory of Sichuan Province, Southwest Petroleum University, Chengdu, 610500 (China); Wu, Jiayi; Wang, Qing [School of Chemistry and Chemical Engineering, Southwest Petroleum University, Chengdu 610500 (China); Duan, Ming, E-mail: swpua124@126.com [State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation, Southwest Petroleum University, Chengdu 610500 (China); School of Chemistry and Chemical Engineering, Southwest Petroleum University, Chengdu 610500 (China); Oil and Gas Field Applied Chemistry Key Laboratory of Sichuan Province, Southwest Petroleum University, Chengdu, 610500 (China)
2016-11-15
In this work, a miniaturized sensor was integrated on a fiber surface and developed for oxygen determination through evanescent-wave induced fluorescence quenching. The sensor was designed using a light emitting diode (LED) as the light source and an optical fiber as the light transmission element. The tris(2,2′-bipyridyl)ruthenium ([Ru(bpy){sub 3}]{sup 2+}) fluorophore was immobilized in an organically modified silicate (ORMOSIL) film and coated onto the fiber surface. When light propagated by total internal reflection (TIR) in the fiber core, an evanescent wave was produced at the fiber surface and excited the [Ru(bpy){sub 3}]{sup 2+} fluorophore to produce fluorescence emission. Oxygen could then be determined by its quenching effect on the fluorescence, and its concentration could be evaluated according to the Stern–Volmer model. By integrating evanescent-wave excitation and fluorescence quenching on the fiber surface, the sensor was successfully miniaturized and exhibited improved performance: high sensitivity (1.4), excellent repeatability (1.2%) and fast analysis (12 s) for oxygen determination. The sensor provides a portable method for in-situ and real-time measurement of oxygen and shows potential for practical oxygen analysis in different application fields. Furthermore, the fabrication of this sensor provides a miniaturized and portable detection platform for species monitoring through simple modular design. - Highlights: • An ORMOSILs sensing film immobilized with the [Ru(bpy){sub 3}]{sup 2+} fluorophore was coated on the fiber surface. • The evanescent wave on the fiber surface was utilized as the excitation source to produce fluorescence. • Oxygen was measured based on its quenching effect on the evanescent wave-induced fluorescence. • Sensor fabrication was miniaturized by integrating detection and sensing elements on the fiber. • The modular-design sensor provides a detection platform for monitoring other species.
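The Stern–Volmer model invoked above relates the unquenched and quenched fluorescence intensities to the quencher concentration, F0/F = 1 + Ksv·[O2], so the oxygen level follows by inversion. The constant and intensities below are illustrative numbers, not the sensor's calibration:

```python
def oxygen_concentration(F0, F, Ksv):
    """Invert the Stern-Volmer relation F0/F = 1 + Ksv*[O2] for [O2].
    F0: unquenched intensity, F: measured intensity, Ksv: quenching constant."""
    return (F0 / F - 1.0) / Ksv

# illustrative: intensity drops from 100 to 60 under an assumed Ksv
o2 = oxygen_concentration(100.0, 60.0, 0.025)
```

In practice Ksv is obtained by calibrating F0/F against known oxygen levels; the linearity of that plot is what makes the single-constant model usable.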
Association of lipids with integral membrane surface proteins of Mycoplasma hyorhinis
International Nuclear Information System (INIS)
Bricker, T.M.; Boyer, M.J.; Keith, J.; Watson-McKown, R.; Wise, K.S.
1988-01-01
Triton X-114 (TX-114) phase fractionation was used to identify and characterize integral membrane surface proteins of the wall-less procaryote Mycoplasma hyorhinis GDL. Phase fractionation of mycoplasmas followed by analysis by sodium dodecyl sulfate-polyacrylamide gel electrophoresis revealed selective partitioning of approximately 30 [35S]methionine-labeled intrinsic membrane proteins into the TX-114 phase. Similar analysis of [3H]palmitate-labeled cells showed that approximately 20 proteins of this organism were associated with lipid, all of which also efficiently partitioned as integral membrane components into the detergent phase. Immunoblotting and immunoprecipitation of TX-114-phase proteins from 125I-surface-labeled cells with four monoclonal antibodies to distinct surface epitopes of M. hyorhinis identified surface proteins p120, p70, p42, and p23 as intrinsic membrane components. Immunoprecipitation of [3H]palmitate-labeled TX-114-phase proteins further established that surface proteins p120, p70, and p23 (a molecule that mediates complement-dependent mycoplasmacidal monoclonal antibody activity) were among the lipid-associated proteins of this organism. Two of these proteins, p120 and p23, were acidic (pI less than or equal to 4.5), as shown by two-dimensional isoelectric focusing. This study established that M. hyorhinis contains an abundance of integral membrane proteins tightly associated with lipids and that many of these proteins are exposed at the external surface of the single limiting plasma membrane. Monoclonal antibodies are reported that will allow detailed analysis of the structure and processing of lipid-associated mycoplasma proteins.
An Integrated Transcriptome-Wide Analysis of Cave and Surface Dwelling Astyanax mexicanus
Gross, Joshua B.; Furterer, Allison; Carlson, Brian M.; Stahl, Bethany A.
2013-01-01
Numerous organisms around the globe have successfully adapted to subterranean environments. A powerful system in which to study cave adaptation is the freshwater characin fish, Astyanax mexicanus. Prior studies in this system have established a genetic basis for the evolution of numerous regressive traits, most notably vision and pigmentation reduction. However, identification of the precise genetic alterations that underlie these morphological changes has been delayed by limited genetic and genomic resources. To address this, we performed a transcriptome analysis of cave and surface dwelling Astyanax morphs using Roche/454 pyrosequencing technology. Through this approach, we obtained 576,197 Pachón cavefish-specific reads and 438,978 surface fish-specific reads. Using this dataset, we assembled transcriptomes of cave and surface fish separately, as well as an integrated transcriptome that combined 1,499,568 reads from both morphotypes. The integrated assembly was the most successful approach, yielding 22,596 high quality contiguous sequences comprising a total transcriptome length of 21,363,556 bp. Sequence identities were obtained through exhaustive blast searches, revealing an adult transcriptome represented by highly diverse Gene Ontology (GO) terms. Our dataset facilitated rapid identification of sequence polymorphisms between morphotypes. These data, along with positional information collected from the Danio rerio genome, revealed several syntenic regions between Astyanax and Danio. We demonstrated the utility of this positional information through a QTL analysis of albinism in a surface x Pachón cave F2 pedigree, using 65 polymorphic markers identified from our integrated assembly. We also adapted our dataset for an RNA-seq study, revealing many genes responsible for visual system maintenance in surface fish, whose expression was not detected in adult Pachón cavefish. Conversely, several metabolism-related genes expressed in cavefish were not detected in
Redatuming borehole-to-surface electromagnetic data using Stratton-Chu integral transforms
DEFF Research Database (Denmark)
Zhdanov, Michael; Cai, Hongzhu
2012-01-01
We present a new method of analyzing borehole-to-surface electromagnetic (BSEM) survey data based on redatuming of the observed data from receivers distributed over the surface of the earth onto virtual receivers located within the subsurface. The virtual receivers can be placed close to the target...... of interest, such as just above a hydrocarbon reservoir, which increases the sensitivity of the EM data to the target. The method is based on the principles of downward analytical continuation of EM fields. We use Stratton-Chu type integral transforms to calculate the EM fields at the virtual receivers. Model...
Chip formation and surface integrity in high-speed machining of hardened steel
Kishawy, Hossam Eldeen A.
Increasing demands for high production rates as well as cost reduction have emphasized the potential for the industrial application of hard turning technology during the past few years. Machining instead of grinding hardened steel components reduces the machining sequence, the machining time, and the specific cutting energy. Hard turning is characterized by the generation of high temperatures, the formation of saw-toothed chips, and a high ratio of thrust to tangential cutting force components. Although a large volume of literature exists on hard turning, the change in machined surface physical properties represents a major challenge. Thus, a better understanding of the cutting mechanism in hard turning is still required. In particular, the chip formation process and the integrity of the machined surface are important issues which require further research. In this thesis, a mechanistic model for saw-toothed chip formation is presented. This model is based on the concept of crack initiation on the free surface of the workpiece and explains the mechanism of chip formation. In addition, an experimental investigation is conducted in order to study the chip morphology. The effect of process parameters, including edge preparation and tool wear, on the chip morphology is studied using scanning electron microscopy (SEM). The dynamics of chip formation are also investigated. The surface integrity of the machined parts is also investigated, focusing on residual stresses as well as surface and sub-surface deformation. A three-dimensional thermo-elasto-plastic finite element model is developed to predict the machining residual stresses, with the effect of flank wear introduced during the analysis. Although residual stresses have complicated origins and are introduced by many factors, only the thermal and mechanical factors are considered in this model. The finite element analysis demonstrates the significant effect of the heat generated
International Nuclear Information System (INIS)
Liao, Gwo-Ching
2011-01-01
An optimization algorithm is proposed in this paper to solve the economic dispatch problem that includes a wind farm, using the Chaotic Quantum Genetic Algorithm (CQGA). In addition to detailed models of economic dispatch and their associated constraints, the wind power effect is also included in this paper. The chaotic quantum genetic algorithm is used to solve the economic dispatch process and is discussed with real scenarios used for the simulation tests. After comparing the proposed algorithm with several other algorithms commonly used to solve optimization problems, the results show that the proposed algorithm is able to find the optimal solution quickly and accurately (i.e., to obtain the minimum cost for power generation in the shortest time). Finally, the impact on total cost savings for power generation of adding (or not adding) wind power generation is also discussed. The actual implementation results prove that the proposed algorithm is economical, fast and practical, and quite valuable for further research. -- Research highlights: → The Quantum Genetic Algorithm can effectively improve the global search ability. → It can achieve the real objective of globally optimal solutions. → The CPU computation time is less than that of the other algorithms adopted in this paper.
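To make the dispatch problem concrete, here is a minimal plain genetic-algorithm sketch for a three-unit system with quadratic fuel-cost curves and a power-balance penalty. The cost coefficients, limits, demand and GA settings are all illustrative; the paper's chaotic quantum encoding is not reproduced.

```python
import random

# Toy economic dispatch: minimize sum of a*P^2 + b*P + c over units,
# subject to sum(P) == DEMAND, handled with a penalty term.
COST = [(0.008, 7.0, 200.0), (0.009, 6.3, 180.0), (0.007, 6.8, 140.0)]  # a, b, c
PMIN, PMAX, DEMAND = 10.0, 125.0, 210.0

def cost(p):
    gen = sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))
    return gen + 1e4 * abs(sum(p) - DEMAND)  # penalty for power imbalance

def evolve(pop=60, gens=300, seed=1):
    rng = random.Random(seed)
    popu = [[rng.uniform(PMIN, PMAX) for _ in COST] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=cost)
        elite = popu[: pop // 2]               # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 1.0) for x, y in zip(a, b)]
            children.append([min(PMAX, max(PMIN, x)) for x in child])
        popu = elite + children
    return min(popu, key=cost)

best = evolve()
print(round(sum(best), 1))  # close to the 210 MW demand
```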
Integrating Satellite, Radar and Surface Observation with Time and Space Matching
Ho, Y.; Weber, J.
2015-12-01
The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III radar data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching to the satellite, radar and surface observation datasets automatically synchronizes the display from different data sources and spatially subsets it to match the display area in the view window. These features allow IDV users to effectively integrate these observations and provide three-dimensional views of a weather system to better understand the underlying dynamics and physics of weather phenomena.
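The time-matching idea described above can be sketched as a nearest-time pairing with a tolerance: for each reference (radar) time step, pick the closest satellite time, discarding pairs that are too far apart. The stream names and times below are hypothetical, not IDV internals.

```python
from datetime import datetime, timedelta

# Illustrative time matching of two observation streams: for each radar
# time step, find the closest satellite time within a tolerance.
def match_times(ref_times, other_times, tol=timedelta(minutes=15)):
    matched = {}
    for t in ref_times:
        best = min(other_times, key=lambda s: abs(s - t))
        if abs(best - t) <= tol:
            matched[t] = best
    return matched

radar = [datetime(2015, 7, 1, h) for h in (0, 6, 12, 18)]
sat = [datetime(2015, 7, 1, 0, 5),
       datetime(2015, 7, 1, 11, 50),
       datetime(2015, 7, 1, 21, 0)]
pairs = match_times(radar, sat)
print(len(pairs))  # 2: the 00 and 12 UTC radar steps have satellite matches
```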
Measurement of integrated coefficients of ultracold neutron reflection from solid surfaces
International Nuclear Information System (INIS)
Golikov, V.V.; Kulagin, E.N.; Nikitenko, Yu.V.
1985-01-01
A method for measuring the integrated coefficients of ultracold neutron (UCN) reflection from solid surfaces is reported. A simple formula is suggested which expresses the integrated coefficient of UCN reflection from a given sample through the measured counting rate of the detector with and without a strong absorber (polyethylene). The parameters describing the anisotropy and inhomogeneity of UCN reflection from Al, Mg, Pb, Zn, Mo, stainless steel, Ti and V are determined. The thickness of oxide layers is determined to within 5-10 Å from the experimental coefficients of UCN reflection from metals whose surface oxides have a boundary velocity larger than that of the metal. It was found that the density of a 5000 Å layer of heavy ice frozen onto aluminium is 0.83 ± 0.05 of the crystalline ice density.
International Nuclear Information System (INIS)
Zhong Jian; Dong Gang; Sun Yimei; Zhang Zhaoyang; Wu Yuqin
2016-01-01
The present work reports the development of a nonlinear time series prediction method combining a genetic algorithm (GA) with singular spectrum analysis (SSA) for forecasting the surface wind at a point station in the South China Sea (SCS) from scatterometer observations. Before the nonlinear GA technique is used to forecast the surface wind time series, SSA is applied to reduce the noise. The surface wind speed and surface wind components from scatterometer observations at three locations in the SCS have been used to develop and test the technique. The predictions have been compared with persistence forecasts in terms of root mean square error. The surface winds predicted with GA and SSA up to four days in advance (longer for some point stations) have been found to be significantly superior to those made by the persistence model. This method can serve as a cost-effective alternative prediction technique for forecasting the surface wind at a point station in the SCS basin. (paper)
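The SSA noise-reduction pre-step can be sketched in a few lines: embed the series into a trajectory matrix, keep a low-rank SVD approximation, and diagonally average back to a series. The window length, retained rank and the synthetic sine-plus-noise signal are illustrative choices, not the paper's settings.

```python
import numpy as np

# Sketch of singular spectrum analysis (SSA) denoising as a
# pre-processing step before forecasting.
def ssa_denoise(x, window=20, rank=2):
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]                # low-rank part
    # Diagonal averaging (Hankelization) back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
clean = ssa_denoise(noisy)
# The denoised series is closer to the underlying sine than the raw one
print(np.std(clean - np.sin(t)) < np.std(noisy - np.sin(t)))  # True
```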
Directory of Open Access Journals (Sweden)
G. Krolczyk
2015-01-01
Full Text Available The article presents the influence of machining parameters on the microhardness of the surface integrity (SI) after turning with a coated sintered carbide wedge having a ceramic intermediate layer. The investigation comprised the influence of cutting speed on SI microhardness in dry machining. The material under investigation was duplex stainless steel with a two-phase ferritic-austenitic structure. The results obtained allow conclusions concerning the exploitation features of processed machine parts.
DEFF Research Database (Denmark)
Kim, Oleksiy S.; Meincke, Peter; Breinbjerg, Olav
2007-01-01
The problem of electromagnetic scattering by composite metallic and dielectric objects is solved using the coupled volume-surface integral equation (VSIE). The method of moments (MoM) based on higher-order hierarchical Legendre basis functions and higher-order curvilinear geometrical elements...... with the analytical Mie series solution. Scattering by more complex metal-dielectric objects is also considered to compare the presented technique with other numerical methods....
Optimization of surface roughness in CNC end milling using ...
African Journals Online (AJOL)
International Journal of Engineering, Science and Technology ... In this study, minimization of surface roughness has been investigated by integrating design of experiment method, Response surface methodology (RSM) and genetic algorithm ...
Near-station terrain corrections for gravity data by a surface-integral technique
Gettings, M.E.
1982-01-01
A new method of computing gravity terrain corrections by use of a digitizer and digital computer can result in substantial savings in the time and manual labor required to perform such corrections by conventional manual ring-chart techniques. The method is typically applied to estimate terrain effects for topography near the station, for example within 3 km of the station, although it has been used successfully to a radius of 15 km to estimate corrections in areas where topographic mapping is poor. Points (about 20) that define topographic maxima, minima, and changes in the slope gradient are picked on the topographic map within the desired correction radius about the station. Particular attention must be paid to the area immediately surrounding the station to ensure a good topographic representation. The horizontal and vertical coordinates of these points are entered into the computer, usually by means of a digitizer. The computer then fits a multiquadric surface to the input points to form an analytic representation of the surface. By means of the divergence theorem, the gravity effect of an interior closed solid can be expressed as a surface integral, and the terrain correction is calculated by numerical evaluation of the integral over the surfaces of a cylinder: the vertical sides are at the correction radius about the station, the flat bottom surface is at the topographic minimum, and the upper surface is given by the multiquadric equation. The method has been tested with favorable results against models for which an exact result is available and against manually computed field-station locations in areas of rugged topography. By increasing the number of points defining the topographic surface, any desired degree of accuracy can be obtained. The method is more objective than manual ring-chart techniques because no average compartment elevations need be estimated.
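The multiquadric fit mentioned above interpolates scattered (x, y, z) points with basis functions of the form sqrt(||p - p_i||² + c²). A minimal sketch follows; the sample points, the synthetic "topography" and the shape parameter c are made up for the demonstration.

```python
import numpy as np

# Illustrative multiquadric interpolation of scattered elevation points:
# z(p) ~= sum_i w_i * sqrt(||p - p_i||^2 + c^2).
def fit_multiquadric(pts, z, c=1.0):
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    phi = np.sqrt(d2 + c * c)          # interpolation matrix
    return np.linalg.solve(phi, z)     # basis weights

def eval_multiquadric(pts, w, query, c=1.0):
    d2 = ((query[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c * c) @ w

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(25, 2))
z = pts[:, 0] ** 2 + pts[:, 1]         # synthetic "topography"
w = fit_multiquadric(pts, z)
# The interpolant reproduces the elevations exactly at the data points
print(np.allclose(eval_multiquadric(pts, w, pts), z))  # True
```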
Statistical characteristics of surface integrity by fiber laser cutting of Nitinol vascular stents
International Nuclear Information System (INIS)
Fu, C.H.; Liu, J.F.; Guo, Andrew
2015-01-01
Graphical abstract: - Highlights: • Precision kerf with tight tolerance of Nitinol stents can be cut by fiber laser. • No HAZ in the subsurface was detected due to large grain size. • Recast layer has lower hardness than the bulk. • Laser cutting speed has a higher influence on surface integrity than laser power. - Abstract: Nitinol alloys have been widely used in manufacturing of vascular stents due to the outstanding properties such as superelasticity, shape memory, and superior biocompatibility. Laser cutting is the dominant process for manufacturing Nitinol stents. Conventional laser cutting usually produces unsatisfactory surface integrity which has a significant detrimental impact on stent performance. Emerging as a competitive process, fiber laser with high beam quality is expected to produce much less thermal damage such as striation, dross, heat affected zone (HAZ), and recast layer. To understand the process capability of fiber laser cutting of Nitinol alloy, a design-of-experiment based laser cutting experiment was performed. The kerf geometry, roughness, topography, microstructure, and hardness were studied to better understand the nature of the HAZ and recast layer in fiber laser cutting. Moreover, effect size analysis was conducted to investigate the relationship between surface integrity and process parameters.
Yuste, Valentin; Delgado, Julio; Agullo, Alberto; Sampietro, Jose Manuel
2017-06-01
Burns of the first commissure of the hand can evolve into an adduction contracture of the thumb. We decided to conduct a review of the existing literature on the treatment of full-thickness burns of the first commissure in order to develop a treatment algorithm that integrates the various currently available procedures. A search of the existing literature was conducted, focusing on the treatment of a burn of the first commissure in its chronic and acute phases. A total of 29 relevant articles were selected; 24 focused exclusively on the chronic contracture stage, while 3 focused exclusively on the acute burn stage, and 2 articles studied both stages. A therapeutic algorithm for full-thickness burns of the first commissure of the hand was developed. With this algorithm we sought to relate each degree and stage of the burn with a treatment. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R
2006-12-15
We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
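The first ChromAlign step described above, finding the temporal offset that maximizes profile overlap via fast Fourier transforms, can be sketched as a circular cross-correlation. The synthetic Gaussian profiles and the 30-scan shift are illustrative, not data from the paper.

```python
import numpy as np

# FFT-based cross-correlation to find the temporal (scan) offset
# between a reference and a sample chromatographic profile.
def best_offset(ref, sample):
    n = len(ref)
    corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(sample))).real
    shift = int(np.argmax(corr))
    return shift if shift <= n // 2 else shift - n   # signed circular shift

t = np.arange(512)
ref = np.exp(-0.5 * ((t - 250) / 8.0) ** 2)
sample = np.exp(-0.5 * ((t - 220) / 8.0) ** 2)       # elutes 30 scans earlier
print(best_offset(ref, sample))  # 30
```

In the full algorithm, this offset then restricts which pairs of mass scans need correlation matrix elements, which is what reduces the computation time of the second step.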
Influence of steel implant surface microtopography on soft and hard tissue integration.
Hayes, J S; Klöppel, H; Wieling, R; Sprecher, C M; Richards, R G
2018-02-01
After implantation of an internal fracture fixation device, blood contacts the surface, followed by protein adsorption, resulting in either soft-tissue adhesion or matrix adhesion and mineralization. Without protein adsorption and cell adhesion under the presence of micro-motion, fibrous capsule formation can occur, often surrounding a liquid filled void at the implant-tissue interface. Clinically, fibrous capsule formation is more prevalent with electropolished stainless steel (EPSS) plates than with current commercially pure titanium (cpTi) plates. We hypothesize that this is due to lack of micro-discontinuities on the standard EPSS plates. To test our hypothesis, four EPSS experimental surfaces with varying microtopographies were produced and characterized for morphology using the scanning electron microscope, quantitative roughness analysis using laser profilometry and chemical analysis using X-ray photoelectron spectroscopy. Clinically used EPSS (smooth) and cpTi (microrough) were included as controls. Six plates of each type were randomly implanted, one on both the left and right intact tibia of 18 white New Zealand rabbits for 12 weeks, to allow for a surface interface study. The results demonstrate that the micro-discontinuities on the upper surface of internal steel fixation plates reduced the presence of liquid filled voids within soft-tissue capsules. The micro-discontinuities on the plate under-surface increased bony integration without the presence of fibrous tissue interface. These results support the hypothesis that the fibrous capsule and the liquid filled void formation occurs mainly due to lack of micro-discontinuities on the polished smooth steel plates and that bony integration is increased to surfaces with higher amounts of micro-discontinuities. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 705-715, 2018. © 2017 Wiley Periodicals, Inc.
Study of pollutant transport in surface boundary layer by generalized integral transform technique
International Nuclear Information System (INIS)
Guerrero, Jesus S.P.; Heilbron Filho, Paulo F.L.; Pimentel, Luiz C.G.; Cataldi, Marcio
2001-01-01
A theoretical study was developed to obtain solutions of the atmospheric diffusion equation for various point sources, considering radioactive decay and axial diffusion, under neutral atmospheric conditions. An algebraic turbulence model available in the literature, based on Monin-Obukhov similarity theory, was used to represent the turbulent transport in the vertical direction, while a constant mass eddy diffusivity was considered in the longitudinal direction. The two-dimensional transient partial differential equation representing the physical phenomena was transformed into a coupled system of one-dimensional transient equations by applying the Generalized Integral Transform Technique. The coupled system was solved numerically using a subroutine based on the method of lines. Some representative physical situations were analyzed in order to evaluate the computational algorithm. (author)
Conroy-Beam, Daniel; Buss, David M
2016-01-01
Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection.
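The Euclidean integration algorithm described above scores each potential mate by the straight-line distance between the chooser's ideal preference vector and the mate's trait vector, with smaller distances indicating better matches. The trait dimensions and numbers below are hypothetical illustrations.

```python
import math

# Euclidean integration of multiple mate preferences: a candidate's
# match is the distance from the chooser's ideal point in preference space.
def preference_distance(ideal, traits):
    return math.sqrt(sum((i - t) ** 2 for i, t in zip(ideal, traits)))

ideal = [6.5, 5.0, 6.0, 4.5]          # e.g. kindness, intelligence, health, status
candidate_a = [6.0, 5.5, 5.5, 4.0]
candidate_b = [3.0, 7.0, 2.0, 6.5]
# The candidate lying closer in multidimensional preference space
# is the better-matching mate under this algorithm
print(preference_distance(ideal, candidate_a) <
      preference_distance(ideal, candidate_b))  # True
```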
Integrated control of the cooling system and surface openings using the artificial neural networks
International Nuclear Information System (INIS)
Moon, Jin Woo
2015-01-01
This study aimed at suggesting an indoor temperature control method that can provide a comfortable thermal environment through the integrated control of the cooling system and the surface openings. Four control logics were developed, employing different application levels of rules and artificial neural network (ANN) models. The rule-based control methods represented the conventional approach, while the ANN-based methods were applied for predictive and adaptive control. Comparative performance tests of the conventional and ANN-based methods were numerically conducted for a double-skin-facade building, using the MATLAB (Matrix Laboratory) and TRNSYS (Transient Systems Simulation) software, after proving their validity by comparing simulation and field measurement results. Analysis revealed that the ANN-based control of the cooling system and surface openings improved indoor temperature conditions, with longer comfortable-temperature periods and a smaller standard deviation of the indoor temperature from the center of the comfortable range. In addition, the proposed ANN-based logics effectively reduced the number of operating-condition changes of the cooling system and surface openings, which can prevent system failure. The ANN-based logics, however, did not show superiority in energy efficiency over the conventional logic; instead, they increased the amount of heat removed by the cooling system. From the analysis, it can be concluded that the ANN-based temperature control logics were able to keep the indoor temperature more comfortably and stably within the comfortable range due to their predictive and adaptive features. - Highlights: • Integrated rule-based and artificial neural network based logics were developed. • A cooling device and surface openings were controlled in an integrated manner. • A computer simulation method was employed for comparative performance tests. • ANN-based logics showed advanced features of the thermal environment. • Rule
Lyu, Pengfei; Ando, Makoto
2017-09-01
The modified edge representation is one of the equivalent edge current approximation methods for calculating the physical optics surface radiation integrals in diffraction analysis. Stokes' theorem is used in the derivation of the modified edge representation from physical optics for the planar scatterer case, which implies that the surface integral is rigorously reduced to the line integral of the modified edge representation equivalent edge currents, defined in terms of the local shape of the edge. For curved surfaces, by contrast, the results of the radiation integrals depend upon the global shape of the scatterer. The physical optics surface integral consists of two components, from the inner stationary phase point and from the edge. The modified edge representation is defined independently of the orientation of the actual edge, and therefore it is available not only at the edge but also at arbitrary points on the scatterer, except at the stationary phase point, where the modified edge representation equivalent edge currents become infinite. If a stationary phase point exists inside the illuminated region, the physical optics surface integration reduces to two kinds of modified edge representation line integrations, along the edge and around an infinitesimally small contour enclosing the inner stationary phase point; the former and the latter give the diffraction and reflection components, respectively. The accuracy of the latter has been discussed for curved surfaces and published. This paper focuses on the errors of the former and discusses their correction. It has been numerically observed that the modified edge representation works well for physical optics diffraction from flat and concave surfaces; errors appear, especially for observers near the reflection shadow boundary, when the frequency is low for a convex scatterer. This paper gives an explicit expression for the higher-order correction to the modified edge representation.
Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.
2017-05-01
This paper describes the second part of a series of investigations into developing algorithms for the simultaneous retrieval of aerosol parameters and surface reflectance from future hyperspectral and geostationary satellite sensors such as Tropospheric Emissions: Monitoring of POllution (TEMPO). The information content in these hyperspectral measurements is analyzed for 6 principal components (PCs) of surface spectra and a total of 14 aerosol parameters that describe the columnar aerosol volume Vtotal, the fine-mode aerosol volume fraction, and the size distribution and wavelength-dependent index of refraction of both coarse- and fine-mode aerosols. Forward simulations of atmospheric radiative transfer are conducted for 5 surface types (green vegetation, bare soil, rangeland, concrete and a mixed surface case) and a wide range of aerosol mixtures. It is shown that the PCs of surface spectra in the atmospheric window channels can be derived from the top-of-the-atmosphere reflectance under conditions of low aerosol optical depth (AOD ≤ 0.2 at 550 nm), with a relative error of 1%. Using degrees of freedom for signal analysis and the sequential forward selection method, common bands for different aerosol mixture types and surface types can be selected for aerosol retrieval. The first 20% of our selected bands accounts for more than 90% of the information content for aerosols, and only 4 PCs are needed to reconstruct surface reflectance. However, the information content in these common bands from each individual TEMPO observation is insufficient for the simultaneous retrieval of the surface PC weight coefficients and multiple aerosol parameters (other than Vtotal). In contrast, with multiple observations of the same location from TEMPO on multiple consecutive days, 1-3 additional aerosol parameters can be retrieved. Consequently, a self-adjustable aerosol retrieval algorithm that accounts for surface types, AOD conditions, and multiple consecutive observations is recommended to derive
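The PC representation of surface reflectance used above can be sketched with a standard SVD-based principal component decomposition: spectra are projected onto the leading PCs to obtain weight coefficients, then reconstructed. The synthetic low-rank spectral library below is made up for the demonstration; only the idea of reconstructing reflectance from a few PC weights follows the paper.

```python
import numpy as np

# Represent surface reflectance spectra by a few principal components.
rng = np.random.default_rng(2)
wavelengths = 200
# Synthetic rank-3 "library" of 50 reflectance spectra
spectra = rng.normal(size=(50, 3)) @ rng.normal(size=(3, wavelengths))

mean = spectra.mean(axis=0)
u, s, vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = vt[:4]                          # keep 4 PCs (the paper finds 4 suffice)

weights = (spectra - mean) @ pcs.T    # PC weight coefficients per spectrum
recon = mean + weights @ pcs          # reconstructed reflectance
print(np.allclose(recon, spectra))    # True: 4 PCs capture a rank-3 library
```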
Phojanamongkolkij, Nipa; Okuniek, Nikolai; Lohr, Gary W.; Schaper, Meilin; Christoffels, Lothar; Latorella, Kara A.
2014-01-01
The runway is a critical resource of any air transport system. It is used for arrivals, departures, and taxiing aircraft, and is universally acknowledged as a constraining factor on capacity for both surface and airspace operations. It follows that investigation of the effective use of runways, in terms of selection and assignment as well as the timing and sequencing of traffic, is paramount to efficient traffic flows. Both the German Aerospace Center (DLR) and NASA have developed concepts and tools to improve atomic aspects of coordinated arrival/departure/surface management operations and runway configuration management. In December 2012, NASA entered into a Collaborative Agreement with DLR. Four collaborative work areas were identified, one of which is called "Runway Management." As part of collaborative research in the "Runway Management" area, conducted with the DLR Institute of Flight Guidance in Braunschweig, the goal is to develop an integrated system comprising the three DLR tools - arrival, departure, and surface management (collectively referred to as A/D/S-MAN) - and NASA's tactical runway configuration management (TRCM) tool. To achieve this goal, it is critical to prepare a concept of operations (ConOps) detailing how the NASA runway management and DLR arrival, departure, and surface management tools will function together to the benefit of each. To assist with the preparation of the ConOps, the integrated NASA and DLR tools are assessed through a functional analysis method described in this report. The report first provides the high-level operational environments for air traffic management (ATM) in Germany and in the U.S., and descriptions of DLR's A/D/S-MAN and NASA's TRCM tools at the level of detail necessary to complement the purpose of the study. Functional analyses of each tool and a completed functional analysis of an integrated system design are presented next in the report. Future efforts to fully
Technical Note: Reducing the spin-up time of integrated surface water–groundwater models
Ajami, H.
2014-06-26
One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia, respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.
Directory of Open Access Journals (Sweden)
Laura Grisotto
2016-04-01
Full Text Available In this paper the focus is on environmental statistics, with the aim of estimating the concentration surface and related uncertainty of an air pollutant. We used air quality data recorded by a network of monitoring stations within a Bayesian framework to overcome difficulties in accounting for prediction uncertainty and to integrate information provided by deterministic models based on emissions, meteorology and the chemico-physical characteristics of the atmosphere. Several authors have proposed such integration, but all the proposed approaches rely on the representativeness and completeness of existing air pollution monitoring networks. We considered the situation in which the spatial process of interest and the sampling locations are not independent. This is known in the literature as the preferential sampling problem, which, if ignored in the analysis, can bias geostatistical inferences. We developed a Bayesian geostatistical model to account for preferential sampling, with the main interest in statistical integration and uncertainty. We used PM10 data arising from the air quality network of the Environmental Protection Agency of Lombardy Region (Italy) and numerical outputs from the deterministic model. We specified an inhomogeneous Poisson process for the sampling location intensities and a shared spatial random component model for the dependence between the spatial location of monitors and the pollution surface. We found greater predicted standard deviation differences in areas not properly covered by the air quality network. In conclusion, in this context inferences on prediction uncertainty may be misleading when geostatistical modelling does not take preferential sampling into account.
Technical Note: Reducing the spin-up time of integrated surface water–groundwater models
Ajami, H.
2014-12-12
One of the main challenges in the application of coupled or integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth-to-water table (DTWT) distributions. One approach to reducing uncertainty in model initialization is to run the model recursively using either a single year or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of the spin-up procedure by using a combination of model simulations and an empirical DTWT function. The methodology is examined across two distinct catchments located in a temperate region of Denmark and a semi-arid region of Australia. Our results illustrate that the hybrid approach reduced the spin-up period required for an integrated groundwater–surface water–land surface model (ParFlow.CLM) by up to 50%. To generalize results to different climate and catchment conditions, we outline a methodology that is applicable to other coupled or integrated modeling frameworks when initialization from an equilibrium state is required.
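The recursive spin-up procedure that the hybrid approach shortens can be sketched generically as follows (an illustrative stand-alone loop, not ParFlow.CLM code; the relative-change convergence criterion and the toy one-year model are assumptions for demonstration):

```python
def spin_up(step_one_year, state, tol=0.01, max_cycles=100):
    """Repeat one-year simulation cycles until the relative year-over-year
    change in the tracked state (e.g. mean depth to water table) drops below tol."""
    for cycle in range(1, max_cycles + 1):
        new_state = step_one_year(state)
        if abs(new_state - state) <= tol * max(abs(state), 1e-12):
            return new_state, cycle  # equilibrated
        state = new_state
    return state, max_cycles         # gave up without equilibrating

# Toy "model": the water table relaxes halfway toward a 5 m equilibrium each year.
final_dtwt, years = spin_up(lambda s: s + 0.5 * (5.0 - s), 0.0)
```

In a real application each cycle is a full (and expensive) model year, which is why replacing part of the loop with an empirical DTWT function saves so much computation.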
Judge, Jasmeet; England, Anthony W.; Metcalfe, John R.; McNichol, David; Goodison, Barry E.
2008-01-01
In this study, a soil vegetation and atmosphere transfer (SVAT) model was linked with a microwave emission model to simulate microwave signatures for different terrain during summertime, when the energy and moisture fluxes at the land surface are strong. The integrated model, land surface process/radiobrightness (LSP/R), was forced with weather and initial conditions observed during a field experiment. It simulated the fluxes and brightness temperatures for bare soil and brome grass in the Northern Great Plains. The model estimates of soil temperature and moisture profiles and terrain brightness temperatures were compared with the observed values. Overall, the LSP model provides realistic estimates of soil moisture and temperature profiles to be used with a microwave model. The maximum mean differences and standard deviations between the modeled and the observed temperatures (canopy and soil) were 2.6 K and 6.8 K, respectively; those for the volumetric soil moisture were 0.9% and 1.5%, respectively. Brightness temperatures at 19 GHz matched well with the observations for bare soil when a rough surface model was incorporated, indicating reduced dielectric sensitivity to soil moisture due to surface roughness. The brightness temperatures of the brome grass matched well with the observations, indicating that a simple emission model was sufficient to simulate accurate brightness temperatures for grass typical of that region and that surface roughness was not a significant issue for grass-covered soil at 19 GHz. Such integrated SVAT-microwave models allow for direct assimilation of microwave observations and can also be used to understand the sensitivity of microwave signatures to changes in weather forcings and soil conditions for different terrain types.
Kim, J.; Ryu, Y.; Jiang, C.; Hwang, Y.
2016-12-01
Near-surface sensors can acquire more reliable and detailed information with higher temporal resolution than satellite observations. Conventional near-surface sensors usually work individually, and thus require considerable manpower from data collection through information extraction and sharing. Recent advances in the Internet of Things (IoT) provide unprecedented opportunities to integrate various low-cost sensors into an intelligent near-surface observation system for monitoring ecosystem structure and functions. In this study, we developed a Smart Surface Sensing System (4S), which can automatically collect, transfer, process and analyze data, and then publish time-series results on a publicly available website. The system is composed of a Raspberry Pi micro-computer, an Arduino micro-controller, multi-spectral spectrometers made from light-emitting diodes (LEDs), visible and near-infrared cameras, and an Internet module. All components are connected with each other, and the Raspberry Pi intelligently controls the automatic data production chain. We performed intensive tests and calibrations in the lab. Then, we conducted in-situ observations at a rice paddy field and a deciduous broadleaf forest. During the whole growth season, 4S continuously obtained landscape images, spectral reflectance in red, green, blue, and near infrared, the normalized difference vegetation index (NDVI), the fraction of photosynthetically active radiation (fPAR), and leaf area index (LAI). We also compared 4S data with other independent measurements. NDVI obtained from 4S agreed well with a Jaz hyperspectrometer at both diurnal and seasonal scales (R2 = 0.92, RMSE = 0.059), and 4S-derived fPAR and LAI were comparable to LAI-2200 and destructive measurements in both magnitude and seasonal trajectory. We believe that integrated low-cost near-surface sensors could help the research community monitor ecosystem structure and functions more closely and easily through a network system.
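The NDVI that 4S derives from its red and near-infrared bands is the standard two-band index; a minimal sketch (the reflectance values below are illustrative, not from the study):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red band reflectances."""
    return (nir - red) / (nir + red)

# Dense canopy reflects strongly in NIR and absorbs red; bare soil does not.
canopy = ndvi(nir=0.50, red=0.05)  # high NDVI
soil = ndvi(nir=0.30, red=0.25)    # low NDVI
```

Because the index is a simple band ratio, it can be computed on the Raspberry Pi itself, which is what enables the fully automated data production chain described above.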
Integration of CubeSat Systems with Europa Surface Exploration Missions
Erdoǧan, Enes; Inalhan, Gokhan; Kemal Üre, Nazım
2016-07-01
Recent studies show that there is a high probability that a liquid ocean exists under the thick icy surface of Jupiter's moon Europa. The findings also show that Europa has features similar to Earth's, such as geological activity. As a result of these studies, Europa is a promising candidate for habitability, and there are currently many missions at both the planning and execution level that target it. However, these missions usually involve extremely high budgets over extended periods of time. The objective of this talk is to argue that mission costs can be reduced significantly by integrating CubeSat systems within Europa exploration missions. In particular, we introduce an integrated CubeSat-micro probe system, which can be used for measuring the size and depth of the hypothetical liquid ocean under the icy surface of Europa. The system consists of an entry module that houses a CubeSat combined with driller measurement probes. The driller measurement probes deploy before the system hits the surface and penetrate the surface layers of Europa. Moreover, a micro laser probe could be used to examine the layers. This process enables investigation of the properties of the icy layer and the environment beneath the surface. Through examination of different scenarios and cost analysis of the components, we show that the proposed CubeSat system has significant potential to reduce the cost of the overall mission. Both the subsystem requirements and launch prices of CubeSats are dramatically cheaper than those of currently used satellites. In addition, multiple CubeSats may be used to cover a wider area in space, and they are expendable in the face of potential failures. In this talk we discuss both the mission design and cost reduction aspects.
Liu, Wentao; Liu, Zhanqiang
2018-03-01
Improving the machinability of the titanium alloy Ti-6Al-4V is a challenging task in academic and industrial applications owing to its low thermal conductivity, low elastic modulus and high chemical affinity at high temperatures. Surface integrity is prominent in estimating the quality of machined Ti-6Al-4V components. The surface topography (surface defects and surface roughness) and the residual stress induced by machining Ti-6Al-4V play pivotal roles in the sustainability of Ti-6Al-4V components. High-pressure coolant (HPC) is a potential choice for meeting the requirements for the manufacture and application of Ti-6Al-4V. This paper reviews progress towards the improvement of Ti-6Al-4V surface integrity under HPC. Various studies of surface integrity characteristics have been reported. In particular, surface roughness, surface defects, residual stress and work hardening are investigated in order to evaluate machined surface quality. Several coolant parameters (including coolant type, coolant pressure and injection position) deserve investigation to provide guidance for a satisfactory machined surface. The review also provides a clear roadmap for applications of HPC in machining Ti-6Al-4V. Experimental studies and analyses are reviewed to better understand surface integrity under the HPC machining process. A distinct discussion is presented regarding the limitations of, and the highlights and prospects for, machining Ti-6Al-4V under HPC.
Jeon, Ji-Hong; Park, Chan-Gi; Engel, Bernard
2014-01-01
Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THI...
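The curve number (SCS-CN) runoff calculation at the core of L-THIA can be sketched as follows (an illustrative stand-alone computation using the standard SCS equations, not the paper's code; the rainfall and CN values are hypothetical):

```python
def scs_cn_runoff(p, cn):
    """Direct runoff depth (inches) from event rainfall p (inches)
    via the SCS curve number method."""
    s = 1000.0 / cn - 10.0  # potential maximum retention after runoff begins
    ia = 0.2 * s            # initial abstraction (conventional 20% of S)
    if p <= ia:
        return 0.0          # all rainfall abstracted, no runoff
    return (p - ia) ** 2 / (p - ia + s)

# 3 inches of rain on a land cover with CN = 80 (illustrative values)
runoff = scs_cn_runoff(3.0, 80)
```

Automated calibration with a GA or SCE-UA then amounts to searching over CN (and related parameters) to minimize the mismatch between computed and observed runoff.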
Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology
Allen, P. A.; Wells, D. N.
2013-01-01
No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
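The interpolation idea can be illustrated with a generic bilinear scheme over two tabulated variables (a sketch only: the paper's actual methodology spans four variables and its tables are not reproduced here; the grid below is a toy example):

```python
import bisect

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation on a rectangular grid:
    table[i][j] holds the tabulated value at (xs[i], ys[j])."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])  # fractional position in cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return (table[i][j] * (1 - tx) * (1 - ty)
            + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty
            + table[i + 1][j + 1] * tx * ty)

# Toy grid where the tabulated quantity happens to equal x + y
value = bilinear([0.0, 1.0], [0.0, 1.0], [[0.0, 1.0], [1.0, 2.0]], 0.3, 0.4)
```

The same lookup-and-interpolate pattern, extended to the full (a/c, a/B, E/ys, n) parameter space, is what lets a user recover a J-integral solution without running a new nonlinear finite element analysis.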
Chremmos, Ioannis
2010-01-01
The scattering of a surface plasmon polariton (SPP) by a rectangular dielectric channel discontinuity is analyzed through a rigorous magnetic field integral equation method. The scattering phenomenon is formulated by means of the magnetic-type scalar integral equation, which is subsequently treated through an entire-domain Galerkin method of moments (MoM), based on a Fourier-series plane wave expansion of the magnetic field inside the discontinuity. The use of Green's function Fourier transform allows all integrations over the area and along the boundary of the discontinuity to be performed analytically, resulting in a MoM matrix with entries that are expressed as spectral integrals of closed-form expressions. Complex analysis techniques, such as Cauchy's residue theorem and the saddle-point method, are applied to obtain the amplitudes of the transmitted and reflected SPP modes and the radiated field pattern. Through numerical results, we examine the wavelength selectivity of transmission and reflection against the channel dimensions as well as the sensitivity to changes in the refractive index of the discontinuity, which is useful for sensing applications.
Optoelectronic integrated circuits utilising vertical-cavity surface-emitting semiconductor lasers
International Nuclear Information System (INIS)
Zakharov, S D; Fyodorov, V B; Tsvetkov, V V
1999-01-01
Optoelectronic integrated circuits with additional optical inputs/outputs, in which vertical-cavity surface-emitting (VCSE) lasers perform the data transfer functions, are considered. The mutual relationship and the 'affinity' between optical means for data transfer and processing, on the one hand, and the traditional electronic component base, on the other, are demonstrated in the case of implementation of three-dimensional interconnects with a high transmission capacity. Attention is drawn to the problems encountered when semiconductor injection lasers are used in communication lines. It is shown what role VCSE lasers can play in solving these problems. A detailed analysis is made of the topics relating to possible structural and technological solutions in the fabrication of single lasers and of their arrays, and also of the problems hindering the integration of lasers into emitter arrays. Considerable attention is given to integrated circuits with optoelectronic smart pixels. Various technological methods for vertical integration of GaAs VCSE lasers with the silicon substrate of a microcircuit (chip) are discussed. (review)
Total luminous flux measurement for flexible surface sources with an integrating sphere photometer
International Nuclear Information System (INIS)
Yu, Hsueh-Ling; Liu, Wen-Chun
2014-01-01
Applying an integrating sphere photometer for total luminous flux measurement is a widely used method. However, the measurement accuracy depends on the spatial uniformity of the integrating sphere, especially when the test sample has a light distribution different from that of the standard source. Therefore, spatial correction is needed to eliminate the effect caused by non-uniformity. To reduce the inconvenience of spatial correction while retaining measurement accuracy, a new type of working standard is designed for flexible and curved surface sources. Applying this new type of standard source, the measurement deviation due to different orientations is reduced by an order of magnitude compared with using a naked incandescent lamp as the standard source. (paper)
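The substitution principle underlying integrating-sphere photometry can be sketched as follows (a generic illustration, not the paper's formulation; the single scalar spatial correction factor and all numerical values are assumptions):

```python
def total_luminous_flux(phi_std, y_std, y_test, scf=1.0):
    """Substitution method: the test source's total flux follows from the
    standard source's known flux, the two detector signals, and a spatial
    correction factor scf compensating for differing intensity distributions."""
    return phi_std * (y_test / y_std) * scf

# Standard source: 1000 lm giving a detector signal of 2.0 (arbitrary units);
# test source gives 2.5, with an assumed spatial correction factor of 0.98.
flux = total_luminous_flux(1000.0, 2.0, 2.5, scf=0.98)
```

A working standard whose light distribution resembles the test samples (as proposed in the paper) pushes scf toward 1, which is why the orientation-dependent deviation shrinks.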
International Nuclear Information System (INIS)
Vigneron, Audrey
2015-01-01
The thesis addresses the numerical simulation of non-destructive testing (NDT) using eddy currents, and more precisely the computation of the electromagnetic fields induced by a transmitter sensor in a healthy part. This calculation is the first step of the modeling of a complete control process in the CIVA software platform developed at CEA LIST. Currently, the models integrated in CIVA are restricted to canonical (modal computation) or axially symmetric geometries. The need for more diverse and complex configurations requires the introduction of new numerical modeling tools. In practice the sensor may be composed of elements with different shapes and physical properties. The inspected parts are conductive and may contain dielectric or magnetic elements. Due to the cohabitation of different materials in one configuration, different regimes (static, quasi-static or dynamic) may coexist. Under the assumption of linear, isotropic and piecewise homogeneous material properties, the surface integral equation (SIE) approach reduces a volume-based problem to an equivalent surface-based problem. However, the usual SIE formulations for Maxwell's problem generally suffer from numerical noise in asymptotic situations, especially at low frequencies. The objective of this study is to determine a version that is stable for the range of physical parameters typical of eddy-current NDT applications. In this context, a block-iterative scheme based on a physical decomposition is proposed for the computation of primary fields. This scheme is accurate and well-conditioned. An asymptotic study of the integral Maxwell problem at low frequencies is also performed, establishing the eddy-current integral problem as an asymptotic case of the corresponding Maxwell problem. (author)
Surface integrity evaluation of brass CW614N after impact of acoustically
Czech Academy of Sciences Publication Activity Database
Lehocká, D.; Klich, Jiří; Foldyna, Josef; Hloch, Sergej; Hvizdoš, P.; Fides, M.; Botko, F.; Cárach, J.
2016-01-01
Roč. 149, č. 149 (2016), s. 236-244 E-ISSN 1877-7058. [International Conference on Manufacturing Engineering and Materials, ICMEM 2016. Nový Smokovec, 06.06.2016-10.06.2016] R&D Projects: GA MŠk(CZ) LO1406; GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : pulsating water jet * surface integrity * mass material removal * brass * nanoindentation Subject RIV: JQ - Machines ; Tools http://www.sciencedirect.com/science/article/pii/S1877705816311705
Energy Technology Data Exchange (ETDEWEB)
Demming, Stefanie; Buettgenbach, Stephanus [Institute for Microtechnology (IMT), Technische Universitaet Braunschweig, Alte Salzdahlumer Strasse 203, 38124 Braunschweig (Germany); Hahn, Anne; Barcikowski, Stephan [Nanotechnology Department, Laser Zentrum Hannover e.V. (LZH), Hollerithallee 8, 30419 Hannover (Germany); Edlich, Astrid; Franco-Lara, Ezequiel; Krull, Rainer [Institute of Biochemical Engineering (IBVT), Technische Universitaet Braunschweig, Gaussstrasse 17, 38106 Braunschweig (Germany)
2010-04-15
The convergence of microfluidics and nanocomposite materials and their in situ structuring leads to a higher level of integration within microsystems technology. Nanoparticles (Cu and Ag) produced via laser radiation were suspended in poly(dimethylsiloxane) (PDMS) to permanently modify the surface material. A microstructuring process was implemented which allows the incorporation of these nanomaterials globally, or partially at defined locations, within a microbioreactor (MBR) for the determination of their antiseptic and toxic effects on the growth of biomass. Partially structured PDMS with nanoparticle-PDMS composite. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
Bagci, Hakan
2010-08-01
A well-conditioned coupled set of surface (S) and volume (V) electric field integral equations (S-EFIE and V-EFIE) for analyzing wave interactions with densely discretized composite structures is presented. Whereas the V-EFIE operator is well-posed even when applied to densely discretized volumes, a classically formulated S-EFIE operator is ill-posed when applied to densely discretized surfaces. This renders the discretized coupled S-EFIE and V-EFIE system ill-conditioned, and its iterative solution inefficient or even impossible. The proposed scheme regularizes the coupled set of S-EFIE and V-EFIE using a Calderón multiplicative preconditioner (CMP)-based technique. The resulting scheme enables the efficient analysis of electromagnetic interactions with composite structures containing fine/subwavelength geometric features. Numerical examples demonstrate the efficiency of the proposed scheme. © 2006 IEEE.
Pandey, Dharmendra K.; Maity, Saroj; Bhattacharya, Bimal; Misra, Arundhati
2016-05-01
Accurate measurement of the surface soil moisture of bare and vegetation-covered soil over agricultural fields, and monitoring of its changes, is vital for managing and mitigating risk to agricultural crops; it provides the information needed to assess risk potential, implement risk-reduction strategies and deliver essential responses. The empirical and semi-empirical model-based soil moisture inversion approaches developed in the past are either sensor or region specific, vegetation-type specific or of limited validity range, and have limited scope to explain physical scattering processes. Hence, there is a need for more robust retrieval methods based on physical polarimetric radar backscatter models, which are sensor and location independent and valid over a wide range of soil properties. In the present study, the Integral Equation Model (IEM) and a Vector Radiative Transfer (VRT) model were used to simulate averaged backscatter coefficients for various soil moisture (dry, moist and wet soil), soil roughness (smooth to very rough) and crop conditions (low to high vegetation water content) over selected regions of the Gujarat state of India, and the results were compared with multi-temporal Radar Imaging Satellite-1 (RISAT-1) C-band Synthetic Aperture Radar (SAR) data in σ°HH and σ°HV polarizations, in sync with field-measured soil and crop conditions. High correlations were observed between RISAT-1 σ°HH and σ°HV and the model-simulated values based on field measurements, with the coefficient of determination R2 varying from 0.84 to 0.77 and RMSE varying from 0.94 dB to 2.1 dB for bare soil. For the winter wheat crop, R2 varied from 0.84 to 0.79 and RMSE from 0.87 dB to 1.34 dB, for vegetation water content values up to 3.4 kg/m2. Artificial Neural Network (ANN) methods were adopted for model-based soil moisture inversion. The training datasets for the NNs were
Stella, João Paulo Fragomeni; Oliveira, Andrea Becker; Nojima, Lincoln Issamu; Marquezan, Mariana
2015-01-01
OBJECTIVE: To assess four different chemical surface conditioning methods for ceramic material before bracket bonding, and their impact on shear bond strength and surface integrity at debonding. METHODS: Four experimental groups (n = 13) were set up according to the ceramic conditioning method: G1 = 37% phosphoric acid etching followed by silane application; G2 = 37% liquid phosphoric acid etching, no rinsing, followed by silane application; G3 = 10% hydrofluoric acid etching alone; and G4 = 10% hydrofluoric acid etching followed by silane application. After surface conditioning, metal brackets were bonded to porcelain by means of the Transbond XP system (3M Unitek). Samples were submitted to shear bond strength tests in a universal testing machine and the surfaces were later assessed with a microscope under 8× magnification. ANOVA/Tukey tests were performed to establish the difference between groups (α = 5%). RESULTS: The highest shear bond strength values were found in groups G3 and G4 (22.01 ± 2.15 MPa and 22.83 ± 3.32 MPa, respectively), followed by G1 (16.42 ± 3.61 MPa) and G2 (9.29 ± 1.95 MPa). As regards surface evaluation after bracket debonding, the use of liquid phosphoric acid followed by silane application (G2) produced the least damage to porcelain. When hydrofluoric acid and silane were applied, the risk of ceramic fracture increased. CONCLUSIONS: Acceptable levels of bond strength for clinical use were reached by all methods tested; however, liquid phosphoric acid etching followed by silane application (G2) resulted in the least damage to the ceramic surface. PMID:26352845
Integrated Modeling of Groundwater and Surface Water Interactions in a Manmade Wetland
Directory of Open Access Journals (Sweden)
Guobiao Huang; Gour-Tsyh Yeh
2012-01-01
Full Text Available A manmade pilot wetland in south Florida, the Everglades Nutrient Removal (ENR) project, was modeled with a physics-based integrated approach using WASH123D (Yeh et al. 2006). Storm water is routed into the treatment wetland for phosphorus removal by plant and sediment uptake. It overlies a highly permeable surficial groundwater aquifer. Strong surface water and groundwater interactions are a key component of the hydrologic processes. The site has extensive field measurement and monitoring tools that provide point-scale and distributed data on surface water levels, groundwater levels, and the physical range of hydraulic parameters and hydrologic fluxes. Previous hydrologic and hydrodynamic modeling studies have treated seepage losses empirically by some simple regression equations, and only surface water flows are modeled in detail. Several years of operational data are available and were used in model historical matching and validation. The validity of a diffusion wave approximation for two-dimensional overland flow (in a region with very flat topography) was also tested. The uniqueness of this modeling study is notable for (1) the point-scale and distributed comparison of model results with observed data; (2) model parameters based on available field test data; and (3) water flows in the study area that include two-dimensional overland flow, hydraulic structures/levees, three-dimensional subsurface flow and one-dimensional canal flow and their interactions. This study demonstrates the need for, and the utility of, a physics-based modeling approach for strong surface water and groundwater interactions.
Prolonged silicon carbide integrated circuit operation in Venus surface atmospheric conditions
Directory of Open Access Journals (Sweden)
Philip G. Neudeck
2016-12-01
Full Text Available The prolonged operation of semiconductor integrated circuits (ICs) needed for long-duration exploration of the surface of Venus has proven insurmountably challenging to date due to the ∼ 460 °C, ∼ 9.4 MPa caustic environment. Past and planned Venus landers have been limited to a few hours of surface operation, even when the IC electronics needed for basic lander operation are protected with heavily cumbersome pressure vessels and cooling measures. Here we demonstrate vastly longer (weeks) electrical operation of two silicon carbide (4H-SiC) junction field effect transistor (JFET) ring oscillator ICs tested with chips directly exposed (no cooling and no protective chip packaging) to a high-fidelity physical and chemical reproduction of Venus' surface atmosphere. This represents a more than 100-fold extension of demonstrated Venus environment electronics durability. With further technology maturation, such SiC IC electronics could drastically improve Venus lander designs and mission concepts, fundamentally enabling long-duration enhanced missions to the surface of Venus.
Full Coupling Between the Atmosphere, Surface, and Subsurface for Integrated Hydrologic Simulation
Davison, Jason Hamilton; Hwang, Hyoun-Tae; Sudicky, Edward A.; Mallia, Derek V.; Lin, John C.
2018-01-01
An ever increasing community of earth system modelers is incorporating new physical processes into numerical models. This trend is facilitated by advancements in computational resources, improvements in simulation skill, and the desire to build numerical simulators that represent the water cycle with greater fidelity. In this quest to develop a state-of-the-art water cycle model, we coupled HydroGeoSphere (HGS), a 3-D control-volume finite element surface and variably saturated subsurface flow model that includes evapotranspiration processes, to the Weather Research and Forecasting (WRF) Model, a 3-D finite difference nonhydrostatic mesoscale atmospheric model. The two-way coupled model, referred to as HGS-WRF, exchanges the actual evapotranspiration fluxes and soil saturations calculated by HGS to WRF; conversely, the potential evapotranspiration and precipitation fluxes from WRF are passed to HGS. The flexible HGS-WRF coupling method allows for unique meshes used by each model, while maintaining mass and energy conservation between the domains. Furthermore, the HGS-WRF coupling implements a subtime stepping algorithm to minimize computational expense. As a demonstration of HGS-WRF's capabilities, we applied it to the California Basin and found a strong connection between the depth to the groundwater table and the latent heat fluxes across the land surface.
Directory of Open Access Journals (Sweden)
Chien-Hung Huang
2015-01-01
Many proteins are known to be associated with cancer, yet their precise functional roles in disease pathogenesis often remain unclear. A strategy to gain a better understanding of the function of these proteins is to combine different types of proteomics data. In this study, we extended Aragues’s method by employing protein-protein interaction (PPI) data, domain-domain interaction (DDI) data, a weighted domain frequency score (DFS), and cancer linker degree (CLD) data to predict cancer proteins. Performance was benchmarked in three kinds of experiments: (I) using individual algorithms, (II) combining algorithms, and (III) combining algorithms of the same classification type. When compared with Aragues’s method, our proposed methods, that is, the machine learning algorithm and voting with the majority, are significantly superior in all seven performance measures. We demonstrated the accuracy of the proposed method on two independent datasets. The best algorithm achieves a hit ratio of 89.4% and 72.8% for the lung cancer dataset and the lung cancer microarray study, respectively. It is anticipated that the current research could help understand disease mechanisms and diagnosis.
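The "voting with the majority" combination of classifier outputs can be illustrated with a minimal sketch; the three classifiers' predictions below are invented for illustration.

```python
# Majority voting over several classifiers' label predictions.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list of label lists, one list per model."""
    n_samples = len(predictions_per_model[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions_per_model)
        voted.append(votes.most_common(1)[0][0])  # label with the most votes
    return voted

# Three hypothetical classifiers labelling five proteins (1 = cancer-related)
preds = [
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1],
]
print(majority_vote(preds))  # -> [1, 0, 1, 1, 0]
```

With an odd number of voters there are no ties; with an even number, `most_common` breaks ties by insertion order, which a real implementation would want to handle explicitly.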
Gonçalves-Araujo, Rafael; Rabe, Benjamin; Peeken, Ilka; Bracher, Astrid
2018-01-01
As consequences of global warming, sea-ice shrinking, permafrost thawing and changes in fresh water and terrestrial material export have already been reported in the Arctic environment. These processes impact light penetration and primary production. To reach a better understanding of the current status and to provide accurate forecasts, Arctic biogeochemical and physical parameters need to be extensively monitored. In this sense, bio-optical properties are useful to measure because optical instrumentation can be deployed on autonomous platforms, including satellites. This study characterizes the non-water absorbers and their coupling to hydrographic conditions in the poorly sampled surface waters of the central and eastern Arctic Ocean. Over the entire sampled area, colored dissolved organic matter (CDOM) dominates the light absorption in surface waters. The distribution of CDOM, phytoplankton and non-algal particle absorption reproduces the hydrographic variability in this region of the Arctic Ocean, which suggests a subdivision into five major bio-optical provinces: Laptev Sea Shelf, Laptev Sea, Central Arctic/Transpolar Drift, Beaufort Gyre and Eurasian/Nansen Basin. Evaluation of ocean color algorithms commonly applied in the Arctic Ocean shows that global and regionally tuned empirical algorithms provide poor chlorophyll-a (Chl-a) estimates. The semi-analytical algorithms Generalized Inherent Optical Property model (GIOP) and Garver-Siegel-Maritorena (GSM), on the other hand, provide robust estimates of Chl-a and absorption of colored matter. Applying GSM with modifications proposed for the western Arctic Ocean produced reliable information on the absorption by colored matter, and specifically by CDOM. These findings highlight that only semi-analytical ocean color algorithms are able to identify with low uncertainty the distribution of the different optical water constituents in these high CDOM absorbing waters. In addition, a clustering of the Arctic Ocean
Combined algorithms in nonlinear problems of magnetostatics
International Nuclear Information System (INIS)
Gregus, M.; Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.
1988-01-01
To solve boundary problems of magnetostatics in unbounded two- and three-dimensional regions, we construct combined algorithms based on a combination of the method of boundary integral equations with grid methods. We study the substantiation of the combined method for the nonlinear magnetostatic problem without preliminary discretization of the equations, and give some results on the convergence of the iterative processes that arise in nonlinear cases. We also discuss economical iterative processes and algorithms that solve boundary integral equations on certain surfaces. Finally, examples are given of numerical solutions of magnetostatic problems that arose when modelling the fields of electrophysical installations. 14 refs.; 2 figs.; 1 tab
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains two-dimensional precipitation and surface products from the JPSS Microwave Integrated Retrieval System (MIRS) using sensor data from the...
Fu, Youzhi; Gao, Hang; Wang, Xuanping; Guo, Dongming
2017-05-01
The integral impeller and blisk of an aero-engine are high-performance parts with complex structures, made of difficult-to-cut materials. The blade surfaces of the integral impeller and blisk are functional surfaces for power transmission, and their surface integrity has significant effects on the aerodynamic efficiency and service life of an aero-engine. Thus, it is indispensable to finish and strengthen the blades before use. This paper presents a comprehensive literature review of studies on finishing and strengthening technologies for the impeller and blisk of aero-engines. The review includes independent and integrated finishing and strengthening technologies and discusses advanced rotational abrasive flow machining with back-pressure used for finishing the integral impeller and blisk. A brief assessment of future research problems and directions is also presented.
HESS Opinions "Integration of groundwater and surface water research: an interdisciplinary problem?"
Barthel, R.
2014-07-01
Today there is a great consensus that water resource research needs to become more holistic, integrating perspectives of a large variety of disciplines. Groundwater and surface water (hereafter: GW and SW) are typically identified as different compartments of the hydrological cycle and were traditionally often studied and managed separately. However, despite this separation, these respective fields of study are usually not considered to be different disciplines. They are often seen as different specializations of hydrology with a different focus yet similar theory, concepts, and methodology. The present article discusses how this notion may form a substantial obstacle in the further integration of GW and SW research and management. The article focuses on the regional scale (areas of approximately 10³ to 10⁶ km²), which is identified as the scale where integration is most needed, but ironically where the least amount of fully integrated research seems to be undertaken. The state of research on integrating GW and SW is briefly reviewed and the most essential differences between GW hydrology (or hydrogeology, geohydrology) and SW hydrology are presented. Groundwater recharge and baseflow are used as examples to illustrate different perspectives on similar phenomena that can cause severe misunderstandings and errors in the conceptualization of integration schemes. The fact that integration of GW and SW research on the regional scale necessarily must move beyond the hydrological aspects, by collaborating with the social sciences and increasing the interaction between science and society in general, is also discussed. The typical elements of an ideal interdisciplinary workflow are presented and their relevance with respect to the integration of GW and SW is discussed. The overall conclusions are that GW and SW hydrology study rather different objects of interest, using different types of observation, working on different problem settings
Directory of Open Access Journals (Sweden)
Mircea FULEA
2009-01-01
In an evolving, highly turbulent and uncertain socio-economic environment, organizations must consider strategies of systematic and continuous integration of innovation within their business systems, as a fundamental condition for sustainable development. Adequate methodologies are required in this respect. A mature framework for integrating innovative problem solving approaches within business process improvement methodologies is proposed in this paper. It considers a TRIZ-centred algorithm in the improvement phase of the DMAIC methodology. The new tool is called enhanced sigma-TRIZ. A case study reveals the practical application of the proposed methodology. The integration of enhanced sigma-TRIZ within a knowledge management software platform (KMSP) is further described. Specific developments to support processes of knowledge creation, knowledge storage and retrieval, knowledge transfer and knowledge application in a friendly and effective way within the KMSP are also highlighted.
Zhu, Ying; Herbert, John M.
2018-01-01
The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
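A self-consistent modified-midpoint propagation of a density matrix, in the spirit of the predictor/corrector schemes described above, might look like the following sketch. The 2x2 density-dependent "Fock" matrix is a toy stand-in for the real TDKS Hamiltonian, and the corrector count is fixed rather than converged to a tolerance.

```python
# Predictor/corrector midpoint propagation of a density matrix P:
# predict F from the current density, then iterate a corrector that rebuilds
# F at the midpoint density before the unitary step.
import numpy as np

def fock(P):
    # toy density-dependent Hermitian "Fock" matrix (illustrative only)
    return np.array([[0.0, 0.1], [0.1, 1.0]]) + 0.05 * P.real

def propagator(F, dt):
    # U = exp(-i F dt) via eigendecomposition of the Hermitian F
    w, V = np.linalg.eigh(F)
    return V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T

def step(P, dt, n_corr=3):
    F_mid = fock(P)                          # predictor: F at current density
    for _ in range(n_corr):                  # corrector: self-consistent loop
        U = propagator(F_mid, dt)
        P_new = U @ P @ U.conj().T           # unitary propagation of the density
        F_mid = fock(0.5 * (P + P_new))      # rebuild F at the midpoint density
    return P_new

P = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # initial density matrix
for _ in range(100):
    P = step(P, 0.05)
print(round(np.trace(P).real, 6))  # trace (electron number) is conserved
```

Because each step is exactly unitary, the trace and idempotency of the density matrix are preserved to machine precision regardless of the corrector count; a real implementation would also monitor the change in F between corrector passes to detect divergence, as described above.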
Das, Anshuman; Patel, S. K.; Sateesh Kumar, Ch.; Biswal, B. B.
2018-03-01
Recent technological developments are exerting immense pressure on the production domain. Fabrication industries seek to reduce the cost of cutting materials, enhance the quality of machined parts, and test different tool materials that can be made versatile for cutting difficult-to-machine materials. High-speed machining has become a domain of paramount importance in mechanical engineering. In this study, the variation of surface integrity parameters of hardened AISI 4340 alloy steel was analyzed. Surface integrity parameters such as surface roughness, micro-hardness, machined surface morphology and the white layer of hardened AISI 4340 alloy steel were compared for coated and uncoated cermet inserts under dry cutting conditions. From the results, it was deduced that the coated insert outperformed the uncoated one in terms of the different surface integrity characteristics.
Directory of Open Access Journals (Sweden)
Taochang Li
2014-01-01
Automatic steering control is the key factor and essential condition in realizing the automatic navigation control of agricultural vehicles. In order to obtain satisfactory steering control performance, an adaptive sliding mode control method based on a nonlinear integral sliding surface is proposed in this paper for agricultural vehicle steering control. First, the vehicle steering system is modeled as a second-order mathematical model; the system uncertainties and unmodeled dynamics, as well as the external disturbances, are regarded as equivalent disturbances satisfying a certain boundary. Second, a transient process of the desired system response is constructed in each navigation control period. Based on the transient process, a nonlinear integral sliding surface is designed. Then the corresponding sliding mode control law is proposed to guarantee fast response characteristics with no overshoot in the closed-loop steering control system. Meanwhile, the switching gain of the sliding mode control is adaptively adjusted to alleviate control input chattering by using the fuzzy control method. Finally, the effectiveness and superiority of the proposed method are verified by a series of simulations and actual steering control experiments.
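The sliding-mode idea can be sketched on a simplified second-order steering model. Note the simplifications relative to the paper: a plain linear sliding surface stands in for the nonlinear integral surface, and a crude proportional gain adaptation stands in for the fuzzy adjustment. All plant parameters are invented for illustration.

```python
# Sliding-mode steering control sketch: drive s = e' + lam*e to zero,
# with a smoothed switching term (tanh) to reduce chattering and a
# simple online increase of the switching gain while off the surface.
import math

def simulate(theta_ref=0.2, dt=0.001, T=2.0):
    theta, omega = 0.0, 0.0              # steering angle [rad] and rate [rad/s]
    a, b = 5.0, 10.0                     # nominal plant: theta'' = -a*omega + b*u + d
    lam = 8.0                            # sliding-surface slope
    k = 2.0                              # switching gain (adapted online)
    t = 0.0
    for _ in range(int(T / dt)):
        e = theta - theta_ref
        s = omega + lam * e              # sliding surface s = e' + lam*e
        k += 0.5 * abs(s) * dt           # crude gain adaptation (fuzzy in the paper)
        # equivalent control plus smoothed switching term
        u = (a * omega - lam * omega - k * math.tanh(s / 0.01)) / b
        d = 0.3 * math.sin(5.0 * t)      # bounded matched disturbance
        omega += (-a * omega + b * u + d) * dt   # Euler integration of the plant
        theta += omega * dt
        t += dt
    return theta

print(round(simulate(), 3))
```

As long as the switching gain exceeds the disturbance bound, the state reaches the surface in finite time and the tracking error then decays with the surface time constant 1/lam, so the final angle sits within the tanh boundary layer of the reference.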
In traditional watershed delineation and topographic modeling, surface depressions are generally treated as spurious features and simply removed from a digital elevation model (DEM) to enforce flow continuity of water across the topographic surface to the watershed outlets. In re...
National Research Council Canada - National Science Library
Ching, H. K; Liu, C. T; Yen, S. C
2004-01-01
.... For the linear analysis, material compressibility was modeled with Poisson's ratio varying from 0.48 to 0.4999. In addition, with the presence of the crack surface pressure, the J-integral was modified by including an additional line integral...
Azarova, VV; Dmitriev, VG; Lokhov, YN; Malitskii, KN
The differential and integral light scattering by dielectric surfaces is studied theoretically, taking a thin near-surface defect layer into account. The expressions for the intensities of differential and total integral scattering are found by the Green function method. Conditions are found under
Machine integrated optical measurement of honed surfaces in presence of cooling lubricant
International Nuclear Information System (INIS)
Schmitt, R; Koenig, N; Zheng, H
2011-01-01
The measurement of honed surfaces is one of the most important tasks in tribology. Although many established techniques exist for texture characterization, such as SEM, tactile stylus or white-light interferometry, none of them is suited for a machine integrated measurement. Harsh conditions such as the presence of cooling lubricant or vibrations prohibit the use of commercial sensors inside a honing machine. Instead, machined engine blocks need time-consuming cleaning and preparation while taken out of the production line for inspection. A full inspection of all produced parts is hardly possible this way. Within this paper, an approach for a machine-integrated measurement is presented, which makes use of optical sensors for texture profiling. The cooling lubricant here serves as immersion medium. The results of test measurements with a chromatic-confocal sensor and a fiber-optical low-coherence interferometer show the potential of both measuring principles for our approach. Cooling lubricant temperature and flow, scanning speed and measurement frequency have been varied in the tests. The sensor with best performance will later be chosen for machine integration.
Scheliga, Bernhard; Tetzlaff, Doerthe; Nuetzmann, Gunnar; Soulsby, Chris
2016-04-01
Groundwater-surface water dynamics play an important role in runoff generation and the hydrologic connectivity between hillslopes and streams. Here, we present findings from a suite of integrated, empirical approaches to increase our understanding of groundwater-surface water interlinkages in a 3.2 km² experimental catchment in the Scottish Highlands. The montane catchment is mainly underlain by granite and has extensive (70%) cover of glacial drift deposits which are up to 40 m deep and form the main aquifer in the catchment. Flat valley bottom areas fringe the stream channel and are characterised by peaty soils (0.5-4 m deep) which cover about 10% of the catchment and receive drainage from upslope areas. The transition between the hillslopes and riparian zone forms a critical interface for groundwater-surface water interactions that controls both the dynamics of riparian saturation and stream flow generation. We nested observations using wells to assess the groundwater - surface water transition, LiDAR surveys to explore the influence of micro-topography on shallow groundwater efflux and riparian wells to examine the magnitude and flux rates of deeper groundwater sources. We also used electrical resistivity surveys to assess the architecture and storage properties of drift aquifers. Finally, we used isotopic tracers to differentiate recharge sources and associated residence times as well as quantifying how groundwater dynamics affect stream flow. These new data have provided a novel conceptual framework for local groundwater - surface water exchange that is informing the development of new deterministic models for the site.
Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.
2016-05-01
We introduce a new algorithm for joint inversion of body wave and surface wave data to get better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both double-difference tomography method using body wave arrival times and ambient noise tomography method with Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.
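The essence of such a joint inversion, stacking weighted body-wave and surface-wave sensitivity kernels into one damped least-squares system so that both data types constrain the same model, can be sketched with invented matrices (the kernels, weights, and layer model below are illustrative, not the paper's actual operators):

```python
# Joint inversion sketch: stack two weighted data sets into one system
# and solve the damped normal equations (G^T G + damp^2 I) m = G^T d.
import numpy as np

rng = np.random.default_rng(0)
m_true = np.array([3.5, 3.6, 4.0, 4.4])       # "true" Vs in four layers [km/s]

G_body = rng.normal(size=(20, 4))              # body-wave sensitivity kernel
G_surf = rng.normal(size=(10, 4))              # surface-wave kernel
d_body = G_body @ m_true                       # synthetic body-wave data
d_surf = G_surf @ m_true                       # synthetic surface-wave data

w_b, w_s, damp = 1.0, 2.0, 0.1                 # relative data weights, damping
G = np.vstack([w_b * G_body, w_s * G_surf])    # one stacked system
d = np.concatenate([w_b * d_body, w_s * d_surf])
m_est = np.linalg.solve(G.T @ G + damp**2 * np.eye(4), G.T @ d)
print(np.round(m_est, 2))
```

The relative weights control how strongly each data type pulls on the model, which is how the complementary depth sensitivities described above are balanced; in practice they are tuned alongside the damping.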
Aponso, Bimal; Coppenbarger, Richard A.; Jung, Yoon; Quon, Leighton; Lohr, Gary; O’Connor, Neil; Engelland, Shawn
2015-01-01
NASA's Aeronautics Research Mission Directorate (ARMD) collaborates with the FAA and industry to provide concepts and technologies that enhance the transition to the next-generation air-traffic management system (NextGen). To facilitate this collaboration, ARMD has a series of Airspace Technology Demonstration (ATD) sub-projects that develop, demonstrate, and transition NASA technologies and concepts for implementation in the National Airspace System (NAS). The second of these sub-projects, ATD-2, is focused on the potential benefits to NAS stakeholders of integrated arrival, departure, surface (IADS) operations. To determine the project objectives and assess the benefits of a potential solution, NASA surveyed NAS stakeholders to understand the existing issues in arrival, departure, and surface operations, and the perceived benefits of better integrating these operations. NASA surveyed a broad cross-section of stakeholders representing the airlines, airports, air-navigation service providers, and industry providers of NAS tools. The survey indicated that improving the predictability of flight times (schedules) could improve efficiency in arrival, departure, and surface operations. Stakeholders also mentioned the need for better strategic and tactical information on traffic constraints as well as better information sharing and a coupled collaborative planning process that allows stakeholders to coordinate IADS operations. To assess the impact of a potential solution, NASA sketched an initial departure scheduling concept and assessed its viability by surveying a select group of stakeholders a second time. The objective of the departure scheduler was to enable flights to move continuously from gate to cruise with minimal interruption in a busy metroplex airspace environment, using strategic and tactical scheduling enhanced by collaborative planning between airlines and service providers. The stakeholders agreed that this departure concept could improve schedule
Directory of Open Access Journals (Sweden)
A. J. Hind
2011-01-01
Dimethyl sulphide (DMS) is an important precursor of cloud condensation nuclei (CCN), particularly in the remote marine atmosphere. The SE Pacific is consistently covered with a persistent stratocumulus layer that increases the albedo over this large area. It is not certain whether the source of CCN to these clouds is natural and oceanic or anthropogenic and terrestrial. This unknown currently limits our ability to reliably model either the cloud behaviour or the oceanic heat budget of the region. In order to better constrain the marine source of CCN, it is necessary to have an improved understanding of the sea-air flux of DMS. Of the factors that govern the magnitude of this flux, the greatest unknown is the surface seawater DMS concentration. In the study area, there is a paucity of such data, although previous measurements suggest that the concentration can be substantially variable. In order to overcome such data scarcity, a number of climatologies and algorithms have been devised in the last decade to predict seawater DMS. Here we test some of these in the SE Pacific by comparing predictions with measurements of surface seawater made during the Vamos Ocean-Cloud-Atmosphere-Land Study Regional Experiment (VOCALS-REx) in October and November of 2008. We conclude that none of the existing algorithms reproduces local variability in seawater DMS in this region very well. From these findings, we recommend the best algorithm choice for the SE Pacific and suggest lines of investigation for future work.
Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme
Huang, Melin; Huang, Bormin; Huang, Allen H.
2015-10-01
The land-surface model (LSM) is one physics process in the weather research and forecast (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate the computation of this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core processor design suited to efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves the performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core with an Intel Xeon E5-2670, respectively.
NASA Research on an Integrated Concept for Airport Surface Operations Management
Gupta, Gautam
2012-01-01
Surface operations at airports in the US are based on tactical operations, where departure aircraft primarily queue up and wait at the departure runways. There have been attempts to address the resulting inefficiencies with both strategic and tactical tools for metering departure aircraft. This presentation gives an overview of the Spot And Runway Departure Advisor with Collaborative Decision Making (SARDA-CDM): an integrated strategic and tactical system for improving surface operations by metering departure aircraft. SARDA-CDM is the augmentation of ground and local controller advisories through sharing of flight movement and related operations information between airport operators, flight operators and air traffic control at the airport. The goal is to enhance the efficiency of airport surface operations by exchanging information between air traffic control and airline operators, while minimizing adverse effects on stakeholders and passengers. The presentation motivates the need for departure metering and provides a brief background on the previous work on SARDA. The concept of operations for SARDA-CDM is then described, followed by preliminary results from testing the concept in a real-time automated simulation environment. Results indicate benefits such as reductions in taxiing delay and fuel consumption. Further, the preliminary implementation of SARDA-CDM appears robust to a two-minute delay in gate push-back times.
International Nuclear Information System (INIS)
Kumar, P.; Martin, H.; Jiang, X.
2016-01-01
Non-destructive testing and online measurement of surface features are pressing demands in manufacturing. Thus optical techniques are gaining importance for characterization of complex engineering surfaces. Harnessing integrated optics for miniaturization of interferometry systems onto a silicon wafer and incorporating a compact optical probe would enable the development of a handheld sensor for embedded metrology applications. In this work, we present the progress in the development of a hybrid photonics based metrology sensor device for online surface profile measurements. The measurement principle along with test and measurement results of individual components has been presented. For non-contact measurement, a spectrally encoded lateral scanning probe based on the laser scanning microscopy has been developed to provide fast measurement with lateral resolution limited to the diffraction limit. The probe demonstrates a lateral resolution of ∼3.6 μm while high axial resolution (sub-nanometre) is inherently achieved by interferometry. Further the performance of the hybrid tuneable laser and the scanning probe was evaluated by measuring a standard step height sample of 100 nm.
Ajami, Hoori; McCabe, Matthew; Evans, Jason P.; Stisen, Simon
2014-01-01
is to minimize the impact of initialization while using the smallest spin-up time possible. In this study, multicriteria analysis was performed to assess the spin-up behavior of the ParFlow.CLM integrated groundwater-surface water-land surface model over a 208 km
Directory of Open Access Journals (Sweden)
Popov V. M.
2011-12-01
A method is proposed for visualization of the integrated circuit (IC) surface temperature by means of a liquid crystal film deposited from solution onto its surface. The boundaries of local regions represent isotherms with corresponding phase transitions. From the positions of the isotherms and the power consumed by the IC, the thermal resistances between the crystal and the environment are determined.
Anti Rohumaa; Toni Antikainen; Christopher G. Hunt; Charles R. Frihart; Mark Hughes
2016-01-01
Wood material surface properties play an important role in adhesive bond formation and performance. In the present study, a test method was developed to evaluate the integrity of the wood surface, and the results were used to understand bond performance. Materials used were rotary cut birch (Betula pendula Roth) veneers, produced from logs soaked at 20 or 70 °C prior...
Yang, Qidong; Zuo, Hongchao; Li, Weidong
2016-01-01
Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing methods, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved simulations of soil moisture and latent heat flux in all tests; differences between simulated results and observational data are clearly reduced, but tests adopting the optimized parameters cannot simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters vary only to a small degree, but the variation range of vegetation parameters is large.
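A minimal PSO sketch of the kind of parameter calibration described above follows. The two-parameter misfit function is a toy stand-in for the SHAW model-observation comparison, and the parameter names and bounds are invented for illustration.

```python
# Particle swarm optimization: each particle tracks its personal best,
# and all particles are attracted toward the swarm's global best.
import random

def misfit(params):
    # toy "model vs. observation" objective with optimum at (0.3, 25.0)
    porosity, lai_scale = params
    return (porosity - 0.3) ** 2 + ((lai_scale - 25.0) / 50.0) ** 2

random.seed(1)
bounds = [(0.0, 1.0), (0.0, 50.0)]             # search box per parameter
n, iters, w, c1, c2 = 20, 200, 0.7, 1.5, 1.5   # swarm size, inertia, pulls
pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]                    # personal bests
gbest = min(pbest, key=misfit)                 # global best

for _ in range(iters):
    for i in range(n):
        for j in range(2):
            vel[i][j] = (w * vel[i][j]
                         + c1 * random.random() * (pbest[i][j] - pos[i][j])
                         + c2 * random.random() * (gbest[j] - pos[i][j]))
            # move and clip to the parameter bounds
            pos[i][j] = min(max(pos[i][j] + vel[i][j], bounds[j][0]), bounds[j][1])
        if misfit(pos[i]) < misfit(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=misfit)

print([round(v, 2) for v in gbest])
```

On this smooth objective the swarm converges toward the optimum near (0.3, 25.0); calibrating a real land-surface model replaces `misfit` with a full model run against observations, which is why each PSO iteration is expensive in practice.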
Djurabekova, Flyura; Pohjonen, Aarne; Nordlund, Kai
2011-01-01
The effect of electric fields on metal surfaces is fairly well studied, resulting in numerous analytical models developed to understand the mechanisms of ionization of surface atoms observed at very high electric fields, as well as the general behavior of a metal surface in this condition. However, the derivation of analytical models does not explicitly include the structural properties of metals, missing the link between the instantaneous effects owing to the applied field and the consequent response observed in the metal surface as a result of an extended application of an electric field. In the present work, we have developed a concurrent electrodynamic–molecular dynamics model for the dynamical simulation of an electric-field effect and subsequent modification of a metal surface in the framework of an atomistic molecular dynamics (MD) approach. The partial charge induced on the surface atoms by the electric field is assessed by applying the classical Gauss law. The electric forces acting on the partially...
Directory of Open Access Journals (Sweden)
W. C. Liu
2018-04-01
High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations where the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry interacts with stereo image matching to create robust and spatially well distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques. The results and findings from this
DEFF Research Database (Denmark)
Belluco, Walter; De Chiffre, Leonardo
2002-01-01
This paper presents an investigation on the effect of new formulations of vegetable oils on surface integrity and part accuracy in reaming and tapping operations with AISI 316L stainless steel. Surface integrity was assessed with measurements of roughness, microhardness, and using metallographic techniques, while part accuracy was measured on a coordinate measuring machine. A widely diffused commercial mineral oil was used as reference for all measurements. Cutting fluid was found to have a significant effect on surface integrity and thickness of the strain hardened layer in the sub-surface, as well as part accuracy. Cutting fluids based on vegetable oils showed comparable or better performance than mineral oils. © 2002 Published by Elsevier Science Ltd.
International Nuclear Information System (INIS)
Kim, Jae Eum
2014-01-01
DC electrical outputs of a piezoelectric vibration energy harvester by nonlinear rectifying circuitry can hardly be obtained either by any mathematical models developed so far or by finite element analysis. To address the issue, this work used an equivalent electrical circuit model and newly developed an algorithm to efficiently identify relevant circuit parameters of arbitrarily-shaped cantilevered piezoelectric energy harvesters. The developed algorithm was then realized as a dedicated software module by adopting ANSYS finite element analysis software for the parameters identification and the Tcl/Tk programming language for a graphical user interface and linkage with ANSYS. For verifications, various AC electrical outputs by the developed software were compared with those by traditional finite element analysis. DC electrical outputs through rectifying circuitry were also examined for varying values of the smoothing capacitance and load resistance.
Directory of Open Access Journals (Sweden)
Syuan-Yi Chen
2016-01-01
This study developed an integrated energy management/gear-shifting strategy by using a bacterial foraging algorithm (BFA) in an engine/motor hybrid powertrain with electric continuously variable transmission. A control-oriented vehicle model was constructed on the Matlab/Simulink platform for further integration with the developed control strategies. A baseline control strategy with four modes was developed for comparison with the proposed BFA. The BFA was used with five bacterial populations to search for the optimal gear ratio and power-split ratio that minimize the cost, namely the equivalent fuel consumption. Three main procedures were followed: chemotaxis, reproduction, and elimination-dispersal. After the vehicle model was integrated with the vehicle control unit with the BFA, two driving patterns, the New European Driving Cycle and the Federal Test Procedure, were used to evaluate the improvements in energy consumption and equivalent fuel consumption compared with the baseline. The results show improvements of 18.35-21.77% and 8.76-13.81% for the optimal energy management and the integrated optimization on the first and second driving cycles, respectively. Real-time platform designs and vehicle integration for a dynamometer test will be investigated in the future.
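The three BFA procedures named in the abstract (chemotaxis, reproduction, and elimination-dispersal) can be sketched roughly as follows. This is a generic minimal BFA, not the paper's implementation: the quadratic stand-in cost, the parameter values, and all names are illustrative assumptions in place of the actual equivalent-fuel-consumption model and the gear-ratio/power-split search space.

```python
import numpy as np

def bfa_minimize(cost, dim, bounds, n_bact=20, n_chem=30, n_repro=4,
                 n_elim=2, p_elim=0.25, step=0.1, seed=0):
    """Minimal bacterial foraging optimisation sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_bact, dim))
    best_x, best_f = None, np.inf
    for _ in range(n_elim):                      # elimination-dispersal loop
        for _ in range(n_repro):                 # reproduction loop
            health = np.zeros(n_bact)
            for _ in range(n_chem):              # chemotaxis loop
                for i in range(n_bact):
                    f0 = cost(X[i])
                    d = rng.normal(size=dim)
                    d /= np.linalg.norm(d)       # tumble: pick a random direction
                    x_new = np.clip(X[i] + step * d, lo, hi)
                    if cost(x_new) < f0:         # swim only if the move improves
                        X[i] = x_new
                    f = cost(X[i])
                    health[i] += f               # accumulated cost = "health"
                    if f < best_f:
                        best_f, best_x = f, X[i].copy()
            order = np.argsort(health)           # reproduction: healthiest half
            X = np.concatenate([X[order[:n_bact // 2]]] * 2)  # splits, rest dies
        disperse = rng.random(n_bact) < p_elim   # random relocation of a few
        X[disperse] = rng.uniform(lo, hi, (int(disperse.sum()), dim))
    return best_x, best_f
```

For example, `bfa_minimize(lambda v: float(np.sum(v ** 2)), dim=2, bounds=(-5.0, 5.0))` drives the population toward the origin of the toy cost surface.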
Xie, Yanan; Zhou, Mingliang; Pan, Dengke
2017-10-01
The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and the horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm retrieves the near-surface rainfall rate with the modified regression empirical algorithm rather than a linear regression formula. The number of empirical parameters in the weighted integral work is halved, and a smaller average relative error is achieved while the rainfall rate is less than 100 mm/h. Therefore, the algorithm proposed in this paper can obtain high-precision rainfall information.
Crookshank, Meghan C; Beek, Maarten; Singh, Devin; Schemitsch, Emil H; Whyne, Cari M
2013-07-01
Accurate alignment of femoral shaft fractures treated with intramedullary nailing remains a challenge for orthopaedic surgeons. The aim of this study is to develop and validate a cone-beam CT-based, semi-automated algorithm to quantify the malalignment in six degrees of freedom (6DOF) using a surface matching and principal axes-based approach. Complex comminuted diaphyseal fractures were created in nine cadaveric femora and cone-beam CT images were acquired (27 cases total). Scans were cropped and segmented using intensity-based thresholding, producing superior, inferior and comminution volumes. Cylinders were fit to estimate the long axes of the superior and inferior fragments. The angle and distance between the two cylindrical axes were calculated to determine flexion/extension and varus/valgus angulation and medial/lateral and anterior/posterior translations, respectively. Both surfaces were unwrapped about the cylindrical axes. Three methods of matching the unwrapped surface for determination of periaxial rotation were compared based on minimizing the distance between features. The calculated corrections were compared to the input malalignment conditions. All 6DOF were calculated to within current clinical tolerances for all but two cases. This algorithm yielded accurate quantification of malalignment of femoral shaft fractures for fracture gaps up to 60 mm, based on a single CBCT image of the fractured limb. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
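The angulation and translation step described above reduces to a small geometric computation (function and variable names here are hypothetical, not from the paper): given the axes of the cylinders fitted to the superior and inferior fragments, the angle between the direction vectors gives the combined angulation, and the perpendicular offset between the axes gives the translation.

```python
import numpy as np

def axis_malalignment(p_sup, d_sup, p_inf, d_inf):
    """Angle (degrees) and perpendicular offset between two fragment axes.

    p_*: a point on each fitted cylinder axis; d_*: the axis direction vectors.
    """
    d_sup = np.asarray(d_sup, float) / np.linalg.norm(d_sup)
    d_inf = np.asarray(d_inf, float) / np.linalg.norm(d_inf)
    # combined angulation: angle between the two axis directions
    # (abs() ignores the arbitrary sign of a fitted axis direction)
    cosang = np.clip(abs(d_sup @ d_inf), 0.0, 1.0)
    angle_deg = np.degrees(np.arccos(cosang))
    # translation: distance from the inferior axis point to the superior axis
    v = np.asarray(p_inf, float) - np.asarray(p_sup, float)
    offset = np.linalg.norm(v - (v @ d_sup) * d_sup)
    return angle_deg, offset
```

Decomposing the total angulation into flexion/extension and varus/valgus components, and resolving periaxial rotation, additionally requires the anatomical reference planes and the surface-matching step the abstract describes.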
Integrated-Optics Components Utilizing Long-Range Surface Plasmon Polaritons
DEFF Research Database (Denmark)
Boltasseva, Alexandra
2004-01-01
This thesis describes a new class of components for integrated optics, based on the propagation of long-range surface plasmon polaritons (LR-SPPs) along metal stripes embedded in a dielectric. These novel components can provide guiding of light as well as coupling and splitting from/into a number of channels with good performance. Guiding of LR-SPPs along nm-thin and µm-wide gold stripes embedded in polymer is investigated in the wavelength range of 1250 - 1650 nm. LR-SPP guiding properties, such as the propagation loss and mode field diameter, are studied for different stripe widths and thicknesses... with experimental results is obtained. The interaction of LR-SPPs with photonic crystals (PCs) is also studied. The PC structures are formed by periodic arrays of gold bumps that are arranged in a triangular lattice and placed symmetrically on both sides of a thin gold film. The LR-SPP transmission through...
Integration of thin film giant magnetoimpedance sensor and surface acoustic wave transponder
Li, Bodong; Salem, Nedime Pelin M. H.; Giouroudi, Ioanna; Kosel, Jürgen
2012-03-09
Passive and remote sensing technology has many potential applications in implantable devices, automation, or structural monitoring. In this paper, a tri-layer thin film giant magnetoimpedance (GMI) sensor with a maximum sensitivity of 16%/Oe and a GMI ratio of 44% was combined with a two-port surface acoustic wave (SAW) transponder on a common substrate using standard microfabrication technology, resulting in a fully integrated sensor for passive and remote operation. The implementation of the two devices has been optimized by on-chip matching circuits. The measurement results clearly show a magnetic field response at the input port of the SAW transponder that reflects the impedance change of the GMI sensor.
Forecasting in an integrated surface water-ground water system: The Big Cypress Basin, South Florida
Butts, M. B.; Feng, K.; Klinting, A.; Stewart, K.; Nath, A.; Manning, P.; Hazlett, T.; Jacobsen, T.
2009-04-01
The South Florida Water Management District (SFWMD) manages and protects the state's water resources on behalf of 7.5 million South Floridians and is the lead agency in restoring America's Everglades - the largest environmental restoration project in US history. Many of the projects to restore and protect the Everglades ecosystem are part of the Comprehensive Everglades Restoration Plan (CERP). The region has a unique hydrological regime, with close connection between surface water and groundwater, and a complex managed drainage network with many structures. Added to the physical complexity are the conflicting needs of the ecosystem for protection and restoration, versus the substantial urban development with the accompanying water supply, water quality and flood control issues. In this paper a novel forecasting and real-time modelling system is presented for the Big Cypress Basin. The Big Cypress Basin includes 272 km of primary canals and 46 water control structures throughout the area that provide limited levels of flood protection, as well as water supply and environmental quality management. This system is linked to the South Florida Water Management District's extensive real-time (SCADA) data monitoring and collection system. Novel aspects of this system include the use of a fully distributed and integrated modelling approach and a new filter-based updating approach for accurately forecasting river levels. Because of the interaction between surface water and groundwater, a fully integrated forecast modelling approach is required. Indeed, in the results for Tropical Storm Fay in 2008, the groundwater levels show an extremely rapid response to heavy rainfall. Analysis of this storm also shows that updating levels in the river system can have a direct impact on groundwater levels.
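The abstract does not specify the District's filter, but the general flavour of level updating can be sketched with a simple error-correction scheme (the decay constant, the AR(1)-style error assumption, and all names are assumptions for illustration): measure the model error against the latest gauge observation at the forecast start and phase it out over the forecast horizon.

```python
def update_forecast(model_levels, observed_start_level, decay=0.7):
    """Blend the latest observed river level into a model forecast.

    The error between observation and model at forecast time is assumed to
    fade geometrically over the horizon (an AR(1)-style error model).
    """
    err = observed_start_level - model_levels[0]
    return [level + err * decay ** k for k, level in enumerate(model_levels)]
```

For example, with a model forecast of [2.0, 2.1, 2.3] m and an observed level of 2.2 m, the corrected forecast starts at the observation and relaxes back toward the raw model values.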
Hydrology of prairie wetlands: Understanding the integrated surface-water and groundwater processes
Hayashi, Masaki; van der Kamp, Garth; Rosenberry, Donald O.
2016-01-01
Wetland managers and policy makers need to make decisions based on a sound scientific understanding of hydrological and ecological functions of wetlands. This article presents an overview of the hydrology of prairie wetlands intended for managers, policy makers, and researchers new to this field (e.g., graduate students), and a quantitative conceptual framework for understanding the hydrological functions of prairie wetlands and their responses to changes in climate and land use. The existence of prairie wetlands in the semi-arid environment of the Prairie-Pothole Region (PPR) depends on the lateral inputs of runoff water from their catchments because mean annual potential evaporation exceeds precipitation in the PPR. Therefore, it is critically important to consider wetlands and catchments as highly integrated hydrological units. The water balance of individual wetlands is strongly influenced by runoff from the catchment and the exchange of groundwater between the central pond and its moist margin. Land-use practices in the catchment strongly affect runoff and hence the water balance. Surface and subsurface storage and connectivity among individual wetlands control the diversity of pond permanence within a wetland complex, resulting in a variety of eco-hydrological functionalities necessary for maintaining the integrity of prairie-wetland ecosystems.
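The dependence on lateral runoff can be made concrete with a back-of-the-envelope annual water balance (the numbers and the runoff coefficient below are illustrative assumptions, and groundwater exchange is ignored): because potential evaporation exceeds precipitation, the direct P − E term on the pond is negative, so the pond gains water over a year only if catchment runoff makes up the deficit.

```python
def annual_pond_balance(precip_m, pot_evap_m, pond_area_m2,
                        catchment_area_m2, runoff_coeff):
    """Net annual change in pond volume (m^3), ignoring groundwater exchange."""
    direct = (precip_m - pot_evap_m) * pond_area_m2        # negative in the PPR
    lateral = runoff_coeff * precip_m * catchment_area_m2  # catchment runoff
    return direct + lateral

# e.g. P = 0.4 m, PET = 0.7 m over a 1 ha pond: the pond loses water on its
# own, but a modest runoff fraction from a 10x larger catchment offsets the loss
```

The same arithmetic shows why land-use changes that alter the runoff coefficient feed directly through to pond permanence.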
Novel in situ mechanical testers to enable integrated metal surface micro-machines.
Energy Technology Data Exchange (ETDEWEB)
Follstaedt, David Martin; de Boer, Maarten Pieter; Kotula, Paul Gabriel; Hearne, Sean Joseph; Foiles, Stephen Martin; Buchheit, Thomas Edward; Dyck, Christopher William
2005-10-01
The ability to integrate metal and semiconductor micro-systems to perform highly complex functions, such as RF-MEMS, will depend on developing freestanding metal structures that offer improved conductivity, reflectivity, and mechanical properties. Three issues have prevented the proliferation of these systems: (1) warpage of active components due to through-thickness stress gradients, (2) limited component lifetimes due to fatigue, and (3) low yield strength. To address these issues, we focus on developing and implementing techniques to enable the direct study of the stress and microstructural evolution during electrodeposition and mechanical loading. The study of stress during electrodeposition of metal thin films is being accomplished by integrating a multi-beam optical stress sensor into an electrodeposition chamber. By coupling the in-situ stress information with ex-situ microstructural analysis, a scientific understanding of the sources of stress during electrodeposition will be obtained. These results are providing a foundation upon which to develop a stress-gradient-free thin film directly applicable to the production of freestanding metal structures. The issues of fatigue and yield strength are being addressed by developing novel surface micromachined tensile and bend testers, by interferometry, and by TEM analysis. The MEMS tensile tester has a "Bosch" etched hole to allow for direct viewing of the microstructure in a TEM before, during, and after loading. This approach allows for the quantitative measurements of stress-strain relations while imaging dislocation motion, and determination of fracture nucleation in samples with well-known fatigue/strain histories. This technique facilitates the determination of the limits for classical deformation mechanisms and helps to formulate a new understanding of the mechanical response as the grain sizes are refined to a nanometer scale. Together, these studies will result in a science
Othman, Arsalan A.; Gloaguen, Richard
2017-09-01
Lithological mapping in mountainous regions is often impeded by limited accessibility due to relief. This study aims to evaluate (1) the performance of different supervised classification approaches using remote sensing data and (2) the use of additional information such as geomorphology. We exemplify the methodology in the Bardi-Zard area in NE Iraq, a part of the Zagros Fold-Thrust Belt, known for its chromite deposits. We highlight the improvement of remote sensing geological classification by integrating geomorphic features and spatial information in the classification scheme. We performed a Maximum Likelihood (ML) classification method besides two Machine Learning Algorithms (MLA), Support Vector Machine (SVM) and Random Forest (RF), to allow the joint use of geomorphic features, Band Ratio (BR), Principal Component Analysis (PCA), spatial information (spatial coordinates) and multispectral data of the Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) satellite. The RF algorithm showed reliable results and discriminated serpentinite, talus and terrace deposits, red argillites with conglomerates and limestone, limy conglomerates and limestone conglomerates, tuffites interbedded with basic lavas, limestone and metamorphosed limestone, and reddish green shales. The best overall accuracy (∼80%) was achieved by the RF algorithm in the majority of the sixteen tested combination datasets.
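The joint-use idea, stacking spectral bands, derived features, and spatial coordinates into one feature vector per pixel and training a Random Forest, can be sketched minimally as follows. The synthetic data, the 7-feature layout, and the scikit-learn usage are illustrative assumptions standing in for the paper's ASTER bands, band ratios, PCA components, and geomorphic layers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_per_class, n_feat = 200, 7  # e.g. 3 bands + band ratio + PCA + slope + 2 coords

# synthetic, well-separated "lithology" classes in feature space
X = np.vstack([rng.normal(c, 0.3, size=(n_per_class, n_feat))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:450], y[:450])          # train on 75% of the labelled pixels
acc = clf.score(X[450:], y[450:])  # overall accuracy on the held-out pixels
print(acc)
```

`clf.feature_importances_` would then indicate how much each stacked layer (spectral, geomorphic, spatial) contributes to the discrimination, which mirrors the comparison across the combination datasets described above.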