WorldWideScience

Sample records for Gröbner bases method

  1. Computation of Difference Gröbner Bases

    Directory of Open Access Journals (Sweden)

    Vladimir P. Gerdt

    2012-07-01

    This paper is an updated and extended version of our note [GR'06] (cf. also [GR-ACAT]). To compute difference Gröbner bases of ideals generated by linear polynomials, we adapt the involutive algorithm based on Janet-like division to difference polynomial rings. The algorithm has been implemented in Maple in the form of the package LDA (Linear Difference Algebra), and we describe the main features of the package. Its applications are illustrated by the generation of finite difference approximations to linear partial differential equations and by the reduction of Feynman integrals. We also present the algorithm for an ideal generated by a finite set of nonlinear difference polynomials. If the algorithm terminates, then it constructs a Gröbner basis of the ideal.
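
    As a rough illustration of the kind of computation involved, the following sketch runs a generic Buchberger-style computation in sympy (not the involutive Janet-like division algorithm of LDA; the linear generators are hypothetical stand-ins for linearized difference polynomials):

        # Generic Groebner basis of an ideal generated by linear polynomials.
        from sympy import groebner, symbols

        x, y, z = symbols('x y z')
        G = groebner([x + y - z, x - 2*y + z, y - 3*z], x, y, z, order='lex')
        print(G)  # reduced basis under the lexicographic order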

  2. Partial Gröbner bases for multiobjective integer programming

    CERN Document Server

    Blanco, Victor

    2007-01-01

    In this paper we present two new methodologies for solving multiobjective integer programming problems using tools from algebraic geometry. We introduce the concept of partial Gröbner basis for a family of multiobjective programs where the right-hand side varies. This new structure extends the notion of the usual Gröbner basis for the single objective case to the case of multiple objectives, i.e., a partial ordering instead of a total ordering over the feasible vectors. The main property of these bases is that the partial reduction of the integer elements in the kernel of the constraint matrix by the different blocks of the basis is zero. This allows us to prove that this new construction is a test family for a family of multiobjective programs. An algorithm 'à la Buchberger' is developed to compute partial Gröbner bases. Specifically, with this tool we compute the entire set of efficient solutions of any multiobjective integer linear problem (MOILP). Some examples illustrate the application of the algorithms and compu...

  3. A note on Computing SAGBI-Gröbner bases in a Polynomial Ring over a Field

    Directory of Open Access Journals (Sweden)

    Hans Ofverbeck

    2006-01-01

    In the paper [2], Miller made concrete Sweedler's theory for ideal bases in commutative valuation rings (see [5]) for the case of subalgebras of a polynomial ring over a field; the ideal bases are called SAGBI-Gröbner bases in this case. Miller gives a concrete algorithm to construct and verify a SAGBI-Gröbner basis, given a set of generators for an ideal in the subalgebra. The purpose of this note is to present an observation which justifies substantially shrinking the so-called syzygy family of a pair of polynomials. Fewer elements in the syzygy family mean that fewer syzygy polynomials need to be checked in the SAGBI-Gröbner basis construction/verification algorithm, thus decreasing the time needed for computation.

  4. Gröbner bases of simplicial toric ideals

    CERN Document Server

    Hellus, M; Hoa, L T

    2009-01-01

    Bounds for the maximum degree of a minimal Gröbner basis of simplicial toric ideals with respect to the reverse lexicographic order are given. These bounds are close to the bound stated in the Eisenbud-Goto Conjecture on the Castelnuovo-Mumford regularity.

  5. Minimal Gröbner bases and the predictable leading monomial property

    CERN Document Server

    Kuijper, M

    2009-01-01

    In this paper we focus on Gröbner bases over rings for the univariate case. We identify a useful property of minimal Gröbner bases, that we call the "predictable leading monomial (PLM) property". The property is stronger than "row reducedness" and is crucial in a range of applications. The first part of the paper is tutorial in outlining how the PLM property enables straightforward solutions to classical realization problems of linear systems over fields. In the second part of the paper we use the ideas of [Kuijper,Pinto,Polderman,2007] on polynomial matrices over the finite ring Z_p^r (with p a prime integer and r a positive integer) in the more general setting of Gröbner bases and introduce the notion of "Gröbner p-basis" to achieve a predictable leading monomial property over Z_p^r. This theory finds applications in error control coding over Z_p^r. Through this approach we are extending the ideas of [Kuijper,Pinto,Polderman,2007] to a more general context where the user chooses an ordering of polyn...

  6. Solving Thousand Digit Frobenius Problems Using Gröbner Bases

    DEFF Research Database (Denmark)

    Roune, Bjarke Hammersholt

    2008-01-01

    A Gröbner basis-based algorithm for solving the Frobenius Instance Problem is presented, and this leads to an algorithm for solving the Frobenius Problem that can handle numbers with thousands of digits. Connections to irreducible decompositions and Hilbert functions are also presented.
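
    A toy sketch of the classical Gröbner-basis route to a Frobenius instance, using sympy with tiny, made-up coin values 4 and 7 (the paper's algorithm is what makes thousand-digit inputs feasible): a target t is representable over {4, 7} exactly when the normal form of y**t is free of y.

        from sympy import groebner, reduced, symbols

        y, a, b = symbols('y a b')
        # Binomials encoding a = y**4 and b = y**7; lex order eliminates y first.
        G = groebner([a - y**4, b - y**7], y, a, b, order='lex')
        _, r = reduced(y**18, G.exprs, y, a, b, order='lex')
        print(r)  # a*b**2, i.e. 18 = 1*4 + 2*7; a remaining y would mean "not representable"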

  7. Characteristic Modules of Dual Extensions and Gröbner Bases

    Institute of Scientific and Technical Information of China (English)

    Yun Ge XU; Long Cai LI

    2004-01-01

    Let C be a finite dimensional directed algebra over an algebraically closed field k and A = A(C) the dual extension of C. The characteristic modules of A are constructed explicitly for a class of directed algebras, which generalizes the results of Xi. Furthermore, it is shown that the characteristic modules of dual extensions of a certain class of directed algebras admit the left Gröbner basis theory in the sense of E. L. Green.

  8. Fraction-free algorithm for the computation of diagonal forms of matrices over Ore domains using Gröbner bases

    CERN Document Server

    Levandovskyy, Viktor

    2011-01-01

    This paper is a sequel to "Computing diagonal form and Jacobson normal form of a matrix using Groebner bases", J. of Symb. Computation, 46 (5), 2011. We present a new fraction-free algorithm for the computation of a diagonal form of a matrix over a certain non-commutative Euclidean domain over a computable field with the help of Gröbner bases. This algorithm is formulated in a general constructive framework of non-commutative Ore localizations of G-algebras (OLGAs). We split the computation of a normal form of a matrix into the diagonalization and the normalization processes. Both of them can be made fraction-free. For a matrix M over an OLGA we provide a diagonalization algorithm to compute U, V and D with fraction-free entries such that UMV = D holds and D is diagonal. The fraction-free approach gives us more information on the system of linear functional equations and its solutions than the classical setup of an operator algebra with rational function coefficients. In particular, one can handl...

  9. Computation of Gröbner basis for systematic encoding of generalized quasi-cyclic codes

    CERN Document Server

    Van, Vo Tam; Mita, Seiichi

    2008-01-01

    Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear codes that includes quasi-cyclic codes, finite geometry (FG) low density parity check (LDPC) codes, and Hermitian codes. Although it is known that the systematic encoding of GQC codes is equivalent to the division algorithm in the theory of Gröbner bases of modules, there has been no algorithm that computes a Gröbner basis for all types of GQC codes. In this paper, we propose two algorithms to compute Gröbner bases for GQC codes from their parity check matrices: the echelon canonical form algorithm and the transpose algorithm. Both algorithms require a sufficiently small number of finite-field operations, on the order of the third power of the code length. Each algorithm has its own characteristics: the first algorithm is composed of elementary methods, and the second algorithm is based on a novel formula and is faster than the first one for high-rate codes. Moreover, we show that a serial-in serial-out encoder architecture for FG LDPC cod...

  10. Connecting Gröbner Bases Programs with Coq to do Proofs in Algebra, Geometry and Arithmetics

    CERN Document Server

    Pottier, Loïc

    2010-01-01

    We describe how we connected three programs that compute Groebner bases to Coq, to do automated proofs on algebraic, geometrical and arithmetical expressions. The result is a set of Coq tactics and a certificate mechanism (downloadable at http://www-sop.inria.fr/marelle/Loic.Pottier/gb-keappa.tgz). The programs are: F4, GB, and gbcoq. F4 and GB are the fastest (to our knowledge) available programs that compute Groebner bases. Gbcoq is slow in general but is proved to be correct (in Coq), and we adapted it to our specific problem to be efficient. The automated proofs concern equalities and non-equalities on polynomials with coefficients and indeterminates in R or Z, and are done by reduction to Groebner computation, via Hilbert's Nullstellensatz. We also adapted the results of Harrison to allow proving some theorems about modular arithmetic. The connection between Coq and the programs that compute Groebner bases is done using the "external" tactic of Coq that allows calling arbitrary programs accepting ...

  11. Fast computation of Gröbner bases of homogeneous ideals of F[x, y]

    Institute of Scientific and Technical Information of China (English)

    LU PeiZhong; ZOU Yan

    2008-01-01

    This paper provides a fast algorithm for computing Gröbner bases of homogeneous ideals of F[x, y] over a finite field F. We show that only the S-polynomials of neighbor pairs of a strictly ordered finite homogeneous generating set are needed in the computation of a Gröbner basis of the homogeneous ideal. This dramatically reduces the number of unnecessary S-polynomials that are processed. We also show that the computational complexity of our new algorithm is O(N^2), where N is the maximum degree of the input generating polynomials. The new algorithm can be used to solve a problem of blind recognition of convolutional codes. This problem is a new generalization of the important problem of synthesis of a linear recurring sequence.
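
    For comparison, a generic Gröbner basis computation (sympy's general-purpose routine, not the neighbor-pair algorithm of the paper) for a small homogeneous ideal in GF(2)[x, y], with made-up generators:

        from sympy import groebner, symbols

        x, y = symbols('x y')
        F = [x**3 + x*y**2, x**2*y + y**3]  # homogeneous generators over GF(2)
        G = groebner(F, x, y, modulus=2, order='grlex')
        print(G.exprs)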

  12. A NEW METHOD FOR THE CONSTRUCTION OF MULTIVARIATE MINIMAL INTERPOLATION POLYNOMIAL

    Institute of Scientific and Technical Information of China (English)

    Zhang Chuanlin

    2001-01-01

    The extended Hermite interpolation problem on a set of segment points over n-dimensional Euclidean space is considered. Based on the algorithm to compute the Gröbner basis of the ideal given by a dual basis, a new method to construct the minimal multivariate polynomial which satisfies the interpolation conditions is given.

  13. Closed form solution for a double quantum well using Gröbner basis

    CERN Document Server

    Acus, A

    2011-01-01

    Analytical expressions for the spectrum, eigenfunctions and dipole matrix elements of a square double quantum well (DQW) are presented for the general case when the potential in different regions of the DQW has different heights and the effective masses are different. This was achieved by the Gröbner basis algorithm, which allows one to disentangle the resulting coupled polynomials without explicitly solving the transcendental eigenvalue equation.

  14. Lifting algorithms for Gröbner basis computation of invariant ideals

    Institute of Scientific and Technical Information of China (English)

    吴杰; 陈玉福

    2009-01-01

    A polynomial invariant under the action of a finite group can be rewritten in terms of generators of the invariant ring by the Gröbner basis method. The key question is how to compute efficiently the Gröbner basis of the invariant ideal, which is positive-dimensional. We introduce an efficient lifting algorithm for this computation. If the straight line program model is used to analyze the complexity of the whole computation process, the cost can be kept within polynomial time.
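
    The rewriting step mentioned above can be sketched with a standard elimination-order computation in sympy (a textbook trick, not the paper's lifting algorithm): introduce one slack variable per invariant generator and reduce the invariant polynomial.

        from sympy import groebner, reduced, symbols

        x, y, e1, e2 = symbols('x y e1 e2')
        # Generators of the invariant ring of the symmetric group S_2:
        # e1 = x + y, e2 = x*y; lex order eliminates x and y first.
        G = groebner([e1 - (x + y), e2 - x*y], x, y, e1, e2, order='lex')
        _, r = reduced(x**2 + y**2, G.exprs, x, y, e1, e2, order='lex')
        print(r)  # e1**2 - 2*e2: the invariant rewritten in the generators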

  15. Gröbner Bases for Nonlinear DAE Systems of Analog Circuits

    Directory of Open Access Journals (Sweden)

    Silke J. Spang

    2008-04-01

    Systems of differential equations play an important role in the modelling and analysis of many complex systems, e.g. in electronics and mechanics. The following article is concerned with a symbolic analysis approach for reduction of the differential index of nonlinear differential algebraic equation (DAE) systems, which occur in the modelling and simulation of analog circuits.

  16. Algorithms for the nonclassical method of symmetry reductions

    CERN Document Server

    Peter A Clarkson; Elizabeth L Mansfield

    1994-01-01

    In this article we first present an algorithm for calculating the determining equations associated with the so-called "nonclassical method" of symmetry reductions (à la Bluman and Cole) for systems of partial differential equations. This algorithm requires significantly less computation time than that standardly used, and avoids many of the difficulties commonly encountered. The proof of correctness of the algorithm is a simple application of the theory of Gröbner bases. In the second part we demonstrate some algorithms which may be used to analyse, and often to solve, the resulting systems of overdetermined nonlinear PDEs. We take as our principal example a generalised Boussinesq equation, which arises in shallow water theory. Although the equation appears to be non-integrable, we obtain an exact "two-soliton" solution from a nonclassical reduction.

  17. Noncommutative Gröbner basis, Hilbert series, Anick's resolution and BERGMAN under MS-DOS

    OpenAIRE

    1995-01-01

    The definitions and main results connected with Gröbner bases, Hilbert series and Anick's resolution are formulated. The method of predicting the infinite behavior of a Gröbner basis in the noncommutative case is presented. The extensions of the BERGMAN package for IBM PC compatible computers are described.

  18. Noncommutative Gröbner basis, Hilbert series, Anick's resolution and BERGMAN under MS-DOS

    Directory of Open Access Journals (Sweden)

    S. Cojocaru

    1995-06-01

    The definitions and main results connected with Gröbner bases, Hilbert series and Anick's resolution are formulated. The method of predicting the infinite behavior of a Gröbner basis in the noncommutative case is presented. The extensions of the BERGMAN package for IBM PC compatible computers are described.

  19. A polyhedral approach to computing border bases

    CERN Document Server

    Braun, Gábor

    2009-01-01

    Border bases can be considered the natural extension of Gröbner bases, and they have several advantages. Unfortunately, to date the classical border basis algorithm relies on (degree-compatible) term orderings and implicitly on reduced Gröbner bases. We adapt the classical border basis algorithm to allow for calculating border bases for arbitrary degree-compatible order ideals, which is independent of term orderings. Moreover, the algorithm also supports calculating degree-compatible order ideals with preference on contained elements, even though finding a preferred order ideal is NP-hard. Effectively we retain degree-compatibility only to successively extend our computation degree-by-degree. The adaptation is based on our polyhedral characterization: order ideals that support a border basis correspond one-to-one to integral points of the order ideal polytope. This establishes a crucial connection between the ideal and the combinatorial structure of the associated factor spaces.

  20. The Gröbner Bases of the Toric Ideal IAd

    Institute of Scientific and Technical Information of China (English)

    王羡

    2016-01-01

    The concepts of toric rings and toric ideals are given, and a known Gröbner basis is used to find the Gröbner basis of the toric ideal IA of a configuration matrix A. In particular, for a class of ideals IAd whose Gröbner bases cannot be computed by computer, the concrete form of their Gröbner bases is given, and the conclusions are verified through examples.

  1. Algorithmic Algebraic Combinatorics and Gröbner Bases

    CERN Document Server

    Klin, Mikhail; Jurisic, Aleksandar

    2009-01-01

    This collection of tutorial and research papers introduces readers to diverse areas of modern pure and applied algebraic combinatorics and finite geometries with a special emphasis on algorithmic aspects and the use of the theory of Gröbner bases. Topics covered include coherent configurations, association schemes, permutation groups, Latin squares, the Jacobian conjecture, mathematical chemistry, extremal combinatorics, coding theory, designs, etc. Special attention is paid to the description of innovative practical algorithms and their implementation in software packages such as GAP and MAGMA.

  2. THE λ-GRÖBNER BASES UNDER POLYNOMIAL COMPOSITION

    Institute of Scientific and Technical Information of China (English)

    Jinwang LIU; Dongmei LI; Xiaosong CHEN

    2007-01-01

    Polynomial composition is the operation of replacing the variables in a polynomial with other polynomials. A λ-Gröbner basis is a special kind of Gröbner basis. The main problem in the paper is: when does composition commute with λ-Gröbner basis computation? We give a better answer to this question. This has a natural application in the computation of λ-Gröbner bases.

  3. Moment Matrices, Border Bases and Real Radical Computation

    CERN Document Server

    Lasserre, Jean-Bernard; Mourrain, Bernard; Rostalski, Philipp; Trébuchet, Philippe

    2011-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming. While the border basis algorithms of [17] are efficient and numerically stable for computing complex roots, algorithms based on moment matrices [12] allow the incorporation of additional polynomials, e.g., to restrict the computation to real roots or to eliminate multiple solutions. The proposed algorithm can be used to compute a border basis of the input ideal and, as opposed to other approaches, it can also compute the quotient structure of the (real) radical ideal directly, i.e., without prior algebraic techniques such as Gröbner bases. It thus combines the strength of existing algorithms and provides a unified treatment for the computation of border bases for the ideal, the radical ideal and the real r...

  4. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems: Boolean algebra, Karnaugh maps, Quine-McCluskey's algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft-PLC implementation. PLC-design met...

  5. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems: Boolean algebra, Karnaugh maps, Quine-McCluskey's algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft-PLC implementation. PLC...

  6. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...

  7. Activity based costing (ABC method)

    Directory of Open Access Journals (Sweden)

    Prof. Ph.D. Saveta Tudorache

    2008-05-01

    In the present paper, the need for and advantages of using the Activity Based Costing method are presented, a need arising from the problem of information pertinence. This issue has occurred due to the limitations of classic methods in this field, limitations also reflected in the disadvantages of such classic methods in establishing complete costs.

  8. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element.

  9. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth pre

  10. Activity-based costing method

    Directory of Open Access Journals (Sweden)

    Čuchranová Katarína

    2001-06-01

    Activity based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible based on the fixed and variable costs of the inputs. You use ABC to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead, rework) and to determine which ones are candidates for reduction. A process is any activity that accepts inputs, adds value to these inputs for customers and produces outputs for these customers. The customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes. Inputs, controls and resources are all supplied to the process. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs through their connection to particular activities and processes is a new, modern theme today. The introduction of this method is connected with very important changes in the firm's processes. The ABC method is an instrument that brings a competitive advantage for the firm.

  11. DISPLACEMENT BASED SEISMIC DESIGN METHODS.

    Energy Technology Data Exchange (ETDEWEB)

    HOFMAYER, C.; MILLER, C.; WANG, Y.; COSTELLO, J.

    2003-07-15

    A research effort was undertaken to determine the need for any changes to the USNRC's seismic regulatory practice to reflect the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The research explored the extent to which displacement based seismic design methods, such as given in FEMA 273, could be useful for reviewing nuclear power stations. Two structures common to nuclear power plants were chosen to compare the results of the analysis models used. The first structure is a four-story frame structure with shear walls providing the primary lateral load system, referred to herein as the shear wall model. The second structure is the turbine building of the Diablo Canyon nuclear power plant. The models were analyzed using both displacement based (pushover) analysis and nonlinear dynamic analysis. In addition, for the shear wall model an elastic analysis with ductility factors applied was also performed. The objectives of the work were to compare the results between the analyses, and to develop insights regarding the work that would be needed before the displacement based analysis methodology could be considered applicable to facilities licensed by the NRC. A summary of the research results, which were published in NUREG/CR-6719 in July 2001, is presented in this paper.

  12. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    The methods used for company valuation can be divided into 3 main groups: methods based on patrimony, methods based on financial performance, and methods based both on patrimony and on performance. The company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity, and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle that the value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to all the rights and obligations of a company. The valuation of companies based on their financial performance can be done in 3 ways: the return value, the yield value, the present value of the cash flows. The mixed methods depend both on patrimony and on financial performance or can make use of other methods.

  13. An interactive segmentation method based on superpixel

    DEFF Research Database (Denmark)

    Yang, Shu; Zhu, Yaping; Wu, Xiaoyu

    2015-01-01

    This paper proposes an interactive image-segmentation method which is based on superpixels. To achieve fast segmentation, the method establishes a Graph-cut model using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has excellent performance in terms of segmentation accuracy and computation efficiency compared with other segmentation algorithms based on pixels.

  14. Digital Autofocusing Method Based on Contourlet Transform

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The autofocusing technique based on the contourlet transform is discussed in this paper and an autofocusing method is proposed for images with much information in certain directions. The experimental results show that the proposed method can focus accurately and its sensitivity ratio is higher than that of other autofocusing methods based on conventional image processing.

  15. Research on BOM based composable modeling method

    NARCIS (Netherlands)

    Zhang, M.; He, Q.; Gong, J.

    2013-01-01

    Composable modeling methods have been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM based models, this paper puts forward a composable modeling method based on BOM, and studies the basic theory of composable modeling m

  16. Gröbner Basis Approach to Some Combinatorial Problems

    Directory of Open Access Journals (Sweden)

    Victor Ufnarovski

    2012-10-01

    We consider several simple combinatorial problems and discuss different ways to express them using polynomial equations, and we try to describe the Gröbner bases of the corresponding ideals. The main instruments are complete symmetric polynomials, which help to express different conditions in a rather compact way.
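
    In the same spirit, a minimal made-up example in sympy: encoding "x, y, z is a permutation of 1, 2, 3" by polynomial equations and computing a Gröbner basis of the resulting ideal.

        from sympy import cancel, groebner, symbols

        x, y, z = symbols('x y z')

        def f(t):
            # each variable must be a root of (t - 1)(t - 2)(t - 3)
            return (t - 1)*(t - 2)*(t - 3)

        def g(a, b):
            # (f(a) - f(b))/(a - b) vanishes exactly on distinct roots of f
            return cancel((f(a) - f(b)) / (a - b))

        G = groebner([f(x), f(y), f(z), g(x, y), g(x, z), g(y, z)],
                     x, y, z, order='lex')
        print(G)  # its variety is exactly the 6 permutations of (1, 2, 3)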

  17. Gröbner Basis Approach to Some Combinatorial Problems

    OpenAIRE

    2012-01-01

    We consider several simple combinatorial problems and discuss different ways to express them using polynomial equations, and we try to describe the Gröbner bases of the corresponding ideals. The main instruments are complete symmetric polynomials, which help to express different conditions in a rather compact way.

  18. Instance Based Methods --- A Brief Overview

    CERN Document Server

    Baumgartner, Peter; 10.1007/s13218-010-0002-x

    2012-01-01

    Instance-based methods are a specific class of methods for automated proof search in first-order logic. This article provides an overview of the major methods in the area and discusses their properties and relations to the more established resolution methods. It also discusses some recent trends in refinements and applications. This overview is rather brief and informal, but we provide a comprehensive literature list to follow up on the details.

  19. Reliability-based concurrent subspace optimization method

    Institute of Scientific and Technical Information of China (English)

    FAN Hui; LI Wei-ji

    2008-01-01

    To avoid the high computational cost and much modification in the process of applying traditional reliability-based design optimization methods, a new reliability-based concurrent subspace optimization approach is proposed based on the comparison and analysis of the existing multidisciplinary optimization techniques and reliability assessment methods. It is shown through a canard configuration optimization for a three-surface transport that the proposed method is computationally efficient and practical with the least modification to the current deterministic optimization process.

  20. Decision making based on data analysis methods

    OpenAIRE

    Sirola, Miki; Sulkava, Mika

    2016-01-01

    This technical report is based on four of our recent articles: "Data fusion of pre-election gallups and polls for improved support estimates", "Analyzing parliamentary elections based on voting advice application data", "The Finnish car rejection reasons shown in an interactive SOM visualization tool", and "Network visualization of car inspection data using graph layout". Neural methods are applied in political and technical decision making. We introduce decision support schemes based on Self-Org...

  1. Software Testing Method Based on Model Comparison

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model comparison based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form, described by the same model description language (MDL). Then, the requirements are transformed into a specification model and the programs into an implementation model. Thus, the elements and structures of the two models are compared, and the differences between them are obtained. Based on the differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usages of two classical MDLs in MCST, the equivalence classes model and the extended finite state machine (EFSM) model, are described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.

  2. Wavelet-based Multiresolution Particle Methods

    Science.gov (United States)

    Bergdorf, Michael; Koumoutsakos, Petros

    2006-03-01

    Particle methods offer a robust numerical tool for solving transport problems across disciplines, such as fluid dynamics, quantitative biology or computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and in appealing numerical properties, such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method are then assessed for transport and interface capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.

  3. Recommendation advertising method based on behavior retargeting

    Science.gov (United States)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid the loss of ad clicks due to objective reasons and can observe changes in the user's interest in time. Experiments show that our new method has a significant effect and can further be applied to an online system.

  4. Cloud Based Development Issues: A Methodical Analysis

    Directory of Open Access Journals (Sweden)

    Sukhpal Singh

    2012-11-01

    Cloud based development is a challenging task for various software engineering projects, especially for those which demand extraordinary quality, reusability and security along with general architecture. In this paper we present a report on a methodical analysis of cloud based development problems published in major computer science and software engineering journals and conferences organized by various researchers. Research papers were collected from different scholarly databases using search engines within a particular period of time. A total of 89 research papers were analyzed in this methodical study and categorized into four classes according to the problems addressed by them. The majority of the research papers focused on quality (24 papers) associated with cloud based development and 16 papers focused on analysis and design. By considering the areas focused on by existing authors and their gaps, untouched areas of cloud based development can be discovered for future research works.

  5. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
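
    A bare-bones sketch of this kind of distance-based ranking, with hypothetical competence data and a plain normalized Hamming distance (the StaffDesigner software implements a richer scheme with a Matching Level Index):

        def hamming(candidate, ideal):
            # mean absolute deviation between competence valuations in [0, 1]
            return sum(abs(c - i) for c, i in zip(candidate, ideal)) / len(ideal)

        ideal = [0.9, 0.8, 1.0, 0.7]  # company's ideal valuation per competence
        candidates = {'A': [0.8, 0.9, 0.6, 0.7], 'B': [0.9, 0.7, 0.9, 0.6]}
        for name in sorted(candidates, key=lambda n: hamming(candidates[n], ideal)):
            print(name, hamming(candidates[name], ideal))  # closest to ideal first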

  6. KNOWLEDGE BASED METHODS FOR VIDEO DATA RETRIEVAL

    OpenAIRE

    S.Thanga Ramya; P. Rangarajan

    2011-01-01

    As large collections of publicly available video data grow day by day, the need to query this data efficiently becomes significant. Consequently, content-based retrieval of video data turns out to be a challenging and important problem. This paper addresses the specific aspect of inferring semantics automatically from raw video data using different knowledge-based methods. In particular, this paper focuses on three techniques namely, rules, Hidden Markov Models (HMMs), and Dynamic Bayesian Networks (...

  7. Oil monitoring methods based on information theory

    Institute of Scientific and Technical Information of China (English)

    XIA Yan-chun; HUO Hua

    2009-01-01

    To evaluate the wear condition of machines accurately, oil spectrographic entropy, mutual information and ICA analysis methods based on information theory are presented. A full-scale diagnosis utilizing all channels of spectrographic analysis can be obtained. By measuring the complexity and correlativity, the characteristics of the wear condition of machines can be shown clearly. The diagnostic quality is improved. The analysis processes of these monitoring methods are given through the explanation of examples. The availability of these methods is validated and further research fields are demonstrated.

  8. Rough set-based feature selection method

    Institute of Scientific and Technical Information of China (English)

    ZHAN Yanmei; ZENG Xiangyang; SUN Jincai

    2005-01-01

    A new feature selection method is proposed based on the discernibility matrix in rough set theory. The main idea of this method is that the most effective feature, if used for classification, can distinguish the largest number of samples belonging to different classes. Experiments are performed using this method to select relevant features for artificial datasets and real-world datasets. Results show that the proposed selection method can correctly select all the relevant features of artificial datasets and drastically reduce the number of features at the same time. In addition, when this method is used for the selection of classification features of real-world underwater targets, the number of classification features after selection drops to 20% of the original feature set, and the classification accuracy increases by about 6% using the dataset after feature selection.
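
    The core idea, that a good feature separates many sample pairs from different classes, can be sketched in a few lines of Python (toy data, not the paper's discernibility-matrix algorithm):

        from itertools import combinations

        # (feature vector, class label) pairs; values made up for illustration
        samples = [([1, 0, 1], 'a'), ([1, 1, 0], 'a'),
                   ([0, 0, 1], 'b'), ([0, 1, 1], 'b')]

        def discern_count(f):
            # number of cross-class sample pairs that feature f tells apart
            return sum(1 for (x, cx), (y, cy) in combinations(samples, 2)
                       if cx != cy and x[f] != y[f])

        print(sorted(range(3), key=discern_count, reverse=True))  # best feature first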

  9. Deciding isomorphism of Lie algebras

    NARCIS (Netherlands)

    Graaf, W.A. de

    2001-01-01

    When doing calculations with Lie algebras one of the main problems is to decide whether two given Lie algebras are isomorphic. A partial solution to this problem is obtained by calculating structural invariants. There is also a direct method available which involves the computation of Gröbner bases.

  10. An algorithm of computing inhomogeneous differential equations for definite integrals

    OpenAIRE

    Nakayama, Hiromasa; Nishiyama, Kenta

    2010-01-01

    We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for D-modules by Oaku. The main tool in the algorithm is the Gröbner basis method in the ring of differential operators.

  11. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions and interactions with the optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and it considers the problems of load-balancing, collision detection, process synchronization and distributed control of the animation.

  12. Facial Beautification Method Based on Age Evolution

    Institute of Scientific and Technical Information of China (English)

    CHEN Yan; DING Shou-hong; HU Gan-le; MA Li-zhuang

    2013-01-01

    This paper proposes a new facial beautification method using facial rejuvenation based on age evolution. Traditional facial beautification methods only focus on skin color and deformation and do the transformation based on an experimental standard of beauty. Our method achieves the beauty effect by making the facial image look younger, which is different from traditional methods and is more reasonable than them. Firstly, we decompose the image into different layers and get a detail layer. Secondly, we get an age-related parameter: the standard deviation of the Gaussian distribution that the detail layer follows, and support vector machine (SVM) regression is used to fit a function relating the age and the standard deviation. Thirdly, we use this function to estimate the age of the input image and generate a new detail layer with a new standard deviation which is calculated by decreasing the age. Lastly, we combine the original layers and the new detail layer to get a new face image. Experimental results show that this algorithm can make facial images more beautiful by facial rejuvenation. The proposed method opens up a new way of approaching facial beautification, and there is great potential for applications.

  13. AN SVAD ALGORITHM BASED ON FNNKD METHOD

    Institute of Scientific and Technical Information of China (English)

    Chen Dong; Zhang Yan; Kuang Jingming

    2002-01-01

    The capacity of a mobile communication system is improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended by ITU G.729B in different situations. The simulation results show that the SVAD algorithm performs better.

  14. Treecode-Based Generalized Born Method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhenli [Shanghai Jiao Tong University, Shanghai]; Cheng, Xiaolin [ORNL]; Yang, Shihui [ORNL]

    2011-01-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.

  15. An attribute-based image segmentation method

    Directory of Open Access Journals (Sweden)

    M.C. de Andrade

    1999-07-01

    This work addresses a new image segmentation method founded on Digital Topology and Mathematical Morphology grounds. The ABA (attribute based absorptions) transform can be viewed as a region-growing method by flooding simulation working at the scale of the main structures of the image. In this method, the gray level image is treated as a relief flooded from all its local minima, which are progressively detected and merged as the flooding takes place. Each local minimum is exclusively associated with one catchment basin (CB). The CB merging process is guided by their geometric parameters such as depth, area and/or volume. This solution enables the direct segmentation of the original image without the need for a preprocessing step or the explicit marker extraction step often required by other flooding simulation methods. Some examples of image segmentation employing the ABA transform are illustrated for uranium oxide samples. It is shown that the ABA transform presents very good segmentation results even in the presence of noisy images. Moreover, its use is often easier and faster when compared to similar image segmentation methods.

  16. Lagrangian based methods for coherent structure detection

    Energy Technology Data Exchange (ETDEWEB)

    Allshouse, Michael R., E-mail: mallshouse@chaos.utexas.edu [Center for Nonlinear Dynamics and Department of Physics, University of Texas at Austin, Austin, Texas 78712 (United States); Peacock, Thomas, E-mail: tomp@mit.edu [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States)

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  17. Chapter 11. Community analysis-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  18. A proposal to first principles electronic structure calculation: Symbolic-Numeric method

    CERN Document Server

    Kikuchi, Akihito

    2012-01-01

    This study proposes an approach toward first principles electronic structure calculation with the aid of symbolic-numeric solving. The symbolic computation enables us to express the Hartree-Fock-Roothaan equation in an analytic form and approximate it as a set of polynomial equations. By use of the Gröbner basis technique, the polynomial equations are transformed into other ones which have identical roots. The converted equations take more convenient forms which simplify the numerical procedures, from which we can derive the necessary physical properties in order, in an à la carte way. This method enables us to treat the electronic structure calculation, optimization of any kind, or the inverse problem as a forward problem in a unified way, in which there is no need for iterative self-consistent procedures with trials and errors.
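
    The overall pattern, converting a polynomial system into triangular form with a Gröbner basis and then solving numerically, can be illustrated on a toy system (not an actual Hartree-Fock-Roothaan discretization):

        from sympy import Poly, groebner, symbols

        x, y = symbols('x y')
        # Toy system; lex order makes the last basis element univariate in y.
        G = groebner([x**2 + y**2 - 4, x*y - 1], x, y, order='lex')
        last = Poly(G.exprs[-1], y)
        for root in last.nroots():  # numeric roots; back-substitute to get x
            print(root)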

  19. Custom Fusion Method Based on Borda

    Directory of Open Access Journals (Sweden)

    Issam Abdelbaki

    2016-10-01

    Searching for information on the Internet is not only a newly rediscovered activity, but also a strategic tool to reach a wide variety of information. Indeed, it is extremely important to know how to find information quickly and efficiently. Unfortunately, the Web is so huge and so little structured that gathering precise, fair and useful information becomes an expensive task. In order to define an information retrieval tool (a meta search engine) that brings together multiple sources of information search, attention must be paid to the fusion phase of search engine results. On the other hand, information search systems tend primarily to model the user with a profile and then to integrate it into the information access chain, to better meet the user's specific needs. This paper presents a custom fusion method based on the Borda method and values retrieved from the user profile. We evaluated our approach on multiple domains and we present some experimental results.
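
    A minimal Borda-count fusion of two hypothetical engine rankings (the paper additionally weights the scores with values retrieved from the user profile):

        def borda_fuse(rankings):
            # each ranking lists documents best-first; a document earns
            # (n - position) points per ranking, summed across engines
            scores = {}
            for ranking in rankings:
                n = len(ranking)
                for pos, doc in enumerate(ranking):
                    scores[doc] = scores.get(doc, 0) + (n - pos)
            return sorted(scores, key=scores.get, reverse=True)

        print(borda_fuse([['d1', 'd2', 'd3'], ['d2', 'd1', 'd3']]))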

  20. Trinocular Calibration Method Based on Binocular Calibration

    Directory of Open Access Journals (Sweden)

    CAO Dan-Dan

    2012-10-01

    In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, while the shared camera is taken as the reference to build the global coordinate system. Global calibration is realized by comparing the measured absolute distance with the true absolute distance. The MRE (mean relative error) of the global calibration of the two camera pairs in the experiments can be as low as 0.277% and 0.328% respectively. Experiment results show that this method is feasible, simple and effective, and has high precision.

  1. Kernel method-based fuzzy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping

    2005-01-01

    The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with mixtures of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm united with the kernel method. The results of experiments with synthetic and real data show that the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with variform structures, in contrast to the FCM algorithm. It can be expected that kernel-based clustering algorithms are an important research direction of fuzzy clustering analysis.
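
    The kernel substitution at the heart of FKCM can be sketched as follows, assuming a Gaussian (RBF) Mercer kernel: FCM's Euclidean distance is replaced by a distance in the implicit feature space.

        import math

        def rbf(x, v, sigma=1.0):
            # Gaussian (RBF) Mercer kernel K(x, v)
            return math.exp(-sum((a - b)**2 for a, b in zip(x, v)) / (2*sigma**2))

        def feature_dist2(x, v):
            # ||phi(x) - phi(v)||^2 = K(x,x) - 2K(x,v) + K(v,v) = 2 - 2K(x,v) for RBF
            return 2.0 - 2.0*rbf(x, v)

        print(feature_dist2([0.0, 0.0], [1.0, 1.0]))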

  2. On Task-based English Learning Method

    Institute of Scientific and Technical Information of China (English)

    朱蕾

    2010-01-01

    Task-Based Learning (TBL) is becoming a catchword in English circles. The new national English Curricular Syllabus also recommends the use of the TBL approach in classroom teaching. The purpose of learning a foreign language is direct communication in the target language, and speaking is the most direct communicative method. In recent years, with the publication of the New Curriculum Standard by the State Education Department, the teaching reform in middle and primary schools has been implemented step by step.

  3. Bus Based Synchronization Method for CHIPPER Based NoC

    Directory of Open Access Journals (Sweden)

    D. Muralidharan

    2016-01-01

    Network on Chip (NoC) reduces the communication delay of System on Chip (SoC). The main limitations of NoC are power consumption and area overhead. Bufferless NoC reduces the area complexity and power consumption by eliminating the buffers in traditional routers. A bufferless NoC design should include livelock freedom since it uses hot-potato routing. This increases the complexity of bufferless NoC design. Among the available propositions to reduce this complexity, the CHIPPER based bufferless NoC is considered one of the best options. Livelock freedom is provided in CHIPPER through the golden epoch and golden packet. All routers follow some synchronization method to identify a golden packet. A clock based method is intuitively followed for synchronization in CHIPPER based NoCs. It is shown in this work that the worst-case latency of packets is unbearably high when the above synchronization is followed. To alleviate this problem, a broadcast bus NoC (BBus NoC) approach is proposed in this work. The proposed method decreases the worst-case latency of packets by increasing the golden epoch rate of CHIPPER.

  4. Graph-based Methods for Orbit Classification

    Energy Technology Data Exchange (ETDEWEB)

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  5. Pose measurement method based on geometrical constraints

    Institute of Scientific and Technical Information of China (English)

    Zimiao Zhang; Changku Sun; Pengfei Sun; Peng Wang

    2011-01-01

    The pose estimation method based on geometric constraints is studied. The coordinates of the five feature points in the camera coordinate system are calculated to obtain the pose of an object on the basis of the geometric constraints formed by the connective lines of the feature points and the coordinates of the feature points on the CCD image plane; during the solution process, the scaling and orthography projection model is used to approximate the perspective projection model. The initial values of the coordinates of the five feature points in the camera coordinate system are obtained to ensure the accuracy and convergence rate of the non-linear algorithm. In accordance with the perspective projection characteristics of the circular feature landmarks, we propose an approach that enables the iterative acquisition of accurate target poses through the correction of the perspective projection coordinates of the circular feature landmark centers. Experimental results show that the translation positioning accuracy reaches ±0.05 mm in the measurement range of 0-40 mm, and the rotation positioning accuracy reaches ±0.06° in the measurement range of 4°-60°.

  6. Satellite Formation based on SDDF Method

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-04-01

    Full Text Available The technology of satellite formation flying has been a research focus in flight applications. The relative position and velocity between satellites are basic parameters for achieving control of formation flight during a satellite formation flying mission. In order to improve the navigation accuracy, a filter different from the Extended Kalman Filter (EKF) should be adopted to estimate the errors of relative position and velocity, given the nonlinearity of the kinetic model of satellite formation flying. A nonlinear Divided Difference Filter (DDF) based on the Stirling interpolation formula is proposed in this paper. Exploiting the linearity of the measurement equation of the filter, a simplified differential filter is designed by expanding the nonlinear system equation as a polynomial and linearly approximating it with finite-difference interpolation. A digital simulation experiment for the relative positioning of satellite formation flying was carried out. The result demonstrates that the proposed filter has higher filtering accuracy, faster convergence and better stability. Compared with the EKF, the estimation accuracy of the relative position and velocity improved by 77.1% and 47%, respectively, with the simplified DDF, which indicates its significance for practical applications.
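
    For reference, the central divided-difference step that a Stirling-interpolation filter builds on can be sketched in a few lines. This is only the first-order propagation of mean and covariance through a nonlinear map under assumed Gaussian statistics (with the usual interval length h = sqrt(3)), not the full simplified filter of the paper.

        import numpy as np

        def dd1_propagate(f, x_mean, P, h=np.sqrt(3.0)):
            """Propagate mean and covariance of x ~ N(x_mean, P) through f
            using first-order central divided differences."""
            S = np.linalg.cholesky(P)
            cols = []
            for j in range(x_mean.size):
                d = h * S[:, j]
                cols.append((f(x_mean + d) - f(x_mean - d)) / (2.0 * h))
            Sy = np.column_stack(cols)        # divided-difference "square root"
            return f(x_mean), Sy @ Sy.T       # propagated mean and covariance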

  7. A performance-based method for granular-paste mix design

    NARCIS (Netherlands)

    Hoornahad, H.; Koenders, E.A.B.

    2014-01-01

    In this paper a performance-based method for the design of granular-paste mixtures will be proposed. Focus will be on the selection and proportioning of constituents to produce a mixture with a pre-defined shape holding ability. Shape holding ability of mixtures will be characterized by the shape

  8. Region-based multisensor image fusion method

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Image fusion should consider the prior knowledge of the source images to be fused, such as the characteristics of the images and the goal of the fusion; that is to say, knowledge about the input data and the application plays a crucial role. This paper is concerned with multiresolution (MR) image fusion. Considering the characteristics of the multisensor data (SAR, FLIR, etc.) and the goal of fusion, which is to achieve one image possessing both the contour features and the target region features, it seems more meaningful to combine features rather than pixels. A multisensor image fusion scheme based on K-means clustering and the steerable pyramid is presented. K-means clustering is used to segment out objects in FLIR images. The steerable pyramid is a multiresolution analysis method with a good property of extracting contour information at different scales. Comparisons are made with the relevant existing techniques in the literature. The paper concludes with some examples to illustrate the efficiency of the proposed scheme.

  9. Subjective evidence based ethnography: method and applications.

    Science.gov (United States)

    Lahlou, Saadi; Le Bellu, Sophie; Boesen-Mariani, Sabine

    2015-06-01

    Subjective Evidence Based Ethnography (SEBE) is a method designed to access subjective experience. It uses First Person Perspective (FPP) digital recordings as a basis for analytic Replay Interviews (RIW) with the participants. This triggers their memory and enables a detailed, step-by-step understanding of activity: goals, subgoals, determinants of actions, decision-making processes, etc. This paper describes the technique and two applications. First, the analysis of professional practices for know-how transfer purposes in industry is illustrated with the analysis of nuclear power-plant operators' gestures. This shows how SEBE enables modelling activity and describing good and bad practices, risky situations, and expert tacit knowledge. Second, the analysis of full days lived by Polish mothers taking care of their children is described, with a specific focus on how they manage their eating and drinking. This research has been done on a sub-sample of a large-scale intervention designed to increase plain water drinking versus sweet beverages. It illustrates the interest of SEBE as an exploratory technique, complementing more classic approaches such as questionnaires and behavioural diaries. It provides the detailed "how" of the effects that are measured at aggregate level by other techniques.

  10. DNA-based methods of geochemical prospecting

    Science.gov (United States)

    Ashby, Matthew

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  11. Triptycene-based ladder monomers and polymers, methods of making each, and methods of use

    KAUST Repository

    Pinnau, Ingo

    2015-02-05

    Embodiments of the present disclosure provide for a triptycene-based A-B monomer, a method of making a triptycene-based A-B monomer, a triptycene-based ladder polymer, a method of making triptycene-based ladder polymers, a method of using triptycene-based ladder polymers, a structure incorporating triptycene-based ladder polymers, a method of gas separation, and the like.

  12. New deghosting method based on generalized triangulation

    Institute of Scientific and Technical Information of China (English)

    Bai Jing; Wang Guohong; Xiu Jianjuan; Wang Xiaobo

    2009-01-01

    A new deghosting method based on the generalized triangulation is presented. First, two intersection points corresponding to the emitter position are obtained by utilizing two azimuth angles and two elevation angles from two jammed 3-D radars (or 2-D passive sensors). Then, a hypothesis-testing-based deghosting method for multiple-target scenarios is proposed using the two intersection points. In order to analyze the performance of the proposed method, the correct association probability of the true targets and the incorrect association probability of the ghost targets are defined. Finally, Monte Carlo simulations are given for the proposed method compared with the hinge angle method in the cases of both two and three radars. The simulation results show that the proposed method has better performance than the hinge angle method in the three-radar case.
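
    A toy two-dimensional, azimuth-only analogue may clarify the problem: every pairing of bearings from two sensors yields a candidate intersection, and a third sensor's measurements gate out the ghosts. The geometry and gate value below are illustrative assumptions; the paper works with azimuth plus elevation from 3-D radars and a formal hypothesis test.

        import numpy as np

        def intersect(p1, az1, p2, az2):
            """Intersection of two bearing rays (azimuths in radians)."""
            d1 = np.array([np.cos(az1), np.sin(az1)])
            d2 = np.array([np.cos(az2), np.sin(az2)])
            t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)[0]
            return p1 + t * d1

        def deghost(p1, azs1, p2, azs2, p3, azs3, gate=0.02):
            kept = []
            for a1 in azs1:
                for a2 in azs2:
                    c = intersect(p1, a1, p2, a2)        # candidate emitter
                    pred = np.arctan2(c[1] - p3[1], c[0] - p3[0])
                    err = min(abs(np.angle(np.exp(1j * (pred - a3))))
                              for a3 in azs3)
                    if err < gate:                       # consistent with sensor 3
                        kept.append(c)
            return kept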

  13. Multifractal Framework Based on Blanket Method

    Science.gov (United States)

    Paskaš, Milorad P.; Reljin, Irini S.; Reljin, Branimir D.

    2014-01-01

    This paper proposes local multifractal measures motivated by the blanket method for the calculation of fractal dimension. They cover both fractal approaches familiar in image processing. Two of the measures (proposed Methods 1 and 3) support a model of the image with embedded dimension three, while the other supports a model of the image embedded in a space of dimension three (proposed Method 2). While the classical blanket method provides only one value for an image (the fractal dimension), the multifractal spectrum obtained by any of the proposed measures gives a whole range of dimensional values. This means that the proposed multifractal blanket model generalizes the classical (monofractal) blanket method and other locally implemented versions of this monofractal approach. The proposed measures are validated on the Brodatz image database through texture classification. All proposed methods give similar classification results, while the average computation time of Method 3 is substantially longer. PMID:24578664
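
    For orientation, the classical (monofractal) blanket computation that the proposed measures localize can be sketched as follows; the iteration depth and the 3x3 structuring element are assumptions.

        import numpy as np
        from scipy import ndimage

        def blanket_dimension(img, eps_max=10):
            """Peleg-style blanket estimate of the fractal dimension of a
            grey-level image: grow upper/lower blankets, fit log A(eps)."""
            u = img.astype(float)
            b = img.astype(float)
            areas = []
            for eps in range(1, eps_max + 1):
                u = np.maximum(u + 1, ndimage.grey_dilation(u, size=(3, 3)))
                b = np.minimum(b - 1, ndimage.grey_erosion(b, size=(3, 3)))
                areas.append((u - b).sum() / (2.0 * eps))
            eps = np.arange(1, eps_max + 1)
            slope = np.polyfit(np.log(eps), np.log(areas), 1)[0]
            return 2.0 - slope        # since A(eps) ~ eps**(2 - D)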

  14. A microfluidic based optical particle detection method

    Science.gov (United States)

    Dou, James; Chen, Lu; Nayyar, Rakesh; Aitchison, Stewart

    2012-03-01

    An optical particle detection and analysis method is presented. This method combines capillary microfluidics, integrated optics, and novel image acquisition and analysis algorithms to form the basis of a portable or handheld cytometer instrument. The experimental results provided show that the test results closely match conventional flow cytometer data.

  15. Portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is deciding on an appropriate stock exchange for investing and selecting an optimal portfolio. This process is carried out through the assessment of risk and expected return. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. However, the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, considered from April 1388 to June 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
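
    The scenario-based CVaR minimization can be written as the standard Rockafellar-Uryasev linear program over historical returns. The sketch below is one textbook formulation (long-only weights and a target mean return are our assumptions), not necessarily the paper's exact estimator.

        import numpy as np
        from scipy.optimize import linprog

        def min_cvar_portfolio(R, beta=0.95, target=0.01):
            """Minimize CVaR_beta of the loss -R @ w over empirical scenarios.
            R: (S, n) matrix of scenario returns; variables are [w, alpha, u]."""
            S, n = R.shape
            c = np.r_[np.zeros(n), 1.0, np.ones(S) / ((1 - beta) * S)]
            # u_s >= -R_s @ w - alpha   <=>   -R_s @ w - alpha - u_s <= 0
            A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
            b_ub = np.zeros(S)
            # require the expected portfolio return to reach the target
            A_ub = np.vstack([A_ub, np.r_[-R.mean(axis=0), 0.0, np.zeros(S)]])
            b_ub = np.r_[b_ub, -target]
            A_eq = np.r_[np.ones(n), 0.0, np.zeros(S)].reshape(1, -1)  # sum w = 1
            bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=bounds)
            return res.x[:n], res.fun      # weights and minimized CVaR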

  16. Model-based methods for linkage analysis.

    Science.gov (United States)

    Rice, John P; Saccone, Nancy L; Corbett, Jonathan

    2008-01-01

    The logarithm of an odds ratio (LOD) score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential so that pedigrees or LOD curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders where the maximum LOD score statistic shares some of the advantages of the traditional LOD score approach, but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the LOD score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.

  17. New ITF measure method based on fringes

    Science.gov (United States)

    Fang, Qiaoran; Liu, Shijie; Gao, Wanrong; Zhou, You; Liu, HuanHuan

    2016-01-01

    With the unprecedented development of intense laser and aerospace projects, interferometers are widely used for detecting mid-spatial-frequency indicators of optical elements, which places very high demands on the interferometer system transfer function (ITF). Conventionally, the ITF is measured by comparing the power spectra of known phase objects such as a high-quality phase step. However, the fabrication of a phase step is complex and costly, especially for the measurement of large-aperture interferometers. In this paper, a new fringe method is proposed to measure the ITF without additional objects. The frequency is changed by adjusting the number of fringes, and the normalized transfer function value is measured at different frequencies. The ITF value measured by the fringe method is consistent with the traditional phase step method, which confirms the feasibility of the proposed method. Moreover, the measurement error caused by defocus is analyzed. The proposed method does not require the preparation of a step artifact, which greatly reduces the test cost, and it is of great significance for the ITF measurement of large-aperture interferometers.

  18. LEVEL SET METHODS BASED ON DISTANCE FUNCTION

    Institute of Scientific and Technical Information of China (English)

    王德军; 唐云; 于洪川; 唐泽圣

    2003-01-01

    Some basic problems of level set methods are discussed, such as the method used to preserve the distance function and the existence and uniqueness of solutions of the level set equations. The main contribution is to prove that, in a neighborhood of the initial zero level set, the level set equations with the restriction of the distance function have a unique solution, which must be the signed distance function with respect to the evolving surface. Some skillful approaches were used: noticing that any solution of the original equation is a distance function, the original level set equations were transformed into a simpler alternative form. Moreover, since the new system is not a classical one, it was transformed into an ordinary one, for which the implicit function method was adopted.
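
    In standard level set notation (our summary, not a quotation from the paper), the simplification behind the "simpler alternative form" is visible in one line:

        \phi_t + F\,|\nabla\phi| = 0, \qquad |\nabla\phi| \equiv 1
        \quad\Longrightarrow\quad \phi_t + F = 0,

    that is, as long as \phi remains a signed distance function, the Hamilton-Jacobi level set equation collapses to a simpler evolution equation, which is then amenable to the ordinary-equation and implicit-function arguments described above.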

  19. AN EVEN COMPONENT BASED FACE RECOGNITION METHOD

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a novel face recognition algorithm. To provide additional variations to the training data set, even-odd decomposition is adopted, and only the even components (half-even face images) are used for further processing. To tackle the shift-variance problem, the Fourier transform is applied to the half-even face images. To reduce the dimension of an image, PCA (Principal Component Analysis) features are extracted from the amplitude spectrum of the half-even face images. Finally, a nearest neighbor classifier is employed for the classification task. Experimental results on the ORL database show that the proposed method outperforms, in terms of accuracy, the conventional eigenface method which applies PCA to the original images, as well as the eigenface method which uses both the original images and their mirror images as the training set.
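
    A minimal sketch of the described pipeline (even component, amplitude spectrum, PCA, nearest neighbor) follows; the component count and the use of point reflection to realize f(-x) on a discrete grid are our assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        def even_fft_features(images):
            feats = []
            for img in images:
                even = 0.5 * (img + img[::-1, ::-1])   # even part of the face
                feats.append(np.abs(np.fft.fft2(even)).ravel())  # shift-invariant
            return np.array(feats)

        def train(images, labels, n_components=40):
            X = even_fft_features(images)
            pca = PCA(n_components=n_components).fit(X)
            clf = KNeighborsClassifier(n_neighbors=1)
            clf.fit(pca.transform(X), labels)          # 1-NN on PCA features
            return pca, clf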

  20. Topology-Based Methods in Visualization 2015

    CERN Document Server

    Garth, Christoph; Weinkauf, Tino

    2017-01-01

    This book presents contributions on topics ranging from novel applications of topological analysis for particular problems, through studies of the effectiveness of modern topological methods, algorithmic improvements on existing methods, and parallel computation of topological structures, all the way to mathematical topologies not previously applied to data analysis. Topological methods are broadly recognized as valuable tools for analyzing the ever-increasing flood of data generated by simulation or acquisition. This is particularly the case in scientific visualization, where the data sets have long since surpassed the ability of the human mind to absorb every single byte of data. The biannual TopoInVis workshop has supported researchers in this area for a decade, and continues to serve as a vital forum for the presentation and discussion of novel results in applications in the area, creating a platform to disseminate knowledge about such implementations throughout and beyond the community. The present volum...

  1. Valuing Convertible Bonds Based on LSRQM Method

    Directory of Open Access Journals (Sweden)

    Jian Liu

    2014-01-01

    Full Text Available Convertible bonds are one of the essential financial products of corporate finance, and pricing theory is the key problem in the theoretical research on convertible bonds. This paper demonstrates how to price convertible bonds with call and put provisions using the Least-Squares Randomized Quasi-Monte Carlo (LSRQM) method. We consider a financial market with stochastic interest rates and credit risk and present a detailed description of the steps for calculating convertible bond values. The empirical results show that the model fits the market prices of convertible bonds in China's market well and that the LSRQM method is effective.
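
    For readers unfamiliar with least-squares Monte Carlo, the plain Longstaff-Schwartz backbone is sketched below on an American put under Black-Scholes dynamics. LSRQM as described above additionally uses randomized quasi-random draws and models stochastic rates, credit risk and the call/put provisions; none of that is reproduced here.

        import numpy as np

        def american_put_lsm(S0=100., K=100., r=0.05, sigma=0.2, T=1.0,
                             steps=50, paths=20000, seed=0):
            rng = np.random.default_rng(seed)
            dt = T / steps
            Z = rng.standard_normal((paths, steps))
            S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                      + sigma * np.sqrt(dt) * Z, axis=1))
            cash = np.maximum(K - S[:, -1], 0.0)        # payoff at maturity
            for t in range(steps - 2, -1, -1):
                cash *= np.exp(-r * dt)                 # discount one step
                itm = K - S[:, t] > 0
                if itm.sum() > 3:
                    A = np.vander(S[itm, t], 3)         # quadratic regression basis
                    cont = A @ np.linalg.lstsq(A, cash[itm], rcond=None)[0]
                    ex = K - S[itm, t]
                    idx = np.where(itm)[0][ex > cont]   # early exercise wins
                    cash[idx] = ex[ex > cont]
            return np.exp(-r * dt) * cash.mean()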

  2. A numerical method based on probability theory

    Institute of Scientific and Technical Information of China (English)

    唐立; 邹捷中; 杨文胜

    2003-01-01

    By using the connections between the Brownian family with drift and elliptic differential equations, an efficient probabilistic computing method is given. This method is applied to a wide class of Dirichlet problems, and a detailed analysis and derivation of the solution procedure are offered. The stochastic representation of the solution to the problem turns a 3-dimensional problem into a 2-dimensional one, and an auxiliary ball is constructed. The strong Markov property and the joint distributions of the time and place of hitting spheres for the Brownian family with drift are employed. Finally, good convergence of the numerical solution is obtained over domains with arbitrary boundary.
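
    The auxiliary-ball construction is essentially the classical walk-on-spheres scheme; a driftless (Laplace) sketch is given below, where the distance oracle and the absorption tolerance are assumptions. Handling the drift term as in the paper would additionally require a change of measure.

        import numpy as np

        def walk_on_spheres(x0, g, dist_to_boundary, eps=1e-3, n_walks=5000, seed=0):
            """Estimate u(x0) for Laplace's equation with Dirichlet data g:
            Brownian motion exits each auxiliary ball uniformly on its sphere."""
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_walks):
                x = np.array(x0, dtype=float)
                while True:
                    r = dist_to_boundary(x)        # radius of the auxiliary ball
                    if r < eps:
                        total += g(x)              # absorb at the boundary
                        break
                    d = rng.standard_normal(x.size)
                    x = x + r * d / np.linalg.norm(d)   # jump to the sphere
            return total / n_walks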

  3. HMM-Based Gene Annotation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  4. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analysis of caches which are hard…

  5. [Culture based diagnostic methods for tuberculosis].

    Science.gov (United States)

    Baylan, Orhan

    2005-01-01

    Culture methods, providing isolates for identification and drug susceptibility testing, still represent the gold standard for the definitive diagnosis of tuberculosis, although the delay in obtaining results remains a problem. Traditional solid media are recommended for use along with liquid media in the primary isolation of mycobacteria. At present, a number of elaborate culture systems are available commercially. They range from simple bottles and tubes such as MGIT (BD Diagnostic Systems, USA), Septi-Chek AFB (BD, USA) and MB Redox (Biotest Diagnostics, USA) to a semiautomated system (BACTEC 460TB, BD, USA) and fully automated systems (BACTEC 9000 MB [BD, USA], BACTEC MGIT 960 [BD, USA], ESP Culture System II [Trek Diagnostics, USA], MB/BacT ALERT 3D System [BioMérieux, NC], TK Culture System [Salubris Inc, Turkey]). The culture methods available today are sufficient to permit laboratories to develop an algorithm that is optimal for patient and administrative needs. In this review article, the culture systems used for the diagnosis of tuberculosis, their mechanisms, and their advantages and disadvantages are discussed in the light of recent literature.

  6. Proposal for Scrambled Method based on NTRU

    Directory of Open Access Journals (Sweden)

    Ahmed Tariq Sadiq

    2015-08-01

    Full Text Available Scrambling is widely used to protect the security of data files such as text, image, video or audio files; however, on its own it is not the most efficient way to protect the security of the data. This article uses the NTRU public key cryptosystem to increase the robustness of the scrambling of sound files. In this work, we convert the sound file into text and then scramble it in the following way: first, we encrypt the header of the sound file; then, we scramble the data of the file after the header in three stages. In each stage we scramble the data of the sound file and keep the original order of the data in an array; then the three arrays are encrypted by the sender and sent with the encrypted header to the receiver in one file, while the scrambled data of the sound file is sent to the receiver in another file. We have tested the proposed method on several sound files; the results show that the time of encryption and decryption is reduced to approximately one-third, or less, compared to encrypting the whole file using NTRU.

  7. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  8. GALERKIN MESHLESS METHODS BASED ON PARTITION OF UNITY QUADRATURE

    Institute of Scientific and Technical Information of China (English)

    ZENG Qing-hong; LU De-tang

    2005-01-01

    Numerical quadrature is an important ingredient of Galerkin meshless methods. A new numerical quadrature technique for Galerkin meshless methods, partition of unity quadrature (PUQ), is presented. The technique is based on finite covering and partition of unity; there is no need to decompose the physical domain into small cells, and it possesses remarkable integration accuracy. Using element-free Galerkin methods as an example, Galerkin meshless methods based on PUQ are studied in detail. Meshing is never required, either in constructing the approximation functions or in the numerical quadrature, so Galerkin meshless methods based on PUQ are "truly" meshless methods.

  9. Adaptive Mixture Methods Based on Bregman Divergences

    CERN Document Server

    Donmez, Mehmet A; Kozat, Suleyman S

    2012-01-01

    We investigate adaptive mixture methods that linearly combine outputs of $m$ constituent filters running in parallel to model a desired signal. We use "Bregman divergences" and obtain certain multiplicative updates to train the linear combination weights under an affine constraint or without any constraints. We use unnormalized relative entropy and relative entropy to define two different Bregman divergences that produce an unnormalized exponentiated gradient update and a normalized exponentiated gradient update on the mixture weights, respectively. We then carry out the mean and the mean-square transient analysis of these adaptive algorithms when they are used to combine outputs of $m$ constituent filters. We illustrate the accuracy of our results and demonstrate the effectiveness of these updates for sparse mixture systems.
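
    As a concrete instance, the update induced by the relative-entropy divergence under the simplex constraint is the classic normalized exponentiated-gradient rule; the squared-error loss and the learning rate in this sketch are our assumptions.

        import numpy as np

        def eg_update(w, y_hat, y, lr=0.1):
            """One normalized exponentiated-gradient step on mixture weights.
            y_hat: outputs of the m constituent filters; y: desired sample."""
            err = w @ y_hat - y                   # combination error
            w = w * np.exp(-lr * err * y_hat)     # multiplicative update
            return w / w.sum()                    # re-project onto the simplex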

  10. Beam Parameters Measurement Based On Tv Methods

    CERN Document Server

    Klimenkov, E; Milichenko, Yu; Voevodin, V

    2004-01-01

    The paper describes the hardware and software used to control TV cameras and to process TV images of luminescent screens placed along the beam transfer lines. Industrial devices manually control the movement and focusing of the cameras. All devices are linked to a PC via PCI interfaces, with homemade drivers for Linux OS, and provide both selection of a camera and digitizing of the video signal synchronized with the beam. One part of the software provides the means to set initial parameters using the PC console; thus an operator can choose the contrast, brightness, and a number of significant points on the TV image used to calculate the beam position and size. The second part supports remote TV controls and data processing from the control rooms of the U-70 complex using the set initial parameters. The first experience and results of applying the method are discussed.

  11. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    Full Text Available This work presents an efficient solution using a computer algebra system to perform linear temporal property verification for synchronous digital systems. The method is essentially based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations of both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations by using Groebner bases. The computational experience reported in this work shows that the algebraic approach is a quite competitive checking method and will be a useful supplement to existing verification methods based on simulation.
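
    A toy analogue of the polynomial encoding can be reproduced with SymPy's Groebner machinery: circuit equations plus Boolean idempotency constraints generate an ideal, and an assertion holds on all runs exactly when its polynomial lies in that ideal. The gate and property below are our illustration (assuming SymPy's GroebnerBasis.contains membership test), not the paper's exact SEREs construction.

        from sympy import symbols, groebner

        x, y, z = symbols('x y z')
        # circuit: z = x AND y, with x and y constrained to Boolean values
        circuit = [z - x*y, x**2 - x, y**2 - y]
        G = groebner(circuit, x, y, z, order='grevlex')

        # assertion "z implies x": it holds iff its polynomial is in the ideal
        assertion = z * (1 - x)
        print(G.contains(assertion))   # True: the property holds for all runs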

  12. AN IMAGE RETRIEVAL METHOD BASED ON SPATIAL DISTRIBUTION OF COLOR

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Color histograms are now widely used in image retrieval. Color histogram-based image retrieval methods are simple and efficient but do not consider the spatial distribution information of the color. To overcome this shortcoming of conventional color histogram-based image retrieval methods, an image retrieval method based on the Radon Transform (RT) is proposed. In order to reduce the computational complexity, wavelet decomposition is used to compress the image data. First, images are decomposed by the Mallat algorithm. The low-frequency components are then projected by the RT to generate the spatial color features. Finally, the moment feature matrices, which are saved along with the original images, are obtained. Experimental results show that RT-based retrieval is more accurate and efficient than the traditional color histogram-based method when there are salient objects in the images. Furthermore, RT-based retrieval runs significantly faster than the traditional color histogram methods.
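
    One plausible reading of the feature pipeline, sketched per color channel, is given below; the wavelet family, the angle count and the choice of projection moments are assumptions.

        import numpy as np
        import pywt
        from skimage.transform import radon

        def radon_color_feature(channel, n_angles=32):
            """Mallat DWT keeps only the low-frequency band, which the Radon
            transform then projects; projection moments form the feature."""
            ll, _ = pywt.dwt2(channel.astype(float), 'haar')
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sino = radon(ll, theta=theta)
            return np.stack([sino.mean(axis=0), sino.std(axis=0)])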

  13. A New Color-based Lawn Weed Detection Method and Its Integration with Texture-based Methods: A Hybrid Approach

    Science.gov (United States)

    Watchareeruetai, Ukrit; Ohnishi, Noboru

    We propose a color-based weed detection method specifically designed for detecting lawn weeds in winter. The proposed method exploits fuzzy logic to make inference from color information. Genetic algorithm is adopted to search for the optimal combination of color information, fuzzy membership functions, as well as fuzzy rules used in the method. Experimental results show that the proposed color-based method outperforms the conventional texture-based methods when testing with a winter dataset. In addition, we propose a hybrid system that incorporates both texture-based and color-based weed detection methods. It can automatically select a better method to perform weed detection, depending on an input image. The results show that the use of the hybrid system can significantly improve weed control performances for the overall datasets.

  14. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  15. Subpixel edge detection method based on low-frequency filtering

    Science.gov (United States)

    Bylinsky, Yosip Y.; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2016-09-01

    A method of edge detection in images based on low-frequency filtering is proposed. The method uses polynomial interpolation to determine the coordinates of edge points with subpixel accuracy. Experimental results are also provided.
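
    In the common quadratic case, the polynomial interpolation amounts to fitting a parabola through the filtered gradient peak and its two neighbours; a one-dimensional sketch follows (the low-frequency filtering is assumed to have been applied already).

        import numpy as np

        def subpixel_peak(g):
            """Sub-pixel position of the maximum of a sampled edge profile."""
            i = int(np.argmax(g))
            if 0 < i < len(g) - 1:
                denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
                if denom != 0.0:
                    # vertex of the parabola through samples i-1, i, i+1
                    return i + 0.5 * (g[i - 1] - g[i + 1]) / denom
            return float(i)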

  16. An improved bit shuffling pixels-based image scrambling method

    Institute of Scientific and Technical Information of China (English)

    ZHAO Hong; WANG Hong-xia; WANG Jin

    2011-01-01

    Compared with the Arnold transform, the image scrambling method based on bit-shuffling of pixels is much more secure and has higher efficiency and speed. However, the key space of this bit-shuffling-based method is too small to resist an exhaustive search attack. Therefore, an improved method based on chaos is proposed in this paper. The security of the improved scheme is enhanced by increasing the number of keys. Theoretical analysis and experimental results show that the proposed method is effective and has higher security.
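
    The key-space enlargement can be illustrated with a logistic-map-driven permutation: the sort order of a chaotic sequence is the shuffle, so the key becomes the real-valued pair (x0, mu) rather than a small set of shuffling patterns. This pixel-level sketch is our illustration of the chaos idea, not the paper's exact bit-shuffling scheme.

        import numpy as np

        def chaotic_permutation(n, x0=0.3137, mu=3.9999, burn=100):
            x, seq = x0, np.empty(n)
            for i in range(-burn, n):          # discard the transient first
                x = mu * x * (1.0 - x)         # logistic map iteration
                if i >= 0:
                    seq[i] = x
            return np.argsort(seq)             # chaotic order = permutation

        def scramble(img, key=(0.3137, 3.9999)):
            flat = img.ravel()
            p = chaotic_permutation(flat.size, *key)
            return flat[p].reshape(img.shape)  # invert with np.argsort(p)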

  17. Online Fault Diagnosis Method Based on Nonlinear Spectral Analysis

    Institute of Scientific and Technical Information of China (English)

    WEI Rui-xuan; WU Li-xun; WANG Yong-chang; HAN Chong-zhao

    2005-01-01

    Fault diagnosis based on nonlinear spectral analysis is a new technique for nonlinear fault diagnosis, but its online application can be limited by the enormous computation requirements for the estimation of generalized frequency response functions. Based on the fully decoupled Volterra identification algorithm, a new online fault diagnosis method based on nonlinear spectral analysis is presented, which can effectively reduce the online computation requirements for generalized frequency response functions. The composition and working principle of the method are described. Test experiments were carried out on the damping spring of a vehicle suspension system utilizing the new method, and the results indicate that the method is efficient.

  18. A New Robust Image Matching Method Based on Distance Reciprocal

    Institute of Scientific and Technical Information of China (English)

    赵春江; 施文康; 邓勇

    2004-01-01

    Object matching between two-dimensional images is an important problem in computer vision. The purpose of object matching is to decide the similarity between two objects. A new robust image matching method based on the distance reciprocal is presented. The distance reciprocal is based on human visual perception. This method is simple and effective; moreover, it is robust against noise. The experiments show that this method outperforms the Hausdorff distance when images interfered with by noise need to be recognized.

  19. Wind Turbine Gearbox Fault Diagnosis Method Based on Riemannian Manifold

    OpenAIRE

    Shoubin Wang; Xiaogang Sun; Chengwei Li

    2014-01-01

    As multivariate time series problems widely exist in social production and life, fault diagnosis methods have provided valuable information in finance, hydrology, meteorology, earthquake monitoring, video surveillance, medical science, and other fields. In order to find faults in time sequences quickly and efficiently, this paper presents a multivariate time series processing method based on the Riemannian manifold. The method is based on the sliding window and uses the covariance mat...

  20. Method for detecting software anomalies based on recurrence plot analysis

    OpenAIRE

    Michał Mosdorf

    2012-01-01

    The presented paper evaluates a method for detecting software anomalies based on recurrence plot analysis of the trace log generated by software execution. The described method is based on windowed recurrence quantification analysis for selected measures (e.g., recurrence rate (RR) or determinism (DET)). Initial results show that the proposed method is useful in detecting silent software anomalies that do not result in typical crashes (e.g., exceptions).

  1. Method for detecting software anomalies based on recurrence plot analysis

    Directory of Open Access Journals (Sweden)

    Michał Mosdorf

    2012-03-01

    Full Text Available The presented paper evaluates a method for detecting software anomalies based on recurrence plot analysis of the trace log generated by software execution. The described method is based on windowed recurrence quantification analysis for selected measures (e.g., recurrence rate (RR) or determinism (DET)). Initial results show that the proposed method is useful in detecting silent software anomalies that do not result in typical crashes (e.g., exceptions).

  2. Droplet-based microfluidic method for synthesis of microparticles

    CSIR Research Space (South Africa)

    Mbanjwa, MB

    2012-10-01

    Full Text Available Droplet-based microfluidics has, in recent years, received increased attention as an important tool for performing numerous methods in modern-day chemistry and biology, such as the synthesis of hydrogel microparticles. Hydrogels have been used in many… CONCLUSION AND OUTLOOK: The droplet-based microfluidic method offers…

  3. Power quality events recognition using a SVM-based method

    Energy Technology Data Exchange (ETDEWEB)

    Cerqueira, Augusto Santiago; Ferreira, Danton Diego; Ribeiro, Moises Vidal; Duque, Carlos Augusto [Department of Electrical Circuits, Federal University of Juiz de Fora, Campus Universitario, 36036 900, Juiz de Fora MG (Brazil)

    2008-09-15

    In this paper, a novel SVM-based method for power quality event classification is proposed. A simple approach for feature extraction is introduced, based on the subtraction of the fundamental component from the acquired voltage signal. The resulting signal is presented to a support vector machine for event classification. Results from simulation are presented and compared with two other methods, the OTFR and the LCEC. The proposed method showed improved performance at a reasonable computational cost. (author)

  4. Efficient Option Pricing Methods Based on Fourier Series Expansions

    Institute of Scientific and Technical Information of China (English)

    Deng DING; Sio Chong U

    2011-01-01

    A novel option pricing method based on Fourier-cosine series expansion was proposed by Fang and Oosterlee. Developing their idea, three new option pricing methods based on Fourier, Fourier-cosine and Fourier-sine series expansions are presented in this paper, which are more efficient when option prices are calculated for many strike prices. A series of numerical experiments under different exp-Lévy models is also given to compare these new methods with Fang and Oosterlee's method and other methods.
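
    To make the Fourier-cosine idea concrete, here is a density-recovery sketch under Black-Scholes: the cosine coefficients come directly from the characteristic function, and the payoff is then integrated numerically. Fang and Oosterlee's COS method instead evaluates the payoff coefficients analytically; the truncation range [a, b] and series length N below are assumptions.

        import numpy as np

        def cos_density_price(phi, payoff, r, T, a=-5.0, b=5.0, N=128, M=2000):
            k = np.arange(N)
            u = k * np.pi / (b - a)
            F = 2.0 / (b - a) * (phi(u) * np.exp(-1j * u * a)).real
            F[0] *= 0.5                        # half weight on the k = 0 term
            x = np.linspace(a, b, M)
            f = F @ np.cos(np.outer(u, x - a))           # recovered density
            return np.exp(-r * T) * np.trapz(payoff(x) * f, x)

        # European call under Black-Scholes, with x = ln(S_T / K)
        S0, K, r, sig, T = 100.0, 100.0, 0.05, 0.2, 1.0
        mu = np.log(S0 / K) + (r - 0.5 * sig**2) * T
        phi = lambda u: np.exp(1j * u * mu - 0.5 * sig**2 * T * u**2)
        price = cos_density_price(phi, lambda x: np.maximum(K*np.exp(x) - K, 0.0),
                                  r, T)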

  5. Correlation theory-based signal processing method for CMF signals

    Science.gov (United States)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve the measurement accuracy of CMFs, a correlation-theory-based signal processing method for CMF signals is proposed, comprising a correlation-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integer-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation, and that it has better estimation performance than the adaptive notch filter, discrete Fourier transform and autocorrelation methods for frequency estimation, and than the data-extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods for phase difference estimation, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
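
    The core correlation estimator for the phase difference of two equal-frequency sinusoids is a one-liner; the paper's contribution is precisely the correction of the bias this plain version suffers under non-integer-period sampling, which is not reproduced here.

        import numpy as np

        def phase_difference(s1, s2):
            """Correlation-based estimate of |delta phi| for zero-mean,
            equal-frequency sinusoids sampled over whole periods."""
            r12 = np.dot(s1, s2)
            return np.arccos(r12 / np.sqrt(np.dot(s1, s1) * np.dot(s2, s2)))

        fs, f, n = 4000.0, 100.0, 4000       # an integer number of periods
        t = np.arange(n) / fs
        est = phase_difference(np.sin(2*np.pi*f*t), np.sin(2*np.pi*f*t + 0.35))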

  6. Language Practitioners' Reflections on Method-Based and Post-Method Pedagogies

    Science.gov (United States)

    Soomro, Abdul Fattah; Almalki, Mansoor S.

    2017-01-01

    Method-based pedagogies are commonly applied in teaching English as a foreign language all over the world. However, in the last quarter of the 20th century, the concept of such pedagogies based on the application of a single best method in EFL started to be viewed with concerns by some scholars. In response to the growing concern against the…

  7. Highly sensitive methods for electroanalytical chemistry based on nanotubule membranes.

    Science.gov (United States)

    Kobayashi, Y; Martin, C R

    1999-09-01

    Two new methods of electroanalysis are described. These methods are based on membranes containing monodisperse Au nanotubules with inside diameters approaching molecular dimensions. In one method, the analyte species is detected by measuring the change in trans-membrane current when the analyte is added to the nanotubule-based cell. The second method entails the use of a concentration cell based on the nanotubule membrane; in this case, the change in membrane potential is used to detect the analyte. Detection limits as low as 10(-11) M have been achieved. Hence, these methods compete with even the most sensitive of modern analytical methodologies. In addition, excellent molecular-size-based selectivity is observed.

  8. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    … conditions and finding useful additives to hinder the color from changing rapidly. To be able to prove which methods of storage and which additives work, Danisco wants to monitor the development of the color of meat in a slice of ham as a function of time, environment and ingredients. We have chosen to use multi… methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only… have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most…

  9. Map-based control method for vehicle stability enhancement

    Institute of Scientific and Technical Information of China (English)

    Moon-Young Yoon; Seung-Hwan Baek; Kwang-Suk Boo; Heung-Seob Kim

    2015-01-01

    This work proposes a map-based control method to improve a vehicle's lateral stability, and the performance of the proposed method is compared with that of the conventional model-referenced control method. Model-referenced control uses the sliding mode method to determine the compensated yaw moment; in contrast, the proposed map-based control uses a compensated yaw moment map acquired by vehicle stability analysis. The vehicle stability region is calculated by a topological method based on the trajectory reversal method. A 2-DOF vehicle model and Pacejka's tire model are used to evaluate the proposed map-based control method. The properties of model-referenced control and map-based control are compared under various road conditions and driving inputs. Model-referenced control uses a control input to satisfy the linear reference model, and it generates unnecessary tire lateral forces that may lead to worse performance than an uncontrolled vehicle under step steering input on a road with a low friction coefficient. However, map-based control determines a compensated yaw moment to keep the vehicle within the stability region, so the typical vehicle responses converge rapidly. The simulation results with sine and step steering show that map-based control provides better tracking response and control performance than model-referenced control.

  10. A New Method for Riccati Differential Equations Based on Reproducing Kernel and Quasilinearization Methods

    Directory of Open Access Journals (Sweden)

    F. Z. Geng

    2012-01-01

    Full Text Available We introduce a new method for solving Riccati differential equations, which is based on the reproducing kernel method and the quasilinearization technique. The quasilinearization technique is used to reduce the Riccati differential equation to a sequence of linear problems. The resulting sets of differential equations are treated using the reproducing kernel method. The solutions of Riccati differential equations obtained by many existing methods give good approximations only in the neighborhood of the initial position, whereas the solutions obtained by the present method give good approximations in a larger interval rather than only a local vicinity of the initial position. Numerical results compared with other methods show that the method is simple and effective.

  11. Conceptual bases of the brand valuation by cost method

    Directory of Open Access Journals (Sweden)

    G.Y. Studinska

    2015-03-01

    Full Text Available The necessity of valuing intangible assets in accordance with international trends is substantiated. The brand is seen as an ever more important component of intangible assets and an effective management tool of the company. The benefits and uses of brand valuation results are investigated. A system of monocriterion cost-based brand valuation methods is analyzed, in particular methods that differ by the time factor of the valuation (current and forecast methods) and by the comparison base (relative and absolute). The cost method of brand valuation through market transactions, in accordance with J. Commons's classification, is considered in detail. The difference between the method of summing all costs and the method of brand valuation through market transactions is explained. The advantages and disadvantages of the considered cost methods of brand valuation are investigated. The cost method, as a relative-forecast brand valuation, 'the method of determining the proportion of the brand in the discounted total costs', is substantiated.

  12. Multi-pattern Matching Methods Based on Numerical Computation

    Directory of Open Access Journals (Sweden)

    Lu Jun

    2013-01-01

    Full Text Available Multi-pattern matching methods based on numerical computation are advanced in this paper. First, a multiple-pattern matching algorithm based on added information is advanced. In the process of accumulating information, the choice of the byte-accumulate operation affects the collision odds, which means that the methods or bytes involved in the different matching steps should differ from one another as much as possible. In addition, a balanced binary tree can be used to manage the index so as to reduce the average number of searches, and the characteristics of a given pattern set can be exploited, by setting a collision field, to eliminate collisions further. In order to reduce the collision odds in the initial step, the information splicing method is advanced, which has a greater value space than the added-information method, thus greatly reducing the initial collision odds. Multi-pattern matching methods based on numerical computation are fit for large-scale multi-pattern matching.

  13. A Spatialization-based Method for Checking and Updating Metadata

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In this paper the application of spatialization technology to metadata quality checking and updating is discussed. A new method based on spatialization is proposed for checking and updating metadata, to overcome the deficiencies of text-based methods by means of the powerful spatial query and analysis functions provided by GIS software. This method employs spatialization to transform metadata into a coordinate space and uses the spatial analysis functions of GIS to check and update spatial metadata in a visual environment. The basic principle and technical flow of this method are explained in detail, and an example implementation using the ArcMap module of GIS software is illustrated with a metadata set of digital raster maps. The result shows that the new method, with its support for interaction between graphics and text, is much more intuitive and convenient than ordinary text-based methods, and can fully utilize the spatial query and analysis functions of GIS with more accuracy and efficiency.

  14. Theory-Based Lexicographical Methods in a Functional Perspective

    DEFF Research Database (Denmark)

    Tarp, Sven

    2014-01-01

    This contribution provides an overview of some of the methods used in relation to the function theory. It starts with a definition of the concept of method and the relation existing between theory and method. It establishes an initial distinction between artisanal and theory-based methods… of various methods used in the different sub-phases of the overall dictionary compilation process, from the making of the concept to the preparation for publication on the chosen media, with focus on the Internet. Finally, it briefly discusses some of the methods used to create and test the function theory…

  15. Fuzzy Clustering Method for Web User Based on Pages Classification

    Institute of Scientific and Technical Information of China (English)

    ZHAN Li-qiang; LIU Da-xin

    2004-01-01

    A new method for fuzzy clustering of Web users, based on analysis of user interest characteristics, is proposed in this article. The method first defines fuzzy page categories according to the links on the index page of the site, then computes the fuzzy degree of cross-page access by aggregating the data of the Web log. After that, by using the fuzzy comprehensive evaluation method, it constructs user interest vectors according to page viewing times and frequency of hits, and derives the fuzzy similarity matrix for the Web users from the interest vectors. Finally, it obtains the clustering result through the fuzzy clustering method. The experimental results show the effectiveness of the method.

  16. Data Mining and Knowledge Discovery via Logic-Based Methods

    CERN Document Server

    Triantaphyllou, Evangelos

    2010-01-01

    There are many approaches to data mining and knowledge discovery (DM&KD), including neural networks, closest neighbor methods, and various statistical methods. This monograph, however, focuses on the development and use of a novel approach, based on mathematical logic, that the author and his research associates have worked on over the last 20 years. The methods presented in the book deal with key DM&KD issues in an intuitive manner and in a natural sequence. Compared to other DM&KD methods, those based on mathematical logic offer a direct and often intuitive approach for extracting easily int

  17. Network Traffic Anomalies Identification Based on Classification Methods

    Directory of Open Access Journals (Sweden)

    Donatas Račys

    2015-07-01

    Full Text Available A problem of network traffic anomaly detection in computer networks is analyzed. An overview of anomaly detection methods is given, and the advantages and disadvantages of the different methods are analyzed. A model for traffic anomaly detection was developed based on IBM SPSS Modeler and used to analyze SNMP data of a router. Investigation of the traffic anomalies was done using three classification methods and different sets of learning data. Based on the results of the investigation, it was determined that the C5.1 decision tree method has the highest accuracy and performance and can be successfully used for the identification of network traffic anomalies.

  18. Image mosaic method based on SIFT features of line segment.

    Science.gov (United States)

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle incidental scaling, rotation, changes in lighting conditions, and so on between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results of experiments based on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.

  19. A Precise-Mask-Based Method for Enhanced Image Inpainting

    Directory of Open Access Journals (Sweden)

    Wanxu Zhang

    2016-01-01

    Full Text Available The mask of the damaged region is the pretreatment step of image inpainting and plays a key role in the final result. However, state-of-the-art methods have attached significance to the inpainting model, while the mask of the damaged region is usually selected manually or by a conventional threshold-based method. Since the manual method is time-consuming and the threshold-based method does not achieve the same precision for different images, we herein report a new method for automatically constructing a precise mask by the joint filtering of guided filtering and L0 smoothing. It accurately locates the boundary of the damaged region in order to segment it effectively, and it thereby greatly improves the final result of image inpainting. The experimental results show that the proposed method is superior to state-of-the-art methods in the step of constructing the inpainting mask, especially for damaged regions with inconspicuous boundaries.

  20. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    user

    Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic Distortion. Recently, some methods based on artificial intelligence have been applied in order to improve… The effect is the reduction of…

  1. Peer Tutoring with QUICK Method vs. Task Based Method on Reading Comprehension Achievement

    Directory of Open Access Journals (Sweden)

    Sri Indrawati

    2017-07-01

    Full Text Available This study is a quasi-experimental research analyzing the reading comprehension achievement of eleventh graders of a senior high school in Surabaya. This experimental research compares the effects of peer tutoring with the QUICK method and of the task-based method in helping students increase their reading achievement. Besides increasing the students' reading achievement, the main purpose of this study is to offer variation in teachers' reading-teaching techniques. The study uses independent-samples and paired-samples t-tests to indicate the significant difference in students' reading comprehension achievement between peer tutoring with the QUICK method and the task-based method. Keywords: Peer tutoring with QUICK method, Task-based method, T-test, Reading achievement

  2. Ellipse-based shape description and retrieval method

    Institute of Scientific and Technical Information of China (English)

    李向阳; 潘云鹤

    2002-01-01

    Using a group of ellipses to approximate the shape contour, a new shape retrieval method is presented in this paper. In order to keep shape-based retrieval invariant to position, orientation and size, a shape normalization method is presented. From our research, any closed shape contour can be uniquely decomposed into a group of ellipses, and the original shape contour can be reconstructed using the decomposed ellipses. The ellipse-based shape description and similarity retrieval method is introduced in this paper. Based on each ellipse's contribution to the shape contour, the decomposed ellipses are divided into low-order ellipses and high-order ellipses. The low-order ellipses measure the macroscopic features of a shape contour, and the high-order ellipses measure the microscopic features. A two-phase shape matching method is given. Experimental tests show that our method has a better shape retrieval effect.

  3. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

    We present a residual-based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized only by residual-based artificial viscosity, without any least-squares, SUPG, or streamline diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.

  4. Comparison of Two Distance Based Alignment Method in Medical Imaging

    Science.gov (United States)

    2001-10-25

    … very helpful to register large datasets of contours or surfaces, commonly encountered in medical imaging. They do not require special ordering or… Comparison of Two Distance Based Alignment Methods in Medical Imaging, G. Bulan, C. Ozturk, Institute of Biomedical Engineering, Bogazici University.

  5. Memristor Crossbar-based Hardware Implementation of IDS Method

    OpenAIRE

    Merrikh-Bayat, Farnood; Bagheri-Shouraki, Saeed; Rohani, Ali

    2010-01-01

    Ink Drop Spread (IDS) is the engine of the Active Learning Method (ALM), which is a methodology of soft computing. IDS, as a pattern-based processing unit, extracts useful information from a system subjected to modeling. In spite of its excellent potential in solving problems such as classification and modeling, compared to other soft computing tools, finding a simple and fast hardware implementation is still a challenge. This paper describes a new hardware implementation of the IDS method based o...

  6. A New Nonlinear Compound Forecasting Method Based on ANN

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper the compound forecasting method is discussed; it is one of the hot topics in current prediction research. First, the compound forecasting method is introduced and various existing compound forecasting methods are discussed. Second, the Artificial Neural Network (ANN) is brought into compound forecasting research, and a nonlinear compound forecasting model based on an ANN is presented. Finally, in order to avoid irregular weights, a new method is presented which uses principal component analysis to increase the availability of compound forecasting information. Higher forecasting precision is achieved in practice.

  7. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    Mignon Park

    2012-09-01

    Full Text Available In this paper, a pixel-based background modeling method which uses nonparametric kernel density estimation is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize the model and then updating it at every frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods in order to verify its performance.
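
    A per-pixel sketch of the nonparametric test is given below; the fixed bandwidth, the fixed threshold and the blind random refresh are stand-in assumptions for the paper's adaptive learning-rate and threshold control.

        import numpy as np

        def kde_foreground(frame, history, sigma=10.0, thresh=1e-4):
            """Gaussian kernel density of the current grey value over the
            stored history (N past frames); low density flags foreground."""
            d = frame[None, :, :] - history            # history: (N, H, W)
            dens = np.exp(-0.5 * (d / sigma) ** 2).mean(axis=0)
            dens /= np.sqrt(2.0 * np.pi) * sigma
            return dens < thresh

        def refresh_history(history, frame, rate=0.05, rng=None):
            """Blind update: each stored slot is replaced with probability rate."""
            rng = rng or np.random.default_rng()
            history[rng.random(history.shape[0]) < rate] = frame
            return history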

  8. Competition assignment problem algorithm based on Hungarian method

    Institute of Scientific and Technical Information of China (English)

    KONG Chao; REN Yongtai; GE Huiling; DENG Hualing

    2007-01-01

    The traditional Hungarian method can only solve standard assignment problems; it cannot solve competition assignment problems. This article emphatically discusses the difference between standard assignment problems and competition assignment problems. Several competition assignment algorithms based on the Hungarian method, and their solutions, are studied.
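
    For the standard assignment problem the article starts from, SciPy ships a Hungarian-style solver; a quick illustration (the competition variant discussed in the article is not reproduced here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)              # optimal worker -> task pairs
print(list(zip(rows, cols)), cost[rows, cols].sum())  # minimum total cost is 5
```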

  9. A Channelization-Based DOA Estimation Method for Wideband Signals.

    Science.gov (United States)

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-07-04

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
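
    A compact sketch of the Channelization-ISM idea for a uniform linear array: run a narrowband subspace method (here MUSIC) on each sub-channel and average the estimates. All array parameters are hypothetical, and the sub-channel data are assumed already produced by the channelizer:

```python
import numpy as np

def music_doa(X, wavelength, d, grid, n_sources=1):
    """Narrowband MUSIC on one sub-channel. X: (n_sensors, n_snapshots)."""
    R = X @ X.conj().T / X.shape[1]
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, :X.shape[0] - n_sources]    # noise subspace
    n = np.arange(X.shape[0])
    spectrum = [1.0 / np.linalg.norm(
                    En.conj().T @ np.exp(-2j * np.pi * d * n * np.sin(th) / wavelength)) ** 2
                for th in grid]
    return grid[int(np.argmax(spectrum))]

def channelization_ism(subchannels, center_freqs, d, grid, c=3e8):
    """Arithmetic mean of the per-sub-channel DOA estimates."""
    doas = [music_doa(X, c / f, d, grid)
            for X, f in zip(subchannels, center_freqs)]
    return float(np.mean(doas))
```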

  10. Noise reduction method based on weighted manifold decomposition

    Institute of Scientific and Technical Information of China (English)

    Gan Jian-Chao; Xiao Xian-Ci

    2004-01-01

    A noise reduction method based on weighted manifold decomposition is proposed in this paper; it requires neither knowledge of the chaotic dynamics nor a choice of the number of eigenvalues. Simulation indicates that this method can increase the signal-to-noise ratio of noisy chaotic time series.

  11. A CT Image Segmentation Algorithm Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    QU Jing-yi; SHI Hao-shan

    2006-01-01

    Level set methods are robust and efficient numerical tools for resolving curve evolution in image segmentation. This paper proposes a new image segmentation algorithm based on the Mumford-Shah model. The method is applied to CT images, and the experimental results demonstrate its efficiency and accuracy.

  12. A Hybrid Positioning Method Based on Hypothesis Testing

    DEFF Research Database (Denmark)

    Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed

    2012-01-01

    ... maxima. We propose to first estimate the support region of the two peaks of the likelihood function using a set membership method, and then decide between the two regions using a rule based on the less reliable observations. Monte Carlo simulations show that the performance of the proposed method...

  14. Role-based Integration Method of Enterprise Information System

    Institute of Scientific and Technical Information of China (English)

    YU Ming-hui; FEI Qi; CHEN Xue-guang

    2002-01-01

    This paper first analyzes the current situation of enterprise information systems and methods of system integration. Then a role-based analysis method is proposed, which can help identify the keystone of information system construction and the direction of system integration. Finally, a case study on the integration of a material dispatching information system in a large-scale project is presented briefly. It shows that this new method is more effective than the others.

  15. Reliability-based design optimization with Cross-Entropy method

    OpenAIRE

    Ghidey, Hiruy

    2015-01-01

    Implementation of the Cross-entropy (CE) method to solve reliability-based design optimization (RBDO) problems was investigated. The emphasis of this implementation was to solve the reliability and optimization sub-problems independently within the RBDO problem; therefore, the main aim of this study was to evaluate the performance of the Cross-entropy method in terms of efficiency and accuracy in solving RBDO problems. A numerical approach was followed in which the implementatio...

  16. A New Video Coding Method Based on Improving Detail Regions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Moving Pictures Expert Group (MPEG) and H.263 standard coding methods are widely used in video compression. However, for conference telephone and videophone applications, the visual quality of detail regions such as the eyes and mouth does not satisfy viewers at the decoder. A new coding method based on improving detail regions is presented in this paper. Experimental results show that this method can improve the visual quality at the decoder.

  17. A Semantic Retrieval Method Based on the Fuzzy Reasoning

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper gives a semantic fuzzy retrieval method for multimedia objects, discusses the principle of the fuzzy semantic retrieval technique, presents a fuzzy reasoning mechanism based on the knowledge base, and designs the relevant reasoning algorithms. The research results are innovative.

  18. A method for selecting training samples based on camera response

    Science.gov (United States)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
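
    The Wiener estimation step mentioned above amounts to a linear map learned from the selected training set; a small sketch with hypothetical array shapes (the sphere-based sample selection itself is not reproduced):

```python
import numpy as np

def wiener_matrix(R_train, C_train, noise=1e-4):
    """R_train: (n, n_wavelengths) reflectances; C_train: (n, n_channels)
    camera responses. Returns W such that r_hat = W @ c."""
    Krc = R_train.T @ C_train                                # cross-covariance
    Kcc = C_train.T @ C_train + noise * np.eye(C_train.shape[1])
    return Krc @ np.linalg.inv(Kcc)

# usage: reconstruct a spectrum from a test camera response c_test
# r_hat = wiener_matrix(R_train, C_train) @ c_test
```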

  19. Method of designing developable surface based on engineering requirement

    Institute of Scientific and Technical Information of China (English)

    YANG Ji-xin; LIU Zhe; LIU Jian

    2006-01-01

    The paper applies the principle of the envelope of a one-parameter family of planes to design developable surfaces. Three methods of designing developable surfaces are presented: designing a developable surface based on one curve on it and its normal line, based on two curves on it, and based on one curve and one surface. They meet the requirements of engineering fields.

  20. An Efficient Method for Reliability-based Multidisciplinary Design Optimization

    Institute of Scientific and Technical Information of China (English)

    Fan Hui; Li Weiji

    2008-01-01

    Design for modern engineering systems is becoming multidisciplinary and must incorporate practical uncertainties; therefore, it is necessary to synthesize reliability analysis and multidisciplinary design optimization (MDO) techniques for the design of complex engineering systems. An advanced first order second moment method-based concurrent subspace optimization approach is proposed, based on a comparison and analysis of the existing multidisciplinary optimization techniques and reliability analysis methods. A canard configuration optimization for a three-surface transport shows that the proposed method is computationally efficient and practical, with the least modification to the current deterministic optimization process.

  1. A new earthquake location method based on the waveform inversion

    CERN Document Server

    Wu, Hao; Huang, Xueyuan; Yang, Dinghui

    2016-01-01

    In this paper, a new earthquake location method based on waveform inversion is proposed. It is well known that the waveform misfit function is very sensitive to the phase shift between the synthetic and the real waveform signals, so the convergence domain of conventional waveform-based earthquake location methods is very small. In the present study, by introducing and solving a simple sub-optimization problem, we greatly expand the convergence domain of the waveform-based earthquake location method. In a large number of numerical experiments, the new method expands the range of convergence by several tens of times. This allows us to locate the earthquake accurately even from relatively poor initial values.
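
    The abstract does not spell out the sub-optimization problem; one plausible stand-in is to optimize over a time shift before measuring the misfit, for example via cross-correlation (a hedged numpy sketch, not the authors' formulation):

```python
import numpy as np

def best_shift(synthetic, observed):
    """Time shift (in samples) that best aligns the two waveforms."""
    xc = np.correlate(observed, synthetic, mode="full")
    return int(np.argmax(xc)) - (len(synthetic) - 1)

def shift_tolerant_misfit(synthetic, observed):
    """L2 misfit after removing the optimal time shift."""
    s = best_shift(synthetic, observed)
    return float(np.sum((observed - np.roll(synthetic, s)) ** 2)), s
```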

  2. Patch nearfield acoustic holography based on the equivalent source method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    On the basis of nearfield acoustic holography (NAH) based on the equivalent source method (ESM), patch NAH based on the ESM is proposed. The method overcomes the shortcoming of conventional NAH that the hologram surface should be larger than the source surface. It does not need to discretize the whole source, and the measurement need not cover the whole source: the measurement may be performed over the region of interest, and the reconstruction can be done in that region directly. The method is flexible in applications, stable in computation, and very easy to implement, with good potential applications in engineering. The numerical simulations show the invalidity of the conventional NAH based on the ESM and prove the validity of the proposed method for reconstructing a partial source, and of the regularization for reducing the effect of errors in the pressure measured on the hologram surface.
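
    The ESM reconstruction described above reduces to a regularized linear inverse problem: solve for equivalent source strengths from the hologram pressures, then propagate them to the reconstruction surface. A hedged numpy sketch (geometry, wavenumber, and the Tikhonov parameter are illustrative):

```python
import numpy as np

def greens_matrix(field_pts, src_pts, k):
    """Free-field Green's function g(r) = exp(-1j*k*r) / (4*pi*r)."""
    r = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def esm_reconstruct(p_holo, holo_pts, src_pts, rec_pts, k, alpha=1e-3):
    """Tikhonov-regularized least squares for source strengths q,
    followed by propagation to the reconstruction points."""
    G = greens_matrix(holo_pts, src_pts, k)
    q = np.linalg.solve(G.conj().T @ G + alpha * np.eye(G.shape[1]),
                        G.conj().T @ p_holo)
    return greens_matrix(rec_pts, src_pts, k) @ q
```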

  3. Evaluation of read count based RNAseq analysis methods.

    Science.gov (United States)

    Guo, Yan; Li, Chung-I; Ye, Fei; Shyr, Yu

    2013-01-01

    RNAseq technology is replacing microarray technology as the tool of choice for gene expression profiling. While it provides much richer data than microarrays, analysis of RNAseq data has been much more challenging, and to date there has been no consensus on the best approach for conducting robust RNAseq analysis. In this study, we designed a thorough experiment to evaluate six read count-based RNAseq analysis methods (DESeq, DEGseq, edgeR, NBPSeq, TSPM and baySeq) using both real and simulated data. We found the six methods produce similar fold changes and reasonable overlap of differentially expressed genes based on p-values. However, all six methods suffer from over-sensitivity. Based on an evaluation of runtime using real data and of the area under the receiver operating characteristic curve (AUC-ROC) using simulated data, we found that edgeR achieves a better balance between speed and accuracy than the other methods.

  4. WLAN indoor location method based on artificial neural network

    Institute of Scientific and Technical Information of China (English)

    Zhou Mu; Sun Ying; Xu Yubin; Deng Zhian; Meng Weixiao

    2010-01-01

    A WLAN indoor location method based on an artificial neural network (ANN) is analyzed. A three-layer feed-forward ANN model offers the benefits of reducing the time cost of laying out an indoor location system, saving the storage cost of radio map establishment, and enhancing real-time capacity in the on-line phase. Based on an analysis of the SNR distributions of recorded beacon signal samples and a discussion of the multi-mode phenomenon, the one-map method is proposed to simplify ANN input values and increase location performance. Simulations and comparison with two other typical indoor location methods, K-nearest neighbor (KNN) and probability, verify the feasibility and effectiveness of the ANN-based indoor location method, with an average location error of 2.37 m and a location accuracy of 78.6% within 3 m.
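
    A toy fingerprinting sketch in the spirit of the three-layer feed-forward model: RSS vectors in, (x, y) coordinates out. The radio map here is synthetic, so the numbers mean nothing beyond illustrating the pipeline:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rss = rng.normal(-60, 10, size=(300, 5))      # fingerprints from 5 APs (dBm)
pos = rng.uniform(0, 30, size=(300, 2))       # surveyed (x, y) positions (m)

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(rss, pos)                             # offline phase: learn the radio map
estimate = net.predict(rss[:1])               # online phase: locate a new sample
```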

  5. A HMM-Based Method for Vocal Fold Pathology Diagnosis

    Directory of Open Access Journals (Sweden)

    Vahid Majidnezhad

    2012-11-01

    Acoustic analysis is a proper method for vocal fold pathology diagnosis: it can complement, and in some cases replace, the invasive methods based on direct vocal fold observation. There are different approaches to vocal fold pathology diagnosis. This paper presents a method based on hidden Markov models which classifies speech into two classes: normal and pathological. Two hidden Markov models are trained, one per class, and the trained models are then used to classify the dataset. The proposed method is able to classify the speech samples with an accuracy of 93.75%. The results of this algorithm provide insights that can help biologists and computer scientists design high-performance systems for the detection of vocal fold pathology.
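
    A sketch of the two-model likelihood comparison using the hmmlearn package; the acoustic features (e.g. MFCC frames) are assumed to be precomputed:

```python
import numpy as np
from hmmlearn import hmm

def train_class_hmm(sequences, n_states=3):
    """Fit one Gaussian HMM on all feature sequences of a class."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=0)
    model.fit(X, lengths)
    return model

def classify(features, normal_model, pathological_model):
    """The model with the higher log-likelihood wins."""
    return ("pathological"
            if pathological_model.score(features) > normal_model.score(features)
            else "normal")
```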

  6. Distance Based Method for Outlier Detection of Body Sensor Networks

    Directory of Open Access Journals (Sweden)

    Haibin Zhang

    2016-01-01

    We propose a distance-based method for outlier detection in body sensor networks. Firstly, we use Kernel Density Estimation (KDE) to calculate the probability of the distance to the k nearest neighbors for diagnosed data. If the probability is less than a threshold, and the distance of this data to its left and right neighbors is greater than a pre-defined value, the diagnosed data is declared an outlier. Further, we formalize a sliding window based method to improve the outlier detection performance. Finally, to estimate the KDE from training sensor readings that contain errors, we introduce a Hidden Markov Model (HMM) based method to estimate the most probable ground truth values, i.e., those with the maximum probability of producing the training data. Simulation results show that the proposed method possesses good detection accuracy with a low false alarm rate.
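
    A one-dimensional sketch of the two-part rule above: a KDE over k-NN distances supplies the probability test, and a neighbor-gap check supplies the second condition (threshold values are illustrative; the HMM ground-truth step is omitted):

```python
import numpy as np
from scipy.stats import gaussian_kde

def knn_distance(series, i, k=3):
    """Mean distance from reading i to its k nearest readings (1-D array)."""
    d = np.sort(np.abs(series - series[i]))
    return d[1:k + 1].mean()                  # skip the zero self-distance

def detect_outliers(train, test, k=3, p_thresh=0.05, gap=2.0):
    kde = gaussian_kde([knn_distance(train, i, k) for i in range(len(train))])
    flags = [False]                           # first reading has no left neighbor
    for i in range(1, len(test) - 1):
        tail_prob = kde.integrate_box_1d(knn_distance(test, i, k), np.inf)
        far = (abs(test[i] - test[i - 1]) > gap and
               abs(test[i] - test[i + 1]) > gap)
        flags.append(tail_prob < p_thresh and far)
    return flags + [False]                    # last reading has no right neighbor
```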

  7. Key Updating Methods for Combinatorial Design Based Key Management Schemes

    Directory of Open Access Journals (Sweden)

    Chonghuan Xu

    2014-01-01

    Wireless sensor networks (WSNs) have become one of the most promising network technologies for many useful applications. However, because of their limited resources, it is difficult but important to ensure the security of WSNs. Key management is a cornerstone on which to build secure WSNs, for it plays a fundamental role in confidentiality, authentication, and so on. Combinatorial design theory has been used to generate well-designed key rings for each sensor node in WSNs. A large number of combinatorial design based key management schemes have been proposed, but none of them take key updating into consideration. In this paper, we point out the essence of key updating for the unital design based key management scheme and propose two key updating methods; then, we conduct performance analysis on the two methods from three aspects; at last, we generalize the two methods to other combinatorial design based key management schemes and enhance the second method.

  9. An overview of modal-based damage identification methods

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, C.R.; Doebling, S.W. [Los Alamos National Lab., NM (United States). Engineering Analysis Group

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods, and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that do not depend on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms, including difficulties associated with their implementation and their fidelity. Past, current, and planned future applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  10. Integrated navigation method based on inertial navigation system and Lidar

    Science.gov (United States)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on an inertial navigation system (INS) and Lidar is proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the new method accounts for the influence of inertial measurement unit (IMU) scale factor and misalignment. First, the influence of IMU scale factor and misalignment on navigation accuracy is analyzed. Based on this analysis, the integrated system error model of INS and Lidar is established, with the IMU scale factor and misalignment error states included. Then the observability of the IMU error states is analyzed, and the integrated system is optimized according to the results. Finally, numerical simulation and a vehicle test validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the traditional integrated navigation method and the DR method, the proposed method yields higher navigation precision; the IMU scale factor and misalignment errors are effectively compensated.

  11. Wind Turbine Gearbox Fault Diagnosis Method Based on Riemannian Manifold

    Directory of Open Access Journals (Sweden)

    Shoubin Wang

    2014-01-01

    As multivariate time series problems widely exist in social production and life, fault diagnosis methods have provided valuable information in finance, hydrology, meteorology, seismology, video surveillance, medical science, and other fields. In order to find faults in time sequences quickly and efficiently, this paper presents a multivariate time series processing method based on the Riemannian manifold. The method is based on a sliding window and uses the covariance matrix as a descriptor of the time sequence. The Riemannian distance is used as the similarity measure, and a statistical process control diagram is applied to detect abnormality in the multivariate time series. Visualization of the covariance matrix distribution is used to detect abnormality in mechanical equipment, realizing fault diagnosis. With wind turbine gearbox faults as the experimental object, the fault diagnosis method is verified, and the results show that the method is reasonable and effective.
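
    A compact sketch of the descriptor-and-distance machinery: sliding-window covariance matrices compared with the affine-invariant Riemannian metric, plus a simple control-chart threshold (window sizes and threshold are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def riemannian_distance(A, B):
    """Affine-invariant distance between SPD matrices A and B."""
    A_isqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), "fro").real

def window_covariances(X, win=50, step=25):
    """X: (T, n_channels) series -> one covariance descriptor per window."""
    return [np.cov(X[s:s + win].T) for s in range(0, len(X) - win + 1, step)]

def flag_faults(X, C_ref, win=50, step=25, thresh=2.0):
    """Control-chart style: flag windows far from the healthy reference."""
    return [riemannian_distance(C, C_ref) > thresh
            for C in window_covariances(X, win, step)]
```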

  12. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    Science.gov (United States)

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has made the concepts obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods for qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability) and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it is not useful to treat evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and the prior conditions of each study is an important approach for researchers.

  13. NONLINEAR DATA RECONCILIATION METHOD BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In industrial process settings, principal component analysis (PCA) is a general method for data reconciliation. However, PCA is sometimes unsuited to nonlinear feature analysis and is limited in its application to nonlinear industrial processes. Kernel PCA (KPCA) is an extension of PCA that can be used for nonlinear feature analysis. A nonlinear data reconciliation method based on KPCA is proposed. The basic idea of this method is that the original data are first mapped to a high-dimensional feature space by a nonlinear function, and PCA is implemented in the feature space. Nonlinear feature analysis is then carried out and the data are reconstructed using the kernel. The data reconciliation method based on KPCA is applied to a ternary distillation column. Simulation results show that this method can filter the noise in measurements of a nonlinear process and that the reconciled data represent the true information of the nonlinear process.
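
    A runnable miniature of KPCA-based reconciliation using scikit-learn: map noisy measurements into feature space, retain a few components, and reconstruct via the learned pre-image map (the toy data merely stand in for process measurements):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 400)
clean = np.column_stack([np.sin(t), np.sin(t) ** 2, np.cos(t)])
noisy = clean + rng.normal(0, 0.1, clean.shape)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True)
reconciled = kpca.inverse_transform(kpca.fit_transform(noisy))

# the reconciled estimates should sit closer to the clean signal
print(np.mean((reconciled - clean) ** 2), np.mean((noisy - clean) ** 2))
```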

  14. NETWORK INTRUSION DETECTION METHOD BASED ON RS-MSVM

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun; Han Chongzhao; Zheng Qinghua; Zhang Junjie

    2006-01-01

    A new method called RS-MSVM (Rough Set and Multi-class Support Vector Machine) is proposed for network intrusion detection. This method is based on rough sets followed by an MSVM, for attribute reduction and classification respectively. The number of attributes of the network data used in this paper is reduced from 41 to 30 using rough set theory. The HVDM-RBF (Heterogeneous Value Difference Metric Radial Basis Function) kernel, based on the heterogeneous value difference metric of heterogeneous datasets, is constructed for the heterogeneous network data. HVDM-RBF and the one-against-one method are applied to build the MSVM. DARPA (Defense Advanced Research Projects Agency) intrusion detection evaluation data were used in the experiment. The testing results show that our method outperforms the other methods mentioned in this paper on six aspects: detection accuracy, number of support vectors, false positive rate, false negative rate, training time, and testing time.

  15. A Novel Method for Solving KdV Equation Based on Reproducing Kernel Hilbert Space Method

    Directory of Open Access Journals (Sweden)

    Mustafa Inc

    2013-01-01

    We propose a reproducing kernel method for solving the KdV equation with initial condition, based on the reproducing kernel theory. The exact solution is represented in the form of a series in the reproducing kernel Hilbert space. Some numerical examples have also been studied to demonstrate the accuracy of the present method. The results of the numerical examples show that the presented method is effective.

  16. A perceptual hashing method based on luminance features

    Science.gov (United States)

    Luo, Siqing

    2011-02-01

    With the rapid development of multimedia technology, content-based searching and image authentication have become strong requirements, and image hashing techniques have been proposed to meet them. In this paper, an RST (Rotation, Scaling, and Translation) resistant image hash algorithm is presented. In this method, the geometric distortions are extracted and adjusted by normalization, and the features of the image are generated from the high-rank moments of the luminance distribution. With the help of the efficient image representation capability of high-rank moments, the robustness and discrimination of the proposed method are improved. The experimental results show that the proposed method is more robust under rotation attack than some existing methods.
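
    A hedged sketch of a luminance-moment hash in the spirit of the abstract: normalize luminance (which discounts global contrast), compute a few high-rank moments, and quantize them into bits. The rank set and quantization range are arbitrary illustrative choices, not the paper's design:

```python
import numpy as np

def moment_hash(img, ranks=(3, 4, 5, 6), bits_per=8, clip=10.0):
    """Hash from high-rank moments of the normalized luminance distribution."""
    lum = img.astype(float).ravel()
    lum = (lum - lum.mean()) / (lum.std() + 1e-9)    # contrast normalization
    bits = []
    for r in ranks:
        m = np.clip(np.mean(lum ** r), -clip, clip)  # r-th standardized moment
        q = int((m + clip) / (2 * clip) * (2 ** bits_per - 1))
        bits.append(format(q, f"0{bits_per}b"))
    return "".join(bits)

def hamming(h1, h2):
    """Similarity test: a small Hamming distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))
```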

  17. Color Restoration Method Based on Spectral Information Using Normalized Cut

    Institute of Scientific and Technical Information of China (English)

    Tetsuro Morimoto; Tohru Mihashi; Katsushi Ikeuchi

    2008-01-01

    This paper proposes a novel method for color restoration that can effectively apply accurate color, based on spectral information, to an image segmented with the normalized cut technique. Using the proposed method, we can obtain a digital still camera image and spectral information in different environments. Also, unlike other methods, it is not necessary to estimate reflectance spectra using a spectral database. The synthesized images are accurate and high resolution, and the proposed method works effectively for making digital archive content. Some experimental results are demonstrated in this paper.

  18. FUZZY IDENTIFICATION METHOD BASED ON A NEW OBJECTIVE FUNCTION

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A method of fuzzy identification based on a new objective function is proposed. The method deals with the issue that, in the objective functions proposed for fuzzy clustering, the input variables of a system have an effect on the input space while the output variables do not. The method solves the problems of structure identification and parameter estimation simultaneously, thus making the fuzzy model optimal. A simulation example demonstrates that the method can identify nonlinear systems and markedly improve modeling accuracy.

  19. A reservoir skeleton-based multiple point geostatistics method

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Traditional stochastic reservoir modeling, including object-based and pixel-based methods, cannot reproduce continuous and curvilinear reservoir objects. This paper first surveys the various stochastic modeling methods and extracts their merits, then proposes skeleton-based multiple point geostatistics (SMPS) for fluvial reservoirs. The core idea is to use the skeletons of reservoir objects to restrict the selection of data patterns. SMPS consists of two steps. First, the channel skeleton (namely, the channel centerline) is predicted using the approach of object-based modeling; a new search window method is proposed to predict the skeleton. Then the distributions of reservoir objects are forecast using multiple point geostatistics under the restriction of the channel skeleton. With this restriction, the selection of data events is more reasonable and the realization is more realistic. Checks against a conceptual model and a real reservoir show that SMPS is much better than Sisim (sequential indicator simulation), Snesim (single normal equation simulation), and Simpat (simulation with patterns) for building fluvial reservoir models. This new method will contribute both to the theoretical research of stochastic modeling and to the construction of highly precise reservoir geological models in oilfield development.

  20. NOVEL RADAR SIGNAL SORTING METHOD BASED ON GEOMETRIC COVERING

    Institute of Scientific and Technical Information of China (English)

    万建; 国强; 宋文明

    2013-01-01

    With the increasing complexity of the electromagnetic environment and the continuous appearance of advanced-system radars, the signals received by radar reconnaissance receivers become ever more intensive and complex. Therefore, traditional radar sorting methods based on neural network algorithms and support vector machines (SVM) cannot process them effectively. To solve this problem, a novel radar signal sorting method based on cloud model theory and a geometric covering algorithm is proposed. By applying the geometric covering algorithm to divide input signals into different covering domains based on their distribution characteristics, the method overcomes a typical problem of traditional sorting algorithms, which easily fall into local extrema because they describe input signals with complex nonlinear equations. The method uses the cloud model to describe the membership degree between the signals to be sorted and their covering domains, thus avoiding the disadvantage that traditional sorting methods based on hard clustering cannot deinterleave signal samples with overlapped parameters. Experimental results show that the presented method can effectively sort advanced-system radar signals with overlapped parameters in a complex electromagnetic environment.

  1. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower, and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).

  3. A hybrid method for pancreas extraction from CT image based on level set methods.

    Science.gov (United States)

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the sensitivity of the level set method to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction.

  4. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks, and many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  5. Testability integrated evaluation method based on testability virtual test data

    Institute of Scientific and Technical Information of China (English)

    Liu Guanjun; Zhao Chenxu; Qiu Jing; Zhang Yong

    2014-01-01

    Testability virtual test is a new test method for testability verification, with advantages such as low cost, few restrictions, and a large sample of test data. It can be used to make up for the deficiencies of the testability physical test. In order to take advantage of testability virtual test data effectively and to improve the accuracy of testability evaluation, a testability integrated evaluation method based on testability virtual test data is proposed in this paper. Considering the characteristics of testability virtual test data, a credibility analysis method for testability virtual test data is studied first. Then an integrated calculation method is proposed, fusing the testability virtual and physical test data. Finally, a certain helicopter heading and attitude system is presented to demonstrate the proposed method. The results show that the testability integrated evaluation method is feasible and effective.

  6. A Method of Image Symmetry Detection Based on Phase Information

    Institute of Scientific and Technical Information of China (English)

    WU Jun; YANG Zhaoxuan; FENG Dengchao

    2005-01-01

    Traditional methods for detecting symmetry in images suffer greatly from image contrast and noise, and they all require some preprocessing. This paper presents a new method of image symmetry detection that uses phase information computed with log-Gabor wavelets. Phase information is stable and significant, and symmetric points produce local phase patterns that are easy to recognize and confirm. The phase method does not require any preprocessing, and its result is invariant to contrast, rotation, and illumination conditions. The method can detect mirror symmetry, rotational symmetry, and curve symmetry at the same time. Experimental results show that, compared with the pivotal element algorithm based on intensity information, the phase method is more accurate and robust.

  7. PPA BASED PREDICTION-CORRECTION METHODS FOR MONOTONE VARIATIONAL INEQUALITIES

    Institute of Scientific and Technical Information of China (English)

    He Bingsheng; Jiang Jianlin; Qian Maijian; Xu Ya

    2005-01-01

    In this paper we study proximal point algorithm (PPA) based prediction-correction (PC) methods for monotone variational inequalities. Each iteration of these methods consists of a prediction and a correction. The predictors are produced by inexact PPA steps, and the new iterates are then updated by a correction using the PPA formula. We present two profit functions which serve two purposes. First, we show that the profit functions are tight lower bounds on the improvement obtained in each iteration; based on this conclusion we obtain the convergence inexactness restrictions for the prediction step. Second, we show that the profit functions depend quadratically on the step lengths, so the optimal step lengths can be obtained in the correction step. In the last part of the paper we compare the strengths of the different methods based on their inexactness restrictions.

  8. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  9. Three Methods for Occupation Coding Based on Statistical Learning

    Directory of Open Access Journals (Sweden)

    Gweon Hyukjun

    2017-03-01

    Occupation coding, an important task in official statistics, refers to coding a respondent's text answer into one of many hundreds of occupation codes. To date, occupation coding is still at least partially conducted manually, at great expense. We propose three methods for automatic coding: combining separate models for the detailed occupation codes and for aggregate occupation codes, a hybrid method that combines a duplicate-based approach with a statistical learning algorithm, and a modified nearest neighbor approach. Using data from the German General Social Survey (ALLBUS), we show that the proposed methods improve on both the coding accuracy of the underlying statistical learning algorithm and the coding accuracy of duplicates where duplicates exist. Further, we find that defining duplicates based on n-gram variables (a concept from text mining) is preferable to defining them based on exact string matches.
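
    A sketch of the hybrid idea: answer from a duplicate lookup when a duplicate exists, otherwise fall back to a statistical learner. Character n-grams echo the paper's n-gram finding, though the model below is a generic stand-in, not the authors' algorithm:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def build_hybrid_coder(train_texts, train_codes):
    lookup = {}
    for text, code in zip(train_texts, train_codes):
        lookup.setdefault(text.lower().strip(), code)   # duplicate table
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_texts), train_codes)
    def code(text):
        key = text.lower().strip()
        if key in lookup:                               # duplicate-based branch
            return lookup[key]
        return clf.predict(vec.transform([text]))[0]    # statistical branch
    return code
```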

  10. Robust Speech Recognition Method Based on Discriminative Environment Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    HAN Jiqing; GAO Wen

    2001-01-01

    For robust speech recognition, it is an effective approach to learn the influence of environmental parameters, such as additive noise and channel distortions, from training data. Most previous methods are based on the maximum likelihood estimation criterion; however, these methods do not lead to a minimum error rate result. In this paper, a novel discriminative learning method for environmental parameters, based on the Minimum Classification Error (MCE) criterion, is proposed. In the method, a simple classifier and the Generalized Probabilistic Descent (GPD) algorithm are adopted to iteratively learn the environmental parameters. The clean speech features are then estimated from the noisy speech features using the estimated environmental parameters, and these estimates are fed to the back-end HMM classifier. Experiments show that a best error rate reduction of 32.1% is obtained, tested on a task of 18 isolated confusable Korean words, relative to a conventional HMM system.

  11. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle the scaling, rotation, and lighting changes that arise between two images in the panoramic image mosaic process. This method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.
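
    The paper matches SIFT-described directed line segments; a widely used point-feature variant of the same pipeline (detect, describe, ratio-test match, RANSAC homography, warp) can be sketched with OpenCV. The file names are hypothetical inputs:

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")                 # hypothetical input pair
img2 = cv2.imread("right.jpg")

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
k2, d2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

matches = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
           if m.distance < 0.75 * n.distance]            # Lowe ratio test

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # prune wrong pairs

h, w = img2.shape[:2]
mosaic = cv2.warpPerspective(img1, H, (2 * w, h))        # warp and blend
mosaic[:h, :w] = img2
```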

  12. Analysis of Dynamic Modeling Method Based on Boundary Element

    Directory of Open Access Journals (Sweden)

    Xu-Sheng Gan

    2013-07-01

    The aim of this study was to develop an improved dynamic modeling method based on the Boundary Element Method (BEM). The dynamic model was composed of elements such as the beam element, plate element, joint element, lumped mass, and spring element. An improved dynamic model of a machine structure was established, based mainly on a plate-beam element system. The dynamic characteristics of the machine structure were analyzed, and the comparison of computational and experimental results showed that the modeling method was effective. The analyses indicate that the introduced method provides a good way to analyze the dynamic characteristics of a machine structure efficiently.

  13. Simple noise-reduction method based on nonlinear forecasting

    Science.gov (United States)

    Tan, James P. L.

    2017-03-01

    Nonparametric detrending or noise reduction methods are often employed to separate trends from noisy time series when no satisfactory models exist to fit the data. However, conventional noise reduction methods depend on subjective choices of smoothing parameters. Here we present a simple multivariate noise reduction method based on available nonlinear forecasting techniques. These are in turn based on state-space reconstruction for which a strong theoretical justification exists for their use in nonparametric forecasting. The noise reduction method presented here is conceptually similar to Schreiber's noise reduction method using state-space reconstruction. However, we show that Schreiber's method has a minor flaw that can be overcome with forecasting. Furthermore, our method contains a simple but nontrivial extension to multivariate time series. We apply the method to multivariate time series generated from the Van der Pol oscillator, the Lorenz equations, the Hindmarsh-Rose model of neuronal spiking activity, and to two other univariate real-world data sets. It is demonstrated that noise reduction heuristics can be objectively optimized with in-sample forecasting errors that correlate well with actual noise reduction errors.

  14. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representations. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which a local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using the Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel, efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in the Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel based metrics, and achieves linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, and the PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  15. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can yield an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
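
    The numerical core of AHP is short enough to show outright: take the principal eigenvector of a pairwise comparison matrix as the priority weights and check consistency. The matrix below is a made-up example on the Saaty 1-9 scale, not the case study's data:

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # hypothetical pairwise judgments
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                         # priority vector (criterion weights)

n = A.shape[0]
CI = (vals.real[k] - n) / (n - 1)    # consistency index
CR = CI / 0.58                       # random index RI = 0.58 for n = 3
print(w, CR)                         # judgments acceptable if CR < 0.1
```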

  16. Propagator-based methods for recursive subspace model identification

    OpenAIRE

    Mercère, Guillaume; Bako, Laurent; Lecoeuche, Stéphane

    2008-01-01

    The problem of the online identification of multi-input multi-output (MIMO) state-space models in the framework of discrete-time subspace methods is considered in this paper. Several algorithms, based on a recursive formulation of the MIMO output error state-space (MOESP) identification class, are developed. The main goal of the proposed methods is to circumvent the huge complexity of the eigenvalue or singular value decomposition techniques used by the offline algorit...

  17. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    OpenAIRE

    Changyong Cao; Qing-Hua Qin

    2015-01-01

    An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (an intra-element field and an auxiliary frame field) are employed. The formulations for...

  18. A lattice Boltzmann method based on generalized polynomials

    CERN Document Server

    Coelho, Rodrigo C V; Doria, Mauro M

    2015-01-01

    We propose a lattice Boltzmann method based on the expansion of the equilibrium distribution function in powers of generalized orthonormal polynomials which are weighted by the equilibrium distribution function itself. The D-dimensional Euclidean space Hermite polynomials correspond to the particular weight of a Gaussian function. The proposed polynomials give a general method to obtain an expansion of the equilibrium distribution function in powers of the ratio between the displacement velocity and the local scale velocity of the fluid.
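
    For the Gaussian-weight special case the abstract mentions, the classical Hermite form of the expansion can be written out (a standard-reference sketch, not the paper's generalized construction):

```latex
f^{\mathrm{eq}}(\boldsymbol{\xi})
  = \omega(\boldsymbol{\xi}) \sum_{n=0}^{N} \frac{1}{n!}\,
    a^{(n)}_{i_1 \cdots i_n}\, H^{(n)}_{i_1 \cdots i_n}(\boldsymbol{\xi}),
\qquad
a^{(n)}_{i_1 \cdots i_n}
  = \int f^{\mathrm{eq}}(\boldsymbol{\xi})\,
    H^{(n)}_{i_1 \cdots i_n}(\boldsymbol{\xi})\, \mathrm{d}^{D}\xi,
\qquad
\omega(\boldsymbol{\xi}) = \frac{e^{-\xi^{2}/2}}{(2\pi)^{D/2}}.
```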

  19. A robust tolerance design method based on process capability

    Institute of Scientific and Technical Information of China (English)

    曹衍龙; 杨将新; 吴昭同; 吴立群

    2004-01-01

    This paper presents a method for robust tolerance design in terms of Process Capability Indices (PCI). The component tolerance and the suitable manufacturing processes can be selected based on the real manufacturing context. The robustness of design feasibility under the effect of uncertainties is also discussed. A comparison between the results obtained by the proposed model and other methods indicates that robust and reliable tolerances can be obtained.

  1. Progress of DNA-based Methods for Species Identification

    Institute of Scientific and Technical Information of China (English)

    HU Zhen; ZHANG Su-hua; WANG Zheng; BIAN Ying-nan; LI Cheng-tao

    2015-01-01

    Species identification of biological samples is widely used in fields such as forensic science and the food industry, and a variety of accurate and reliable methods have been developed in recent years. The current review presents common target genes and screening criteria suitable for species identification, and describes various DNA-based molecular biology methods for species identification. Additionally, it discusses the future development of species identification combined with real-time PCR and sequencing technologies.

  2. New de-interlacing method based on adaptive weight

    Institute of Scientific and Technical Information of China (English)

    赵建伟; 古雪丰; 王朋; 刘重庆

    2004-01-01

    De-interlacing is very important when converting interlaced pictures to progressive pictures in format conversion; multi-format digital broadcasting and progressive displays require the de-interlacing technique. An adaptive weight de-interlacing method is proposed, which efficiently combines a motion compensation technique with a directional spatio-temporal filter. Experimental results indicate that the method can keep edge continuity and sharpness effectively, reduce the artifacts in motion areas, and show better visual performance when the estimated motion vectors are inaccurate.

  3. XML-based product information processing method for product design

    Science.gov (United States)

    Zhang, Zhen Yu

    2012-01-01

    Design knowledge for modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially an innovation in knowledge and information processing. Based on an analysis of the role of mechatronic product design knowledge and of information management features, a unified XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based models are proposed for expressing product function elements, product structure elements, and the mapping relationships between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is clearly helpful for knowledge-based design systems and product innovation.

  4. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    DEFF Research Database (Denmark)

    Bohlin, Jon; Snipen, Lars; Cloeckaert, Axel

    2010-01-01

    , genomic codon and amino acid frequencies based comparisons) and proteomes (all-against-all BLAST protein comparisons and pan-genomic analyses). RESULTS: We found that the oligonucleotide based methods gave different results compared to that of the proteome based methods. Differences were also found...... than proteome comparisons between species in genus Brucella and genus Ochrobactrum. Pan-genomic analyses indicated that uptake of DNA from outside genus Brucella appears to be limited. CONCLUSIONS: While both the proteome based methods and the Markov chain based genomic signatures were able to reflect...

  5. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    Full Text Available The purpose of this study is to provide an IDEF method-based integrated framework for a business process simulation model to reduce the model development time by increasing the communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both requirement collection and experimentation phases during a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve the simulation project processes by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could be easily applied to other analytical model generation by separating the logic from the data.

  6. Potential Energy Surfaces Using Algebraic Methods Based on Unitary Groups

    Directory of Open Access Journals (Sweden)

    Renato Lemus

    2011-01-01

    Full Text Available This contribution reviews the recent advances in estimating potential energy surfaces through algebraic methods based on the unitary groups used to describe molecular vibrational degrees of freedom. The basic idea is to introduce the unitary group approach in the context of the traditional approach, where the Hamiltonian is expanded in terms of coordinates and momenta. Several representative molecular systems are presented that illustrate both the different algebraic approaches and the usual problems encountered in vibrational descriptions in terms of internal coordinates. Methods based on coherent states are also discussed.

  7. Research of Stamp Forming Simulation Based on Finite Element Method

    Institute of Scientific and Technical Information of China (English)

    SU Xaio-ping; XU Lian

    2008-01-01

    We point out that the finite element method offers a great functional improvement for analyzing the stamp forming process of an automobile panel. Using finite element theory and the simulation method of sheet stamping forming, the element model of sheet forming is built with the software HyperMesh, and the product's sheet forming process is simulated and analyzed with the software Dynaform. A series of simulation results are obtained. The simulation results form the theoretical basis for the product's die design and are useful for selecting process parameters.

  8. CONSTRUCTION METHOD OF KNOWLEDGE MAP BASED ON DESIGN PROCESS

    Institute of Scientific and Technical Information of China (English)

    SU Hai; JIANG Zuhua

    2007-01-01

    Due to the increasing amount and complexity of knowledge in product design, the knowledge map based on the design process is presented as a tool to reuse product design process knowledge and promote product design knowledge sharing. The relationship between design task flow and knowledge flow is discussed; a knowledge organizing method based on design task decomposition and a visualization method to support knowledge retrieval and sharing in product design are proposed. A knowledge map system to manage the knowledge in the product design process is built with Visual C++ and SVG. Finally, a brief case study is provided to illustrate the construction and application of the knowledge map in fuel pump design.

  9. A Robust Digital Watermark Extracting Method Based on Neural Network

    Institute of Scientific and Technical Information of China (English)

    GUO Lihua; YANG Shutang; LI Jianhua

    2003-01-01

    Since watermark removal software, such as StirMark, has succeeded in washing watermarks away for most of the known watermarking systems, it is necessary to improve the robustness of watermarking systems. A watermark extracting method based on the error back-propagation (BP) neural network is presented in this paper, which can efficiently improve the robustness of watermarking systems. Experiments show that even if the watermarking systems are attacked by the StirMark software, the extracting method based on the neural network can still efficiently extract the whole watermark information.

  10. Method of infrared image enhancement based on histogram

    Institute of Scientific and Technical Information of China (English)

    WANG Liang; YAN Jie

    2011-01-01

    Aiming at the problem of infrared image enhancement, a new method is given based on the histogram. Using the gray characteristics of the target, the upper-bound threshold is selected adaptively and the histogram is processed by the threshold. After choosing the gray transform function based on the gray level distribution of the image, the gray transformation is done during histogram equalization. Finally, the enhanced image is obtained. Compared with histogram equalization (HE), histogram double equalization (HDE) and plateau histogram equalization (PE), the simulation results demonstrate that the image enhancement effect of this method has obvious superiority. At the same time, its operation speed is fast and its real-time ability is excellent.
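
    As an editorial illustration of threshold-driven histogram processing of this kind, the sketch below (not the authors' algorithm) clips histogram bins at an adaptively chosen plateau before building the equalization mapping; the adaptive rule and all parameters are assumptions.

        import numpy as np

        def plateau_histogram_equalization(img, plateau=None):
            # Clip histogram bins at a plateau so large uniform backgrounds
            # do not dominate the gray-level transform, then equalize.
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            if plateau is None:
                plateau = hist[hist > 0].mean()   # simple adaptive threshold
            cdf = np.cumsum(np.minimum(hist, plateau))
            lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
            return lut[img]

        # toy usage: a low-contrast 8-bit "infrared" frame
        frame = (np.random.rand(128, 128) * 60 + 90).astype(np.uint8)
        enhanced = plateau_histogram_equalization(frame)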

  11. Designing fuzzy inference system based on improved gradient descent method

    Institute of Scientific and Technical Information of China (English)

    Zhang Liquan; Shao Cheng

    2006-01-01

    The distribution of sampling data influences the completeness of the rule base, so that extrapolating missing rules is very difficult. Based on data mining, a self-learning method is developed for identifying the fuzzy model and extrapolating missing rules, by means of a confidence measure and the improved gradient descent method. The proposed approach can not only identify the fuzzy model, update its parameters and determine optimal output fuzzy sets simultaneously, but also resolve the uncontrollable problem caused by the regions that the data do not cover. Simulation results on the classical truck backer-upper control problem verify the effectiveness and accuracy of the proposed approach.

  12. How to Reach Evidence-Based Usability Evaluation Methods.

    Science.gov (United States)

    Marcilly, Romaric; Peute, Linda

    2017-01-01

    This paper discusses how and why to build evidence-based knowledge on usability evaluation methods. At each step of building evidence, requisites and difficulties to achieve it are highlighted. Specifically, the paper presents how usability evaluation studies should be designed to allow capitalizing evidence. Reciprocally, it presents how evidence-based usability knowledge will help improve usability practice. Finally, it underlines that evaluation and evidence participate in a virtuous circle that will help improve scientific knowledge and evaluation practice.

  13. Improved method for pulse sorting based on PRI transform

    Science.gov (United States)

    Ren, Chunhui; Cao, Junqing; Fu, Yusheng; Barner, Kenneth E.

    2014-06-01

    To solve the problem of pulse sorting in a complex electromagnetic environment, we propose an improved method for pulse sorting based on in-depth analysis of the PRI transform algorithm's principle and its advantages and disadvantages. The method builds on the traditional PRI transform algorithm, using spectral analysis of the PRI transform spectrum to estimate the PRI centre value of a jittered signal. Simulation results indicate that the improved sorting method overcomes the shortcoming of the traditional PRI transform algorithm, which cannot effectively sort jittered pulse sequences, while retaining its simplicity and accuracy.
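
    For orientation, the following minimal Python sketch implements the basic PRI transform that the paper builds on: a complex phase factor is accumulated for every pulse pair, so true PRIs add coherently while subharmonics cancel, and the PRI centre is read off the peak bin. The binning, jitter model and parameters are illustrative; the paper's spectral-analysis refinement for jittered signals is not reproduced.

        import numpy as np

        def pri_transform(toas, bins):
            # Accumulate exp(2*pi*j*t_n / tau) per pulse pair into the bin of
            # tau = t_n - t_m; true PRIs add coherently, subharmonics cancel.
            spectrum = np.zeros(len(bins) - 1, dtype=complex)
            for m in range(len(toas)):
                for n in range(m + 1, len(toas)):
                    tau = toas[n] - toas[m]
                    b = np.searchsorted(bins, tau) - 1
                    if 0 <= b < len(spectrum):
                        spectrum[b] += np.exp(2j * np.pi * toas[n] / tau)
            return np.abs(spectrum)

        # jittered pulse train, PRI ~ 100 us with +/-2.5% jitter
        toas = np.cumsum(100e-6 + 5e-6 * (np.random.rand(200) - 0.5))
        bins = np.linspace(50e-6, 200e-6, 301)
        mag = pri_transform(toas, bins)
        pri_centre = 0.5 * (bins[:-1] + bins[1:])[np.argmax(mag)]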

  14. Optimal Route Selection Method Based on Vague Sets

    Institute of Scientific and Technical Information of China (English)

    GUO Rui; DU Li min; WANG Chun

    2015-01-01

    Optimal route selection is an important function of vehicle traffic flow guidance systems. Its core is to determine the index weights for measuring route merits and the evaluation method for selecting routes. In this paper, a subjective weighting method which relies on driver preference is used to determine the weights, and the paper proposes a multi-criteria weighted decision method based on vague sets for selecting the optimal route. Examples show that using vague sets to describe route index values can provide more decision-making information for route selection.

  15. An online credit evaluation method based on AHP and SPA

    Science.gov (United States)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then derived from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.

  16. A method for density estimation based on expectation identities

    Science.gov (United States)

    Peralta, Joaquín; Loyola, Claudia; Loguercio, Humberto; Davis, Sergio

    2017-06-01

    We present a simple and direct method for non-parametric estimation of a one-dimensional probability density, based on the application of the recent conjugate variables theorem. The method expands the logarithm of the probability density ln P(x|I) in terms of a complete basis and numerically solves for the coefficients of the expansion using a linear system of equations. No Monte Carlo sampling is needed. We present preliminary results that show the practical usefulness of the method for modeling statistical data.

  17. An Improved Minimum Distance Method Based on Artificial Neural Networks

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    MDM (minimum distance method) is a very popular algorithm in state recognition, but it has a presupposition: the within-class distance must be sufficiently smaller than the between-class distance. When this presupposition is not satisfied, the method is no longer valid. In order to overcome the shortcomings of MDM, an improved minimum distance method (IMDM) based on ANN (artificial neural networks) is presented. The simulation results demonstrate that IMDM has two advantages, that is, the rate of recognition is faster and the accuracy of recognition is higher compared with MDM.
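
    A minimal sketch of the baseline MDM classifier (the method being improved, not the ANN-based IMDM itself) may help fix ideas; the class means and test point are toy values.

        import numpy as np

        def mdm_classify(x, class_means):
            # Assign x to the class whose mean is nearest (Euclidean distance);
            # this fails when within-class scatter is not small relative to the
            # distance between class means, which is what IMDM addresses.
            return int(np.argmin([np.linalg.norm(x - m) for m in class_means]))

        means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
        print(mdm_classify(np.array([0.4, -0.2]), means))   # -> 0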

  18. A Method of Attribute Reduction Based on Rough Set

    Institute of Scientific and Technical Information of China (English)

    LI Chang-biao; SONG Jian-ping

    2005-01-01

    Logging attribute optimization is an important task in well-logging interpretation. A method of attribute reduction based on rough set is presented. Firstly, the core information of the sample is determined by a general reduction method. Then, the significance of each dispensable attribute in the reduction table is calculated. Finally, the minimum relative reduction set is achieved. The typical calculation and quantitative computation of reservoir parameters in oil logging show that the method of attribute reduction is greatly effective and feasible in logging interpretation.

  19. A Design Method of Business Application Framework Based on Software Patterns

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper discusses the design and implementation method of Business Application Framework based on software patterns, and then presents the MVC pattern of the architecture and the method of dynamical update promulgation for Business Application Framework. We discuss the adaptation of Abstract Factory for the kernel functionality of Business Application Framework, such as data creation, manipulation, composition, etc. It also presents the class model and class structure of the Abstract Factory pattern. Finally, we briefly discuss the update, modification, and reconstruction method of Business Application Framework.

  20. A Clustering Method Based on the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Edwin Aldana-Bobadilla

    2015-01-01

    Full Text Available Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is a subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and to a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method's effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.

  1. Comparative study, based on metamodels, of methods for controlling performance

    Directory of Open Access Journals (Sweden)

    Aitouche Samia

    2012-05-01

    Full Text Available The continuing evolution of technology and human behavior puts the company in an uncertain and evolving environment. The company must be reactive and even proactive; therefore, controlling performance becomes increasingly difficult. Choosing the best method of ensuring control, given the management policy of the company and its strategy, is also a decision problem. The aim of this paper is a comparative study of three methods, the Balanced Scorecard, GIMSI and SKANDIA's NAVIGATOR, in order to choose the best method for ensuring control that follows the policy of the company while maintaining its durability. Our work is divided into three parts. Firstly, we propose original structural and kinetic metamodels for the three methods that allow an overall view of each method. Secondly, based on the three metamodels, we draw a generic comparison to analyze the completeness of each method. Thirdly, we perform a restrictive comparison based on a restricted set of criteria related to the same aspect, for example organizational learning, one of the building blocks of knowledge management, moving toward a proactive organization in a disturbed and uncertain environment with urgent needs. We note that the three methods were applied in our previous works [1][23].

  2. A Localization Method for Multistatic SAR Based on Convex Optimization.

    Directory of Open Access Journals (Sweden)

    Xuqi Zhong

    Full Text Available In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments.

  3. A Localization Method for Multistatic SAR Based on Convex Optimization.

    Science.gov (United States)

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments.
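
    The following Python sketch illustrates the core construction under simplifying assumptions: each T/R pair contributes a model function peaking on its iso-range ellipse, the per-pair functions are summed, and plain gradient ascent with a numerical gradient climbs to the target position. The Gaussian shape, geometry and step size are assumptions, and the paper's principal-component refinement is omitted.

        import numpy as np

        def target_function(p, tx, rx, brs, sigma=5.0):
            # Sum of per-pair model functions, each peaking on the iso-range
            # ellipse where |p - T_i| + |p - R_i| equals the measured BRS.
            val = 0.0
            for t, r, s in zip(tx, rx, brs):
                rng = np.linalg.norm(p - t) + np.linalg.norm(p - r)
                val += np.exp(-(rng - s) ** 2 / (2 * sigma ** 2))
            return val

        def locate(tx, rx, brs, p0, step=2.0, iters=400, h=1e-3):
            # Plain gradient ascent with a central-difference gradient.
            p = np.asarray(p0, float)
            for _ in range(iters):
                g = np.zeros(2)
                for k in range(2):
                    dp = np.zeros(2); dp[k] = h
                    g[k] = (target_function(p + dp, tx, rx, brs)
                            - target_function(p - dp, tx, rx, brs)) / (2 * h)
                p += step * g
            return p

        tx = [np.array([0.0, 0.0]), np.array([100.0, 0.0]), np.array([0.0, 100.0])]
        rx = [np.array([50.0, 0.0])] * 3
        true = np.array([40.0, 60.0])
        brs = [np.linalg.norm(true - t) + np.linalg.norm(true - r)
               for t, r in zip(tx, rx)]
        print(locate(tx, rx, brs, p0=[45.0, 55.0]))   # approaches (40, 60)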

  4. Study on UPF Harmonic Current Detection Method Based on DSP

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, H J [Northwestern Polytechnical University, Xi'an 710072 (China); Pang, Y F [Xi'an University of Technology, Xi'an 710048 (China); Qiu, Z M [Xi'an University of Technology, Xi'an 710048 (China); Chen, M [Northwestern Polytechnical University, Xi'an 710072 (China)

    2006-10-15

    A unity power factor (UPF) harmonic current detection method applied to active power filters (APF) is presented in this paper. The intention of this method is to make the nonlinear loads and the active power filter in parallel behave as an equivalent resistance. Thus, after compensation, the source current is sinusoidal and has the same shape as the source voltage. Meanwhile, there is no harmonic in the source current, and the power factor becomes one. The mathematical model of the proposed method and the optimal design of the equivalent low-pass filter used in the measurement are presented. Finally, the proposed detection method is applied to a shunt active power filter experimental prototype based on the DSP TMS320F2812. Simulation and experimental results indicate that the method is simple and easy to implement, and can obtain an exact real-time calculation of the harmonic current.

  5. [Segmentation Method for Liver Organ Based on Image Sequence Context].

    Science.gov (United States)

    Zhang, Meiyun; Fang, Bin; Wang, Yi; Zhong, Nanchang

    2015-10-01

    In view of the problems of more artificial interventions and segmentation defects in existing two-dimensional segmentation methods and abnormal liver segmentation errors in three-dimensional segmentation methods, this paper presents a semi-automatic liver organ segmentation method based on the image sequence context. The method takes advantage of the existing similarity between the image sequence contexts of the prior knowledge of liver organs, and combines region growing and level set method to carry out semi-automatic segmentation of livers, along with the aid of a small amount of manual intervention to deal with liver mutation situations. The experiment results showed that the liver segmentation algorithm presented in this paper had a high precision, and a good segmentation effect on livers which have greater variability, and can meet clinical application demands quite well.

  6. Nonlinear fault diagnosis method based on kernel principal component analysis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu; Zhang Chunkai; Shao Huihe

    2005-01-01

    To ensure that a system runs in working order, detection and diagnosis of faults play an important role in industrial processes. This paper proposed a nonlinear fault diagnosis method based on kernel principal component analysis (KPCA). In the proposed method, using the essential information of the nonlinear system extracted by KPCA, we constructed a KPCA model of the nonlinear system under normal working conditions. Then new data were projected onto the KPCA model. When new data are incompatible with the KPCA model, it can be concluded that the nonlinear system is out of normal working condition. The proposed method was applied to fault diagnosis on rolling bearings. Simulation results show the proposed method provides an effective approach for fault detection and diagnosis of nonlinear systems.
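
    As a rough illustration of KPCA-based monitoring, the sketch below fits kernel PCA on normal-condition data and flags new samples whose T^2 statistic in the retained subspace exceeds a training quantile; the RBF kernel, component count and quantile threshold are assumptions, not the paper's settings.

        import numpy as np

        def rbf_kernel(A, B, gamma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        class KPCAMonitor:
            # Fit kernel PCA on normal-condition data; flag samples whose T^2
            # statistic in the retained subspace exceeds a training quantile.
            def __init__(self, gamma=0.5, n_comp=5, quantile=0.99):
                self.gamma, self.n_comp, self.q = gamma, n_comp, quantile

            def fit(self, X):
                self.X, n = X, len(X)
                K = rbf_kernel(X, X, self.gamma)
                one = np.ones((n, n)) / n
                Kc = K - one @ K - K @ one + one @ K @ one   # kernel centering
                w, V = np.linalg.eigh(Kc)
                idx = np.argsort(w)[::-1][: self.n_comp]
                self.lam = w[idx] / n                        # component variances
                self.A = V[:, idx] / np.sqrt(w[idx])         # normalized directions
                self.K, self.one = K, one
                t2 = (self.scores(X) ** 2 / self.lam).sum(1)
                self.limit = np.quantile(t2, self.q)
                return self

            def scores(self, Xn):
                k = rbf_kernel(Xn, self.X, self.gamma)
                onek = np.ones((len(Xn), len(self.X))) / len(self.X)
                kc = k - onek @ self.K - k @ self.one + onek @ self.K @ self.one
                return kc @ self.A

            def is_fault(self, Xn):
                return (self.scores(Xn) ** 2 / self.lam).sum(1) > self.limit

        mon = KPCAMonitor().fit(np.random.randn(200, 4))     # "normal" data
        print(mon.is_fault(np.random.randn(5, 4) + 4.0))     # mostly True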

  7. SET OPERATOR-BASED METHOD OF DENOISING MEDICAL VOLUME DATA

    Institute of Scientific and Technical Information of China (English)

    程兵; 郑南宁; 袁泽剑

    2002-01-01

    Objective To investigate impulsive noise suppression of medical volume data. Methods The volume data is represented as level sets and a special set operator is defined and applied to filtering it. The small connected components, which are likely to be produced by impulsive noise, are eliminated after the filtering process. A fast algorithm that uses a heap data structure is also designed. Results Compared with traditional linear filters such as a Gaussian filter, this method preserves the fine structure features of the medical volume data while removing noise, and the fast algorithm developed by us reduces memory consumption and improves computing efficiency. The experimental results given illustrate the efficiency of the method and the fast algorithm. Conclusion The set operator-based method shows outstanding denoising properties in our experiment, especially for impulsive noise. The method has a wide variety of applications in the areas of volume visualization and high dimensional data processing.

  8. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    Science.gov (United States)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
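
    A minimal matrix pencil sketch for a single band is given below: Hankel matrices are built from the samples, a rank-truncated pseudo-inverse forms the pencil, and the dominant eigenvalues are the signal poles. The ICP prediction and iterative fusion steps of the paper are not reproduced, and all parameters are illustrative.

        import numpy as np

        def matrix_pencil_poles(y, M, L=None):
            # Estimate M poles z_i from samples y[n] ~ sum_i a_i * z_i**n.
            N = len(y)
            L = L or N // 3                       # pencil parameter
            Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel data
            Y1, Y2 = Y[:, :-1], Y[:, 1:]
            # rank-M truncated pseudo-inverse (SVD) for robustness in noise
            U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
            Y1_pinv = (Vh[:M].conj().T / s[:M]) @ U[:, :M].conj().T
            ev = np.linalg.eigvals(Y1_pinv @ Y2)
            return ev[np.argsort(-np.abs(ev))[:M]]  # M dominant eigenvalues

        # toy two-pole signal in noise
        z_true = np.exp(2j * np.pi * np.array([0.11, 0.23]))
        y = (np.vander(z_true, 100, increasing=True).T @ np.array([1.0, 0.7])
             + 0.05 * (np.random.randn(100) + 1j * np.random.randn(100)))
        print(matrix_pencil_poles(y, M=2))        # close to z_true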

  9. Total-variation-based methods for gravitational wave denoising

    CERN Document Server

    Torres, Alejandro; Font, José A; Ibáñez, José M

    2014-01-01

    We describe new methods for denoising and detection of gravitational waves embedded in additive Gaussian noise. The methods are based on Total Variation denoising algorithms. These algorithms, which do not need any a priori information about the signals, have been originally developed and fully tested in the context of image processing. To illustrate the capabilities of our methods we apply them to two different types of numerically-simulated gravitational wave signals, namely bursts produced from the core collapse of rotating stars and waveforms from binary black hole mergers. We explore the parameter space of the methods to find the set of values best suited for denoising gravitational wave signals under different conditions such as waveform type and signal-to-noise ratio. Our results show that noise from gravitational wave signals can be successfully removed with our techniques, irrespective of the signal morphology or astrophysical origin. We also combine our methods with spectrograms and show how those c...
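
    The flavor of TV-based denoising can be seen in this one-dimensional sketch, which runs gradient descent on a smoothed ROF-type objective; the smoothing, step size and toy signal are assumptions, and the paper's actual solvers are more elaborate.

        import numpy as np

        def tv_denoise_1d(y, lam=0.5, step=0.05, iters=1500, eps=1e-2):
            # Gradient descent on 0.5*||x - y||^2 + lam * sum phi(x[i+1] - x[i]),
            # with phi(d) = sqrt(d^2 + eps) as a smooth stand-in for |d|.
            x = y.astype(float).copy()
            for _ in range(iters):
                d = np.diff(x)
                w = d / np.sqrt(d * d + eps)            # phi'(d)
                grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
                x -= step * ((x - y) + lam * grad_tv)
            return x

        # toy piecewise-constant signal in additive Gaussian noise
        t = np.linspace(0.0, 1.0, 400)
        clean = (t > 0.3).astype(float) - (t > 0.7).astype(float)
        denoised = tv_denoise_1d(clean + 0.3 * np.random.randn(t.size))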

  10. Method of coating an iron-based article

    Science.gov (United States)

    Magdefrau, Neal; Beals, James T.; Sun, Ellen Y.; Yamanis, Jean

    2016-11-29

    A method of coating an iron-based article includes a first heating step of heating a substrate that includes an iron-based material in the presence of an aluminum source material and halide diffusion activator. The heating is conducted in a substantially non-oxidizing environment, to cause the formation of an aluminum-rich layer in the iron-based material. In a second heating step, the substrate that has the aluminum-rich layer is heated in an oxidizing environment to oxidize the aluminum in the aluminum-rich layer.

  11. Chaotic Encryption Method Based on Life-Like Cellular Automata

    CERN Document Server

    Machicao, Marina Jeaneth; Bruno, Odemir M

    2011-01-01

    We propose a chaotic encryption method based on Cellular Automata (CA), specifically on the family called the "Life-Like" type. The encryption process relies on the pseudo-random numbers generated (PRNG) by each CA's evolution, with the password serving as the initial conditions to encrypt messages. Moreover, the dynamical behavior of the CA is explored to reach a "good" quality PRNG, based on measures that quantify "how chaotic a dynamical system is" through the combination of entropy, Lyapunov exponent, and Hamming distance. Finally, we present a detailed security analysis based on experimental tests: the DIEHARD and ENT suites, as well as Fourier's power spectrum, used as security criteria.
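
    As a toy illustration (emphatically not a secure cipher), the sketch below seeds a Life-like CA from a password hash, evolves it, and XORs message bytes with bits sampled from the grid; the seeding and sampling choices are assumptions, and a random soup on a small torus can settle into short cycles.

        import hashlib
        import numpy as np

        def evolve(grid, birth=(3,), survive=(2, 3)):
            # One synchronous step of a Life-like CA on a torus (default B3/S23).
            n = sum(np.roll(np.roll(grid, i, 0), j, 1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
            return np.where(grid == 1,
                            np.isin(n, survive), np.isin(n, birth)).astype(np.uint8)

        def keystream(password, nbytes, size=32, warmup=64):
            # Password -> deterministic seed -> random initial configuration.
            seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:4], "big")
            grid = (np.random.default_rng(seed).random((size, size)) < 0.5).astype(np.uint8)
            for _ in range(warmup):
                grid = evolve(grid)
            out = bytearray()
            while len(out) < nbytes:
                grid = evolve(grid)
                out.append(int("".join(map(str, grid.ravel()[:8])), 2))
            return bytes(out[:nbytes])

        def crypt(message, password):
            # XOR with the CA keystream; applying it twice decrypts.
            return bytes(m ^ k for m, k in zip(message, keystream(password, len(message))))

        c = crypt(b"attack at dawn", "hunter2")
        print(crypt(c, "hunter2"))   # -> b'attack at dawn'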

  12. Segmentation of pituitary adenoma: a graph-based method vs. a balloon inflation method.

    Science.gov (United States)

    Egger, Jan; Zukić, Dženan; Freisleben, Bernd; Kolb, Andreas; Nimsky, Christopher

    2013-06-01

    Among all abnormal growths inside the skull, the percentage of tumors in the sellar region is approximately 10-15%, and the pituitary adenoma is the most common sellar lesion. The manual segmentation of pituitary adenomas is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, two methods for pituitary adenoma segmentation in the human brain are presented and compared using magnetic resonance imaging (MRI) patient data from the clinical routine. Method A is a graph-based method that sets up a directed and weighted graph and performs a min-cut for optimal segmentation results. Method B is a balloon inflation method that uses balloon inflation forces to detect the pituitary adenoma boundaries. The ground truth of the pituitary adenoma boundaries - for the evaluation of the methods - is manually extracted by neurosurgeons. Comparison is done using the Dice Similarity Coefficient (DSC), a measure for spatial overlap of different segmentation results. The average DSC for all data sets is 77.5±4.5% for the graph-based method and 75.9±7.2% for the balloon inflation method, showing no significant difference. The overall segmentation time of the implemented approaches was less than 4 s - compared with a manual segmentation that took, on average, 3.9±0.5 min.
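
    The evaluation metric is easy to make concrete; a minimal Dice Similarity Coefficient computation on two binary masks follows, with toy masks standing in for segmentations.

        import numpy as np

        def dice(a, b):
            # DSC = 2|A intersect B| / (|A| + |B|) for binary masks.
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
        gt  = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
        print(round(dice(seg, gt), 3))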

  13. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    Science.gov (United States)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.

  14. New methods for preparing mercury-based ferrofluids

    DEFF Research Database (Denmark)

    Linderoth, Søren; Rasmussen, L.H.; Mørup, Steen

    1991-01-01

    Metallic ferrofluids based on magnetic particles in mercury have been produced by two new methods. Alloy particles of Fe-B, Fe-Co-B, Fe-Ni-B, and Co-B were prepared by reduction of the transition metal ions in aqueous solutions by NaBH4 and subsequently suspended in mercury. In one preparation...

  15. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    HAZOP is the most commonly used process hazard analysis tool in industry, a systematic yet tedious and time consuming method. The aim of this study is to explore the feasibility of process dynamic simulations to facilitate the HAZOP studies. We propose a simulation-based methodology to complement...

  16. Agile Service Development: A Rule-Based Method Engineering Approach

    NARCIS (Netherlands)

    Hoppenbrouwers, Stijn; Zoet, Martijn; Versendaal, Johan; Weerd, Inge van de

    2011-01-01

    Agile software development has evolved into an increasingly mature software development approach and has been applied successfully in many software vendors’ development departments. In this position paper, we address the broader agile service development. Based on method engineering principles we de

  17. A Quantum-Based Similarity Method in Virtual Screening.

    Science.gov (United States)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2015-10-02

    One of the most widely-used techniques for ligand-based virtual screening is similarity searching. This study adopted the concepts of quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers. Hence, this study proposed three techniques to embed and re-represent molecular compounds in complex number format. The quantum-based similarity method developed in this study depends on the complex pure Hilbert space of molecules and is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method was significantly increased due to the representational power of molecular compounds in complex number form, compared to the Tanimoto benchmark similarity measure.
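
    The Tanimoto benchmark mentioned above is straightforward to state in code; the sketch below ranks a toy fingerprint library against a query (the SQB method itself is not reproduced here).

        import numpy as np

        def tanimoto(fp_a, fp_b):
            # Tanimoto coefficient on binary fingerprints: T = c / (a + b - c).
            a, b = int(fp_a.sum()), int(fp_b.sum())
            c = int(np.logical_and(fp_a, fp_b).sum())
            return c / (a + b - c) if (a + b - c) else 0.0

        query = np.random.rand(1024) < 0.1          # toy 2D fingerprint
        library = np.random.rand(100, 1024) < 0.1
        top5 = np.argsort([-tanimoto(query, fp) for fp in library])[:5]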

  18. A Quantum-Based Similarity Method in Virtual Screening

    Directory of Open Access Journals (Sweden)

    Mohammed Mumtaz Al-Dabbagh

    2015-10-01

    Full Text Available One of the most widely-used techniques for ligand-based virtual screening is similarity searching. This study adopted the concepts of quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers. Hence, this study proposed three techniques to embed and re-represent molecular compounds in complex number format. The quantum-based similarity method developed in this study depends on the complex pure Hilbert space of molecules and is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method was significantly increased due to the representational power of molecular compounds in complex number form, compared to the Tanimoto benchmark similarity measure.

  19. WAVELET BASED SPECTRAL CORRELATION METHOD FOR DPSK CHIP RATE ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Li Yingxiang; Xiao Xianci; Tai Hengming

    2004-01-01

    A wavelet-based spectral correlation algorithm to detect and estimate BPSK signal chip rate is proposed. Simulation results show that the proposed method can correctly estimate the BPSK signal chip rate, which may be corrupted by the quadratic characteristics of the spectral correlation function, in a low SNR environment.

  20. Effective Teaching Methods--Project-based Learning in Physics

    Science.gov (United States)

    Holubova, Renata

    2008-01-01

    The paper presents results of the research of new effective teaching methods in physics and science. It is found out that it is necessary to educate pre-service teachers in approaches stressing the importance of the own activity of students, in competences how to create an interdisciplinary project. Project-based physics teaching and learning…

  1. Reliability-Based Shape Optimization using Stochastic Finite Element Methods

    DEFF Research Database (Denmark)

    Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.

    1991-01-01

    Application of first-order reliability methods FORM (see Madsen, Krenk & Lind [8]) in structural design problems has attracted growing interest in recent years, see e.g. Frangopol [4], Murotsu, Kishi, Okada, Yonezawa & Taguchi [9] and Sørensen [14]. In probabilistically based optimal design...

  2. pyro: Python-based tutorial for computational methods for hydrodynamics

    Science.gov (United States)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  3. Bead Collage: An Arts-Based Research Method

    Science.gov (United States)

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  4. Preparing Students for Flipped or Team-Based Learning Methods

    Science.gov (United States)

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…

  5. Homotopy-based methods for fractional differential equations

    NARCIS (Netherlands)

    Ateş, Inan

    2017-01-01

    The intention of this thesis is two-fold. The first aim is to describe and apply, series-based, numerical methods to fractional differential equation models. For this, it is needed to distinguish between space-fractional and time-fractional derivatives. The second goal of this thesis is to give a

  6. Heart rate-based lactate minimum test: a reproducible method.

    NARCIS (Netherlands)

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The

  7. A Novel Image Fusion Method Based on FRFT-NSCT

    Directory of Open Access Journals (Sweden)

    Peiguang Wang

    2013-01-01

    The fused image is obtained by performing the inverse NSCT and inverse FRFT on the combined coefficients. Three modes of images and three fusion rules are demonstrated in the proposed algorithm test. The simulation results show that the proposed fusion approach is better than the methods based on NSCT with the same parameters.

  8. Graph-Based Methods for Discovery Browsing with Semantic Predications

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Fiszman, Marcelo; Miller, Christopher M;

    2011-01-01

    We present an extension to literature-based discovery that goes beyond making discoveries to a principled way of navigating through selected aspects of some biomedical domain. The method is a type of "discovery browsing" that guides the user through the research literature on a specified phenomen...

  10. Dynamic Frames Based Verification Method for Concurrent Java Programs

    NARCIS (Netherlands)

    Mostowski, Wojciech

    2016-01-01

    In this paper we discuss a verification method for concurrent Java programs based on the concept of dynamic frames. We build on our earlier work that proposes a new, symbolic permission system for concurrent reasoning and we provide the following new contributions. First, we describe our approach

  11. Explorations in Using Arts-Based Self-Study Methods

    Science.gov (United States)

    Samaras, Anastasia P.

    2010-01-01

    Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…

  12. A method to manage the model base in DSS

    Institute of Scientific and Technical Information of China (English)

    孙成双; 李桂君

    2004-01-01

    How to manage and use models in DSS is a most important subject. Generally, it costs a lot of money and time to develop a model base management system in the development of DSS, and most such systems are simple in function or cannot be used efficiently in practice. It is a very effective, applicable, and economical choice to make use of the interfaces of professional computer software to develop a model base management system. This paper presents a method that uses MATLAB, well-known scientific computing software, as the development platform of a model base management system. The main functional framework of a MATLAB-based model base management system is discussed. Finally, its feasible application is illustrated in the field of construction projects.

  13. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    Directory of Open Access Journals (Sweden)

    Takashi Tanaka

    2014-06-01

    Full Text Available Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.

  14. Liver 4DMRI: A retrospective image-based sorting method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133 (Italy); Summers, Paul [Division of Radiology, Istituto Europeo di Oncologia, Milano 20133 (Italy); Bellomi, Massimo [Division of Radiology, Istituto Europeo di Oncologia, Milano 20133, Italy and Department of Health Sciences, Università di Milano, Milano 20133 (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia 27100 (Italy)

    2015-08-15

    Purpose: Four-dimensional magnetic resonance imaging (4DMRI) is an emerging technique in radiotherapy treatment planning for organ motion quantification. In this paper, the authors present a novel 4DMRI retrospective image-based sorting method, providing fewer motion artifacts than a standard monodimensional external respiratory surrogate. Methods: Serial interleaved 2D multislice MRI data were acquired from 24 liver cases (6 volunteers + 18 patients) to test the proposed 4DMRI sorting. Image similarity based on mutual information was applied to automatically identify a stable reference phase and sort the image sequence retrospectively, without the use of additional image or surrogate data to describe breathing motion. Results: The image-based 4DMRI provided a smoother liver profile than that obtained from standard resorting based on an external surrogate. Reduced motion artifacts were observed in image-based 4DMRI datasets, with a fitting error of the liver profile measuring 1.2 ± 0.9 mm (median ± interquartile range) vs 2.1 ± 1.7 mm for the standard method. Conclusions: The authors present a novel methodology to derive a patient-specific 4DMRI model to describe organ motion due to breathing, with improved image quality in 4D reconstruction.
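
    As an editorial illustration of the similarity score behind such retrospective sorting, the sketch below computes histogram-based mutual information between two slices and picks the best-matching candidate; the bin count and toy data are assumptions, not the authors' implementation.

        import numpy as np

        def mutual_information(a, b, bins=32):
            # MI from the joint gray-level histogram of two image slices.
            h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = h / h.sum()
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

        ref = np.random.rand(64, 64)                 # toy reference slice
        candidates = [ref + 0.05 * k * np.random.rand(64, 64) for k in range(5)]
        best = max(range(5), key=lambda k: mutual_information(ref, candidates[k]))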

  15. NIM: a node influence based method for cancer classification.

    Science.gov (United States)

    Wang, Yiwen; Yao, Min; Yang, Jianhua

    2014-01-01

    The classification of different cancer types owns great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it is possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we proposed, this paper presents a novel high accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.
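
    The four steps of the abstract map naturally onto a short sketch; below, an RBF similarity and a degree-style influence serve as illustrative stand-ins for the paper's specific similarity and node influence model.

        import numpy as np

        def nim_classify(X_train, y_train, X_test, gamma=0.5):
            def sim(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-gamma * d2)
            S = sim(X_train, X_train)                      # step 1: similarity matrix
            influence = S.sum(axis=1)                      # step 2: node influence (degree proxy)
            K = sim(X_test, X_train)                       # test-vs-train similarity
            classes = np.unique(y_train)
            scores = np.stack([(K[:, y_train == c] * influence[y_train == c]).sum(1)
                               for c in classes], axis=1)  # step 3: per-class scores
            return classes[np.argmax(scores, axis=1)]      # step 4: classify

        X = np.vstack([np.random.randn(30, 5), np.random.randn(30, 5) + 3])
        y = np.array([0] * 30 + [1] * 30)
        print(nim_classify(X, y, np.array([[3.0] * 5, [0.0] * 5])))  # -> [1 0]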

  16. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types owns great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it is possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we proposed, this paper presents a novel high accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.

  17. Novel method for hybrid multiple attribute decision making based on TODIM method

    Institute of Scientific and Technical Information of China (English)

    Fang Wang; Hua Li

    2015-01-01

    The TODIM (an acronym in Portuguese for interactive and multiple attribute decision making) method is a valuable tool to solve multiple attribute decision making (MADM) problems considering the behavior of the decision maker (DM), but it cannot be used to handle problems with unknown weight information on attributes. In this paper, a novel method based on the classical TODIM method is proposed to solve hybrid MADM problems with unknown weight information on attributes, in which attribute values are represented in four different formats: crisp numbers, interval numbers, triangular fuzzy numbers and trapezoidal fuzzy numbers. Firstly, the positive-ideal alternative and negative-ideal alternative are determined, and the gain and loss matrices are constructed by calculating the gain and loss of each alternative relative to the ideal alternatives concerning each attribute, based on different distance calculation formulas, which may avoid information loss or distortion in the process of unifying multiform attribute values into a single representation form. Secondly, an optimization model based on the maximizing deviation (MD) method, by which the attribute weights can be determined, is established for the TODIM method. Further, the calculation steps to solve hybrid MADM problems are given. Finally, two numerical examples are presented to illustrate the usefulness of the proposed method, and the results show that the DM's psychological behavior, attribute weights and the transformed information can highly affect the ranking orders of alternatives.
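
    For the crisp-number special case, the classical TODIM dominance computation that the paper extends can be sketched as follows; the decision matrix, weights and attenuation factor theta are toy values, and the fuzzy/interval formats and MD weight model are not reproduced.

        import numpy as np

        def todim_ranking(X, w, theta=1.0):
            # X: alternatives x (benefit, normalized) criteria; w: weights.
            w = np.asarray(w, float)
            wr = w / w.max()                   # weights relative to the reference
            sw = wr.sum()
            m, n = X.shape
            delta = np.zeros((m, m))           # pairwise dominance degrees
            for i in range(m):
                for j in range(m):
                    for c in range(n):
                        d = X[i, c] - X[j, c]
                        if d > 0:              # gain
                            delta[i, j] += np.sqrt(wr[c] * d / sw)
                        elif d < 0:            # loss, attenuated by theta
                            delta[i, j] -= np.sqrt(sw * (-d) / wr[c]) / theta
            xi = delta.sum(axis=1)
            xi = (xi - xi.min()) / (xi.max() - xi.min())   # global prospect values
            return np.argsort(-xi), xi

        X = np.array([[0.7, 0.5, 0.9],
                      [0.6, 0.8, 0.4],
                      [0.9, 0.4, 0.6]])
        order, xi = todim_ranking(X, w=[0.4, 0.35, 0.25])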

  18. Fusion Method for Remote Sensing Image Based on Fuzzy Integral

    Directory of Open Access Journals (Sweden)

    Hui Zhou

    2014-01-01

    Full Text Available This paper presents an image fusion method based on the fuzzy integral, integrating spectral information and two single-factor indexes of spatial resolution, in order to retain both spectral information and spatial resolution in the fusion of multispectral and high-resolution remote sensing images. Firstly, wavelet decomposition is carried out on the two images respectively to obtain their wavelet decomposition coefficients, and the low-frequency coefficient of the multispectral image is kept; then optimized fusion is carried out on the high-frequency parts of the two images based on weighting coefficients to generate a new fused image. Finally, the fused image is evaluated using indexes such as the correlation coefficient, image mean value, standard deviation, distortion degree, information entropy, and so forth. The test results show that this method integrates multispectral information and high spatial resolution information in a better way, and is an effective fusion method for remote sensing images.

  19. Method of Infrared Image Enhancement Based on Stationary Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    QI Fei; LI Yan-jun; ZHANG Ke

    2008-01-01

    Aiming at the problem that infrared images have poor contrast and fuzzy edges, a method to enhance the contrast of infrared images is given, based on the stationary wavelet transform. After applying the stationary wavelet transform to an infrared image, denoising is done by the proposed double-threshold shrinkage method in the detail coefficient matrices, which have high noise intensity. For the approximation coefficient matrix, which has low noise intensity, enhancement is done by the proposed histogram-based method. The enhanced image is obtained by wavelet coefficient reconstruction. Furthermore, an evaluation criterion of enhancement performance is introduced. The results show that this algorithm ensures target enhancement and restrains additive Gaussian white noise effectively. At the same time, its amount of calculation is small and its operation speed is fast.

  20. Pulsatile Drug Delivery System Based on Electrohydrodynamic Method

    CERN Document Server

    Zheng, Yi; Hu, Junqiang; Gao, Wenle

    2012-01-01

    Electrohydrodynamic (EHD) generation, a commonly used method in BioMEMS, has played a significant role in pulsatile drug delivery systems for a decade. In this paper, an EHD-based drug delivery system is designed which can generate a single drug droplet as small as 2.83 nL in 8.5 ms with a total device size of 2×2×3 mm³ and an externally supplied voltage of 1500 V. Theoretically, we derive expressions for the size and the formation time of a droplet generated by the EHD method, taking into account the drug supply rate, the properties of the liquid, the gap between the two electrodes, the nozzle size, and charged droplet neutralization. This work demonstrates a repeatable, stable and controllable droplet generation and delivery system based on the EHD method, experimentally as well as theoretically.

  1. Photonic arbitrary waveform generator based on Taylor synthesis method

    DEFF Research Database (Denmark)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji

    2016-01-01

    Arbitrary waveform generation has been widely used in optical communication, radar system and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on chip optical arbitrary waveform generator, which is based on Taylor synthesis method. In our scheme......, a Gaussian pulse is launched to some cascaded microrings to obtain first-, second- and third-order differentiations. By controlling amplitude and phase of the initial pulse and successive differentiations, we can realize an arbitrary waveform generator according to Taylor expansion. We obtain several typical...... waveforms such as square waveform, triangular waveform, flat-top waveform, sawtooth waveform, Gaussian waveform and so on. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on Taylor synthesis method. Our scheme does not require any spectral disperser or large...

  2. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is a crucial question in the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data on the assembly line balancing problem are confirmed, and the precedence relations diagram is described. A double-objective optimization model based on takt time and smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Through simulation experiments on examples, the feasibility and validity of the assembly line balancing method based on the PSO algorithm are demonstrated.

  3. Photonic arbitrary waveform generator based on Taylor synthesis method.

    Science.gov (United States)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji; Yan, Siqi; Wang, Xu; Zhang, Xinliang

    2016-10-17

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and its successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as square, triangular, flat-top, sawtooth and Gaussian waveforms. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method. It does not require any spectral disperser or large dispersion, which are difficult to fabricate on chip, and it is compact and capable of integration with electronics.
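
    A numeric sketch of the Taylor-synthesis idea: a target waveform is approximated by a weighted sum of a Gaussian pulse and its first three derivatives, with the weights (standing in for the amplitude and phase control of each microring branch) fitted by least squares.

```python
import numpy as np

# Time grid and input Gaussian pulse.
t = np.linspace(-5.0, 5.0, 1001)
dt = t[1] - t[0]
g = np.exp(-t**2 / (2 * 0.5**2))

# First-, second- and third-order differentiations of the pulse
# (the role played by the cascaded microrings in the photonic scheme).
d1 = np.gradient(g, dt)
d2 = np.gradient(d1, dt)
d3 = np.gradient(d2, dt)

# Target: a triangular waveform; fit the Taylor-term weights by least squares.
target = np.maximum(0.0, 1.0 - np.abs(t) / 2.0)
A = np.stack([g, d1, d2, d3], axis=1)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
print("weights:", coef, "max error:", np.abs(A @ coef - target).max())
```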

  4. Quantitative Analysis of Polarimetric Model-Based Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2016-11-01

    Full Text Available In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we study is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis of the parameters retrieved by that approach suggests that further investigation is required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products, before any biophysical parameter retrieval is addressed. To this aim, we propose some modifications to the optimization algorithm employed for model inversion, including redefined boundary conditions, transformation of variables, and a different strategy for value initialization. A number of Monte Carlo simulation tests for typical scenarios are carried out and show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center's (DLR's) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper aims at pointing out the necessity of quantitatively checking the accuracy of model-based PolSAR techniques for quantitative applications.

  5. [Galaxy/quasar classification based on nearest neighbor method].

    Science.gov (United States)

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program), celestial observational data are accumulating at a torrential rate. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the method utilized, the nearest neighbor rule, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in processing massive spectral data. In conclusion, the results in this work are helpful for the study of galaxy and quasar spectra classification.
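
    A minimal sketch of the nearest neighbor rule on spectra; the data below are random stand-ins for labeled galaxy and quasar spectra.

```python
import numpy as np

def nn_classify(train_x, train_y, query):
    # 1-NN rule: assign the label of the closest training spectrum
    # (Euclidean distance); no training phase is required.
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(d)]

rng = np.random.default_rng(0)
galaxies = rng.normal(0.0, 1.0, (50, 200))   # stand-ins for labeled spectra
quasars = rng.normal(0.5, 1.0, (50, 200))
train_x = np.vstack([galaxies, quasars])
train_y = np.array(["galaxy"] * 50 + ["quasar"] * 50)
print(nn_classify(train_x, train_y, rng.normal(0.5, 1.0, 200)))
```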

  6. Efficient face recognition method based on DCT and LDA

    Institute of Scientific and Technical Information of China (English)

    张燕昆; 刘重庆

    2004-01-01

    It has been demonstrated that linear discriminant analysis (LDA) is an effective approach to face recognition tasks. However, due to the high dimensionality of the image space, many LDA-based approaches first use principal component analysis (PCA) to project an image into a lower-dimensional space and then perform the LDA transform to extract discriminant features. But some discriminant information useful to the subsequent LDA transform is lost in the PCA step. To overcome this defect, a face recognition method based on the discrete cosine transform (DCT) and LDA is proposed. First the DCT is used to achieve dimension reduction, then the LDA transform is performed on the lower-dimensional space to extract features. Two face databases are used to test the method, and correct recognition rates of 97.5% and 96.0% are obtained respectively. The performance of the proposed method is compared with that of the PCA + LDA method, and the results show that the proposed method outperforms PCA + LDA.
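
    A minimal sketch of the DCT-plus-LDA pipeline with scikit-learn; the image data, the size of the retained DCT coefficient block, and the labels are hypothetical.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dct_features(img, k=8):
    # Keep the top-left k x k block of 2-D DCT coefficients (low frequencies),
    # which performs the dimension reduction in place of PCA.
    return dctn(img, norm="ortho")[:k, :k].ravel()

rng = np.random.default_rng(0)
X_img = rng.random((40, 32, 32))            # stand-ins for 40 face images
y = np.repeat(np.arange(4), 10)             # 4 hypothetical subjects
X = np.array([dct_features(im) for im in X_img])
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```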

  7. Fuzzy Critical Path Method Based on Lexicographic Ordering

    Directory of Open Access Journals (Sweden)

    Phani Bushan Rao P

    2012-01-01

    Full Text Available The Critical Path Method (CPM) is useful for planning and controlling complex projects. The CPM identifies the critical activities in the critical path of an activity network. Successful implementation of CPM requires clearly determined time durations for each activity. However, in practical situations this requirement is usually hard to fulfil, since many activities will be executed for the first time. Hence, there is always uncertainty about the time durations of activities in network planning. This has led to the development of fuzzy CPM. In this paper, we apply a lexicographic ordering method for ranking fuzzy numbers to the critical path method in a fuzzy project network, where the duration time of each activity is represented by a trapezoidal fuzzy number. The proposed method is compared with fuzzy CPM based on different ranking methods for fuzzy numbers. The comparison reveals that the method proposed in this paper is more effective in determining activity criticality and finding the critical path, and is simpler in calculating the fuzzy critical path than many methods proposed so far in the literature.
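
    A minimal sketch with trapezoidal fuzzy durations. The lexicographic ranking key (centroid, then core midpoint, then spread) is an assumption for illustration; the paper's exact ordering criteria are not given in the abstract. Path enumeration stands in for a full forward/backward pass.

```python
from functools import reduce

def tra_add(x, y):
    # Trapezoidal fuzzy numbers (a, b, c, d) add component-wise.
    return tuple(p + q for p, q in zip(x, y))

def lex_key(x):
    # Assumed lexicographic ranking key: centroid first, then the midpoint
    # of the core, then (negated) spread as the final tie-breaker.
    a, b, c, d = x
    return ((a + b + c + d) / 4.0, (b + c) / 2.0, -(d - a))

# Hypothetical fuzzy durations and the start-to-finish paths of a small network.
dur = {"A": (1, 2, 3, 4), "B": (2, 3, 4, 6), "C": (1, 1, 2, 3), "D": (3, 4, 5, 7)}
paths = [("A", "C", "D"), ("B", "D")]

critical = max(paths, key=lambda p: lex_key(reduce(tra_add, (dur[a] for a in p))))
print("critical path:", critical)
```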

  8. Direct trajectory optimization based on a mapped Chebyshev pseudospectral method

    Institute of Scientific and Technical Information of China (English)

    Guo Xiao; Zhu Ming

    2013-01-01

    In view of generating optimal trajectories for Bolza problems, the standard Chebyshev pseudospectral (PS) method clusters points near the extremities and leaves the nodes sparse near the center of the interval, which causes ill-conditioning of the differentiation matrix and oscillation of the optimal solution. To alleviate these difficulties, a mapped Chebyshev pseudospectral method is proposed. A conformal map is applied to the Chebyshev points to move them closer to equidistant nodes. The condition number and spectral radius of the differentiation matrices from both methods are presented to show the improvement. Furthermore, the modification keeps the Chebyshev pseudospectral method's advantage, the spectral convergence rate. Based on three numerical examples, a comparison of execution time, convergence and accuracy is presented among the standard Chebyshev pseudospectral method, other collocation methods and the proposed one. In one example, the error of the results from the mapped Chebyshev pseudospectral method is reduced to 5% of that from the standard Chebyshev pseudospectral method.
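
    A sketch of a mapped differentiation matrix, assuming a Kosloff-Tal-Ezer-style conformal map x -> arcsin(alpha*x)/arcsin(alpha); the abstract does not name the specific map, so this choice is an assumption.

```python
import numpy as np

def cheb(n):
    # Chebyshev points and differentiation matrix (Trefethen, Spectral Methods in MATLAB).
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x

n, alpha = 32, 0.99
D, x = cheb(n)
y = np.arcsin(alpha * x) / np.arcsin(alpha)   # mapped nodes, closer to equidistant
gp = alpha / (np.arcsin(alpha) * np.sqrt(1.0 - (alpha * x) ** 2))  # dy/dx
Dm = np.diag(1.0 / gp) @ D                    # differentiation w.r.t. the mapped nodes

# The mapping relaxes the boundary clustering, typically reducing the matrix
# norm, while differentiation accuracy is preserved:
print("norm(D) =", np.linalg.norm(D, 2), " norm(D_mapped) =", np.linalg.norm(Dm, 2))
print("max error of d/dy exp(y):", np.abs(Dm @ np.exp(y) - np.exp(y)).max())
```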

  9. Method for changing brightness temperature into true temperature based on twice recognition method

    Institute of Scientific and Technical Information of China (English)

    Yang Song; Xiaogang Sun; Hong Tang

    2007-01-01

    The channel output of a multi-wavelength pyrometer is the brightness temperature rather than the true temperature. A twice-recognition method is put forward to convert the brightness temperatures of a multi-wavelength pyrometer into the true temperatures of targets. Using data offered by Dr. F. Righini, the experimental results show that the difference between the true temperature calculated by the twice-recognition method and the actual true temperature is within ±20 K. The method presented in this paper is feasible and effective for true temperature measurement of targets.

  10. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis is on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output-constraint problems. Such constraints are natural in production optimization; limiting total water production and water cut at producer wells are examples. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradients (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. To speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is surrogate optimization for water flooding in the presence of output constraints. Two kinds of model order reduction techniques are applied to build surrogate models: proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).

  11. Assessment of Soil Liquefaction Potential Based on Numerical Method

    DEFF Research Database (Denmark)

    Choobasti, A. Janalizadeh; Vahdatirad, Mohammad Javad; Torabi, M.

    2012-01-01

    Simplified methods have been developed over the years. Although simplified methods are available for calculating the liquefaction potential of a soil deposit and the shear stresses induced at any point in the ground due to earthquake loading, these methods cannot be applied to all earthquakes with the same accuracy, and they lack the potential to predict the pore pressure developed in the soil. Therefore, it is necessary to carry out a ground response analysis to obtain pore pressures and shear stresses in the soil due to earthquake loading. Using soil historical, geological and compositional criteria, a zone of the corridor of Tabriz urban railway line 2 susceptible to liquefaction was recognized. Then, using numerical analysis and the cyclic stress method with the QUAKE/W finite element code, the soil liquefaction potential in the susceptible zone was evaluated based on the design earthquake.

  12. New method for diagnosing cast compactness based on laser ultrasonography

    Directory of Open Access Journals (Sweden)

    P. Swornowski

    2010-01-01

    Full Text Available Technologically advanced materials, such as alloys of aluminum, nickel or titanium, are currently used increasingly often in highly loaded components in the aviation industry, among others in the construction of jet turbine engine blades. The article presents a method for diagnosing the condition of the inside of cast blades with the use of laser ultrasonography. The inspection is based on finding hidden flaws with a size between 10 and 30 μm. Laser ultrasonography offers a number of improvements over the non-destructive methods used so far, e.g. the possibility to diagnose the cast at a selected depth, a high signal-to-noise ratio and good sensitivity. The article includes a brief overview of non-destructive inspection methods used in foundry engineering and sample results of inspecting the inner structure of a turbojet engine blade using the described method.

  13. AN IMPROVED RADIAL BASIS FUNCTION BASED METHOD FOR IMAGE WARPING

    Institute of Scientific and Technical Information of China (English)

    Nie Xuan; Zhao Rongchun; Zhang Cheng; Zhang Xiaoyan

    2005-01-01

    A new image warping method is proposed in this letter, which can warp a given image using manually defined features. Based on the radial basis interpolation function algorithm, the proposed method transforms the original optimization problem into a nonsingular linear problem by adding a first-order term and an affine differentiability condition. This linear system has a stable unique solution when a suitable kernel function is chosen. Furthermore, the proposed method demonstrates how to set up the radial basis function in the target image so as to support backward resampling, which yields a very smooth warped image. The experimental results show that the proposed method can implement smooth and gradual image warping with accurate interpolation of multiple anchor points.
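
    A sketch of RBF-based backward warping using SciPy's RBFInterpolator, with a thin-plate-spline kernel and the degree-1 polynomial term playing the role of the added first-order/affine part. The anchor points are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
img = rng.random((100, 100))                 # stand-in for the source image

# Hypothetical anchor points: where each target-image point maps from in the
# source image (backward mapping, target -> source).
dst = np.array([[10, 10], [10, 90], [90, 10], [90, 90], [50, 50]], dtype=float)
src = dst + np.array([[0, 0], [0, 0], [0, 0], [0, 0], [8, -5]], dtype=float)

# Thin-plate-spline RBF; degree=1 adds the affine (first-order) term.
warp = RBFInterpolator(dst, src, kernel="thin_plate_spline", degree=1)

rows, cols = np.mgrid[0:100, 0:100]
coords = warp(np.column_stack([rows.ravel(), cols.ravel()]))
warped = map_coordinates(img, coords.T.reshape(2, 100, 100), order=1)  # backward resampling
```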

  14. About Classification Methods Based on Tensor Modelling for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Salah Bourennane

    2010-03-01

    Full Text Available Denoising and dimensionality reduction (DR) are key issues in improving classifier efficiency for hyperspectral images (HSI). The recently developed multi-way Wiener filtering is used, and principal component analysis (PCA), independent component analysis (ICA) and projection pursuit (PP) approaches to DR have been investigated. These matrix-algebra methods are applied to vectorized images, so the spatial arrangement is lost. To jointly take advantage of the spatial and spectral information, HSI have recently been represented as tensors. Offering multiple ways to decompose data orthogonally, we introduce filtering and DR methods based on multilinear-algebra tools. The DR is performed on the spectral way using PCA, or PP joined to an orthogonal projection onto a lower-dimensional subspace of the spatial ways. We show the classification improvement of the introduced methods relative to existing methods. This is exemplified using real-world HYDICE data. Keywords: multi-way filtering, dimensionality reduction, matrix and multilinear algebra tools, tensor processing.

  15. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to project complexity and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem when projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches to project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: the Genetic Algorithm (GA) and the Ant Colony Algorithm (ACO).

  16. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang

    2012-03-16

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  17. An Improved Image Segmentation Algorithm Based on MET Method

    Directory of Open Access Journals (Sweden)

    Z. A. Abo-Eleneen

    2012-09-01

    Full Text Available Image segmentation is a basic component of many computer vision and pattern recognition systems. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, Kittler and Illingworth's minimum error thresholding (MET), noticeably improves image segmentation and is simple and easy to implement. However, it fails in the presence of skewed and heavy-tailed class-conditional distributions, or if the histogram is unimodal or close to unimodal. The Fisher information (FI) measure is an important concept in statistical estimation theory and information theory. Employing the FI measure, an improved threshold image segmentation algorithm, an FI-based extension of MET, is developed. Compared with the MET method, the improved method generally achieves more robust performance when the data for either class are skewed and heavy-tailed.
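
    A sketch of the baseline Kittler-Illingworth MET criterion that the paper extends: scan a 256-bin histogram for the threshold minimizing the criterion function J(T).

```python
import numpy as np

def met_threshold(hist):
    # Kittler & Illingworth criterion: model each side of the threshold as a
    # Gaussian and minimize
    #   J(T) = 1 + P1*ln(v1) + P2*ln(v2) - 2*(P1*ln(P1) + P2*ln(P2)).
    p = hist.astype(float) / hist.sum()
    g = np.arange(p.size)
    best_t, best_j = None, np.inf
    for t in range(1, p.size - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0.0 or p2 <= 0.0:
            continue
        m1, m2 = (g[:t] * p[:t]).sum() / p1, (g[t:] * p[t:]).sum() / p2
        v1 = ((g[:t] - m1) ** 2 * p[:t]).sum() / p1
        v2 = ((g[t:] - m2) ** 2 * p[t:]).sum() / p2
        if v1 <= 0.0 or v2 <= 0.0:
            continue
        j = 1.0 + p1 * np.log(v1) + p2 * np.log(v2) \
            - 2.0 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(170, 20, 6000)])
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
print("MET threshold:", met_threshold(hist))
```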

  18. Power Network Parameter Estimation Method Based on Data Mining Technology

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian

    2008-01-01

    Parameter values, which actually change with the circumstances, weather, load level, etc., greatly affect the result of state estimation. A new parameter estimation method based on data mining technology was proposed. A clustering method is used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing techniques are applied to treat isolated points, missing data and noisy data within the classified groups. The measurement data belonging to each class are introduced into a linear regression equation in order to obtain the regression coefficients and the actual parameters by the least squares method. A practical system demonstrates the high correctness, reliability and strong practicability of the proposed method.

  19. Multimineral optimization processing method based on elemental capture spectroscopy logging

    Institute of Scientific and Technical Information of China (English)

    Feng Zhou; Li Xin-Tong; Wu Hong-Liang; Xia Shou-Ji; Liu Ying-Ming

    2014-01-01

    Calculating the mineral composition is a critical task in log interpretation. The elemental capture spectroscopy (ECS) log provides the weight percentages of twelve common elements, which lays the foundation for accurate calculation of mineral compositions. Previous processing methods calculated the formation composition via the conversion relation between the formation chemistry and minerals; thus, their applicability is limited and their precision is relatively low. In this study, we present a multimineral optimization processing method based on the ECS log. We derived the ECS response equations for calculating the formation composition, then determined the logging response values for the elements of common minerals using core data and theoretical calculations. Finally, a software module was developed. The results of the new method are consistent with core data, and the mean absolute error is less than 10%.
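
    A sketch of the inversion step under the usual linear response assumption: the measured elemental weight fractions are a mineral-weighted mix of pure-mineral element fractions, solved here by non-negative least squares. The element/mineral selection and the values are illustrative, not the paper's calibrated responses.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: elements (Si, Ca, Fe, S); columns: minerals (quartz, calcite, pyrite).
# Entries are the elemental weight fractions of each pure mineral.
A = np.array([
    [0.467, 0.000, 0.000],   # Si
    [0.000, 0.400, 0.000],   # Ca
    [0.000, 0.000, 0.465],   # Fe
    [0.000, 0.000, 0.535],   # S
])
b = np.array([0.30, 0.10, 0.05, 0.06])   # measured elemental weight fractions
w, residual = nnls(A, b)                  # non-negative least-squares inversion
print("mineral weight fractions:", w / w.sum(), "residual:", residual)
```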

  20. New Precision Guidance Method Based on Bistatic Synthetic Aperture Radar

    Institute of Scientific and Technical Information of China (English)

    YANG Chun; ZENG Tao

    2008-01-01

    A new method is presented to improve guidance precision. The method is based on bistatic synthetic aperture radar. The illuminator works in side-looking mode, providing the synthetic aperture, and the receiver is disposed on the seeker, which operates in forward-looking mode. The receiving antenna is composed of four sub-antennas, so four synthetic aperture radar (SAR) images are generated. The target is positioned in the SAR images by image matching. The bearing and elevation of the target's image element are measured by the principle of monopulse angular measurement. The theory of the proposed method is derived and a simulation of bearing measurement is performed. The simulation shows that the method is valid and that, if the SNR of the target's image is above 30 dB, the angular measurement error is within 0.04 degrees.

  1. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing image edges are detected; the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and the image noise is reduced, experiments verify that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  2. Spindle extraction method for ISAR image based on Radon transform

    Science.gov (United States)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle (major axis) of a target in an inverse synthetic aperture radar (ISAR) image is proposed, based on the Radon transform. Firstly, the Radon transform is used to detect all straight lines collinear with line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of a line segment, and the longest such segment is the spindle of the target. To evaluate the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees respectively, were used in experiments, and the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.

  3. A novel robot visual homing method based on SIFT features.

    Science.gov (United States)

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-10-14

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method.

  4. Steganalytic method based on short and repeated sequence distance statistics

    Institute of Scientific and Technical Information of China (English)

    WANG GuoXin; PING XiJian; XU ManKun; ZHANG Tao; BAO XiRui

    2008-01-01

    According to the distribution characteristics of short and repeated sequences (SRS), a steganalytic method based on the correlation of image bit planes is proposed. Firstly, we introduce the concept of SRS distance statistics and deduce their statistical distribution. Because the SRS distance statistics effectively reflect the correlation of the sequence, the SRS has statistical features when the image bit-plane sequence length equals the image width. Using this characteristic, the steganalytic method is realized by a significance test against the Poisson distribution. Experimental results show good performance in detecting LSB-matching steganography in still images. Moreover, the proposed method is not designed for specific steganographic algorithms and has good generality.

  5. Topography measurement of micro structure by modulation-based method

    Science.gov (United States)

    Zhou, Yi; Tang, Yan; Liu, Junbo; Deng, Qinyuan; Cheng, Yiguang; Hu, Song

    2016-10-01

    Dimensional metrology for micro structures plays an important role in addressing quality issues and observing the performance of micro-fabricated products. Different from the traditional white-light interferometry approach, the modulation-based method measures the topography of a micro structure from the modulation obtained for each interferometry image. By seeking the maximum modulation of each pixel along the Z direction, the method obtains the corresponding height of each individual pixel and finally the topography of the structure. Owing to the characteristics of modulation, the proposed method is not influenced by changes of background light intensity caused by an unstable light source or by the different reflection indices of the structure, so it can be widely applied with high stability. The paper both illustrates the principle of this novel method and presents experiments verifying its feasibility.
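
    The per-pixel peak search reduces to an argmax along the scan axis; a minimal sketch with stand-in data:

```python
import numpy as np

def height_map(modulation, z):
    # modulation: (nz, ny, nx) stack of per-pixel modulation values, one frame
    # per vertical scan position in z; the height of a pixel is the z at which
    # its modulation peaks.
    return z[np.argmax(modulation, axis=0)]

z = np.linspace(0.0, 10.0, 101)               # scan positions (e.g., micrometres)
rng = np.random.default_rng(0)
stack = rng.random((101, 64, 64))             # stand-in for measured modulation
print(height_map(stack, z).shape)
```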

  6. Moving sound source localization based on triangulation method

    Science.gov (United States)

    Miao, Feng; Yang, Diange; Wen, Junjie; Lian, Xiaomin

    2016-12-01

    This study develops a sound source localization method that extends traditional triangulation to moving sources. First, the plane possibly containing the sound source is scanned. Secondly, for each hypothetical source location in this plane, the Doppler effect is removed through integration of the sound pressure. Taking advantage of the de-Dopplerized signals, the moving time difference of arrival (MTDOA) is calculated, and the sound source is located based on triangulation. Thirdly, the estimated sound source location is compared to the original hypothetical location and the deviations are recorded. Because the real sound source location leads to zero deviation, the sound source can finally be located by minimizing the deviation matrix. Simulations have shown the superiority of the MTDOA method over traditional triangulation in the case of moving sound sources. The MTDOA method can locate moving sound sources with as high a resolution as DAMAS beamforming, as shown in the experiments, thus offering a new method for locating moving sound sources.
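
    A minimal sketch of the deviation-minimizing search for a static source (the Doppler-removal step for moving sources is omitted): hypothesize locations on a grid, compare predicted TDOAs with measured ones, and keep the location with minimum deviation. The array geometry is hypothetical.

```python
import numpy as np

c = 343.0                                               # speed of sound, m/s
mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # hypothetical microphone array
src = np.array([2.0, 1.5])                              # true source, for simulation
tdoa_meas = (np.linalg.norm(mics - src, axis=1)
             - np.linalg.norm(mics[0] - src)) / c       # delays relative to mic 0

best, best_dev = None, np.inf
for x in np.linspace(-1.0, 4.0, 251):
    for y in np.linspace(-1.0, 4.0, 251):
        p = np.array([x, y])
        tdoa = (np.linalg.norm(mics - p, axis=1) - np.linalg.norm(mics[0] - p)) / c
        dev = np.sum((tdoa - tdoa_meas) ** 2)           # deviation for this hypothesis
        if dev < best_dev:
            best, best_dev = p, dev
print("estimated source location:", best)
```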

  7. Similarity theory based method for MEMS dynamics analysis

    Institute of Scientific and Technical Information of China (English)

    LI Gui-xian; PENG Yun-feng; ZHANG Xin

    2008-01-01

    A new method for MEMS dynamics analysis is presented, based on similarity theory. With this method, the similarities between two systems can be captured in terms of the physical quantities and governing equations of different energy fields, and the unknown dynamic characteristics of one system can then be analyzed according to the known characteristics of the similar system. The possibility of establishing a pair of similar systems between MEMS and other energy systems is also discussed, based on the equivalence between mechanics and electrics, and the feasibility of applying the method is proven by an example in which the squeeze-film damping force in a MEMS device and the current of its equivalent circuit established by this method are compared.

  8. [Fast Implementation Method of Protein Spots Detection Based on CUDA].

    Science.gov (United States)

    Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong

    2016-02-01

    In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, according to the single-instruction multiple-thread execution model of CUDA, a data-space strategy of separating two-dimensional (2D) images into blocks was adopted, along with optimization measures such as shared memory and 2D texture memory. The results show that the efficiency of this method is obviously improved compared to CPU computation. As the image size increases, the method improves efficiency even more; for example, for an image with a size of 2,048 x 2,048, the CPU method needs 52,641 ms while the GPU needs only 4,384 ms.

  9. A dynamic fuzzy clustering method based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yan; ZHOU Chunguang; LIANG Yanchun; GUO Dongwei

    2003-01-01

    A dynamic fuzzy clustering method based on the genetic algorithm is presented. By calculating the fuzzy dissimilarity between samples, the essential associations among samples are modeled faithfully. The fuzzy dissimilarity between two samples is mapped into their Euclidean distance; that is, the high-dimensional samples are mapped onto the two-dimensional plane. The mapping is optimized globally by the genetic algorithm, which adjusts the coordinates of each sample, and thus the Euclidean distances, to gradually approximate the fuzzy dissimilarities between samples. A key advantage of the proposed method is that the clustering is independent of the spatial distribution of the input samples, which improves flexibility and visualization. The method converges faster and clusters more exactly than some typical clustering algorithms. Simulated experiments show the feasibility and availability of the proposed method.

  10. Measurement-based method for verifying quantum discord

    Science.gov (United States)

    Rahimi-Keshari, Saleh; Caves, Carlton M.; Ralph, Timothy C.

    2013-01-01

    We introduce a measurement-based method for verifying the quantum discord of any bipartite quantum system. We show that by performing an informationally complete positive operator valued measurement (IC-POVM) on one subsystem and checking the commutativity of the conditional states of the other subsystem, quantum discord from the second subsystem to the first can be verified. This is an improvement upon previous methods, which enables us to efficiently apply our method to continuous-variable systems, as IC-POVMs are readily available from homodyne or heterodyne measurements. We show that quantum discord for Gaussian states can be verified by checking whether the peaks of the conditional Wigner functions corresponding to two different outcomes of heterodyne measurement coincide at the same point in phase space. Using this method, we also prove that the only Gaussian states with zero discord are product states; hence, Gaussian states with Gaussian discord have nonzero quantum discord.

  11. A dynamic performance evaluation method based on SD-BSC

    Institute of Scientific and Technical Information of China (English)

    TENG Chun-xian; PAN Xiao-dong; HU Xian-wu

    2007-01-01

    Taking into consideration the disadvantages of the Balanced Scorecard (BSC), namely that it cannot reflect time delays or nonlinear causal relationships and that it lacks effective simulation, we combined it with the characteristics of System Dynamics (SD). Against the background of manufacturing enterprises, by integrating SD with the BSC we established a new performance evaluation method, the SD-BSC method, to overcome these disadvantages. A performance evaluation model of SD-BSC is provided and the simulation results are analyzed, which show that different production policies lead to different degrees of customer satisfaction. The SD-BSC dynamic performance evaluation method can reflect dynamic, complex causal feedback relationships and time delays, so it compensates for the disadvantages of traditional financial performance evaluation methods while at the same time complementing the BSC.

  12. Numerical method of slope failure probability based on Bishop model

    Institute of Scientific and Technical Information of China (English)

    SU Yong-hua; ZHAO Ming-hua; ZHANG Yue-ying

    2008-01-01

    Based on Bishop's model and by applying the first- and second-order mean deviation methods, an approximate solution method for the first- and second-order partial derivatives of the performance function was deduced according to numerical analysis theory. After the complicated multi-variable implicit performance function was simplified to a single-variable implicit function, and the rule for differentiating composite functions was combined with the principle of the mean deviation method, an approximate solution scheme for the implicit performance function was established through Taylor series expansion, and an iterative solution approach for the reliability index was given at the same time. An engineering example was analyzed by the method; the result shows that its absolute error is only 0.78% compared with the accurate solution.

  13. Orientation sampling for dictionary-based diffraction pattern indexing methods

    Science.gov (United States)

    Singh, S.; De Graef, M.

    2016-12-01

    A general framework for dictionary-based indexing of diffraction patterns is presented. A uniform sampling method for orientation space using the cubochoric representation is introduced and used to derive an empirical relation between the average disorientation between neighboring sampling points and the number of grid points sampled along the semi-edge of the cubochoric cube. A method to uniformly sample misorientation iso-surfaces is also presented and used to show that the dot product serves as a proxy for misorientation. Furthermore, it is shown that misorientation iso-surfaces in Rodrigues space are quadratic surfaces. Finally, using the concept of Riesz energies, it is shown that the sampling method results in a near-optimal covering of orientation space.

  14. Object Recognition using Feature- and Color-Based Methods

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method combines two prior object-recognition methods: one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen feature-based method is adaptive principal-component analysis (APCA); the chosen color-based method is adaptive color segmentation (ACOSE). These methods interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One result of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  15. Weaving a Formal Methods Education with Problem-Based Learning

    Science.gov (United States)

    Gibson, J. Paul

    The idea of weaving formal methods through computing (or software engineering) degrees is not a new one. However, there has been little success in developing and implementing such a curriculum. Formal methods continue to be taught as stand-alone modules, and students, in general, fail to see how fundamental these methods are to the engineering of software. A major problem is one of motivation: how can students be expected to enthusiastically embrace a challenging subject when the learning benefits, beyond passing an exam and achieving curriculum credits, are not clear? Problem-based learning has gradually moved from being an innovative pedagogical technique, commonly used to better motivate students, to being widely adopted in the teaching of many different disciplines, including computer science and software engineering. Our experience shows that a good problem can be re-used throughout a student's academic life. In fact, the best computing problems can be used with children (young and old), undergraduates and postgraduates. In this paper we present a process for weaving formal methods through a university curriculum that is founded on the application of problem-based learning and a library of good software engineering problems, where students learn about formal methods without sitting a traditional formal methods module. The process of constructing good problems and integrating them into the curriculum is shown to be analogous to the process of engineering software. This approach is not intended to replace more traditional formal methods modules: it will better prepare students for such specialised modules and ensure that all students have an understanding of and appreciation for formal methods even if they do not go on to specialise in them.

  16. An Efficient Frequency Recognition Method Based on Likelihood Ratio Test for SSVEP-Based BCI

    Directory of Open Access Journals (Sweden)

    Yangsong Zhang

    2014-01-01

    Full Text Available An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this aspect, for the first time, the likelihood ratio test (LRT) was utilized to propose a novel multichannel frequency recognition method for SSVEP data. The essence of this new method is to calculate, with the LRT, the association between multichannel EEG signals and reference signals constructed according to the stimulus frequency. For simulated and real SSVEP data, the proposed method yielded higher recognition accuracy with a shorter time window and was more robust against noise in comparison with the popular canonical correlation analysis (CCA)-based method and the least absolute shrinkage and selection operator (LASSO)-based method. The recognition accuracy and ITR obtained by the proposed method were higher than those of the CCA-based and LASSO-based methods. These superior results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-BCIs.
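
    A sketch of the CCA baseline mentioned above (not the LRT method itself), using scikit-learn: sine/cosine references are built at each candidate stimulus frequency, and the frequency whose references correlate best with the EEG is selected. The sampling rate, harmonics and data are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs, T = 250, 2.0
t = np.arange(int(fs * T)) / fs

def cca_score(eeg, f, harmonics=2):
    # Canonical correlation between multichannel EEG (samples x channels) and
    # sine/cosine references at stimulus frequency f and its harmonics.
    ref = np.column_stack([fn(2.0 * np.pi * h * f * t)
                           for h in range(1, harmonics + 1)
                           for fn in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

rng = np.random.default_rng(1)
eeg = np.sin(2.0 * np.pi * 10.0 * t)[:, None] + 0.5 * rng.standard_normal((t.size, 4))
candidates = [8.0, 10.0, 12.0, 15.0]
print("detected frequency:", max(candidates, key=lambda f: cca_score(eeg, f)))
```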

  17. Training Methods to Improve Evidence-Based Medicine Skills

    Directory of Open Access Journals (Sweden)

    Filiz Ozyigit

    2010-06-01

    Full Text Available Evidence-based medicine (EBM) is the conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients. It is estimated that only 15% of medical interventions are evidence-based. Increasing demand, new technological developments, malpractice legislation, and a very rapid increase in knowledge and knowledge sources push physicians toward EBM, but at the same time increase their workload by giving them the responsibility to improve their skills. Clinical maneuvers are needed more as the number of clinical trials and observational studies increases. However, many of the physicians who are in the front row of patient care do not use this growing evidence. There are several examples of different training methods intended to improve physicians' skills in evidence-based practice. Such training may be given during medical school, during residency, or as continuous training to practitioners in the field. It is important to discuss these different training methods in our country as well, and to encourage the dissemination of feasible and effective methods. [TAF Prev Med Bull 2010; 9(3): 245-254]

  18. Airworthiness Compliance Verification Method Based on Simulation of Complex System

    Institute of Scientific and Technical Information of China (English)

    XU Haojun; LIU Dongliang; XUE Yuan; ZHOU Li; MIN Guilong

    2012-01-01

    A study is conducted on a new airworthiness compliance verification method based on simulation of the pilot-aircraft-environment complex system. Verification scenarios are established by a "block diagram" method based on airworthiness criteria. A pilot-aircraft-environment complex model is set up, and a virtual flight testing method based on connecting MATLAB/Simulink with FlightGear is proposed. Special research is conducted on the modeling of stochastic pilot manipulation parameters and of manipulation in critical situations. Unfavorable flight factors of a certain scenario are analyzed, and reliability modeling of important systems is researched. A distribution function for small-probability events and the theory of risk probability measurement are studied. A nonlinear function is used to depict the relationship between the cumulative probability and the extremum of the critical parameter. A synthetic evaluation model is set up; a modified genetic algorithm (MGA) is applied to ascertain the distribution parameters in the model, and a more reasonable result is obtained. A clause about vehicle control functions (VCFs) verification in MIL-HDBK-516B is selected as an example to validate the practicability of the method.

  19. An Object-Based Method for Chinese Landform Types Classification

    Science.gov (United States)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). Based on the 1 km DEM of China, a combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed for the knowledge base of the classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference. The overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification, and 15.7% higher than the traditional object-based classification method.

  20. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). Based on the 1 km DEM of China, a combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed for the knowledge base of the classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference. The overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification, and 15.7% higher than the traditional object-based classification method.
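
    A sketch of the factor-importance step with scikit-learn's random forest; the terrain-factor table and labels are stand-ins for values derived from the DEM.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
factors = ["slope", "relief", "roughness", "curvature"]   # hypothetical terrain factors
X = rng.random((500, 4))                                   # one row per DEM cell
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)            # toy landform labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(factors, rf.feature_importances_):
    print(f"{name}: {imp:.3f}")    # ranks factors, e.g. for segmentation thresholds
```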

  1. Application of the VOF method based on unstructured quadrilateral mesh

    Institute of Scientific and Technical Information of China (English)

    JI Chun-ning; SHI Ying

    2008-01-01

    To simulate two-dimensional free-surface flows with complex boundaries directly and accurately, a novel VOF (volume-of-fluid) method based on an unstructured quadrilateral mesh is presented. Without introducing any complicated boundary treatment or artificial diffusion, this method treats curved boundaries directly by utilizing the inherent merit of unstructured meshes in fitting curves. The PLIC (piecewise linear interface calculation) method is adopted to obtain a second-order accurate linearized reconstruction approximation, and the MLER (modified Lagrangian-Eulerian re-map) method is introduced to advect fluid volumes on the unstructured mesh. Moreover, an analytical relation between the interface's line constant and the volume clipped by the interface is developed to improve the method's efficiency. To validate the method, a comprehensive series of large-straining advection tests was performed. Numerical results provide convincing evidence of the method's high volume-conservation accuracy and second-order shape-error convergence rate. A dramatic improvement in computational accuracy over its unstructured triangular mesh counterpart is also verified.

  2. Estimation of pump operational state with model-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)

    2010-06-15

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)

  3. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang

    2011-03-01

    We present a new particle-based method for high-viscosity fluid simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movement of high-viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformations can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with finite element methods, with their complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method does not need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient at handling the movement of highly viscous flows, and a large variety of fluid behaviors can be simulated by adjusting just one parameter. © 2011 IEEE.

  4. A decomposition method based on a model of continuous change.

    Science.gov (United States)

    Horiuchi, Shiro; Wilmoth, John R; Pletcher, Scott D

    2008-11-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period.
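
    A numeric sketch of the continuous-change decomposition: the covariates are moved along a linear path between the two populations, and each covariate's contribution is accumulated from centered differences of the measure, holding the others at mid-step values. The measure f here is a toy nonadditive function; the path discretization is an implementation choice, not the paper's.

```python
import numpy as np

def decompose(f, x1, x2, n=100):
    # Move covariates linearly from x1 to x2 in n small steps; covariate i's
    # contribution accumulates centered differences of f, others held mid-step.
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    d = (x2 - x1) / n
    contrib = np.zeros_like(x1)
    for s in range(n):
        mid = x1 + (s + 0.5) * d
        for i in range(x1.size):
            hi, lo = mid.copy(), mid.copy()
            hi[i] += d[i] / 2.0
            lo[i] -= d[i] / 2.0
            contrib[i] += f(hi) - f(lo)
    return contrib

f = lambda x: x[0] * x[1]                   # toy nonadditive dependent variable
c = decompose(f, [2.0, 3.0], [4.0, 5.0])
print("contributions:", c, "sum:", c.sum())  # sums (approximately) to the total
print("total difference:", f(np.array([4.0, 5.0])) - f(np.array([2.0, 3.0])))
```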

  5. Development of redesign method of production system based on QFD

    Science.gov (United States)

    Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi

    In order to keep up with a rapidly changing market environment, rapid and flexible redesign of production systems is quite important. For effective and rapid redesign of a production system, a redesign support system is strongly needed. To this end, this paper proposes a redesign method for production systems based on Quality Function Deployment (QFD). The method represents a designer's intention in the form of QFD, collects experts' knowledge as "Production Method (PM) modules," and formulates redesign guidelines as seven redesign operations, so as to support a designer in finding improvement ideas in a systematic manner. This paper also illustrates a redesign support tool for production systems that we have developed based on this method, and demonstrates its feasibility with a practical example: the production system of a contact probe. The results show that a novice designer can achieve cost reductions comparable to those of veteran designers. From this result, we conclude that our redesign method is effective and feasible for supporting the redesign of a production system.

  6. CEMS using hot wet extractive method based on DOAS

    Science.gov (United States)

    Sun, Bo; Zhang, Chi; Sun, Changku

    2011-11-01

    A continuous emission monitoring system (CEMS) using a hot-wet extractive method based on differential optical absorption spectroscopy (DOAS) is designed. The developed system is applied to retrieving the concentrations of SO2 and NOx in flue gas on-site. The flue gas is carried along a heated sample line into the sample pool at a constant temperature above the dew point. In this way, the adverse impact of water vapor on measurement accuracy is greatly reduced, and on-line calibration is implemented. The flue gas is then discharged from the sample pool after the measuring process is complete. The on-site applicability of the system is enhanced by using a programmable logic controller (PLC) to control each valve in the system during the measurement and on-line calibration processes. The concentration retrieval method used in the system is a nonlinear partial least squares (PLS) regression. The relationship between the known concentrations and the differential absorption features is established by the PLS method during the on-line calibration process, after which the concentrations of SO2 and NOx can easily be determined from this relationship. The retrieval method separates information from noise effectively, which improves the measurement accuracy of the system. SO2 at four different concentrations was measured by the system under laboratory conditions. The results prove that the full-scale error of the system is less than 2% FS.
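
    A sketch of the calibration and retrieval step with scikit-learn's PLS regression; the spectra, absorption signatures and concentrations below are simulated stand-ins for the on-line calibration data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_wl = 200                                               # spectral channels
conc = rng.uniform(0.0, 500.0, (30, 2))                  # SO2, NOx calibration mixes
signatures = rng.random((2, n_wl))                       # assumed absorption signatures
spectra = conc @ signatures + 0.01 * rng.standard_normal((30, n_wl))

pls = PLSRegression(n_components=4).fit(spectra, conc)   # calibration step
test = np.array([[120.0, 80.0]]) @ signatures            # unseen mixture spectrum
print("retrieved concentrations:", pls.predict(test))
```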

  7. An improved Bayesian matting method based on image statistic characteristics

    Science.gov (United States)

    Sun, Wei; Luo, Siwei; Wu, Lina

    2015-03-01

    Image matting is an important task in image and video editing and has been studied for more than 30 years. In this paper we propose an improved interactive matting method. Starting from a coarse user-guided trimap, we first perform a color estimation based on texture and color information and use the result to refine the original trimap. Then, with the new trimap, we apply a soft matting process, an improved Bayesian matting with smoothness constraints. Experimental results on natural images show that this method is useful, especially for images whose background has similar texture features or for which a precise trimap is hard to provide.

  8. Compression method based on training dataset of SVM

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A method to compress the training dataset of a Support Vector Machine (SVM), based on the characteristics of the SVM, is proposed. First, the distances between the samples in the two training datasets are computed; then the samples that lie far away from the hyperplane are discarded in order to compress the training dataset. The time spent training the SVM on the compressed dataset is shortened markedly. Experimental results show that the algorithm is effective.
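
    The compression idea can be sketched in a few lines: train once, keep only the samples whose decision value places them near the hyperplane, and retrain on the reduced set. The dataset, kernel, and margin threshold below are illustrative assumptions, not the paper's settings.

```python
# Sketch of the compression idea; dataset, kernel and margin threshold are
# illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
margin = np.abs(svm.decision_function(X))

keep = margin < 1.5                  # discard samples far from the hyperplane
X_c, y_c = X[keep], y[keep]
print(f"kept {keep.sum()} of {len(X)} samples")

svm_small = SVC(kernel="rbf", gamma="scale").fit(X_c, y_c)
print("accuracy on the full set:", svm_small.score(X, y))
```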

  9. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    HAZOP is the most commonly used process hazard analysis tool in industry, a systematic yet tedious and time-consuming method. The aim of this study is to explore the feasibility of dynamic process simulation to facilitate HAZOP studies. We propose a simulation-based methodology to complement...... the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of the failure scenarios is then evaluated using dynamic simulations; in this study the K-Spice® software is used. The consequences of each failure...... model as case study....

  10. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, so that noise is not amplified in the subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, it not only maintains the richness of the features but also reduces the noise of the image. The simulation results show that the proposed algorithm achieves a better matching effect.
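
    A minimal sketch of this pipeline with OpenCV is given below; the adaptive Wallis-style smoothing stage is approximated by a plain Gaussian blur (an assumption, not the authors' filter), the file names are placeholders, and the final transform is estimated robustly from ratio-test SIFT matches.

```python
# Sketch with OpenCV; file names are placeholders, and Gaussian smoothing
# stands in for the Wallis-based adaptive filter described in the abstract.
import cv2
import numpy as np

img1 = cv2.imread("sar_ref.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("sar_new.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.GaussianBlur(img1, (5, 5), 1.0)
img2 = cv2.GaussianBlur(img2, (5, 5), 1.0)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Ratio-test matching, then robust transform estimation with RANSAC.
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
print("estimated transform:\n", M)
```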

  11. A NEW LBG-BASED IMAGE COMPRESSION METHOD USING DCT

    Institute of Scientific and Technical Information of China (English)

    Jiang Lai; Huang Cailing; Liao Huilian; Ji Zhen

    2006-01-01

    In this letter, a new Linde-Buzo-Gray (LBG)-based image compression method using the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) is proposed. A gray-level image is first decomposed into blocks, and each block is then encoded by a 2D DCT coding scheme. The DCT stage reduces the dimension of the vectors fed into the generalized VQ scheme, and the encoding time of the generalized VQ is reduced accordingly. The experimental results demonstrate the efficiency of the proposed method.
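
    The following toy sketch shows the block-DCT dimensionality reduction feeding a vector quantizer; k-means is used as a stand-in for the LBG codebook training it closely mirrors, and the image, block size, and retained coefficients are illustrative assumptions.

```python
# Toy sketch: 8x8 block DCT, keep 9 low-frequency coefficients per block as
# the VQ input vector, and train the codebook with k-means, used here as a
# stand-in for LBG, which it closely mirrors. The image is a placeholder.
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64)).astype(float)

vectors = []
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        block = dctn(image[r:r + 8, c:c + 8], norm="ortho")
        vectors.append(block[:3, :3].ravel())   # low-frequency corner only
vectors = np.array(vectors)

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(vectors)
indices = codebook.predict(vectors)             # the transmitted VQ indices
print(indices[:10], codebook.cluster_centers_.shape)
```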

  12. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presents a registration method based on the Fourier transform for multi-band images involving translation and small rotation. Although images from different bands differ considerably in intensity and features, they contain certain common information that can be exploited. A model is given in which the multi-band images are linearly correlated in the least-squares sense, and it is proved that the correlation coefficients have no effect on the registration process when the two images are linearly correlated. Finally, the steps of the registration method are given. The experiments show that the model is reasonable and the results are satisfactory.
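
    The translation part of such a method can be sketched with phase correlation, where the linear-correlation model matters: a per-band gain and offset cancel once the cross-power spectrum is normalized to unit magnitude. The example below is a minimal sketch on synthetic data.

```python
# Minimal sketch of translation estimation by phase correlation; note the
# per-band gain and offset cancel once the cross-power spectrum is
# normalized, which is where the linear-correlation model matters.
import numpy as np

def phase_correlation(a, b):
    """Return the (row, col) shift d such that a(x) ~ b(x - d)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), float)
    dims = np.array(a.shape)
    mask = peak > dims / 2                 # wrap large shifts to negative
    peak[mask] -= dims[mask]
    return peak

rng = np.random.default_rng(2)
base = rng.standard_normal((128, 128))
moved = 3.0 * np.roll(base, shift=(5, -9), axis=(0, 1)) + 1.0
print(phase_correlation(moved, base))      # approximately [5, -9]
```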

  13. Star pattern recognition method based on neural network

    Institute of Scientific and Technical Information of China (English)

    LI Chunyan; LI Ke; ZHANG Longyun; JIN Shengzhen; ZU Jifeng

    2003-01-01

    A star sensor is an avionics instrument that provides the absolute 3-axis attitude of a spacecraft by using star observations. Its key function is to recognize the observed stars by comparing them with a reference catalogue. Autonomous star pattern recognition requires that similar patterns can be distinguished from each other with a small training set. Therefore, a new method based on neural network technology is proposed, and a recognition system containing parallel back-propagation (BP) multi-subnets is designed. The simulation results show that the method performs much better than traditional algorithms, and the proposed system achieves both higher recognition accuracy and faster recognition speed.

  14. A SAR Image Registration Method Based on SIFT Algorithm

    Science.gov (United States)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, so that noise is not amplified in the subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, it not only maintains the richness of the features but also reduces the noise of the image. The simulation results show that the proposed algorithm achieves a better matching effect.

  15. Iterative-decreasing calibration method based on regional circle

    Science.gov (United States)

    Zhao, Hongyang

    2017-07-01

    In the field of computer vision, camera calibration is an active topic. To address the coupling between the distortion center and the distortion factor in camera calibration, this paper presents an iterative-decreasing calibration method based on regional circles: the local area of the circle plate is used to compute the distortion center coordinates by iterative decrease, the distortion center is then used to compute the calibration factors of the local area, and finally the distortion center and the distortion factor are refined jointly by global optimization. The calibration results show that the proposed method achieves high calibration accuracy.

  16. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... of the framework. The issue of commercial simulators or software providing the necessary features for product-process synthesis-design as opposed to their development by the academic PSE community will also be discussed. An example of a successful collaboration between academia-industry for the development...

  17. Methods for preparing colloidal nanocrystal-based thin films

    Energy Technology Data Exchange (ETDEWEB)

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  18. Improved image fusion method based on NSCT and accelerated NMF.

    Science.gov (United States)

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to obtain the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments show that our method indeed improves performance when compared with PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.
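
    A loose sketch of the low-frequency fusion step is given below. Since NSCT is not available in the common Python libraries, a single-level wavelet decomposition (PyWavelets) stands in for the NSCT bands, sklearn's NMF stands in for ANMF, and a max-magnitude rule stands in for the NHM rule; all of these substitutions are assumptions for illustration only.

```python
# Loose sketch: PyWavelets' single-level DWT stands in for the NSCT bands,
# sklearn's NMF for ANMF, and a max-magnitude rule for the NHM rule.
import numpy as np
import pywt
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
imgA = rng.uniform(0, 1, (64, 64))         # placeholder registered sources
imgB = rng.uniform(0, 1, (64, 64))

cA_a, det_a = pywt.dwt2(imgA, "haar")
cA_b, det_b = pywt.dwt2(imgB, "haar")

# Stack the two low-pass bands as columns and factor with one component;
# the single basis column W acts as the fused low-frequency image.
V = np.clip(np.column_stack([cA_a.ravel(), cA_b.ravel()]), 0, None)
model = NMF(n_components=1, init="nndsvda", max_iter=500)
W = model.fit_transform(V)
fused_low = W.reshape(cA_a.shape) * model.components_.mean()

# High-frequency bands: keep the coefficient with the larger magnitude.
fused_det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                  for da, db in zip(det_a, det_b))
fused = pywt.idwt2((fused_low, fused_det), "haar")
print(fused.shape)
```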

  19. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    Directory of Open Access Journals (Sweden)

    Mingdong Li

    2012-05-01

    Full Text Available In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to obtain the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments show that our method indeed improves performance when compared with PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.

  20. An Extended Role Based Access Control Method for XML Documents

    Institute of Scientific and Technical Information of China (English)

    MENG Xiao-feng; LUO Dao-feng; OU Jian-bo

    2004-01-01

    As XML becomes increasingly important as the data-exchange format of the Internet and intranets, access control on XML properties arises as a new issue. Role-based access control (RBAC) is an access control method that has been widely used in the Internet, operating systems and relational databases over the past ten years. Though RBAC is already relatively mature in these fields, new problems occur when it is applied to XML properties. This paper proposes an integrated model to resolve these problems, after a full analysis of the features of XML and RBAC.
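
    The core RBAC check being extended can be sketched as follows; the mapping of roles to (action, path-prefix) permissions over XPath-like node paths is a simplified illustration, not the paper's integrated model.

```python
# Simplified illustration: roles grant (action, XPath-prefix) permissions,
# and a user's access is resolved through the roles assigned to them.
from typing import Dict, Set, Tuple

Permission = Tuple[str, str]                  # (action, node-path prefix)

role_perms: Dict[str, Set[Permission]] = {
    "editor": {("read", "/doc"), ("write", "/doc/body")},
    "viewer": {("read", "/doc")},
}
user_roles: Dict[str, Set[str]] = {"alice": {"editor"}, "bob": {"viewer"}}

def allowed(user: str, action: str, node_path: str) -> bool:
    """Grant access if any assigned role permits the action on an ancestor."""
    for role in user_roles.get(user, set()):
        for act, prefix in role_perms.get(role, set()):
            if act == action and node_path.startswith(prefix):
                return True
    return False

print(allowed("alice", "write", "/doc/body/p[1]"))   # True
print(allowed("bob", "write", "/doc/body/p[1]"))     # False
```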

  1. Register-based statistics statistical methods for administrative data

    CERN Document Server

    Wallgren, Anders

    2014-01-01

    This book provides a comprehensive and up to date treatment of  theory and practical implementation in Register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi

  2. Construct Method of Predicting Satisfaction Model Based on Technical Characteristics

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-an; DENG Qian; SUN Guan-long; ZHANG Wei-she

    2011-01-01

    In order to construct a relatively objective mapping model between customer requirements and product technical characteristics, a novel approach based on customer satisfaction information mined from case products and expert satisfaction information on technical characteristics is put forward in this paper. In this method, the evaluation values of the technical characteristics are expressed by rough numbers, and the target sequence of the technical characteristics is determined according to benefit-type, cost-type and middle-type criteria. The calculated satisfactions of customers and of technical characteristics are used as the input and output elements to construct a BP network model, which is simulated in MATLAB on the case of electric bicycles.

  3. Realizable Hardware-Based Method for Digital Modulation Classification

    Institute of Scientific and Technical Information of China (English)

    HAN Li; WAN Jin-bo

    2005-01-01

    A new method suited for hardware implementation is developed to classify 8 different digital modulation types with raised-cosine base-band pulses, without knowing the carrier frequency and symbol timing. The normalized histogram of stagnation points of the instantaneous parameters is used to recognize both ideal rectangular and raised-cosine base-band digital signals. Carrier frequency estimation is used to enhance the recognition rate of phase-modulated signals. At a signal-to-noise ratio (SNR) of 10 dB, the recognition rate is over 80%. The new algorithm is thus well suited for hardware implementation.

  4. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    OpenAIRE

    Mingdong Li; Siyu Lai; Juan Wang

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on low-frequency sub-images to get the low-pass coefficients. The low frequency fused image can ...

  5. Test method on infrared system range based on space compression

    Science.gov (United States)

    Chen, Zhen-xing; Shi, Sheng-bing; Han, Fu-li

    2016-09-01

    An infrared thermal imaging system generates images from the difference in infrared radiation between an object and the background, and operates in a passive mode. Range is an important performance figure and a necessary item in appraisal tests of infrared systems. In this paper, aiming to carry out infrared system range tests in the laboratory, a simulated test ground is designed based on object equivalence, background simulation, object characteristic control, air attenuation characteristics, infrared jamming simulation and so on. Repeatable and controllable tests are achieved, solving the problems of the traditional field test method.

  6. Geodemographic regional system research: information provision and methodical bases

    Directory of Open Access Journals (Sweden)

    Катерина Сегіда

    2017-09-01

    Full Text Available It is most reasonable to consider the region's population as a regional geodemographic system whose features lie in its inseparable connection with the general process of regional sociogeosystem development, taking into account the many factors of regional development that influence the geodemographic processes in the region. The geodemographic system is treated as a functional component of the sociogeosystem at the heart of social and geographical concepts. The study of a region's geodemographic system rests on five methodological approaches: geographical, systemic, synergetic, historical and informational. The authors elucidate the methodological features of the geodemographic study of a region from the perspective of human geography, and disclose the basic techniques and application features of these approaches. Among the general scientific methods, we consider probabilistic and statistical methods and methods of organizing, summarizing, comparing, system analysis and modeling. Among the specific methods are the mapping method, IFV modeling, development trajectory modeling, component analysis of the initial vector, other methods of multivariate analysis, and GIS technology. The special research methods of the geodemographic system include techniques for studying the characteristics of population distribution over the territory, and a number of demographic and geodemographic methods and techniques for establishing the demographic characteristics of territories. The geodemographic research software for the region is presented, and the list of statistical indicators for the information database is given. The groups of demographic indices that characterize the system are defined, including factors of the speed and intensity of demographic processes, reproduction, and structural factors. The key demographic factors and coefficients are presented in both absolute and relative units. Database features to implement social and

  7. Hybrid Perturbation methods based on Statistical Time Series models

    CERN Document Server

    San-Juan, Juan Félix; Pérez, Iván; López, Rosario

    2016-01-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of a...

  8. Improvement of the relative entropy based protein folding method

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The "relative entropy" has been used as a minimization function to predict the tertiary structure of a protein backbone, and good results have been obtained. However, in our previous work, the ensemble average of the contact potential was estimated by an approximate calculation. In order to improve the theoretical integrity of the relative-entropy-based method, a new theoretical calculation method of the ensemble average of the contact potential was presented in this work, which is based on the thermodynamic perturbation theory. Tests of the improved algorithm were performed on twelve small proteins. The root mean square deviations of the predicted versus the native structures from Protein Data Bank range from 0.40 to 0.60 nm. Compared with the previous approximate values, the average prediction accuracy is improved by 0.04 nm.

  9. Improvement of the relative entropy based protein folding method

    Institute of Scientific and Technical Information of China (English)

    QI LiSheng; SU JiGuo; CHEN WeiZu; WANG CunXin

    2009-01-01

    The "relative entropy" has been used as a minimization function to predict the tertiary structure of a protein backbone, and good results have been obtained. However, in our previous work, the ensemble average of the contact potential was estimated by an approximate calculation. In order to improve the theoretical integrity of the relative-entropy-based method, a new theoretical calculation method of the ensemble average of the contact potential was presented in this work, which is based on the thermodynamic perturbation theory. Testa of the improved algorithm were performed on twelve small proteins. The root mean square deviations of the predicted versus the native structures from Protein Data Bank range from 0.40 to 0.60 nm. Compared with the previous approximate values, the average prediction accuracy is improved by 0.04 nm.

  10. Curvelet Transform-Based Denoising Method for Doppler Frequency Extraction

    Institute of Scientific and Technical Information of China (English)

    HOU Shu-juan; WU Si-liang

    2007-01-01

    A novel image denoising method based on the curvelet transform is proposed in order to improve the performance of Doppler frequency extraction in a low signal-to-noise ratio (SNR) environment. The echo can be represented as a gray image, with spectral intensity as its gray values, by a time-frequency transform, and the curvelet coefficients of the image are computed. Then an adaptive soft-threshold scheme based on a dual-median operation is applied in the curvelet domain. After that, the image is reconstructed by the inverse curvelet transform and the Doppler curve is extracted by a curve detection scheme. Experimental results show that the proposed method improves the detection of Doppler frequency in a low SNR environment.

  11. Mutton Traceability Method Based on Internet of Things

    Directory of Open Access Journals (Sweden)

    Wu Min-Ning

    2014-01-01

    Full Text Available In order to improve the efficiency of mutton traceability in the Internet of Things and solve the data transmission problem, existing tracking algorithms are analyzed, and a food traceability application model, a Petri net model of food traceability, and an improved K-means algorithm for food traceability time-series data based on the Internet of Things are proposed. The application model converts, integrates and mines the heterogeneous information to implement food-safety traceability information management. The Petri net model of the state transitions in the traceability process is analyzed and simulated, providing a theoretical basis for studying the behavior and structural design of the food traceability system. Experiments on simulated data show that the proposed traceability method based on the Internet of Things handles mutton traceability data more effectively than the traditional K-means methods.

  12. A Fingerprint Minutiae Matching Method Based on Line Segment Vector

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Minutiae-based fingerprint matching is the approach most commonly used in automatic fingerprint identification systems. In this paper, we propose a minutia matching method based on line segment vectors. This method uses all the detected minutiae (ridge endings and ridge bifurcations) in a fingerprint image to create a set of new vectors (line segment vectors). Using these vectors, we can determine a truer reference point more efficiently. In addition, the new minutiae vectors also increase the accuracy of the minutiae matching. Experiments on the public-domain fingerprint collections FVC2004 DB3 set A and DB4 set A show that our algorithm obtains improved verification performance.

  13. Method of pectus excavatum measurement based on structured light technique

    Science.gov (United States)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej

    2009-07-01

    We present an automatic method for assessment of pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on a structured light projection method is built to capture the shape of the body surface of the patients. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained; its main steps are: normalization of trunk model orientation, cutting the model into slices, analysis of each slice shape, selecting the proper slice for the assessment of pectus excavatum of the patient, and calculating its shape parameter. We develop a new shape parameter (I3ds) that shows high correlation with the computed tomography (CT) Haller index widely used for assessment of pectus excavatum. Clinical results and the evaluation of developed indexes are presented.

  14. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in natural scenes, and the authenticity of falling leaves plays an important part in the dynamic modeling of such scenes. Models of falling leaves have wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  15. Method of pectus excavatum measurement based on structured light technique.

    Science.gov (United States)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej

    2009-01-01

    We present an automatic method for assessment of pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on a structured light projection method is built to capture the shape of the body surface of the patients. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained; its main steps are: normalization of trunk model orientation, cutting the model into slices, analysis of each slice shape, selecting the proper slice for the assessment of pectus excavatum of the patient, and calculating its shape parameter. We develop a new shape parameter (I(3ds)) that shows high correlation with the computed tomography (CT) Haller index widely used for assessment of pectus excavatum. Clinical results and the evaluation of developed indexes are presented.

  16. Memristor Crossbar-based Hardware Implementation of IDS Method

    CERN Document Server

    Merrikh-Bayat, Farnood; Rohani, Ali

    2010-01-01

    Ink Drop Spread (IDS) is the engine of the Active Learning Method (ALM), a methodology of soft computing. IDS, as a pattern-based processing unit, extracts useful information from a system subjected to modeling. In spite of its excellent potential for solving problems such as classification and modeling compared with other soft computing tools, a simple and fast hardware implementation has remained a challenge. This paper describes a new hardware implementation of the IDS method based on the memristor crossbar structure. In addition to simplicity, the proposed circuit is completely real-time, has low latency, and is able to continue working after a power breakdown.

  17. Fluidized bed control system based on inverse system method

    Institute of Scientific and Technical Information of China (English)

    SONG Fu-hua; LI Ping

    2005-01-01

    The invertibility of the large air dense medium fluidized bed (ADMFB) is studied by introducing the inverse system theory of nonlinear systems. The ADMFB, a multivariable, nonlinear and strongly coupled system, is thereby decoupled into independent SISO pseudo-linear subsystems, and linear controllers are designed for each subsystem based on linear systems theory. Practical results show that this method clearly improves the stability of the ADMFB.

  18. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S.; Sedukhin, S. [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I.

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some of the array processors are optimal in the framework of linear allocation of computations, in terms of the number of processing elements and the computing time. (author)
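
    For reference, the sequential algorithm that such array processors parallelize can be sketched as the one-step fraction-free (Bareiss) elimination below; every division is exact, so an integer matrix stays integer throughout (pivoting is omitted for brevity, so the leading principal minors are assumed nonzero).

```python
# One-step fraction-free (Bareiss) elimination: all divisions are exact, so
# integer input stays integer. Pivoting is omitted for brevity, so the
# leading principal minors are assumed to be nonzero.
def bareiss(M):
    """In-place elimination; the last pivot M[-1][-1] equals det(M)."""
    n = len(M)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M

A = [[2, 3, 1], [4, 7, 5], [6, 18, 22]]
print(bareiss(A))        # last pivot is det(A) = -16
```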

  19. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases for verifying the security of a protocol implementation.

  20. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases for verifying the security of a protocol implementation.

  1. Geophysics-based method of locating a stationary earth object

    Science.gov (United States)

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  2. Decoupling Control Method Based on Neural Network for Missiles

    Institute of Scientific and Technical Information of China (English)

    ZHAN Li; LUO Xi-shuang; ZHANG Tian-qiao

    2005-01-01

    In order to make the static state feedback nonlinear decoupling control law for a class of missiles easy to implement in practice, an improvement is discussed. The improvement introduces a BP neural network to approximate the decoupling control laws designed for different aerodynamic characteristic points, so that a new decoupling control law based on the BP neural network is produced after network training. Simulation results on an example show that the approach is feasible and effective.

  3. TOPOLOGY DESCRIPTION FUNCTION BASED METHOD FOR MATERIAL DESIGN

    Institute of Scientific and Technical Information of China (English)

    Cao Xianfan; Liu Shutian

    2006-01-01

    The purpose of this paper is to investigate the application of the topology description function (TDF) in material design. Using the TDF to describe the topology of the microstructure, the formulation and solution technique for the problem of designing materials with prescribed mechanical properties are presented. By expressing the TDF as the sum of a series of basis functions determined by parameters, the topology optimization of the material microstructure is formulated as a size optimization problem whose design variables are the parameters of the TDF basis functions, independent of the mesh of the design domain. With this method, high-quality topologies describing the distribution of the constituent material in the design domain can be obtained, and the checkerboard problem often met in the variable-density method is avoided. Compared with the conventional level set method, the optimization problem can be solved by existing optimization techniques without solving the 'Hamilton-Jacobi-type' equation by the difference method. The proposed method is illustrated with two 2D examples: one gives a unit cell with positive Poisson's ratio, the other with negative Poisson's ratio. The examples show that the TDF-based method is effective for material design.

  4. Fuzzy-Based XML Knowledge Retrieval Methods in Edaphology

    Directory of Open Access Journals (Sweden)

    K. Naresh kumar

    2016-05-01

    Full Text Available In this paper, we propose an efficient method for knowledge management in edaphology to assist edaphologists and those involved in agriculture. The proposed method consists of two parts: the first builds the knowledge base using XML, and the second deals with information retrieval using fuzzy search. Initially, the relational database is converted to an XML database. The paper discusses two algorithms: in one, soil characteristics are input to obtain a list of suitable plants; in the other, plant names are input to obtain the soil characteristics suited to each plant. While retrieving the query result, the crisp numerical values are fuzzified using the triangular fuzzy membership function and matched against those in the database; the entries that satisfy the match are added to the result list, and their frequencies are then used to rank the list and obtain the final sorted result. The performance metrics used to evaluate the method and compare it with the baseline work are the number of plants retrieved, ranking efficiency, computation time and memory usage. The results prove the validity of the method, which achieves an average computation time of 0.102 seconds and average memory usage of 2486 KB, both far better than the results of the previous method.
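
    The fuzzification and ranking step can be sketched with a triangular membership function as below; the pH values, membership spread, and cutoff are illustrative assumptions rather than the paper's parameters.

```python
# Sketch of the fuzzification and ranking step; pH values, spread and cutoff
# are illustrative assumptions.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

soil_ph_db = {"rice": 6.0, "wheat": 6.8, "blueberry": 4.8}
query_ph = 6.3

memberships = {plant: triangular(ph, query_ph - 1.0, query_ph, query_ph + 1.0)
               for plant, ph in soil_ph_db.items()}
ranked = sorted((p for p, m in memberships.items() if m > 0.2),
                key=lambda p: -memberships[p])
print(ranked)            # plants ranked by membership degree
```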

  5. Edge detection methods based on generalized type-2 fuzzy logic

    CERN Document Server

    Gonzalez, Claudia I; Castro, Juan R; Castillo, Oscar

    2017-01-01

    In this book four new methods are proposed. In the first method the generalized type-2 fuzzy logic is combined with the morphological gradient technique. The second method combines general type-2 fuzzy systems (GT2 FSs) and the Sobel operator; in the third approach the methodology based on the Sobel operator and GT2 FSs is improved to be applied to color images. In the fourth approach, we propose a novel edge detection method in which a digital image is converted to a generalized type-2 fuzzy image. The book also includes a comparative study of type-1, interval type-2 and generalized type-2 fuzzy systems as tools to enhance edge detection in digital images when used in conjunction with the morphological gradient and the Sobel operator. The proposed generalized type-2 fuzzy edge detection methods were tested with benchmark images and synthetic images, in grayscale and color formats. Another contribution of this book is that the generalized type-2 fuzzy edge detector method is applied in the preproc...

  6. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and a TV constraint is added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, for various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.

  7. A vision-based method for planar position measurement

    Science.gov (United States)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (X, Y, θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and to study the factors that influence the accuracy and precision of the phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
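
    The core phase-to-position idea can be sketched in one dimension: the phase of the fundamental DFT bin of a periodic pattern converts directly to sub-pixel displacement. The period and signal below are illustrative assumptions.

```python
# One-dimensional sketch of the phase-to-position idea; the pattern period
# and sub-pixel shift are assumed parameters.
import numpy as np

period, n = 16.0, 512
shift_true = 0.37                          # sub-pixel displacement to recover
x = np.arange(n)

signal = np.cos(2 * np.pi * (x - shift_true) / period)
spectrum = np.fft.fft(signal)
k = int(round(n / period))                 # bin of the fundamental frequency
phase = np.angle(spectrum[k])

# cos(2*pi*(x - d)/T) has fundamental phase -2*pi*d/T, so d = -phase*T/(2*pi)
shift_est = -phase * period / (2 * np.pi)
print(shift_est)                           # approximately 0.37
```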

  8. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    Science.gov (United States)

    2010-01-01

    Background Classification of bacteria within the genus Brucella has been difficult due in part to considerable genomic homogeneity between the different species and biovars, in spite of clear differences in phenotypes. Therefore, many different methods have been used to assess Brucella taxonomy. In the current work, we examine 32 sequenced genomes from genus Brucella representing the six classical species, as well as more recently described species, using bioinformatical methods. Comparisons were made at the level of genomic DNA using oligonucleotide based methods (Markov chain based genomic signatures, genomic codon and amino acid frequencies based comparisons) and proteomes (all-against-all BLAST protein comparisons and pan-genomic analyses). Results We found that the oligonucleotide based methods gave different results compared to that of the proteome based methods. Differences were also found between the oligonucleotide based methods used. Whilst the Markov chain based genomic signatures grouped the different species in genus Brucella according to host preference, the codon and amino acid frequencies based methods reflected small differences between the Brucella species. Only minor differences could be detected between all genera included in this study using the codon and amino acid frequencies based methods. Proteome comparisons were found to be in strong accordance with current Brucella taxonomy indicating a remarkable association between gene gain or loss on one hand and mutations in marker genes on the other. The proteome based methods found greater similarity between Brucella species and Ochrobactrum species than between species within genus Agrobacterium compared to each other. In other words, proteome comparisons of species within genus Agrobacterium were found to be more diverse than proteome comparisons between species in genus Brucella and genus Ochrobactrum. Pan-genomic analyses indicated that uptake of DNA from outside genus Brucella appears to be

  9. Crack Diagnosis of Wind Turbine Blades Based on EMD Method

    Science.gov (United States)

    Hong-yu, CUI; Ning, DING; Ming, HONG

    2016-11-01

    Wind turbine blades are both the source of power and the core technology of wind generators. After long periods of operation or in extreme conditions, cracks or damage can occur on the surface of the blades. If the wind generator continues to work in this state, the crack will expand until the blade breaks, which can lead to incalculable losses. Therefore, a crack diagnosis method for wind turbine blades based on EMD is proposed in this paper. Based on aerodynamics and fluid-structure coupling theory, an aero-elastic analysis of a wind turbine blade model is first made in ANSYS Workbench. Second, based on the aero-elastic analysis and the EMD method, the blade cracks are diagnosed and identified in the time and frequency domains, respectively. Finally, the blade model, strain gauges, dynamic signal acquisition and other equipment are used in an experimental study of the aero-elastic analysis and crack damage diagnosis of wind turbine blades to verify the crack diagnosis method proposed in this paper.
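
    The signal-analysis step can be sketched with the PyEMD package (an assumed dependency, distributed as "EMD-signal"); the simulated strain signal with a high-frequency burst stands in for measured blade response data.

```python
# Sketch using PyEMD (assumed dependency, installed as "EMD-signal"); the
# simulated strain signal stands in for measured blade response data.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 1000)
# Slow operating component plus a high-frequency burst mimicking a crack.
signal = np.sin(2 * np.pi * 5 * t)
signal[600:650] += 0.4 * np.sin(2 * np.pi * 120 * t[600:650])

imfs = EMD().emd(signal)        # intrinsic mode functions, highest freq first
print(imfs.shape)

# The energy of the highest-frequency IMF localizes the burst in time.
energy = imfs[0] ** 2
print("burst near sample", int(np.argmax(energy)))
```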

  10. Bearing diagnostics: A method based on differential geometry

    Science.gov (United States)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to become nonlinear, non-stationary, and chaotic characteristics that make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with superiorities in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are extracted subsequently to reduce the complexity of the manifold features. For fault diagnosis, information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information; the Gaussian mixture model is utilized to calculate the confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.

  11. Image-based Water Level Measurement Method under Stained Ruler

    Institute of Scientific and Technical Information of China (English)

    Jae-do KIM; Young-joon HAN; Hern-soo HAHN

    2010-01-01

    This paper proposes an image-based water level measurement method for the case in which the ruler used to indicate the water level is stained. Contamination of the ruler weakens or eliminates many of the features that are required for image processing. However, the color difference between the ruler and the water surface is more robust to environmental change than the other features. After the color differences are emphasized, processing is limited to the region of the ruler to eliminate noise, and an average image is produced from several consecutive frames. A histogram is then produced along the height axis of the resulting average intensity image. Local peaks and local valleys are detected, and the section between the peak and valley with the greatest change is sought; the valley point of this section is used to detect the water level. The detected water level is then converted to the actual water level using a mapping table. The proposed method is compared with an ultrasonic-based method to evaluate its accuracy and efficiency in various contaminated environments.
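
    The profile-analysis stage can be sketched as below: average several frames, take an intensity profile along the height axis, and pick the valley of the largest peak-to-valley transition. The synthetic frames and thresholds are illustrative assumptions.

```python
# Sketch of the profile analysis; the synthetic frames place the water line
# at row 120 and let the water brighten slowly with depth, so the valley of
# the largest transition sits at the water line.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
rows = np.arange(200)[:, None]
frames = [np.where(rows < 120, 180.0, 60.0 + 0.2 * (rows - 120))
          + rng.normal(0, 4, (200, 40)) for _ in range(10)]

profile = np.mean(frames, axis=0).mean(axis=1)     # height-axis intensity
profile = np.convolve(profile, np.ones(5) / 5, mode="same")

peaks, _ = find_peaks(profile)
valleys, _ = find_peaks(-profile)
# Choose the peak/valley pair with the greatest intensity change.
pairs = [(p, v) for p in peaks for v in valleys if v > p]
p, v = max(pairs, key=lambda pv: profile[pv[0]] - profile[pv[1]])
print("estimated water line at row", v)
```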

  12. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm, and the measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or tools and can realize on-line measurement.

  13. Accurate Measurement Method for Tube's Endpoints Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    LIU Shaoli; JIN Peng; LIU Jianhua; WANG Xiao; SUN Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm, and the measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or tools and can realize on-line measurement.

  14. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2016-08-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm, and the measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or tools and can realize on-line measurement.

  15. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods, where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare them with agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak-selection, strong-mutation limit), but that mathematics is crucial for validating the computational simulations.
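
    A minimal agent-based sketch in the spirit described, a finite population playing a Prisoner's Dilemma with tournament selection and mutation, is given below; the payoff values and rates are illustrative assumptions.

```python
# Minimal agent-based sketch: a finite population playing a Prisoner's
# Dilemma with tournament selection and mutation; payoffs and rates are
# illustrative assumptions.
import random

N, MU, GENS = 100, 0.01, 2000
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
pop = ["C"] * (N // 2) + ["D"] * (N - N // 2)
random.seed(0)

def fitness(agent):
    """Average payoff against a random sample of the population."""
    return sum(payoff[(agent, random.choice(pop))] for _ in range(10)) / 10

for _ in range(GENS):
    parent = max(random.sample(pop, 5), key=fitness)   # tournament selection
    child = parent if random.random() > MU else random.choice("CD")
    pop[random.randrange(N)] = child                   # death-birth update

print("cooperators:", pop.count("C"), "defectors:", pop.count("D"))
```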

  16. A Method for Weight Multiplicity Computation Based on Berezin Quantization

    Directory of Open Access Journals (Sweden)

    David Bar-Moshe

    2009-09-01

    Full Text Available Let G be a compact semisimple Lie group and T be a maximal torus of G. We describe a method for weight multiplicity computation in unitary irreducible representations of G, based on the theory of Berezin quantization on G/T. Let Γ_{hol}(L^λ) be the reproducing kernel Hilbert space of holomorphic sections of the homogeneous line bundle L^λ over G/T associated with the highest weight λ of the irreducible representation π_λ of G. The multiplicity of a weight m in π_λ is computed from the functional analytic structure of the Berezin symbol of the projector in Γ_{hol}(L^λ) onto the subspace of weight m. We describe a method for the construction of this symbol and the evaluation of the weight multiplicity as the rank of a Hermitian form. The application of this method is described in a number of examples.

  17. Method of stereo matching based on genetic algorithm

    Science.gov (United States)

    Lu, Chaohui; An, Ping; Zhang, Zhaoyang

    2003-09-01

    A new stereo matching scheme based on image edges and a genetic algorithm (GA) is presented in this paper to improve conventional stereo matching methods. In order to extract robust edge features for stereo matching, an infinite symmetric exponential filter (ISEF) is first applied to remove image noise, and the nonlinear Laplace operator, together with the local variance of intensity, is then used to detect edges. Apart from the detected edges, the polarity of the edge pixels is also obtained. As an efficient search method, the genetic algorithm is applied to find the best matching pairs, and some new ideas are developed for applying it to stereo matching. Experimental results show that the proposed methods are effective and obtain good results.

  18. The professional portfolio: an evidence-based assessment method.

    Science.gov (United States)

    Byrne, Michelle; Schroeter, Kathryn; Carter, Shannon; Mower, Julie

    2009-12-01

    Competency assessment is critical for a myriad of disciplines, including medicine, law, education, and nursing. Many nurse managers and educators are responsible for nursing competency assessment, and assessment results are often used for annual reviews, promotions, and satisfying accrediting agencies' requirements. Credentialing bodies continually seek methods to measure and document the continuing competence of licensees or certificants. Many methods and frameworks for continued competency assessment exist. The portfolio process is one method to validate personal and professional accomplishments in an interactive, multidimensional manner. This article illustrates how portfolios can be used to assess competence. One specialty nursing certification board's process of creating an evidence-based portfolio for recertification or reactivation of a credential is used as an example. The theoretical background, development process, implementation, and future implications may serve as a template for other organizations in developing their own portfolio models.

  19. Sub-pixel mapping method based on BP neural network

    Institute of Scientific and Technical Information of China (English)

    LI Jiao; WANG Li-guo; ZHANG Ye; GU Yan-feng

    2009-01-01

    A new sub-pixel mapping method based on a BP neural network is proposed in order to determine the spatial distribution of class components in each mixed pixel. The network is used to train a model that describes the relationship between the spatial distribution of target components in a mixed pixel and its neighboring information, so that the sub-pixel-scale targets can be predicted by the trained model. In order to improve the performance of the BP network, a BP learning algorithm with momentum is employed. The experiments were conducted both on synthetic images and on hyperspectral imagery (HSI). The results prove that this method is capable of estimating land covers fairly accurately and offers a clear advantage over some other sub-pixel mapping methods in terms of computational complexity.

  20. Design Support Method Based on Analysis of Shape Impression

    Institute of Scientific and Technical Information of China (English)

    HITOMI Yokoyama; HIDEKI Aoyama

    2011-01-01

    In recent years, aesthetic design has become increasingly important in industrial product development due to the growing maturity of product functions. The designer is required to reflect consumer needs in the aesthetic design while giving consideration to the applications and functions of the product. For this reason, effective techniques enabling design creation based on consumer preferences and needs are indispensable. The Taguchi method has been used effectively for the robust design of products. In this study, we propose a design support method that applies the Taguchi method to robust design with respect to the inconsistencies of human kansei (sensitivity), and specifically apply it to quantitatively analyzing the robustness of design solutions created in accordance with the design concept of a digital camera.

  1. Fatigue crack identification method based on strain amplitude changing

    Science.gov (United States)

    Guo, Tiancai; Gao, Jun; Wang, Yonghong; Xu, Youliang

    2017-09-01

    Aiming at the difficulty of identifying the location and time of crack initiation in castings of a helicopter transmission system during fatigue tests, an engineering method and a quantitative criterion for detecting fatigue cracks based on strain amplitude changes are proposed, introducing classification diagnostic criteria for similar failure modes to identify the similarity of fatigue crack initiation among castings. The method was applied in the fatigue test of a gearbox housing, with the following result: during the test, the system raised an alarm when the SC strain gauge reading reached the quantitative criterion, and a subsequent inspection found a fatigue crack of less than 5 mm at the corresponding location of that gauge. The test result shows that the method can provide accurate test data for strength and life analysis.

  2. Tilt correction method of text image based on wavelet pyramid

    Science.gov (United States)

    Yu, Mingyang; Zhu, Qiguo

    2017-04-01

    Text images captured by a camera may be tilted and distorted, which is unfavorable for document character recognition. Therefore, a tilt correction method for text images based on a wavelet pyramid is proposed in this paper. The first step is to convert the captured text image to a binary image. After binarization, the image is decomposed into layers by the wavelet transform to achieve noise reduction, enhancement, and compression. Edges are then detected with the Canny operator, and straight lines are extracted by the Radon transform. In the final step, the method computes the intersections of the straight lines and produces the corrected text image from the intersection points by a perspective transformation. Experimental results show that this method corrects text images accurately.
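
    A rough OpenCV sketch of a similar pipeline (requires opencv-python), with two stand-ins: a Gaussian blur replaces the wavelet-pyramid denoising, and a Hough transform replaces the Radon line extraction; only a rotation, not the full perspective correction, is applied, and the angle convention may need adjusting for a given camera.

    ```python
    import cv2
    import numpy as np

    def deskew(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Binarisation (Otsu); a small blur approximates the denoising step.
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        _, binary = cv2.threshold(blur, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        edges = cv2.Canny(binary, 50, 150)
        # Hough transform as a stand-in for the Radon line extraction.
        lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
        if lines is None:
            return gray
        # Dominant text-line angle, measured from the horizontal.
        angles = [theta * 180 / np.pi - 90 for rho, theta in lines[:, 0]]
        skew = float(np.median(angles))
        h, w = gray.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
        return cv2.warpAffine(gray, M, (w, h),
                              flags=cv2.INTER_LINEAR, borderValue=255)
    ```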

  3. Face Recognition Method Based on Fuzzy 2DPCA

    Directory of Open Access Journals (Sweden)

    Xiaodong Li

    2014-01-01

    Full Text Available 2DPCA, which is one of the most important face recognition methods, is relatively sensitive to substantial variations in light direction, face pose, and facial expression. In order to improve the recognition performance of the traditional 2DPCA, a new 2DPCA algorithm based on fuzzy theory is proposed in this paper, namely fuzzy 2DPCA (F2DPCA). In this method, the membership degree matrix of the training samples is calculated by applying the fuzzy K-nearest neighbor method (FKNN) and is used to obtain the fuzzy mean of each class. The average of the fuzzy means is then incorporated into the definition of the general scatter matrix, with the expectation that this improves classification results. Comprehensive experiments on the ORL, YALE, and FERET face databases show that the proposed method improves classification rates and reduces sensitivity to variations between face images caused by changes in illumination, facial expression, and face pose.

  4. A Pansharpening Method Based on HCT and Joint Sparse Model

    Directory of Open Access Journals (Sweden)

    XU Ning

    2016-04-01

    Full Text Available A novel fusion method based on the hyperspherical color transformation (HCT) and a joint sparsity model is proposed to further decrease the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by the HCT; the intensity component is then fused with the panchromatic image through the wavelet transform and the joint sparsity model, in which the redundant and complementary information of the different images can be efficiently extracted and employed to yield a high-quality result. Finally, the fused multispectral image is obtained by the inverse wavelet and HCT transforms applied to the new low-frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 imagery indicate that the proposed method achieves remarkable results.

  5. Auto correct method of AD converters precision based on ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Full Text Available An ideal AD conversion would be a straight line through zero in a Cartesian coordinate system. In practical engineering, however, the signal processing circuit, chip performance, and other factors affect the conversion accuracy, so a linear fitting method is adopted to improve it. An automatic correction scheme for AD conversion based on Ethernet, implemented in software and hardware, is presented. With a mouse click, the linearity correction of all AD converter channels is completed automatically, and the error, SNR, and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the EEPROM of the on-board AD converter card. Compared with traditional methods, this method is more convenient, accurate, and efficient, and has broad application prospects.
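
    A minimal sketch of the linear-fitting step, assuming known reference voltages are driven onto the channel; the values and names are made up.

    ```python
    import numpy as np

    # Calibration data: reference voltages and the raw codes the AD channel
    # actually returned (illustrative values).
    v_ref = np.linspace(-5.0, 5.0, 11)
    raw = np.array([-2045, -1639, -1229, -818, -410, 3,
                    411, 822, 1230, 1641, 2050])

    # Least-squares linear fit: raw = gain * v + offset.
    gain, offset = np.polyfit(v_ref, raw, 1)

    def correct(code):
        """Map a raw ADC code back to volts with the fitted coefficients."""
        return (code - offset) / gain

    residual = v_ref - correct(raw)
    print("gain=%.3f codes/V, offset=%.3f codes, max error=%.4f V"
          % (gain, offset, np.abs(residual).max()))
    ```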

  6. An analytical method for Mathieu oscillator based on method of variation of parameter

    Science.gov (United States)

    Li, Xianghong; Hou, Jingyu; Chen, Jufeng

    2016-08-01

    A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the idea of the method of variation of parameters. If the time-varying parameter in the Mathieu oscillator is assumed constant, an accurate analytical solution is easily obtained; the approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter back into this solution. To verify the correctness and precision of the proposed method, the first-order and ninth-order approximate solutions by the harmonic balance method (HBM) are also presented. Comparisons show that the results of the proposed method agree very well with numerical simulation, and that its precision is not only higher than that of the first-order HBM approximation but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.

  7. Weak Signal Frequency Detection Method Based on Generalized Duffing Oscillator

    Institute of Scientific and Technical Information of China (English)

    SHI Si-Hong; YUAN Yong; WANG Hui-Qi; LUO Mao-Kang

    2011-01-01

    The sensitivity of a chaotic system to its initial values clearly demonstrates its superiority for detecting weak signal parameters. After analyzing current chaos-based frequency detection methods, a novel generalized Duffing equation is proposed to detect weak signal frequencies. By choosing a suitable adjusting factor, the generalized Duffing oscillator enters the great-period state when the external driving force frequency equals that of the detected signal, from which the frequency information of the detected signal can be obtained. Simulation results indicate that this method is fast, convenient, and accurate.
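
    A numerical sketch of the detection idea using the common Holmes-type Duffing form rather than the paper's generalized equation: the drive amplitude is set near the chaotic threshold, and the character of the steady-state response is inspected as the reference frequency is swept. All parameter values are illustrative and would need tuning.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def duffing(t, y, k, F, w, weak):
        """Holmes-type Duffing oscillator with reference drive F*cos(w t)
        plus the weak signal under test added to the forcing."""
        x, v = y
        return [v, -k * v + x - x**3 + F * np.cos(w * t) + weak(t)]

    def response_swing(w, weak=lambda t: 1e-2 * np.cos(1.0 * t)):
        sol = solve_ivp(duffing, (0.0, 400.0), [0.0, 0.0],
                        args=(0.5, 0.826, w, weak), max_step=0.05)
        x = sol.y[0][sol.t > 300.0]        # discard the transient
        return x.max() - x.min()

    # Sweep the reference frequency; the response changes character near the
    # weak signal's true frequency of 1.0 rad/s.
    for w in (0.90, 0.95, 1.00, 1.05, 1.10):
        print(w, round(response_swing(w), 3))
    ```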

  8. Gradient-based image recovery methods from incomplete Fourier measurements.

    Science.gov (United States)

    Patel, Vishal M; Maleh, Ray; Gilbert, Anna C; Chellappa, Rama

    2012-01-01

    A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-square optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.

  9. A MUSIC-based method for SSVEP signal processing.

    Science.gov (United States)

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    The research on brain computer interfaces (BCIs) has become a hotspot in recent years because it offers benefit to disabled people to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of higher signal to noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification based method was proposed for multi-dimensional SSVEP feature extraction. 2-second data epochs from four electrodes achieved excellent accuracy rates including idle state detection. In some asynchronous mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
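
    A compact single-channel MUSIC pseudospectrum sketch (the paper's multi-electrode setting would assemble the covariance from several channels); the embedding dimension, signal order, and the toy 10 Hz target are assumptions.

    ```python
    import numpy as np

    def music_spectrum(x, fs, freqs, p=2, m=40):
        """MUSIC pseudospectrum of a 1-D signal x sampled at fs.
        p: assumed number of complex exponentials (2 per real sinusoid),
        m: embedding (covariance) dimension."""
        N = len(x)
        X = np.array([x[i:i + m] for i in range(N - m + 1)])
        R = X.T @ X / X.shape[0]              # sample covariance
        w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
        En = V[:, :m - p]                     # noise subspace
        spec = []
        for f in freqs:
            a = np.exp(2j * np.pi * f / fs * np.arange(m))   # steering vector
            spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spec)

    # Toy check: a 10 Hz "SSVEP" in noise should give a sharp peak at 10 Hz.
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) \
        + 0.5 * np.random.default_rng(2).normal(size=t.size)
    freqs = np.linspace(5, 20, 151)
    print(freqs[np.argmax(music_spectrum(x, fs, freqs))])    # ~10.0
    ```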

  10. MOMENT-METHOD ESTIMATION BASED ON CENSORED SAMPLE

    Institute of Scientific and Technical Information of China (English)

    NI Zhongxin; FEI Heliang

    2005-01-01

    In reliability theory and survival analysis, the problem of point estimation based on censored samples has been discussed in many papers. However, most of them focus on the MLE, BLUE, etc.; little work has been done on moment-method estimation in the censored case. To make moment estimation systematic and unified, this paper puts forward moment-method estimators (MEs) and modified moment-method estimators (MMEs) of the parameters based on type I and type II censored samples, involving the mean residual lifetime. Strong consistency and other properties are proved. It is worth mentioning that, for the exponential distribution, the proposed moment-method estimators are exactly the MLEs. A simulation study shows that, in terms of bias and mean squared error, the MEs and MMEs are better than the MLEs and the "pseudo complete sample" technique introduced in Whitten et al. (1988), and the superiority of the MEs is especially conspicuous when the sample is heavily censored.
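
    For the exponential case singled out in the abstract, the estimator under Type II censoring takes the familiar total-time-on-test form; a small numerical check with made-up sample sizes:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def exp_rate_type2(x_ordered, n):
        """Rate estimate for an exponential sample of size n, Type II
        censored at the r-th failure:
        r / (sum of observed lifetimes + (n - r) * x_(r))."""
        r = len(x_ordered)
        ttt = x_ordered.sum() + (n - r) * x_ordered[-1]   # total time on test
        return r / ttt

    n, r, lam = 50, 20, 2.0
    sample = np.sort(rng.exponential(1 / lam, n))[:r]  # first r failures seen
    print(exp_rate_type2(sample, n))                   # close to lam = 2.0
    ```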

  11. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results, which facilitates users' viewing. A novel method is presented to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing affine-invariant visual features. A statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is then utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images lie close to each other in the derived semantic space, so a simple clustering process can successfully detect them and cluster them together. Compared to methods based on comparing hash values of visual words, this method is more robust to alterations of the images at the visual feature level. Experiments demonstrate the effectiveness of the method.

  12. Combining Neural Methods and Knowledge-Based Methods in Accident Management

    Directory of Open Access Journals (Sweden)

    Miki Sirola

    2012-01-01

    Full Text Available Accident management became a popular research issue in the early 1990s, and computerized decision support was studied from many points of view. Early fault detection and information visualization remain key issues in accident management today. In this paper we briefly review this research history, mostly over the last two decades and including severe accident management, and reflect the author's studies against the state of the art. The self-organizing map method is combined with other, more or less traditional, methods; neural methods used together with knowledge-based methods constitute the methodological base of the presented decision support prototypes. Two application examples with modern decision support visualizations are introduced in more detail, and a case example of detecting a pressure drift in a boiling water reactor by multivariate methods, including innovative visualizations, is studied in depth, with promising results in early fault detection. The operators are provided with added information value, enabling them to detect anomalies at an early stage. We provide the plant staff with a methodological tool set that can be combined in various ways depending on the special needs of each case.

  13. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems, and remote sensing offers a chance to derive LAI accurately at regional scales. Variations of the background, atmospheric conditions, and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on a hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method estimates LAI accurately by analyzing the canopy anisotropy, and the effect of the background can also be effectively removed, so the inversion precision and dynamic range are improved remarkably, as proved by numerical simulations. As derivative methods are very sensitive to random noise, we also put forward an innovative filtering approach by which the data are de-noised in the spectral and spatial dimensions simultaneously; it removes random noise effectively, so the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; hyperspectral and multi-angular images of the region were acquired by the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After pre-processing, the DSD method was applied, and the retrieved LAI was validated against ground truth from 11 sites. The results show that, with the innovative filtering method applied, the new LAI inversion method is accurate and effective.

  14. Spatial Circular Granulation Method Based on Multimodal Finger Feature

    Directory of Open Access Journals (Sweden)

    Jinfeng Yang

    2016-01-01

    Full Text Available Finger-based personal identification has become an active research topic in recent years because of its high user acceptance and convenience. How to reliably and effectively fuse multimodal finger features together, however, remains a challenging problem in practice. In this paper, viewing the finger trait as the combination of a fingerprint, finger vein, and finger-knuckle-print, a new multimodal finger recognition scheme is proposed based on granular computing. First, the ridge texture features of the FP, FV, and FKP are extracted using Gabor Ordinal Measures (GOM). Second, by combining the three modal GOM feature maps in a color-based manner, the original feature object set of a finger is constituted. To represent finger features effectively, they are granulated at three levels of feature granules (FGs) in a bottom-up manner based on spatial circular granulation, and a top-down matching method is proposed to test the performance of the multilevel FGs. Experimental results show that the proposed method achieves a higher recognition accuracy for finger features.

  15. Convex Decomposition Based Cluster Labeling Method for Support Vector Clustering

    Institute of Scientific and Technical Information of China (English)

    Yuan Ping; Ying-Jie Tian; Ya-Jian Zhou; Yi-Xian Yang

    2012-01-01

    Support vector clustering (SVC) is an important boundary-based clustering algorithm, used in multiple applications for its capability of handling arbitrary cluster shapes. However, SVC's popularity is degraded by its highly intensive time complexity and poor labeling performance. To overcome these problems, we present a novel, efficient, and robust convex decomposition based cluster labeling (CDCL) method based on the topological properties of the dataset. The CDCL decomposes each implicit cluster into convex hulls, each comprised of a subset of the support vectors (SVs). Using a robust algorithm applied to the nearest neighboring convex hulls, the adjacency matrix of the convex hulls is built up to find the connected components, and each remaining data point is assigned the label of the nearest convex hull. The approach's validity is guaranteed by geometric proofs. Time complexity analysis and comparative experiments suggest that CDCL significantly improves both efficiency and clustering quality.

  16. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    SONG Kaichen; NIE Xili

    2006-01-01

    Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. First, a weighted fusion algorithm that establishes the relationship between the weight coefficients and the measurement noise is proposed, giving attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced under the assumption that the measurement noise is uncorrelated. In addition, an algorithm that adjusts the weight coefficients of the simplified algorithm by estimating the measurement noise from the measurements themselves is presented. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of systems based on other algorithms.
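
    A minimal sketch of the simplified, uncorrelated-noise case: weights inversely proportional to each sensor's noise variance minimise the fused variance. The correlated case treated in the paper would require the full noise covariance matrix.

    ```python
    import numpy as np

    def fuse(measurements, noise_var):
        """Minimum-variance weighted fusion of unbiased sensor readings:
        weights are inversely proportional to each sensor's noise variance."""
        inv = 1.0 / np.asarray(noise_var, dtype=float)
        w = inv / inv.sum()                    # normalised weights
        fused = np.dot(w, measurements)
        fused_var = 1.0 / inv.sum()            # variance of the fused value
        return fused, fused_var, w

    z = [10.2, 9.8, 10.5]        # three sensors observing the same quantity
    var = [0.04, 0.01, 0.09]     # their (assumed known) noise variances
    print(fuse(z, var))          # fused value is pulled toward the best sensor
    ```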

  17. Research on ghost imaging method based on wavelet transform

    Science.gov (United States)

    Li, Mengying; He, Ruiqing; Chen, Qian; Gu, Guohua; Zhang, Wenwen

    2017-09-01

    We present an algorithm for extracting the wavelet coefficients of an object in a ghost imaging (GI) system. By modifying the projected random patterns with a series of templates, wavelet transform GI (WTGI) can directly measure the high-frequency components of the wavelet coefficients without needing the original image. In this study, we theoretically and experimentally detect the high-frequency wavelet coefficients of two objects, an arrow and the letter A, using both GI and WTGI. Compared with the traditional method, the proposed algorithm significantly improves the quality of the wavelet-coefficient images in both cases. The special advantages of GI make wavelet coefficient detection based on WTGI very valuable in real applications.
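
    A toy computational-GI sketch of the correlation step <(B - <B>)P>; in WTGI the projected patterns would additionally be pre-modulated by wavelet templates so that this same correlation yields wavelet coefficients directly. The object and pattern counts are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def ghost_image(obj, n_patterns=5000):
        """Plain computational GI: correlate random illumination patterns
        with the bucket (single-pixel) signal, <(B - <B>) P>."""
        patterns = rng.random((n_patterns,) + obj.shape)
        bucket = (patterns * obj).sum(axis=(1, 2))    # bucket detector values
        db = bucket - bucket.mean()
        return np.tensordot(db, patterns, axes=1) / n_patterns

    obj = np.zeros((16, 16))
    obj[4:12, 7:9] = 1.0                  # a simple bar "object"
    rec = ghost_image(obj)                # noisy estimate of obj
    ```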

  18. Two Methods of AES Implementation Based on CPLD/FPGA

    Institute of Scientific and Technical Information of China (English)

    刘常澍; 彭艮鹏; 王晓卓

    2004-01-01

    This paper describes two single-chip CPLD/FPGA (complex programmable logic device/field programmable gate array) implementations of the new Advanced Encryption Standard (AES) algorithm, based on a basic iteration architecture (design [A]) and a hybrid pipelining architecture (design [B]). Design [A] is an encryption-and-decryption implementation based on the basic iteration architecture; it supports 128-bit, 192-bit, and 256-bit keys and saves hardware resources thanks to the iteration architecture and resource sharing. Design [B] uses a 2×2 hybrid pipelining architecture: based on the AES interleaved mode of operation, it accomplishes the algorithm operating in a feedback mode (cipher block chaining), both guaranteeing the security of encryption/decryption and achieving a high data throughput of 1.05 Gb/s. The two designs have been realized on Altera's EP20K300EBC652-1 devices.

  19. A Systems-Science-Based Knowledge Explanation Method

    Institute of Scientific and Technical Information of China (English)

    李膺春

    2001-01-01

    es for analyzing, understanding, reforming and perfecting the objective world. This paper presents a Systems-Science-Based Knowledge Model (SSBKM) to establish a more general knowledge structure model. It can be regarded as a development of frame representation for discovering and constructing slot structures as well as frame structures. With this model, the paper also presents a Systems-Science-Based Object-Oriented Analysis method (SSBOOA), which is a strategy for finding and determining object classes and class structures, and the relations between object instances of different classes, rather than merely explaining classes. Finally, the paper illustrates the knowledge analysis and computerizing (synthesizing) steps with an example of an SSBKM of a cognitive-psychology-based CAI network for teaching middle school mathematics.

  20. Fully Digital Chaotic Differential Equation-based Systems And Methods

    KAUST Repository

    Radwan, Ahmed Gomaa Ahmed

    2012-09-06

    Various embodiments are provided for fully digital chaotic differential equation-based systems and methods. In one embodiment, among others, a digital circuit includes digital state registers and one or more digital logic modules configured to obtain a first value from two or more of the digital state registers; determine a second value based upon the obtained first values and a chaotic differential equation; and provide the second value to set a state of one of the plurality of digital state registers. In another embodiment, a digital circuit includes digital state registers, digital logic modules configured to obtain outputs from a subset of the digital shift registers and to provide the input based upon a chaotic differential equation for setting a state of at least one of the subset of digital shift registers, and a digital clock configured to provide a clock signal for operating the digital shift registers.

  1. Sparse coding based feature representation method for remote sensing images

    Science.gov (United States)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for HSI classification is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary from sub-bands extracted from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for the HSI. Experimental results showed that a randomly selected dictionary can be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources, and it does not include the spatial information of the HSI data. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP), and image fusion and recursive filtering (IFRF). The results showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further
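
    A minimal sketch of the encoding step just described: project pixels onto a (here random) dictionary and apply a soft threshold, avoiding a full L1 solve; the shapes and threshold value are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def soft_threshold(u, lam):
        """Elementwise shrinkage: sign(u) * max(|u| - lam, 0)."""
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    def encode(X, D, lam=0.1):
        """One-shot sparse features: project onto unit-norm dictionary atoms
        and shrink, instead of solving a convex optimization problem."""
        Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
        return soft_threshold(X @ Dn, lam)

    X = rng.random((100, 64))       # 100 pixels x 64 spectral sub-band values
    D = rng.normal(size=(64, 256))  # random dictionary with 256 atoms
    F = encode(X, D)                # sparse feature representation, 100 x 256
    print((F != 0).mean())          # fraction of active coefficients
    ```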

  2. An adjoint sensitivity-based data assimilation method and its comparison with existing variational methods

    Directory of Open Access Journals (Sweden)

    Yonghan Choi

    2014-01-01

    Full Text Available An adjoint sensitivity-based data assimilation (ASDA method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS-type for the earlier period, and back building (BB-type for the later period. In the ASDA method, an adjoint model is run backwards with forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.

  3. A Novel Method in Food Safety Management by Using Case Base Reasoning Method

    Directory of Open Access Journals (Sweden)

    S. Saqaeeyan

    2015-09-01

    Full Text Available Today's food industry is responsible for providing the foods people consume most. Because these foods are consumed across large sections of society, they are an important source of disease and food poisoning, and monitoring systems have been created to control them during the production steps of the food supply chain. Hazard Analysis and Critical Control Point (HACCP) is regarded as the best method in such safety systems, and the need for an integrated HACCP system has led factories to use intelligent methods to build a HACCP plan for every product. This paper proposes a Case-Based Reasoning (CBR) technique, using paired-comparison tables and similarity equations, to create a HACCP plan for the food system of the Sabz Nam Company. Our system is an intelligent, RFID-based system that works as a consultant by generating five appropriate safety suggestions for the food expert. Finally, we assess the accuracy and efficiency of the proposed system on real data from the Sabz Nam Company.

  4. Selection of construction methods: a knowledge-based approach.

    Science.gov (United States)

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information to improve the selection of construction methods for a project, and then describes a knowledge-based system to support this decision-making process. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. The CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. These benefits favor a better performance of construction projects.

  5. Sphere-based calibration method for trinocular vision sensor

    Science.gov (United States)

    Lu, Rui; Shao, Mingwei

    2017-03-01

    A new method to calibrate a trinocular vision sensor is proposed, and two main tasks are accomplished in this paper: determining the transformation matrix between each pair of cameras, and determining the trifocal tensor of the trinocular vision sensor. A flexible sphere target with several spherical circles is designed. Owing to the isotropy of a sphere, the trifocal tensor of the three cameras can be determined exactly from the features on the sphere target, and the fundamental matrix between each pair of cameras can then be obtained. Compatible rotation and translation matrices are easily deduced based on the singular value decomposition of the fundamental matrix. In the proposed calibration method, image points are not required to be in one-to-one correspondence: once image points located on the same feature are obtained, the transformation matrix between each pair of cameras and the trifocal tensor of the trinocular vision sensor can be determined. Experimental results show that the proposed calibration method obtains precise results for both measurement and matching: the root mean square error of distance is 0.026 mm over a field of view of about 200×200 mm, and the feature matching across the three images is strict. As the projection of a sphere does not depend on its orientation, the calibration method is robust and easy to operate; moreover, it also provides a new approach to obtaining the trifocal tensor.

  6. Numeric character recognition method based on fractal dimension

    Science.gov (United States)

    He, Tao; Xie, Yulang; Chen, Jiuyin; Cheng, Longfei; Yuan, Ye

    2013-10-01

    An image processing method based on the fractal dimension is proposed in this paper. The method uses the fractal dimension to process character images, raising the analysis of individual grids to the analysis of the interrelation between grids in order to eliminate interference. The box-counting method is commonly used for calculating the fractal dimension: the image is covered with small boxes of side length r, and because the image contains various levels of cavities and cracks, some boxes are empty while others cover part of the fractal image (a box is called non-empty here if the average gray level of the part it contains exceeds a certain threshold). The number of non-empty boxes is recorded and analyzed to compute the dimension. The method is used to process polluted character images: it removes ink and scratches around the contours of the characters while retaining their basic contour, after which the characters can be recognized by template matching. In computer simulation experiments on polluted character recognition, this method recognized the polluted characters quickly and improved recognition accuracy.
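
    A standard box-counting sketch consistent with this description: cover the binary image with boxes of side r, count the non-empty boxes N(r), and read the dimension off the slope of log N(r) versus log r.

    ```python
    import numpy as np

    def box_counting_dimension(img, threshold=0.5):
        """Estimate the fractal dimension from the fit
        log N(r) = -D log r + c over several box sizes r."""
        binary = img > threshold
        n = min(binary.shape)
        sizes = [2 ** k for k in range(1, int(np.log2(n)))]
        counts = []
        for r in sizes:
            h = (binary.shape[0] // r) * r
            w = (binary.shape[1] // r) * r
            blocks = binary[:h, :w].reshape(h // r, r, w // r, r)
            counts.append(blocks.any(axis=(1, 3)).sum())   # non-empty boxes
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    # A filled square should give a dimension close to 2.
    print(box_counting_dimension(np.ones((256, 256))))
    ```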

  7. A Quality Based Method to Analyze Software Architectures

    Directory of Open Access Journals (Sweden)

    Farzaneh Hoseini Jabali

    2011-07-01

    Full Text Available In order to produce and develop a software system, it is necessary to have a method for choosing a suitable software architecture that satisfies the required quality attributes and maintains a trade-off between sometimes conflicting ones. Each software architecture comprises a set of design decisions, for each of which there are various alternatives that satisfy the quality attributes differently, while various stakeholders with various quality goals participate in the decision-making. In this paper a numerical method is proposed that selects the suitable software architecture for a given system based on its quality attributes. In this method, for each design decision, the different alternatives are compared with respect to each quality attribute, and vice versa; multi-criteria decision-making methods are used, and time and cost constraints are also considered in the decision-making. The proposed method weights the stakeholders' opinions according to their importance and helps the architect select the best software architecture with more certainty.

  8. Improved Fast Fourier Transform Based Method for Code Accuracy Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The fast Fourier transform based method (FFTBM), introduced in 1990, has been widely used to evaluate code uncertainty or accuracy. Prosek et al. (2008) identified its drawbacks, the so-called 'edge effect', and to overcome them an improved FFTBM by signal mirroring (FFTBM-SM) was proposed, which has been used up to now. In spite of the improvement, the FFTBM-SM yields different accuracy depending on the frequency components of a parameter, such as pressure, temperature, and mass flow rate, so it is necessary to reduce the frequency dependence of the FFTBMs. In this study, the limitations of the FFTBM were analyzed: it produces quantitatively different results due to its frequency dependence, and the problem is intensified when many high-frequency components are included. A new method using a reduced cut-off frequency is therefore proposed, and its capability is discussed. The results of the proposed method show that the shortcomings of the FFTBM are considerably relieved.
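
    A simplified sketch of an FFTBM-style average-amplitude figure of merit with a tunable cut-off frequency, illustrating why reducing the cut-off de-emphasises high-frequency error content; the published method's exact normalisation and the signal-mirroring variant are not reproduced here.

    ```python
    import numpy as np

    def fftbm_aa(calc, ref, dt, f_cut):
        """Average amplitude AA = sum|FFT(err)| / sum|FFT(ref)| over
        frequencies up to f_cut; smaller AA means better accuracy."""
        n = len(ref)
        keep = np.fft.rfftfreq(n, dt) <= f_cut
        err_spec = np.abs(np.fft.rfft(np.asarray(calc) - np.asarray(ref)))[keep]
        ref_spec = np.abs(np.fft.rfft(np.asarray(ref)))[keep]
        return err_spec.sum() / ref_spec.sum()

    t = np.linspace(0, 50, 1001)
    ref = 1.0 + 0.2 * np.sin(0.4 * t)          # "experimental" trace
    calc = ref + 0.05 * np.sin(3.0 * t)        # code result with small error
    dt = t[1] - t[0]
    print(fftbm_aa(calc, ref, dt, f_cut=1.0))   # reduced cut-off
    print(fftbm_aa(calc, ref, dt, f_cut=10.0))  # full band weighs in more error
    ```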

  9. A Blade Tip Timing Method Based on a Microwave Sensor

    Directory of Open Access Journals (Sweden)

    Jilong Zhang

    2017-05-01

    Full Text Available Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and process method is analyzed. Zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.

  10. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits to combine the two levels of the dual model-based architecture in one modeling framework which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For such purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes.

  11. Deforming fluid domains within the finite element method: Five mesh-based tracking methods in comparison

    CERN Document Server

    Elgeti, Stefanie

    2015-01-01

    Fluid flow applications can involve a number of coupled problems. One is the simulation of free-surface flows, which require the solution of a free-boundary problem. Within this problem, the governing equations of fluid flow are coupled with a domain deformation approach. This work reviews five of those approaches: interface tracking using a boundary-conforming mesh and, in the interface capturing context, the level-set method, the volume-of-fluid method, particle methods, as well as the phase-field method. The history of each method is presented in combination with the most recent developments in the field. Particularly, the topics of extended finite elements (XFEM) and NURBS-based methods, such as Isogeometric Analysis (IGA), are addressed. For illustration purposes, two applications have been chosen: two-phase flow involving drops or bubbles and sloshing tanks. The challenges of these applications, such as the geometrically correct representation of the free surface or the incorporation of surface tension ...

  12. Three-dimensional beam propagation method based on the variable transformed Galerkin's method

    Institute of Scientific and Technical Information of China (English)

    XIAO Jinbiao; SUN Xiaohan; ZHANG Mingde

    2004-01-01

    A novel three-dimensional beam propagation method (BPM) based on the variable-transformed Galerkin method is introduced for simulating optical field propagation in three-dimensional dielectric structures. The infinite Cartesian x-y plane is mapped into a unit square by a tangent-type function transformation, converting the infinite-region problem into a finite-region one; the boundary truncation is thus eliminated and the calculation accuracy improved. The basic three-dimensional BPM equation is reduced to a set of first-order ordinary differential equations through sinusoidal basis functions, which suit optical waveguides with arbitrary cladding, and the resulting equations are then solved directly by means of the Runge-Kutta method. In addition, the calculation is efficient due to the small matrices derived from the present technique. Both z-invariant and z-variant examples are considered to test the accuracy and utility of this approach.

  13. Combining Trigram-based and Feature-based Methods for Context-Sensitive Spelling Correction

    CERN Document Server

    Golding, A R; Golding, Andrew R.; Schabes, Yves

    1996-01-01

    This paper addresses the problem of correcting spelling errors that result in valid, though unintended words (such as ``peace'' and ``piece'', or ``quiet'' and ``quite'') and also the problem of correcting particular word usage errors (such as ``amount'' and ``number'', or ``among'' and ``between''). Such corrections require contextual information and are not handled by conventional spelling programs such as Unix `spell'. First, we introduce a method called Trigrams that uses part-of-speech trigrams to encode the context. This method uses a small number of parameters compared to previous methods based on word trigrams. However, it is effectively unable to distinguish among words that have the same part of speech. For this case, an alternative feature-based method called Bayes performs better; but Bayes is less effective than Trigrams when the distinction among words depends on syntactic constraints. A hybrid method called Tribayes is then introduced that combines the best of the previous two methods. The impr...

  14. Outline-based morphometrics, an overlooked method in arthropod studies?

    Science.gov (United States)

    Dujardin, Jean-Pierre; Kaba, D; Solano, P; Dupraz, M; McCoy, K D; Jaramillo-O, N

    2014-12-01

    Modern methods allow a geometric representation of forms, separating size and shape. In entomology, as well as in many other fields involving arthropod studies, shape variation has proved useful for species identification and population characterization. In medical entomology, it has been applied to very specific questions such as population structure, reinfestation of insecticide-treated areas and cryptic species recognition. For shape comparisons, great importance is given to the quality of landmarks in terms of comparability. Two conceptually and statistically separate approaches are: (i) landmark-based morphometrics, based on the relative position of a few anatomical "true" or "traditional" landmarks, and (ii) outline-based morphometrics, which captures the contour of forms through a sequence of close "pseudo-landmarks". Most of the studies on insects of medical, veterinary or economic importance make use of the landmark approach. The present survey makes a case for the outline method, here based on elliptic Fourier analysis. The collection of pseudo-landmarks may require the manual digitization of many points and, for this reason, might appear less attractive. It, however, has the ability to compare homologous organs or structures having no landmarks at all. This strength offers the possibility to study a wider range of anatomical structures and thus, a larger range of arthropods. We present a few examples highlighting its interest for separating close or cryptic species, or characterizing conspecific geographic populations, in a series of different vector organisms. In this simple application, i.e. the recognition of close or cryptic forms, the outline approach provided similar scores as those obtained by the landmark-based approach.
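
    A compact outline-descriptor sketch in the same spirit: rather than the classical Kuhl-Giardina elliptic Fourier coefficients, it resamples the closed contour uniformly by arc length and keeps the leading FFT harmonic magnitudes, normalised for position, scale, and starting point.

    ```python
    import numpy as np

    def fourier_shape_descriptor(contour, n_harmonics=10):
        """Outline descriptor of a closed 2-D contour given as an (N, 2)
        array of boundary points (pseudo-landmarks)."""
        z = contour[:, 0] + 1j * contour[:, 1]
        seg = np.abs(np.diff(np.r_[z, z[0]]))           # edge lengths
        s = np.r_[0.0, np.cumsum(seg)]                  # cumulative arc length
        t = np.linspace(0, s[-1], 256, endpoint=False)  # uniform resampling
        zu = np.interp(t, s, np.r_[z.real, z.real[0]]) \
             + 1j * np.interp(t, s, np.r_[z.imag, z.imag[0]])
        Z = np.fft.fft(zu) / len(zu)
        Z[0] = 0.0                        # drop the mean: translation invariance
        Z /= np.abs(Z[1])                 # scale invariance
        idx = np.r_[1:n_harmonics + 1, -n_harmonics:0]
        return np.abs(Z[idx])             # magnitudes: rotation/start-point safe

    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ellipse = np.c_[2 * np.cos(theta), np.sin(theta)]
    print(fourier_shape_descriptor(ellipse)[:4])
    ```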

  15. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical, or semianalytical theory, generates an initial approximation that contains some inaccuracies, because, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical, and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order analytical theory, and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
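
    A minimal sketch of the hybrid idea under strong assumptions: a made-up trend-plus-periodic error series, standing in for the difference between a low-order analytical propagation and the reference orbit, is modelled with an additive Holt-Winters smoother, and the forecast of the error is added back to the analytical prediction.

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(7)

    # Stand-in series: error of a low-order analytical theory against the
    # reference orbit, sampled once per revolution (trend + periodic + noise).
    n, period = 200, 12
    t = np.arange(n)
    err = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / period) \
          + 0.05 * rng.normal(size=n)

    train, test = err[:150], err[150:]
    model = ExponentialSmoothing(train, trend="add",
                                 seasonal="add",
                                 seasonal_periods=period).fit()
    pred = model.forecast(len(test))

    # Hybrid output = analytical prediction + forecast of its own error,
    # so the residual error should shrink.
    print("rms before:", np.sqrt(np.mean(test ** 2)))
    print("rms after :", np.sqrt(np.mean((test - pred) ** 2)))
    ```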

  16. A questionnaire based evaluation of teaching methods amongst MBBS students

    Directory of Open Access Journals (Sweden)

    Muneshwar JN, Mirza Shiraz Baig, Zingade US, Khan ST

    2013-01-01

    Full Text Available Medical education and health care in India are facing serious challenges in content and competencies. A heightened focus on the quality of teaching in medical colleges has led to increased use of student surveys as a means of evaluating teaching. Objectives: A questionnaire-based evaluation of teaching methods was conducted among 200 students (I MBBS and II MBBS) at a government medical college and hospital in Aurangabad (MS), with an intake capacity of 150 students, established 50 years ago. Methods: 200 medical students of I MBBS and II MBBS voluntarily participated in the study and were given an objective questionnaire on teaching methods to be completed in 1 hour. Results: As a teaching mode, 59% of the students favored group discussion over didactic lectures (14%); almost 48% felt that didactic lectures fail to create interest and motivation; around 66% were aware of the learning objectives. Conclusion: Strategies and future plans need to be implemented so that medical education in India is innovative and creates motivation.

  17. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    Science.gov (United States)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stent accessories and electrical equipment mounted on the tunnel walls, cause laser point cloud data to include many points that do not belong to the tunnel section (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud is first projected onto a horizontal plane, and a search algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and iteratively fitted as a smooth elliptic cylindrical surface, which enables the automatic filtering of the inner-wall non-points. Experiments on two groups showed consistent results: the elliptic cylindrical model based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
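
    A 2-D sketch of the per-section filtering idea under simplifying assumptions: fit a conic algebraically to the section points, drop those whose residuals exceed a robust threshold, then refit. The paper works on 3-D segments along the fitted axis and iterates the elliptic cylindrical surface fit; everything below is illustrative.

    ```python
    import numpy as np

    def fit_conic(xy):
        """Least-squares algebraic conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0
        via the smallest right singular vector of the design matrix."""
        x, y = xy[:, 0], xy[:, 1]
        A = np.c_[x**2, x * y, y**2, x, y, np.ones_like(x)]
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt[-1]                       # conic coefficients, up to scale

    def filter_section(xy, k=3.0):
        """Drop non-wall points by robust thresholding of algebraic residuals."""
        p = fit_conic(xy)
        x, y = xy[:, 0], xy[:, 1]
        r = np.abs(np.c_[x**2, x * y, y**2, x, y, np.ones_like(x)] @ p)
        mad = np.median(np.abs(r - np.median(r))) + 1e-12
        keep = r < np.median(r) + k * 1.4826 * mad
        return xy[keep], fit_conic(xy[keep])

    # Synthetic tunnel section: an ellipse of wall points plus "bolt" outliers.
    t = np.linspace(0, 2 * np.pi, 400)
    rng = np.random.default_rng(6)
    ring = np.c_[3.0 * np.cos(t), 2.7 * np.sin(t)] \
           + 0.005 * rng.normal(size=(400, 2))
    bolts = ring[:20] * 0.93                # points protruding inwards
    pts, conic = filter_section(np.r_[ring, bolts])
    print(len(pts))                         # most bolt points removed
    ```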

  18. Method of change management based on dynamic machining error propagation

    Institute of Scientific and Technical Information of China (English)

    FENG Jia; JIANG PingYu

    2009-01-01

    In multistage machining processes (MMPs), the final quality of a part is influenced by a series of machining processes with complex correlations, so it is necessary to study how machining errors propagate in order to ensure machining quality. To this end, a change management method based on quality control nodes (QC-nodes) for machining error propagation is proposed, with a new QC-node framework comprising association analysis of quality attributes, quality closed-loop control, error tracing, and error coordination optimization. A weighted directed network is introduced to describe and analyze the correlations among the machining processes: to establish the dynamic machining error propagation network (D-MEPN), QC-nodes are defined as the network nodes, and the correlations among them are mapped onto the network. Based on network analysis, the dynamic characteristics of machining error propagation are explored, and an adaptive control method based on stability theory is introduced for error coordination optimization. Finally, a simple example is used to verify the proposed method.

  1. A multiparameter chaos control method based on OGY approach

    Energy Technology Data Exchange (ETDEWEB)

    Souza de Paula, Aline [Universidade Federal do Rio de Janeiro, COPPE - Department of Mechanical Engineering, 21.941.972 - Rio de Janeiro - RJ, P.O. Box 68.503 (Brazil)], E-mail: alinesp@ufrj.br; Amorim Savi, Marcelo [Universidade Federal do Rio de Janeiro, COPPE - Department of Mechanical Engineering, 21.941.972 - Rio de Janeiro - RJ, P.O. Box 68.503 (Brazil)], E-mail: savi@mecanica.ufrj.br

    2009-05-15

    Chaos control exploits the richness of chaotic behavior and may be understood as the use of tiny perturbations to stabilize an unstable periodic orbit (UPO) embedded in a chaotic attractor. Since one of these UPOs may provide better performance than the others in a particular situation, chaos control can make this kind of behavior desirable in a variety of applications. The OGY method is a discrete technique that applies small perturbations in the neighborhood of the desired orbit when the trajectory crosses a specific surface, such as a Poincare section. This contribution proposes a multiparameter semi-continuous method based on the OGY approach for controlling chaotic behavior. Two different approaches are possible with this method: a coupled approach, where all control parameters influence the system dynamics even when they are not active, and an uncoupled approach, a particular case in which control parameters return to their reference values when they become passive. As an application of the general formulation, a two-parameter actuation for the control of a nonlinear pendulum is investigated using both the coupled and uncoupled approaches. Analyses are carried out on signals generated by numerical integration of the mathematical model using experimentally identified parameters. Results show that the procedure can be a good alternative for chaos control, since it provides more effective UPO stabilization than the classical single-parameter approach.
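
    Classical single-parameter OGY on the logistic map, as a minimal sketch of the tiny-perturbation idea that the multiparameter method generalises: wait until the chaotic orbit wanders into a small window around the unstable fixed point, then apply a bounded parameter perturbation computed from the local linearisation. Window size and perturbation bound are illustrative.

    ```python
    import numpy as np

    r0 = 3.9                      # nominal parameter (chaotic regime)
    x_star = 1 - 1 / r0           # unstable fixed point of x -> r x (1 - x)
    lam = r0 * (1 - 2 * x_star)   # df/dx at x* (= 2 - r0, unstable: |lam| > 1)
    g = x_star * (1 - x_star)     # df/dr at x*

    x = 0.3
    for n in range(1000):
        dr = 0.0
        if abs(x - x_star) < 0.005:     # orbit has entered the control window
            # Choose dr so that lam * (x - x*) + g * dr = 0, within bounds.
            dr = float(np.clip(-lam * (x - x_star) / g, -0.06, 0.06))
        x = (r0 + dr) * x * (1 - x)
    print(x, x_star)                    # x has locked onto the fixed point
    ```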

  2. Integrated method for the measurement of trace atmospheric bases

    Science.gov (United States)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-09-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  3. Integrated method for the measurement of trace nitrogenous atmospheric bases

    Science.gov (United States)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-12-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  4. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with a comparably fine delay step are therefore preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integrated TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range up to 8 s.
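
    The hybrid counting idea (a coarse counter for whole clock periods plus a fine tapped-delay interpolation for the residue) can be mimicked with a small behavioral model. The clock period and tap delay below are assumed figures for illustration, not the Kintex-7 implementation.

```python
# Behavioral model of a hybrid-counting time interval generator:
# the coarse part counts whole clock periods, the fine part (modeling a
# tapped delay line) appends a residue in multiples of the tap delay.
T_CLK = 4_000e-12   # assumed 250 MHz system clock -> 4 ns period
T_TAP = 11e-12      # assumed tap (fine) delay, matching the quoted 11 ps

def generate_interval(target_s):
    """Return (coarse_count, fine_taps, realized interval in seconds)."""
    coarse = int(target_s // T_CLK)          # whole clock cycles
    residue = target_s - coarse * T_CLK
    fine = round(residue / T_TAP)            # delay-line taps for the rest
    return coarse, fine, coarse * T_CLK + fine * T_TAP

for target in (1e-6, 0.5, 8.0):              # up to the quoted 8 s range
    c, f, real = generate_interval(target)
    print(f"target={target:>6}s  coarse={c}  taps={f}  error={real - target:+.2e}s")
```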

  5. Integrated method for the measurement of trace atmospheric bases

    Directory of Open Access Journals (Sweden)

    D. Key

    2011-09-01

    Full Text Available Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  6. Design of time interval generator based on hybrid counting method

    Science.gov (United States)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with a comparably fine delay step are therefore preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integrated TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range up to 8 s.

  7. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    Science.gov (United States)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method, formulated to account for the nonlinearity of the channel routing process, are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, owing to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
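
    For readers unfamiliar with the conceptual NLM scheme, it combines the nonlinear storage relation S = K[xI + (1 - x)O]^m with the continuity equation dS/dt = I - O. A minimal routing sketch under illustrative parameter values (the study calibrates K, x and m with GA/DE/PSO/HS) might look like this:

```python
# Minimal nonlinear Muskingum (NLM) routing sketch:
#   storage S = K * (x*I + (1-x)*O)**m,  continuity dS/dt = I - O,
# stepped with a simple explicit update. K, x, m below are illustrative,
# not calibrated values from the study.
import numpy as np

def route_nlm(inflow, dt, K=0.5, x=0.25, m=1.8):
    outflow = np.empty_like(inflow, dtype=float)
    outflow[0] = inflow[0]                          # steady initial condition
    S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m
    for t in range(1, len(inflow)):
        O = ((S / K) ** (1.0 / m) - x * inflow[t - 1]) / (1 - x)
        S += dt * (inflow[t - 1] - O)               # continuity equation
        outflow[t] = max(O, 0.0)
    return outflow

# Synthetic inflow hydrograph (m^3/s) at dt = 1 h
t = np.arange(0, 48.0)
inflow = 20 + 80 * np.exp(-((t - 12) / 5.0) ** 2)
print(route_nlm(inflow, dt=1.0).max())  # attenuated, delayed peak below 100
```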

  8. Rule-based query answering method for a knowledge base of economic crimes

    CERN Document Server

    Bak, Jaroslaw

    2011-01-01

    We present a description of a PhD thesis which aims to propose a rule-based query answering method for relational data. In this approach we use additional knowledge, represented as a set of rules, that describes the source data at the concept (ontological) level. Queries are posed in the terms of the abstract level. We present two methods: the first uses hybrid reasoning and the second exploits only forward chaining. The two methods are demonstrated by a prototypical implementation of the system coupled with the Jess engine. Tests are performed on a knowledge base of selected economic crimes: fraudulent disbursement and money laundering.
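
    The forward-chaining variant can be pictured as a naive fixpoint computation over Horn-style rules. The toy below uses invented predicates loosely inspired by the money laundering example, not the thesis' actual Jess rules:

```python
# Naive forward chaining to a fixpoint over Horn-style rules, then a query.
# Rules lift concrete relations to an abstract (ontological) concept level.
facts = {("transfer", "acct1", "acct2"), ("transfer", "acct2", "acct3"),
         ("shell_company", "acct3")}

def rules(fs):
    """Derive new facts; a stand-in for the ontology-level rule set."""
    new = set()
    for f in fs:
        if f[0] == "transfer" and ("shell_company", f[2]) in fs:
            new.add(("suspicious", f[1]))           # concept-level conclusion
        if f[0] == "suspicious":
            for g in fs:
                if g[0] == "transfer" and g[2] == f[1]:
                    new.add(("suspicious", g[1]))   # propagate upstream
    return new

while True:                       # saturate: apply rules until nothing changes
    derived = rules(facts) - facts
    if not derived:
        break
    facts |= derived

print([f for f in facts if f[0] == "suspicious"])  # query at the abstract level
```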

  9. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    Science.gov (United States)

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-09-11

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3

  10. A Dynamic Job Shop Scheduling Method Based on Lagrangian Relaxation

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Due to the complexity of dynamic job shop scheduling in flexible manufacturing systems (FMS), many heuristic rules are still used today. A dynamic scheduling approach based on Lagrangian relaxation is proposed to improve the quality and guarantee the real-time capability of dynamic scheduling. The proposed method makes use of dynamic predictive optimal theory combined with Lagrangian relaxation to obtain a good solution that can be evaluated quantitatively. The Lagrangian multipliers introduced here are capable of describing machine predictive states and system capacity constraints. This approach can evaluate the suboptimality of scheduling systems. It can also quickly obtain high quality feasible schedules, thus enabling Lagrangian relaxation to be better used in the dynamic scheduling of manufacturing systems. The efficiency and effectiveness of this method are verified by numerical experiments.
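
    The flavor of the Lagrangian relaxation step can be conveyed with a tiny subgradient sketch: relax the coupling constraint into the objective with a multiplier, solve the now-separable subproblem, and update the multiplier from the constraint violation. This is a generic toy problem, not the paper's job shop formulation:

```python
# Toy Lagrangian relaxation with subgradient updates:
#   min sum(c_j x_j)  s.t.  sum(a_j x_j) >= b,  x_j in {0, 1}
# Relaxing the coupling constraint makes the problem separable per job.
c = [4.0, 3.0, 6.0, 5.0]   # processing costs (illustrative)
a = [2.0, 1.0, 3.0, 2.0]   # resource usage per job
b = 5.0                    # required total resource

lam = 0.0
for k in range(1, 101):
    # Separable subproblem: pick x_j = 1 iff reduced cost c_j - lam*a_j < 0.
    x = [1 if c[j] - lam * a[j] < 0 else 0 for j in range(len(c))]
    violation = b - sum(a[j] * x[j] for j in range(len(c)))
    lam = max(0.0, lam + (1.0 / k) * violation)   # projected subgradient step

print(x, lam)  # near-optimal selection and multiplier (a duality gap may remain)
```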

  11. Design Method for EPS Control System Based on KANSEI Structure

    Science.gov (United States)

    Saitoh, Yumi; Itoh, Hideaki; Ozaki, Fuminori; Nakamura, Takenobu; Kawaji, Shigeyasu

    Recently, it has been recognized that KANSEI engineering plays an important role in functional design for realizing highly sophisticated products. In practical development, however, products are designed and the design optimised by trial and error, which means that development depends on the skill set of experts. In this paper, we focus on an automobile electric power steering (EPS) system for which a functional design is required. First, the KANSEI structure is determined on the basis of the steering feel of an experienced driver, and an EPS control design based on this KANSEI structure is proposed. Then, the EPS control parameters are adjusted in accordance with the KANSEI index. Finally, by assessing the experimental results obtained from the driver, the effectiveness of the proposed design method is verified.

  12. Nonlinear diffusion methods based on robust statistics for noise removal

    Institute of Scientific and Technical Information of China (English)

    JIA Di-ye; HUANG Feng-gang; SU Han

    2007-01-01

    A novel smoothness term for the Bayesian regularization framework, based on the M-estimation of robust statistics, is proposed, and from this term a class of fourth-order nonlinear diffusion methods is derived. These methods attempt to approximate an observed image with a piecewise linear image, which looks more natural than the piecewise constant approximation used by the P-M [1] model. It is known that M-estimators and W-estimators are essentially equivalent and solve the same minimization problem. We therefore propose a PL (piecewise linear) bilateral filter from the equivalent W-estimator. This new model is designed for piecewise linear image filtering and is more effective than the normal bilateral filter.
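
    For orientation, here is a sketch of the standard bilateral filter, the baseline that the PL variant improves on by additionally fitting a piecewise linear model (the PL fitting itself is not reproduced):

```python
# Standard bilateral filter: each output pixel is a weighted mean of its
# neighbourhood, weighted by spatial and intensity (range) Gaussians.
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Noisy intensity ramp as a toy test image.
noisy = np.clip(np.tile(np.linspace(0, 1, 32), (32, 1))
                + np.random.normal(0, 0.05, (32, 32)), 0, 1)
print(noisy.std(), bilateral(noisy).std())  # filtered std is slightly lower
```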

  13. Diffusion-based method for producing density equalizing maps

    CERN Document Server

    Gastner, M T; Gastner, Michael T.

    2004-01-01

    Map makers have long searched for a way to construct cartograms -- maps in which the sizes of geographic regions such as countries or provinces appear in proportion to their population or some other analogous property. Such maps are invaluable for the representation of census results, election returns, disease incidence, and many other kinds of human data. Unfortunately, in order to scale regions and still have them fit together, one is normally forced to distort the regions' shapes, potentially resulting in maps that are difficult to read. Many methods for making cartograms have been proposed, some of them extremely complex, but all suffer either from this lack of readability or from other pathologies, like overlapping regions or strong dependence on the choice of coordinate axes. Here we present a new technique based on ideas borrowed from elementary physics that suffers none of these drawbacks. Our method is conceptually simple and produces useful, elegant, and easily readable maps. We illustrate the metho...

  14. Vision-based method for tracking meat cuts in slaughterhouses

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Engbo Jørgensen, Mikkel

    2014-01-01

    Meat traceability is important for linking process and quality parameters from the individual meat cuts back to the production data from the farmer that produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse environment. In this article, we demonstrate a computer vision system for recognizing meat cuts at different points along a slaughterhouse production line. More specifically, we show that 211 pig loins can be identified correctly between two photo sessions. The pig loins undergo various perturbation scenarios (hanging, rough treatment and incorrect trimming) and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available.

  15. An underwater acoustic data compression method based on compressed sensing

    Institute of Scientific and Technical Information of China (English)

    郭晓乐; 杨坤德; 史阳; 段睿

    2016-01-01

    The use of underwater acoustic data has rapidly expanded with the application of multichannel, large-aperture underwater detection arrays. This study presents an underwater acoustic data compression method based on compressed sensing. Underwater acoustic signals are transformed into the sparse domain for data storage at a receiving terminal, and the improved orthogonal matching pursuit (IOMP) algorithm is used to reconstruct the original underwater acoustic signals at a data processing terminal. Although an increase in sidelobe level occasionally causes a direction-of-arrival estimation error, the proposed method achieves 10 times stronger compression for narrowband signals and 5 times stronger compression for wideband signals than the orthogonal matching pursuit (OMP) algorithm. The IOMP algorithm also reduces the computing time by about 20% relative to the original OMP algorithm. The simulation and experimental results are discussed.
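
    For reference, the baseline OMP greedy loop that the IOMP modifies can be sketched in a few lines; the paper's improvements are not reproduced here.

```python
# Plain orthogonal matching pursuit: greedily pick the atom most correlated
# with the residual, re-fit by least squares on the support, repeat.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x (A: m x n measurement matrix)."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)   # random sensing matrix
x_true = np.zeros(256)
x_true[[10, 99, 200]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, k=3)
print(np.allclose(x_hat, x_true, atol=1e-8))       # exact recovery expected
```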

  16. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2015-01-01

    Full Text Available An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (an intraelement field and an auxiliary frame field) are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.

  17. Application of DNA-based methods in forensic entomology.

    Science.gov (United States)

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  18. A Sensitive Attribute based Clustering Method for k-anonymization

    CERN Document Server

    Bhaladhare, Pawan R

    2012-01-01

    In medical organizations large amounts of personal data are collected and analyzed by data miners or researchers for further perusal. However, the data collected may contain sensitive information such as the specific disease of a patient and should be kept confidential. Hence, the analysis of such data must ensure due checks that protect against threats to individual privacy. In this context, greater emphasis has now been given to privacy preservation algorithms in data mining research. One such approach is the anonymization approach, which is able to protect private information, although valuable information can be lost. Therefore, the main challenge is how to minimize the information loss during the anonymization process. The proposed method groups similar data together based on the sensitive attribute and then anonymizes each group. Our experimental results show that the proposed method offers better outcomes with respect to information loss and execution time.

  19. Fingerprint Representation Methods Based on B-Spline Functions

    Institute of Scientific and Technical Information of China (English)

    Ruan Ke; Xia De-lin; Yan Pu-liu

    2004-01-01

    The global characteristics of a fingerprint image, such as ridge shape and ridge topology, are often ignored in most automatic fingerprint verification systems. In this paper, a new representation method based on B-spline curves is proposed to address this problem. The resulting B-spline curves can represent the global characteristics completely, and the curves are analyzable and precise. An algorithm is also proposed to extract the curves from the fingerprint image. While preserving most of the information of the fingerprint image, the algorithm reduces the number of knot points of the B-spline curve to a minimum. The influence of fingerprint image noise is also discussed. In the end, an example is given to demonstrate the effectiveness of the representation method.
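
    A quick way to see the knot-point economy of a B-spline representation is a smoothing-spline fit to a noisy ridge-like curve. The sketch below uses SciPy's generic smoothing spline, not the paper's extraction algorithm, and the curve and noise level are invented:

```python
# Fit a smoothing B-spline to a ridge-like point sequence; the smoothing
# factor s trades knot count against fidelity (cf. minimizing knot points).
import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0, 1, 80)
ridge = np.c_[t, 0.3 * np.sin(4 * np.pi * t)] + np.random.normal(0, 0.004, (80, 2))

tck, _ = splprep([ridge[:, 0], ridge[:, 1]], s=len(ridge) * 0.004**2)
print(len(tck[0]), "knots")                 # typically far fewer than 80 points
x, y = splev(np.linspace(0, 1, 200), tck)   # smooth reconstructed ridge
```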

  20. Dominant partition method. [based on a wave function formalism

    Science.gov (United States)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.

  1. Novel parameter-based flexure bearing design method

    Science.gov (United States)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  2. Blue noise sampling method based on mixture distance

    Science.gov (United States)

    Qin, Hongxing; Hong, XiaoYang; Xiao, Bin; Zhang, Shaoting; Wang, Guoyin

    2014-11-01

    Blue noise sampling is a core component of a large number of computer graphics applications such as imaging, modeling, animation, and rendering. However, most existing methods concentrate on preserving spatial-domain properties like density and anisotropy, while ignoring feature preservation. To address this problem, we present a new distance metric called mixture distance for blue noise sampling, which is a combination of geodesic and feature distances. Based on mixture distance, the blue noise property and features can be preserved by controlling the ratio of the geodesic distance to the feature distance. To meet the different requirements of various applications, an adaptive adjustment of the parameters is also proposed to achieve a balance between the preservation of features and spatial properties. Finally, an implementation on a graphics processing unit is introduced to improve the efficiency of computation. The efficacy of the method is demonstrated by results for image stippling, surface sampling, and remeshing.
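
    A minimal dart-throwing sketch shows how a mixture distance changes the rejection test; the Euclidean spatial term below stands in for the true geodesic distance, and the feature field and blend ratio are invented:

```python
# Dart-throwing blue noise sketch with a "mixture" distance: a blend of
# spatial distance and feature difference (alpha is the blend ratio).
# A plain Euclidean term stands in for the paper's geodesic distance.
import numpy as np

rng = np.random.default_rng(4)
feature = lambda p: np.sin(3 * p[0]) * np.cos(3 * p[1])  # toy feature field

def mixture_dist(p, q, alpha=0.3):
    spatial = np.linalg.norm(p - q)
    return (1 - alpha) * spatial + alpha * abs(feature(p) - feature(q))

samples, r_min = [], 0.08
for _ in range(5000):                        # rejection (dart throwing)
    cand = rng.random(2)
    if all(mixture_dist(cand, s) >= r_min for s in samples):
        samples.append(cand)
print(len(samples), "blue-noise samples kept")
```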

  3. Analysis of equivalent antenna based on FDTD method

    Institute of Scientific and Technical Information of China (English)

    Yun-xing YANG; Hui-chang ZHAO; Cui DI

    2014-01-01

    An equivalent microstrip antenna for use in a radio proximity fuse is presented. The design of this antenna is based on a multilayer, multi-permittivity dielectric substrate which is analyzed by the finite difference time domain (FDTD) method. The equivalent iterative formula is modified for the cylindrical coordinate system. A mixed substrate containing two kinds of media (one of them air) takes the place of the original single substrate. Simulation results show that the resonant frequency of the equivalent antenna is similar to that of the original antenna, and the validity of the analysis is confirmed by means of the antenna resonant frequency formula. The two antennas have the same radiation pattern and similar gain. This method can be used to reduce the weight of the antenna, which is significant for the design of missile-borne antennas.

  4. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado;

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). In ANOVA comparisons the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers by CATA is comparable to that of Napping. The choice of methodology for consumer

  5. Density functional theory based generalized effective fragment potential method

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Kiet A., E-mail: kiet.nguyen@wpafb.af.mil, E-mail: ruth.pachter@wpafb.af.mil [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); UES, Inc., Dayton, Ohio 45432 (United States); Pachter, Ruth, E-mail: kiet.nguyen@wpafb.af.mil, E-mail: ruth.pachter@wpafb.af.mil [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); Day, Paul N. [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); General Dynamics Information Technology, Inc., Dayton, Ohio 45431 (United States)

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  6. Evaluation Method of Web Site Based on Web Structure Mining

    Institute of Scientific and Technical Information of China (English)

    Li Jun-e; Zhou Dong-ru

    2003-01-01

    The structure of Web sites has become more complex than before. During the design period of a Web site, the lack of models and methods results in improper Web structures, which depend on the designer's experience. From the point of view of software engineering, every period in the software life cycle must be evaluated before starting the next period's work. It is therefore important and essential to find suitable methods for evaluating a Web structure before the site is completed. In this work, after studying the related work on Web structure mining and analyzing the major structure mining methods (PageRank and Hub/Authority), a method based on PageRank for Web structure evaluation at the design stage is proposed. A Web structure modeling language, WSML, is designed, and implementation strategies for a Web site structure evaluation system are given. Web structure mining has mainly been used in search engines before; this is the first time the technique has been employed to evaluate a Web structure during the design period of a Web site. It contributes to the formalization of design documents for Web sites and to improved software engineering for large-scale Web sites, and the evaluation system is a practical tool for Web site construction.
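
    Since the evaluation rests on PageRank, a compact power-iteration version over a toy site map may help make the idea concrete (page names are invented; this is textbook PageRank, not the paper's WSML tooling):

```python
# Standard PageRank by power iteration over a small site-structure graph.
import numpy as np

links = {"home": ["products", "about"], "products": ["detail", "home"],
         "detail": ["home"], "about": ["home"]}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic link matrix: M[j, i] = 1/outdeg(i) if i links to j.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[idx[dst], idx[src]] = 1.0 / len(outs)

d, rank = 0.85, np.full(n, 1.0 / n)     # damping factor, uniform start
for _ in range(100):
    rank = (1 - d) / n + d * (M @ rank)

print(dict(zip(pages, rank.round(3))))  # "home" should dominate
```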

  7. Reliability analysis method for slope stability based on sample weight

    Directory of Open Access Journals (Sweden)

    Zhi-gang YANG

    2009-09-01

    Full Text Available The single safety factor criterion for slope stability evaluation, derived from the rigid limit equilibrium method or the finite element method (FEM), may not capture some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, the minimal sample size of every random variable is extracted according to a small-sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variable. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to the one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.

  8. Optimal grid-based methods for thin film micromagnetics simulations

    Science.gov (United States)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has an advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures compared to the conventional FFT-based methods.

  9. Transit Traffic Analysis Zone Delineating Method Based on Thiessen Polygon

    Directory of Open Access Journals (Sweden)

    Shuwei Wang

    2014-04-01

    Full Text Available A green transportation system composed of rail transit, buses and bicycles could be significant in alleviating traffic congestion. However, the inaccuracy of current transit ridership forecasting methods has a negative impact on the development of urban transit systems. Traffic Analysis Zone (TAZ) delineation is a fundamental and essential step in ridership forecasting, but the existing delineation methods in four-step models have problems reflecting the travel characteristics of urban transit. This paper proposes a Transit Traffic Analysis Zone (TTAZ) delineation method as a supplement to traditional TAZs in transit service analysis. The deficiencies of current TAZ delineation methods are analyzed, and the requirements of TTAZs are summarized. Considering these requirements, Thiessen polygons were introduced into TTAZ delineation. In order to validate its feasibility, Beijing was then taken as an example to delineate TTAZs, followed by a spatial analysis of office buildings within a TTAZ and transit station departure passengers. The analysis results show that TTAZs based on Thiessen polygons can reflect transit travel characteristics and merit further research.
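
    In practice, Thiessen (Voronoi) polygons around transit stations are easy to obtain with standard tools. A minimal sketch with invented station coordinates (real use would load surveyed station locations and clip cells to the study-area boundary):

```python
# Thiessen (Voronoi) polygons around transit stations as TTAZ boundaries.
# Station coordinates are invented for illustration.
import numpy as np
from scipy.spatial import Voronoi

stations = np.array([[0.0, 0.0], [4.0, 1.0], [1.5, 3.5],
                     [5.0, 4.0], [2.5, 1.2], [0.5, 4.5]])
vor = Voronoi(stations)

for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region:                      # unbounded cell (needs clipping)
        print(f"station {i}: unbounded TTAZ")
    else:
        print(f"station {i}: TTAZ vertices {vor.vertices[region].round(2)}")
```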

  10. Optimal sensor placement using FRFs-based clustering method

    Science.gov (United States)

    Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe

    2016-12-01

    The purpose of this work is to develop an optimal sensor placement method by selecting the most relevant degrees of freedom as the actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. By using principal component analysis, the frequency response matrix can be decomposed into principal directions and their corresponding singular values. A relatively small number of principal directions will maintain a system's dominant response information. According to the dynamic similarity of each degree of freedom, the k-means clustering algorithm is designed to classify the degrees of freedom, and the effective independence method deletes the sensors which are redundant within each cluster. Finally, two numerical examples and a modal test are included to demonstrate the efficiency of the derived method. It is shown that the proposed method provides a way to extract sub-optimal sets, and the selected sensors are well distributed over the whole structure.
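
    The pipeline (an SVD of the FRF observation matrix, k-means clustering of DOF response patterns, and one representative sensor per cluster) can be outlined as below. The data are random stand-ins, the cluster count and energy threshold are arbitrary, and picking the member nearest each cluster centre is a simplification of the effective-independence pruning described above.

```python
# Sketch of FRF-based sensor placement: compress the frequency response
# matrix with an SVD, cluster DOFs by dynamic similarity, then keep the
# DOF nearest each cluster centre as the sensor location.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frf = rng.standard_normal((120, 400))        # stand-in: 120 DOFs x 400 freqs

# Principal directions: keep components covering the dominant response.
U, s, _ = np.linalg.svd(frf, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95)) + 1
features = U[:, :r] * s[:r]                  # each row: one DOF's signature

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
sensors = []
for c in range(8):
    members = np.flatnonzero(km.labels_ == c)
    d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
    sensors.append(int(members[np.argmin(d)]))   # representative DOF
print(sorted(sensors))
```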

  11. A novel model reduction method based on balanced truncation

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The main goal of this paper is to construct an efficient reduced-order model (ROM) for unsteady aerodynamic force modeling. Balanced truncation (BT) is presented to address the problem. The conventional BT method requires computing exact controllability and observability Gramians. Although it is relatively straightforward to compute these matrices in a control setting where the system order is moderate, the technique does not extend easily to high-order systems. In response to this challenge, the snapshots-BT (S-BT) method is introduced for high-order system ROM construction. The key idea of the S-BT method is that snapshots of the primary and dual systems approximate the controllability and observability Gramians in the frequency domain. The method has been demonstrated for three high-order systems: (1) unsteady motion of a two-dimensional airfoil in response to a gust, (2) the AGARD 445.6 wing aeroelastic system, and (3) the BACT (benchmark active control technology) standard aeroservoelastic system. All the results indicate that the S-BT based ROM is efficient and accurate enough to provide a powerful tool for unsteady aerodynamic force modeling.
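
    For contrast with the snapshot approach, the classical square-root balanced truncation that S-BT approximates can be written directly with SciPy's Lyapunov solver (random stable test system; the S-BT frequency-domain snapshot construction itself is not reproduced):

```python
# Square-root balanced truncation of a stable LTI system (A, B, C).
# The S-BT variant builds the Gramians from frequency-domain snapshots
# instead; here the exact Lyapunov solutions are used.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(5)
n, r = 20, 4                                   # full and reduced order
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable test matrix
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian

Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                      # Hankel singular values s
S = np.diag(s[:r] ** -0.5)
T = Lc @ Vt[:r].T @ S                          # balancing transformation
Ti = S @ U[:, :r].T @ Lo.T

Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T         # reduced-order model
print(s[:r].round(4))                          # dominant Hankel values kept
```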

  12. Robust Collaborative Optimization Method Based on Dual-response Surface

    Institute of Scientific and Technical Information of China (English)

    WANG Wei; FAN Wenhui; CHANG Tianqing; YUAN Yuming

    2009-01-01

    A novel method for the robust collaborative design of complex products based on a dual-response surface (DRS-RCO) is proposed to solve multidisciplinary design optimization (MDO) problems under uncertainty. Collaborative optimization (CO), which decomposes the whole system into a double-level nonlinear optimization problem, is widely accepted as an efficient method for solving MDO problems. In order to improve the quality of complex products in the design process, robust collaborative optimization (RCO) was developed to solve these problems under uncertain conditions. RCO optimizes the linear sum of the mean and standard deviation of the objective function and obtains an optimal solution with high robustness. The response surface method is an important way to perform approximation in robust design. DRS-RCO is an improved RCO method in which a dual-response surface replaces the system uncertainty analysis module of CO. The dual-response surface consists of approximate models of the mean and the standard deviation of the objective function, respectively. In DRS-RCO, all the information of the subsystems is included in the dual-response surfaces. As an additional item, the standard deviation of the objective function is added to the subsystem optimization; this item guarantees that both the mean and the standard deviation of the subsystem reach their minima at the same time. Finally, a test problem with two coupled subsystems is used to verify the feasibility and effectiveness of DRS-RCO.

  13. NUMERICAL METHOD FOR MULTI-BODY FLUID INTERACTION BASED ON IMMERSED BOUNDARY METHOD

    Institute of Scientific and Technical Information of China (English)

    MING Ping-jian; ZHANG Wen-ping

    2011-01-01

    A Cartesian grid based Immersed Boundary Method (IBM), proposed by the present authors, is extended to unstructured grids. The advantages of the IBM and the Body Fitted Grid (BFG) are combined to enhance the computational efficiency of fluid-structure interaction in a complex domain. There are many methods to generate a BFG, among which the unstructured grid method is the most popular. The concept of Volume Of Solid (VOS) is used to deal with the interaction of multiple rigid bodies and fluid. Each body surface is represented by a set of points which can be traced in an anti-clockwise order with the solid area on the left side of the surface. An efficient Lagrange point tracking algorithm on the fixed grid is applied to search for the moving boundary grid points. The method is verified by low Reynolds number flows in the range from Re = 100 to 1000 in a cavity with a moving lid. The results are in good agreement with experimental data in the literature. Finally, the flow past two moving cylinders is simulated to test the capability of the method.

  14. Rapid Mapping Method Based on Free Blocks of Surveys

    Science.gov (United States)

    Yu, Xianwen; Wang, Huiqing; Wang, Jinling

    2016-06-01

    When producing large-scale maps (larger than 1:2000) in cities or towns, obstruction from buildings makes measuring mapping control points a difficult and heavy task. In order to avoid measuring mapping control points and shorten the time of fieldwork, a quick mapping method is proposed in this paper. This method adjusts many free blocks of surveys together and transforms the points from all free blocks into the same coordinate system. The entire surveying area is divided into many free blocks, and connection points are set on the boundaries between free blocks. An independent coordinate system for every free block is established via completely free station technology, and the coordinates of the connection points, detail points and control points in every free block in the corresponding independent coordinate systems are obtained based on poly-directional open traverses. Error equations are established based on the connection points, which are determined together to obtain the transformation parameters. All points are transformed from the independent coordinate systems to a transitional coordinate system via the transformation parameters. Several control points are then measured by GPS in a geodetic coordinate system, and all the points can then be transformed from the transitional coordinate system to the geodetic coordinate system. In this paper, the implementation process and mathematical formulas of the new method are presented in detail, and a formula to estimate the precision of the surveys is given. An example demonstrates that the precision of the new method can meet large-scale mapping needs.

  15. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the super-large scale, discrete, and un-/semi-structured features of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data mining. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.

  16. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias [Knoxville, TN; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN; Wang, Xiaoling [San Jose, CA

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  17. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

    Full Text Available The adoption of fault detection techniques during the initial stages of the software development life cycle helps improve the reliability of a software product. Specification-based testing is one of the major approaches to detecting faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experimental results indicate that the proposed methodology is effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stage, improving software quality at reduced cost.

  18. Transistor-based particle detection systems and methods

    Science.gov (United States)

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may configured to detect the non-charged particle at least from the change in current flow.

  19. Storm surge model based on variational data assimilation method

    Institute of Scientific and Technical Information of China (English)

    Shi-li HUANG; Jian XU; De-guan WANG; Dong-yan LU

    2010-01-01

    By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.

  20. Method of Fire Image Identification Based on Optimization Theory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In view of some distinctive characteristics of the early-stage flame image, a corresponding method of characteristic extraction is presented. Also introduced is the application of the improved BP algorithm based on the optimization theory to identifying fire image characteristics. First the optimization of BP neural network adopting Levenberg-Marquardt algorithm with the property of quadratic convergence is discussed, and then a new system of fire image identification is devised. Plenty of experiments and field tests have proved that this system can detect the early-stage fire flame quickly and reliably.

  1. Development of a Contact Angle Measurement Method Based Upon Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Su; Pyo, Na Young; Seo, Seung Hee [Ewha Womans University, Seoul (Korea); Choi, Woo Jin [Suwon University, Suwon (Korea); Kwon, Young Shik [Suwon Science College, Suwon (Korea)

    1998-12-31

    A new way of measuring contact angles is derived, based on a simple geometrical calculation. Without using a complicated contact angle measurement instrument, simply measuring the diameter and height of a liquid lens makes it possible to calculate the contact angle with reasonable reliability. To validate the contact angle value obtained by this method, the contact angle of the same liquid lens was measured using a conventional goniometer, and it was verified that the two values are nearly the same within the limits of observational error. (author). 6 refs., 2 tabs., 3 figs.
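
    If the drop is small enough to be treated as a spherical cap, the geometry behind such a method reduces to a single relation: with base diameter d and lens height h, the contact angle is θ = 2·arctan(2h/d). The spherical-cap assumption and the numbers below are illustrative, not the paper's exact derivation:

```python
# Contact angle of a spherical-cap drop from its base diameter and height:
# theta = 2 * atan(2h / d). Values below are made-up measurements.
import math

d, h = 4.2e-3, 0.9e-3                       # base diameter and height (m)
theta = 2.0 * math.degrees(math.atan(2.0 * h / d))
print(f"contact angle ~ {theta:.1f} deg")   # ~46.4 deg for these numbers
```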

  2. Planning of operation & maintenance using risk and reliability based methods

    DEFF Research Database (Denmark)

    Florian, Mihai; Sørensen, John Dalsgaard

    2015-01-01

    Operation and maintenance (OM) of offshore wind turbines contributes a substantial part of the total levelized cost of energy (LCOE). The objective of this paper is to present an application of risk- and reliability-based methods for planning of OM. The theoretical basis is presented and illustrated by an example, namely the planning of inspections and maintenance of wind turbine blades. A life-cycle approach is used where the total expected cost in the remaining lifetime is minimized. This maintenance plan is continuously updated during the lifetime using information from previous inspections and from condition monitoring, with time intervals between inspections and maintenance / repair options as the decision parameters.

  3. New Iris Localization Method Based on Chaos Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    Jia Dongli; Muhammad Khurram Khan; Zhang Jiashu

    2005-01-01

    This paper presents a new method based on the Chaos Genetic Algorithm (CGA) to localize the human iris in a given image. First, the iris image is preprocessed to estimate the range of the iris localization, and then CGA is used to extract the boundary of the iris. Simulation results show that the proposed algorithm is efficient and robust, and can achieve sub-pixel precision. Because Genetic Algorithms (GAs) can search in a large space, the algorithm does not need an accurate estimate of the iris center for subsequent localization, and hence lowers the requirements on the original iris image processing. On this point, the present localization algorithm is superior to Daugman's algorithm.

  4. An infrared human face recognition method based on 2DPCA

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; Li Ting-jun

    2009-01-01

    To address the problems of infrared image recognition under varying illumination, face disguise, etc., we present an infrared human face recognition algorithm based on 2DPCA. The proposed algorithm can compute the covariance matrix of the training samples easily and directly, and at the same time it takes less time to compute the eigenvectors. Relevant experiments were carried out, and the results indicate that, compared with the traditional recognition algorithm, the proposed method is fast and adapts well to changes in human face posture.
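
    The 2DPCA step works directly on image matrices: the image covariance G = (1/M) Σ (A_i − Ā)ᵀ(A_i − Ā) is formed without vectorizing, and each image is projected onto the leading eigenvectors of G. A compact sketch on random stand-in "images":

```python
# 2DPCA sketch: image covariance from matrices directly (no vectorization),
# projection onto the leading eigenvectors, nearest-neighbour matching.
import numpy as np

rng = np.random.default_rng(2)
train = rng.standard_normal((30, 32, 32))       # stand-in infrared images

mean = train.mean(axis=0)
G = sum((a - mean).T @ (a - mean) for a in train) / len(train)
w, v = np.linalg.eigh(G)                        # G is symmetric: use eigh
X = v[:, np.argsort(w)[::-1][:5]]               # top 5 projection axes

def features(img):
    return img @ X                              # 32 x 5 feature matrix

gallery = [features(a) for a in train]
probe = train[7] + 0.05 * rng.standard_normal((32, 32))  # noisy copy of #7
dists = [np.linalg.norm(features(probe) - g) for g in gallery]
print(int(np.argmin(dists)))                    # 7: matched correctly
```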

  5. Efficient Path Query and Reasoning Method Based on Rare Axis

    Institute of Scientific and Technical Information of China (English)

    姜洋; 冯志勇; 王鑫; 马晓宁

    2015-01-01

    A new concept of the rare axis, based on statistical facts, is proposed, and an evaluation algorithm is designed accordingly. For nested regular expressions containing rare axes, the proposed algorithm reduces the evaluation complexity from polynomial time to nearly linear time. A distributed technique is also employed to construct navigation axis indexes for resource description framework (RDF) graph data. Experimental results on DrugBank and BioGRID show that this method can improve query efficiency significantly while ensuring accuracy, and meets the query requirements of Web-scale RDF graph data.

  6. The Research of Welding Residual Stress Based Finite Element Method

    Directory of Open Access Journals (Sweden)

    Qinghua Bai

    2013-06-01

    Full Text Available Welding residual stress is caused by local heating during the welding process. Tensile residual stress reduces fatigue strength and corrosion resistance, while compressive residual stress decreases the stability limit; residual stress can therefore produce brittle fracture and reduce the working life and strength of a workpiece. Based on a finite element simulation of the welding process, the welding temperature field and residual stress are calculated, and the residual stress is then measured in experiments, so as to determine the best welding technology and welding parameters and to effectively reduce welding residual stress, which is of great significance.

  7. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

    Full Text Available To solve the combination explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalized reasoning algorithm as the initial population of a genetic algorithm. Fuzzy assembly knowledge is then integrated into the planning process of the genetic algorithm and the ant algorithm to obtain an accurate solution. Finally, an example is given to verify the feasibility of the composite algorithm.

  8. Segmentation of Bacteria Image Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    WANG Hua; CHEN Chun-xiao; HU Yong-hong; YANG Wen-ge

    2008-01-01

    In biological fermentation engineering, accurate statistics of the quantity of bacteria is one of the most important subjects. In this paper, the quantity of bacteria, which was traditionally counted manually, is detected automatically. An image acquisition and processing system is designed to accomplish image preprocessing, image segmentation and counting of the bacteria. Segmentation of bacteria images is successfully realized by means of a region-based level set method, and the quantity of bacteria is then computed precisely, which plays an important role in optimizing the growth conditions of the bacteria.
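
    A simplified sketch of a region-based level-set evolution in the spirit of Chan-Vese (the abstract does not give the exact formulation), followed by connected-component labeling to count the bacteria; Gaussian smoothing of the level-set function stands in for the curvature regularisation term.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, label

    def region_level_set(image, n_iter=300, dt=0.4, sigma=1.0):
        img = image.astype(float)
        phi = np.ones_like(img)
        phi[img > img.mean()] = -1.0          # rough initial contour (phi < 0 = object)
        for _ in range(n_iter):
            inside, outside = phi < 0, phi >= 0
            c1 = img[inside].mean() if inside.any() else 0.0
            c2 = img[outside].mean() if outside.any() else 0.0
            # Pixels closer to the inside mean get pushed inside, and vice versa.
            force = (img - c1) ** 2 - (img - c2) ** 2
            phi += dt * force / (np.abs(force).max() + 1e-12)
            phi = gaussian_filter(phi, sigma)  # stands in for curvature smoothing
        return phi < 0                         # foreground mask

    def count_bacteria(mask):
        _, n = label(mask)                     # connected components = bacteria count
        return n
    ```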

  9. An Adaptive UKF Based SLAM Method for Unmanned Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2013-01-01

    Full Text Available This work proposes an improved unscented Kalman filter (UKF)-based simultaneous localization and mapping (SLAM) algorithm built on an adaptive unscented Kalman filter (AUKF) with a noise statistic estimator. The algorithm solves the issue that conventional UKF-SLAM algorithms have declining accuracy, with divergence occurring, when the prior noise statistics are unknown and time-varying. The new SLAM algorithm performs an online estimation of the statistical parameters of the unknown system noise by introducing a modified Sage-Husa noise statistic estimator. The algorithm also judges whether the filter is divergent and restrains potential filtering divergence using a covariance matching method. This approach reduces the state estimation error, effectively improving the navigation accuracy of the SLAM system. Line feature extraction is implemented through a Hough transform based on the ranging sonar model. Test results based on unmanned underwater vehicle (UUV) sea trial data indicate that the proposed AUKF-SLAM algorithm is valid and feasible and provides better accuracy than the standard UKF-SLAM system.
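
    A minimal sketch of the Sage-Husa style measurement-noise update inside one step of a linear Kalman filter; the paper embeds the estimator in a UKF and adds a covariance-matching divergence check, both of which are omitted here for brevity.

    ```python
    import numpy as np

    def kf_step_sage_husa(x, P, z, F, H, Q, R, k, b=0.96):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation
        v = z - H @ x
        # Sage-Husa fading weight: recent residuals count more (0 < b < 1).
        d = (1 - b) / (1 - b ** (k + 1))
        R = (1 - d) * R + d * (np.outer(v, v) - H @ P @ H.T)
        # standard update with the adapted R (a divergence check would guard
        # against R losing positive definiteness; omitted in this sketch)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P, R
    ```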

  10. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    Directory of Open Access Journals (Sweden)

    Sase Yuji

    2011-09-01

    Full Text Available Background: Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination comparing a filmless system to a film-based system using the ABC method. Methods: We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. Results: The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for a filmless system and 23.6% for a film-based system. Conclusions: The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater value services directly to patients.

  11. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to the model discretization, the type of cost function and the searching algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
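
    A hedged sketch of the optimization-based approach: a derivative-free Nelder-Mead search over the anomaly position, minimising a weighted squared error between measured and simulated electrode voltages. The EIT solver forward_model and the weight vector w are hypothetical placeholders for components the paper assumes.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def locate_anomaly(v_measured, forward_model, w, x0=(0.0, 0.0)):
        """forward_model(pos) -> simulated boundary voltages for an anomaly at pos."""
        def cost(pos):
            r = v_measured - forward_model(pos)
            return float(r @ (w * r))          # weighted squared error
        # Nelder-Mead is derivative-free, as in the best-performing variants.
        return minimize(cost, x0, method="Nelder-Mead").x
    ```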

  12. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    Science.gov (United States)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
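
    The decomposition idea can be illustrated on the unfolded frame matrix with simple alternating proximal steps (singular-value thresholding for the low-rank part, soft thresholding for the sparse part). The paper works with a third-order tensor and a Tikhonov-regularised cost, so this matrix sketch only conveys the idea, not the authors' algorithm.

    ```python
    import numpy as np

    def svt(M, tau):
        """Singular value thresholding: proximal step for the low-rank part."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def soft(M, tau):
        """Soft thresholding: proximal step for the sparse part."""
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def low_rank_plus_sparse(frames, tau_l=1.0, tau_s=0.1, n_iter=50):
        """frames: (n_frames, n_pixels). Returns slowly varying L and fast-changing S."""
        X = np.asarray(frames, dtype=float)
        L, S = np.zeros_like(X), np.zeros_like(X)
        for _ in range(n_iter):
            L = svt(X - S, tau_l)
            S = soft(X - L, tau_s)
        return L, S
    ```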

  13. Impact of merging methods on radar based nowcasting of rainfall

    Science.gov (United States)

    Shehu, Bora; Haberlandt, Uwe

    2017-04-01

    Radar data with high spatial and temporal resolution are commonly used to track and predict rainfall patterns that serve as input for hydrological applications. To mitigate the high errors associated with radar, many merging methods employing ground measurements have been developed. However, these methods have been investigated mainly for simulation purposes, while for nowcasting they have been limited to the application of mean field bias correction. Therefore, this study aims to investigate the impact of different merging methods on the nowcasting of rainfall volumes with regard to urban floods. Radar bias correction based on mean fields and quantile mapping are analyzed individually and also implemented in conditional merging. Special attention is given to the impact of spatial and temporal filters on the predictive skill of all methods. The relevance of the radar merging techniques is demonstrated by comparing the performance of the forecasted rainfall field from the radar tracking algorithm HyRaTrac for both raw and merged radar data. For this purpose several extreme events are selected and the respective performance is evaluated by cross validation of continuous criteria (bias and RMSE) and categorical criteria (POD, FAR and GSS) for lead times up to 2 hours. The study area lies within the 128 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5 min time steps for the period 2000-2012. The results reveal how the choice of merging method and the implementation of filters impact the performance of the forecast algorithm.
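
    A minimal sketch of the two bias-correction building blocks named above, mean field bias and quantile mapping, applied to collocated gauge/radar pairs; thresholding to wet-wet pairs and the conditional-merging step are omitted.

    ```python
    import numpy as np

    def mean_field_bias(gauge, radar_at_gauges):
        # Single multiplicative correction factor for the whole radar field.
        return gauge.sum() / max(radar_at_gauges.sum(), 1e-9)

    def quantile_mapping(radar_field, radar_at_gauges, gauge, n_q=101):
        # Map radar rain rates onto the gauge distribution by matching
        # empirical quantiles; assumes wet-wet pairs so the quantiles increase.
        q = np.linspace(0.0, 1.0, n_q)
        r_q = np.quantile(radar_at_gauges, q)
        g_q = np.quantile(gauge, q)
        return np.interp(radar_field, r_q, g_q)
    ```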

  14. A Progressive Image Compression Method Based on EZW Algorithm

    Science.gov (United States)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  15. An improved unsupervised clustering-based intrusion detection method

    Science.gov (United States)

    Hai, Yong J.; Wu, Yu; Wang, Guo Y.

    2005-03-01

    Practical Intrusion Detection Systems (IDSs) based on data mining are facing two key problems: discovering intrusion knowledge from real-time network data, and automatically updating it when new intrusions appear. Most data mining algorithms work on labeled data. In order to set up a basic data set for mining, huge volumes of network data need to be collected and labeled manually. In fact, it is rather difficult and impractical to label intrusions, which has been a big restriction for current IDSs and has limited their ability to identify all kinds of intrusion types. An improved unsupervised clustering-based intrusion model working on unlabeled training data is introduced. In this model, the center of a cluster is defined and used as a substitute for the cluster. All cluster centers are then adopted to detect intrusions. Testing on the KDDCUP'99 data sets, experimental results demonstrate that our method has a good detection rate. Furthermore, an incremental-learning method is adopted to detect unknown-type intrusions, and it decreases the false positive rate.
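
    A hedged sketch of the cluster-center idea, using k-means as a stand-in for the unspecified clustering algorithm: records far from every cluster center are flagged as intrusions. The number of clusters and the distance percentile are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def fit_detector(X_unlabeled, n_clusters=20, pct=98):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_unlabeled)
        d = km.transform(X_unlabeled).min(axis=1)   # distance to nearest center
        threshold = np.percentile(d, pct)           # assumed normal/attack cut-off
        return km, threshold

    def is_intrusion(km, threshold, x):
        return km.transform(x.reshape(1, -1)).min() > threshold
    ```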

  16. An Effective Conversation-Based Botnet Detection Method

    Directory of Open Access Journals (Sweden)

    Ruidong Chen

    2017-01-01

    Full Text Available A botnet is one of the most grievous threats to network security since it can evolve into many attacks, such as Denial-of-Service (DoS), spam, and phishing. However, current detection methods are inefficient at identifying unknown botnets. The high-speed network environment makes botnet detection more difficult. To solve these problems, we improve on packet processing technologies such as the New Application Programming Interface (NAPI) and zero copy, and propose an efficient quasi-real-time intrusion detection system. Our work detects botnets using a supervised machine learning approach under a high-speed network environment. Our contributions are summarized as follows: (1) build a detection framework using PF_RING for sniffing and processing network traces to extract flow features dynamically; (2) use a random forest model to extract promising conversation features; (3) analyze the performance of different classification algorithms. The proposed method is demonstrated on the well-known CTU13 dataset and nonmalicious applications. The experimental results show that our conversation-based detection approach can identify botnets with higher accuracy and a lower false positive rate than a flow-based approach.
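
    A minimal scikit-learn sketch of steps (2) and (3): random-forest feature importances select the promising conversation features, and a second forest classifies on the reduced set; extracting the feature matrices from PF_RING traces is assumed to have happened already.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def select_and_classify(X_train, y_train, X_test, top_k=10):
        ranker = RandomForestClassifier(n_estimators=200, random_state=0)
        ranker.fit(X_train, y_train)
        keep = np.argsort(ranker.feature_importances_)[-top_k:]  # promising features
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train[:, keep], y_train)
        return clf.predict(X_test[:, keep]), keep
    ```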

  17. Parallelized LEDAPS method for Remote Sensing Preprocessing Based on MPI

    Institute of Scientific and Technical Information of China (English)

    Xionghua CHEN; Xu ZHANG; Ying GUO; Yong MA; Yanchen YANG

    2013-01-01

    Based on Landsat images, the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) uses a radiation change detection method for image processing and offers surface reflectance products for ecosystem carbon sequestration and carbon reserves. With the accumulation of massive remote sensing data, especially Landsat images, the traditional serial LEDAPS image processing has a long cycle, which causes many difficulties in practical application. To address this problem, this paper designs a high-performance parallel LEDAPS processing method based on MPI. The method not only improves calculation speed and saves computing time, but also considers the load balance between flexibly extended computing nodes. Results show that the highest speedup ratio of the parallelized LEDAPS reached 7.37 when the number of MPI processes was 8. It effectively improves the ability of LEDAPS to handle massive remote sensing data and reduces the forest carbon stock calculation cycle based on remote sensing images.
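
    A hedged mpi4py sketch of the parallelization pattern: the root process splits a scene into row blocks, every process works on its block, and the root gathers the results. The actual LEDAPS radiometric computation is replaced by a stand-in scale factor.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        scene = np.random.rand(4096, 4096)            # stand-in for a Landsat band
        chunks = np.array_split(scene, size, axis=0)  # one row block per process
    else:
        chunks = None

    tile = comm.scatter(chunks, root=0)
    corrected = tile * 0.0001                         # stand-in for per-pixel LEDAPS work
    blocks = comm.gather(corrected, root=0)

    if rank == 0:
        surface_reflectance = np.vstack(blocks)       # reassembled product
    ```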

  18. EP BASED PSO METHOD FOR SOLVING PROFIT BASED MULTI AREA UNIT COMMITMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    K. VENKATESAN

    2015-04-01

    Full Text Available This paper presents a new approach to solve the profit-based multi-area unit commitment problem (PBMAUCP) using an evolutionary programming-based particle swarm optimization (EPPSO) method. The objective of this paper is to maximize the profit of generation companies (GENCOs) while considering the system's social benefit. The proposed method helps GENCOs to decide how much power and reserve should be sold in markets, and how to schedule generators in order to receive the maximum profit. Joint operation of generation resources can result in significant operational cost savings. Power transfer between the areas through the tie lines depends upon the operating cost of generation at each hour and the tie-line transfer limits. The tie-line transfer limits were considered as a set of constraints during the optimization process to ensure system security and reliability. The overall algorithm can be implemented on an IBM PC, which can process a fairly large system in a reasonable period of time. A case study of four areas with different load patterns, each containing 7 units (NTPS), and 26 units connected via tie lines has been taken for analysis. Numerical results compare the profit of the EPPSO method with that of conventional dynamic programming (DP), evolutionary programming (EP), and particle swarm optimization (PSO). Experimental results show that this evolutionary programming-based particle swarm optimization method has the potential to solve the profit-based multi-area unit commitment problem with less computation time.
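
    A generic, hedged sketch of the EPPSO idea: a standard PSO loop in which the worst particles additionally receive an EP-style Gaussian mutation each iteration. The profit function and the unit-commitment constraints (tie-line limits, reserve) are hypothetical placeholders.

    ```python
    import numpy as np

    def ep_pso_maximize(profit, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        X = rng.uniform(lo, hi, (n, len(lo)))
        V = np.zeros_like(X)
        Pbest, pf = X.copy(), np.array([profit(x) for x in X])
        g = Pbest[pf.argmax()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = w * V + c1 * r1 * (Pbest - X) + c2 * r2 * (g - X)
            X = np.clip(X + V, lo, hi)
            worst = np.argsort(pf)[: n // 5]           # EP-style Gaussian mutation
            X[worst] = np.clip(X[worst] + rng.normal(0, 0.1 * (hi - lo),
                                                     X[worst].shape), lo, hi)
            f = np.array([profit(x) for x in X])
            better = f > pf
            Pbest[better], pf[better] = X[better], f[better]
            g = Pbest[pf.argmax()].copy()
        return g, pf.max()
    ```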

  19. A Modified Beam Propagation Method Based on the Galerkin Method with Hermite-Gauss Basis Functions

    Institute of Scientific and Technical Information of China (English)

    Xiao Jinbiao; Liu Xu; Cai Chun; Fan Hehong; Sun Xiaohan

    2006-01-01

    A beam propagation method based on the Galerkin method with Hermite-Gauss basis functions for studying optical field propagation in weakly guiding dielectric structures is described. The selected basis functions naturally satisfy the required boundary conditions at infinity, so boundary truncation is avoided. The paraxial propagation equation is converted into a set of first-order ordinary differential equations, which are solved by means of standard numerical library routines. Besides, the calculation is efficient due to the small resulting matrix. The evolution of the injected field and its normalized power along the propagation distance in an asymmetric slab waveguide and a directional coupler are presented, and the solutions are in good agreement with those obtained by finite-difference BPM, which validates the present approach.
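
    A minimal sketch of the basis construction and Galerkin projection, assuming physicists' Hermite polynomials as provided by NumPy; integrating the resulting first-order ODE system along the propagation direction is omitted.

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermval
    from math import factorial, pi, sqrt

    def hermite_gauss(n, x):
        # Normalised Hermite-Gauss function: decays at infinity, so no
        # artificial boundary truncation is needed.
        coef = np.zeros(n + 1); coef[n] = 1.0
        norm = sqrt(2.0 ** n * factorial(n) * sqrt(pi))
        return hermval(x, coef) * np.exp(-x ** 2 / 2) / norm

    def galerkin_coefficients(field, x, n_modes=20):
        # Project the injected field onto the basis; the paraxial equation then
        # becomes a small first-order ODE system for these coefficients.
        dx = x[1] - x[0]
        return np.array([(field * hermite_gauss(n, x)).sum() * dx
                         for n in range(n_modes)])

    x = np.linspace(-10, 10, 2001)
    field = np.exp(-(x - 1.0) ** 2)      # an off-axis Gaussian input beam
    c = galerkin_coefficients(field, x)
    power = np.sum(np.abs(c) ** 2)       # normalised power captured by the basis
    ```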

  20. Simulating Observer in Supervisory Control- A Domain-based Method

    Directory of Open Access Journals (Sweden)

    Seyed Morteza Babamir

    2012-06-01

    Full Text Available An observer in supervisory control observes the responses of a discrete system to events of its environment and reports an unsafe or critical situation if the response is undesired. An undesired response indicates that the system does not adhere to the users' requirements. Therefore, the events and conditions of the system environment and the users' requirements are the basic elements by which the observer determines the correctness of the system response. Notably, the events, conditions, and requirements should be defined based on data of the problem domain, because discrete data are the primary ingredients of the environment in discrete systems and are used by system users as a gauge to express their requirements, playing a vital role in safety-critical systems such as medical and avionic ones. A large number of methods have already been proposed to model and simulate supervisory control of discrete systems; however, a systematic method relying on data of the problem domain is missing. Having extracted events, conditions, and users' requirements from data of the problem domain, a Petri-net automaton is constructed for identifying violations of the users' requirements. The net constitutes the core of the observer and is used to identify undesired responses of the system. In the third step, run-time simulation of the observer is suggested using the multithreading mechanism and the Task Parallel Library (TPL) technology of Microsoft. Finally, a case study of a discrete concurrent system is proposed, the method is applied, and simulation results are analyzed based on the system implementation on a multi-core computer.

  1. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    Directory of Open Access Journals (Sweden)

    Henrik Karstoft

    2012-03-01

    Full Text Available Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified with this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system.

  2. TOWARDS A SYSTEM DYNAMICS MODELING METHOD BASED ON DEMATEL

    Directory of Open Access Journals (Sweden)

    Fadwa Chaker

    2015-05-01

    Full Text Available If System Dynamics (SD) models are constructed based solely on decision makers' mental models and understanding of the context under study, then the resulting systems must necessarily bear some degree of deficiency, due to the subjective, limited, and internally inconsistent mental models which led to their conception. As such, a systematic method for constructing SD models could be essentially helpful in overcoming the biases dictated by the human mind's limited understanding and conceptualization of complex systems. This paper proposes a novel combined method to support SD model construction. The classical Decision Making Trial and Evaluation Laboratory (DEMATEL) technique is used to define causal relationships among variables of a system and to construct the corresponding Impact Relation Maps (IRMs). The novelty of this paper stems from the use of the resulting total influence matrix to derive the system dynamics Causal Loop Diagram (CLD) and then define variable weights in the stock-flow chart equations. This new method overcomes the subjectivity bias of SD modeling while projecting DEMATEL into a more dynamic simulation environment, which could significantly improve the strategic choices made by analysts and policy makers.
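
    A minimal sketch of the DEMATEL step the method builds on: normalise the direct-relation matrix and form the total influence matrix T = N(I - N)^{-1}, whose row and column sums feed the influence/dependence analysis. The example matrix is made up.

    ```python
    import numpy as np

    def dematel_total_influence(D):
        D = np.asarray(D, dtype=float)
        # Normalise by the largest row/column sum of the direct-relation matrix.
        N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())
        return N @ np.linalg.inv(np.eye(len(D)) - N)

    D = np.array([[0, 3, 2],      # made-up direct influences among 3 variables
                  [1, 0, 4],
                  [2, 1, 0]])
    T = dematel_total_influence(D)
    influence, dependence = T.sum(axis=1), T.sum(axis=0)  # for the IRM / CLD step
    ```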

  3. Selection of Construction Methods: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Ximena Ferrada

    2013-01-01

    Full Text Available The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the methods' selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction methods' selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits, as provided by the system, favor a better performance of construction projects.

  4. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    Science.gov (United States)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed based on the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation in offline programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it is possible to reasonably assign the amount of robot motion to each axis during the process, so as to achieve a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  5. A topographic parameter inversion method based on laser altimetry

    Institute of Scientific and Technical Information of China (English)

    HUANG ChunMing; ZHANG ShaoDong; CHEN Xi

    2012-01-01

    A topographic parameter inversion method based on laser altimetry is developed in this paper, which can be used to deduce the surface vertical profile and retrieve the topographic parameters within the laser footprints by analyzing and simulating return waveforms. This method comprises three steps. The first step is to build numerical models for the whole measuring procedure of laser altimetry, construct digital elevation models for surfaces with different topographic parameters, and calculate return waveforms. The second step is to analyze the simulated return waveforms to obtain their characteristic parameters, summarize the effects of topographic parameter variations on the characteristic parameters of the simulated return waveforms, and, at the same time, analyze the observed return waveforms of laser altimeters to acquire their characteristic parameters. The last step is to match the characteristic parameters of the simulated and observed return waveforms and deduce the topographic parameters within the laser footprint. This method can be used to retrieve the topographic parameters within the laser footprint from the observed return waveforms of spaceborne laser altimeters and to obtain knowledge about the surface altitude distribution within the laser footprint, rather than only the height of the surface first encountered by the laser beam, which extends the function of laser altimeters and makes them more like radars.

  6. Molecular Phylogenetic: Organism Taxonomy Method Based on Evolution History

    Directory of Open Access Journals (Sweden)

    N.L.P Indi Dharmayanti

    2011-03-01

    Full Text Available Phylogenetics is described as the taxonomic classification of an organism based on its evolutionary history, namely its phylogeny, and is a part of systematics, whose objective is to determine the phylogeny of an organism according to its characteristics. Phylogenetic analysis of amino acid and protein sequences has become an important area in sequence analysis. Phylogenetic analysis can be used to follow the rapid change of a species such as a virus. The phylogenetic evolutionary tree is a two-dimensional graphic of species that shows relationships among organisms, or particularly among their gene sequences. The separated sequences are referred to as taxa (singular: taxon), defined as phylogenetically distinct units on the tree. The tree consists of outer branches, or leaves, representing taxa, and of nodes and branches representing relationships among taxa. When the nucleotide sequences of two different organisms are similar, they are inferred to be descended from a common ancestor. There are three methods used in phylogenetics, namely (1) maximum parsimony, (2) distance, and (3) maximum likelihood. These methods are generally applied to construct the evolutionary tree, or the best tree, for determining sequence variation in a group. Each method is usually used for a different analysis and data.

  7. A method for MREIT-based source imaging: simulation studies

    Science.gov (United States)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  8. Research on Canal System Operation Based on Controlled Volume Method

    Directory of Open Access Journals (Sweden)

    Zhiliang Ding

    2009-10-01

    Full Text Available An operating simulation model based on a storage volume control method for a multireach canal system in series was established. To address the deficiency of the existing controlled-volume algorithm, an improved algorithm was proposed, namely the controlled-volume algorithm over all canal pools; the simulation results indicate that the storage volume and water level of each canal pool can be accurately controlled after the improved algorithm is adopted. However, for some typical discharge-demand-change operating conditions of the canal, if the controlled-volume algorithm over all canal pools is still adopted, it will cause some unnecessary regulation and consequently increase the number of disturbed canal reaches. Therefore, the idea of a controlled-volume operation method over continuous canal pools was proposed, and its algorithm was designed. Simulation of a practical project indicates that the new controlled-volume algorithm can obviously reduce the number of regulated check gates and disturbed canal pools for some typical discharge-demand-change operating conditions, thus improving the control efficiency of the canal system. The controlled-volume method of operation is especially suitable for large-scale water delivery canal systems with complex operation requirements.

  9. Emotion Recognition of Speech Signals Based on Filter Methods

    Directory of Open Access Journals (Sweden)

    Narjes Yazdanian

    2016-10-01

    Full Text Available Speech is the basic means of communication among human beings. With the increase of interaction between humans and machines, the need for automatic dialogue without a human factor has been considered. The aim of this study was to determine a set of affective features of the speech signal that relate to emotions. In this study a system was designed that includes three main sections: feature extraction, feature selection and classification. After extraction of useful features such as mel frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPC), perceptual linear prediction coefficients (PLP), formant frequency, zero crossing rate, cepstral coefficients and pitch frequency, as well as mean, jitter, shimmer, energy, minimum, maximum, amplitude and standard deviation, filter methods such as the Pearson correlation coefficient, t-test, relief and information gain were used to rank and select effective features for emotion recognition. The results are then given to the classification system as a subset of the input. In this classification stage, a multiclass support vector machine is used to classify seven types of emotion. According to the results, the relief method, together with the multiclass support vector machine, achieves the highest classification accuracy, with an emotion recognition rate of 93.94%.
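
    A hedged scikit-learn sketch of the filter-then-classify pipeline: mutual information (the information-gain filter) ranks the extracted features, and a multiclass RBF-kernel SVM classifies on the selected subset; the relief, t-test and correlation filters would slot in the same way.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def rank_and_classify(X, y, top_k=30):
        scores = mutual_info_classif(X, y, random_state=0)  # filter-style ranking
        keep = np.argsort(scores)[-top_k:]                  # top-ranked features
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        clf.fit(X[:, keep], y)                              # multiclass SVM
        return clf, keep
    ```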

  10. Histogram-Based Calibration Method for Pipeline ADCs

    Science.gov (United States)

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the histogram results provide information about errors in the prior stage. The composed histograms reduce the required samples, and thus the calibration time, and can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively.
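
    A minimal sketch of the underlying code-density principle, assuming a slow linear ramp input so that every code should be hit equally often; the paper's per-stage histograms refine this basic computation.

    ```python
    import numpy as np

    def dnl_inl_from_histogram(codes, n_bits=14):
        h = np.bincount(codes, minlength=2 ** n_bits).astype(float)
        h = h[1:-1]                    # drop the clipped end codes
        dnl = h / h.mean() - 1.0       # deviation from the ideal bin width, in LSB
        inl = np.cumsum(dnl)           # running sum of DNL gives INL
        return dnl, inl
    ```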

  11. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    Full Text Available The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM) based on the "Skin-Frame" concept, and to explore the influence of fabrication defects on SLM parts of different sizes. Some standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. These samples were then tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples was also designed according to the Chinese GB/T 228-2010 standard. These samples, made of iron-nickel alloy (IN718), were also processed by SLM, and then tested in the universal material testing machine INSTRON 1346 to obtain their tensile strength. Furthermore, a lightweight redesign method was researched. Some common parts such as a stopper and a connecting plate were then redesigned using this method. These redesigned parts were fabricated, and some application tests have already been performed. The compression test results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts fractured in the universal material testing machine at about 1069.6 MPa. The redesigned parts worked well in application tests, with both the weight and the fabrication time of these parts reduced by more than 20%.

  12. An Automata Based Intrusion Detection Method for Internet of Things

    Directory of Open Access Journals (Sweden)

    Yulong Fu

    2017-01-01

    Full Text Available The Internet of Things (IoT) transforms network communication to a Machine-to-Machine (M2M) basis and provides open access and new services to citizens and companies. It extends the border of the Internet and will be developed as one part of the future 5G networks. However, as the resources of IoT front-end devices are constrained, many security mechanisms are hard to implement to protect IoT networks. An intrusion detection system (IDS) is an efficient technique that can be used to detect attackers when cryptography is broken, and it can be used to enforce the security of IoT networks. In this article, we analyze the intrusion detection requirements of IoT networks and then propose a uniform intrusion detection method for vast heterogeneous IoT networks based on an automata model. The proposed method can automatically detect and report possible IoT attacks of three types: jam-attack, false-attack, and reply-attack. We also design an experiment to verify the proposed IDS method and examine an attack on the RADIUS application.
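
    A toy sketch of the automaton idea: legal protocol steps form a transition table, and any event with no transition from the current state is reported. The states, events and alert handling here are made up and far simpler than the paper's model.

    ```python
    # Hypothetical three-state handshake; only these transitions are legal.
    TRANSITIONS = {
        ("idle", "request"): "waiting",
        ("waiting", "challenge"): "challenged",
        ("challenged", "response"): "idle",
    }

    def run_automaton(events):
        state, alerts = "idle", []
        for i, ev in enumerate(events):
            nxt = TRANSITIONS.get((state, ev))
            if nxt is None:
                alerts.append((i, state, ev))  # unexpected event -> possible attack
            else:
                state = nxt
        return alerts

    # A repeated "response" (e.g. a replayed message) is flagged:
    print(run_automaton(["request", "challenge", "response", "response"]))
    ```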

  13. [A Standing Balance Evaluation Method Based on Largest Lyapunov Exponent].

    Science.gov (United States)

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang; Zhao, Qing

    2015-12-01

    In order to evaluate the ability of human standing balance scientifically, in this study we propose a new evaluation method based on chaos nonlinear analysis theory. In this method, a sinusoidal acceleration stimulus in the forward/backward direction, supplied by a motion platform, is applied under the subjects' feet. In addition, three acceleration sensors, fixed to the shoulder, hip and knee of each subject, capture the dynamic balance adjustment data. Through reconstruction of the system phase space, we calculate the largest Lyapunov exponent (LLE) of the dynamic data of the subjects' different segments, and then use the sum of the squares of the differences between the LLEs (SSDLLE) as the balance capability evaluation index. Finally, 20 subjects' indexes were calculated and compared with the evaluation results of existing methods. The results showed that the SSDLLE was more in line with the subjects' performance during the experiment, and it could measure the body's balance ability to some extent. Moreover, the results also illustrated that the balance level is determined by the coordination ability of the various joints, and there may be more than one balance control strategy in the process of maintaining balance.
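
    A minimal sketch of the SSDLLE index, assuming the largest Lyapunov exponent of each segment's acceleration series has already been estimated elsewhere (e.g. by Rosenstein's method after phase-space reconstruction).

    ```python
    from itertools import combinations

    def ssdlle(lles):
        # Sum of squared pairwise differences between the segments' LLEs.
        return sum((a - b) ** 2 for a, b in combinations(lles, 2))

    # e.g. LLEs estimated for the shoulder, hip and knee accelerations:
    print(ssdlle([0.42, 0.35, 0.51]))
    ```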

  14. A Novel RTL Behavioral Description Based ATPG Method

    Institute of Scientific and Technical Information of China (English)

    YIN ZhiGang(尹志刚); MIN YingHua(闵应骅); LI XiaoWei(李晓维); LI HuaWei(李华伟)

    2003-01-01

    The paper proposes a novel ATPG (Automatic Test Pattern Generation) method based on RTL (Register Transfer Level) behavioral descriptions in HDL (Hardware Description Language). The method is simulation-based. Firstly, it abstracts RTL behavioral descriptions to Process Controlling Trees (PCT) and Data Dependency Graphs (DDG), which are used for behavioral simulation and data tracing. Transfer faults are extracted from DDG edges, which compose a fault set needed for test generation. Then, simulation begins without specifying inputs in advance, and a request-echo strategy is used to fix some uncertain inputs if necessary. Finally, when the simulation ends, the partially fixed input sequence is the generated test sequence. The proposed request-echo strategy greatly reduces unnecessary backtracking, and always tries to cover uncovered transfer faults. Therefore, the proposed method is very efficient, and generates tests with good quality. Experimental results demonstrate that the proposed method is better than ARTIST in three aspects: (1) the CPU time is shorter by three orders of magnitude; (2) the test length is shorter by 52%; and (3) the fault coverage is higher by 0.89%.

  15. CamShift Tracking Method Based on Target Decomposition

    Directory of Open Access Journals (Sweden)

    Chunbo Xiu

    2015-01-01

    Full Text Available In order to avoid inaccurate location or tracking failure caused by occlusion or pose variation, a novel tracking method is proposed based on the CamShift algorithm by decomposing the target into multiple subtargets that are located separately. Distance correlation matrices are constructed from the subtarget sets in the template image and the scene image to evaluate the correctness of the location results. Erroneous subtarget locations can be corrected by solving an optimization function constructed according to the relative positions among the subtargets. The directions and sizes of the correctly located subtargets are updated with the CamShift algorithm to reduce background disturbance during the tracking process. Simulation results show that the method can locate and track the target and adapts well to scaling, translation, rotation, and occlusion. Furthermore, the computational cost of the method increases only slightly, and its average tracking time per frame is less than 25 ms, which meets the real-time requirement of TV tracking systems.
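
    A hedged OpenCV sketch of one CamShift step for a single subtarget; the full method runs this per subtarget and then corrects outlier locations from the inter-subtarget distance matrices, which is omitted here.

    ```python
    import cv2

    def track_subtarget(frame_hsv, hue_hist, window):
        # Back-project the subtarget's hue histogram, then let CamShift
        # relocate and resize its search window.
        prob = cv2.calcBackProject([frame_hsv], [0], hue_hist, [0, 180], 1)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        rot_rect, window = cv2.CamShift(prob, window, criteria)
        return rot_rect, window   # rotated rect gives direction and size updates
    ```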

  16. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering.

    Science.gov (United States)

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research.

  17. Study of Biometric Identification Method Based on Naked Footprint

    Directory of Open Access Journals (Sweden)

    Raji Rafiu King

    2013-10-01

    Full Text Available The scale of deployment of biometric identity-verification systems has recently seen an enormous increase, owing to the need for more secure and reliable ways of identifying people. Footprint identification, which can be defined as the measurement of footprint features for recognizing the identity of a user, has surfaced recently. This study is based on a biometric personal identification method using static footprint features, viz. friction ridge/texture and foot shape/silhouette. To begin with, naked footprints of users are captured; the images then undergo preprocessing, followed by the extraction of two features: shape, using the Gradient Vector Flow (GVF) snake model, and minutiae, respectively. Matching is then effected based on these two features, followed by a fusion of the two results for either a reject or accept decision. Our shape matching is based on cosine similarity, while the texture matching is based on minutiae score matching. The results from our research establish that the naked footprint is a credible biometric feature, as two barefoot impressions of an individual match perfectly, while those of two different persons show a great deal of dissimilarity.

  18. Distance-Based Phylogenetic Methods Around a Polytomy.

    Science.gov (United States)

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
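
    Since UPGMA is average-linkage hierarchical clustering on the dissimilarity map, SciPy can illustrate the sensitivity near a polytomy: with four taxa at (almost) equal distances, a tiny perturbation decides which binary tree is returned. The example matrix is made up.

    ```python
    import numpy as np
    from scipy.spatial.distance import squareform
    from scipy.cluster.hierarchy import linkage

    # Four taxa almost exactly at a polytomy: all pairwise distances near 1.
    D = np.array([[0.0, 1.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0, 1.0],
                  [1.0, 1.0, 0.0, 1.001],   # tiny perturbation off the polytomy
                  [1.0, 1.0, 1.001, 0.0]])
    tree = linkage(squareform(D), method="average")  # UPGMA
    print(tree)   # merge order of the reconstructed binary tree
    ```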

  19. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-08-09

    This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets.

  20. An unbalanced spectra classification method based on entropy

    Science.gov (United States)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is quite important in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is quite important to USCM. It can be shown by mathematical computation that the dual form of USCM is equivalent to the minimum enclosing ball (MEB) problem, so the core vector machine (CVM) is introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining on the small- and medium-scale datasets and the large-scale datasets, respectively.

  1. Updating National Topographic Data Base Using Change Detection Methods

    Science.gov (United States)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time, and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCAs), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology should be developed that maps major areas of interest while preserving associated decoding information. Until recently, automated processes did not yield satisfactory results; a typical process included comparing images from different periods. The success rates in identifying objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with a NIR band and Very High Resolution satellites, have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, multispectral (MS) classification, MS segmentation, object analysis, and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step towards updating the NTDB in the Survey of Israel.

  2. Image Prediction Method with Nonlinear Control Lines Derived from Kriging Method with Extracted Feature Points Based on Morphing

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-01-01

    Full Text Available A method for image prediction is proposed that uses nonlinear control lines derived from feature points extracted from previously acquired imagery, based on the Kriging method and the morphing method. Comparisons between the proposed method and the conventional linear interpolation and widely used cubic spline interpolation methods show that the proposed method is superior to the conventional methods in terms of prediction accuracy.

  3. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method.

    Science.gov (United States)

    Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko

    2011-09-30

    Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination comparing a filmless system to a film-based system using the ABC method. We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for a filmless system and 23.6% for a film-based system. The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater value services directly to patients.

  4. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  5. Physics-Based Imaging Methods for Terahertz Nondestructive Evaluation Applications

    Science.gov (United States)

    Kniffin, Gabriel Paul

    Lying between the microwave and far infrared (IR) regions, the "terahertz gap" is a relatively unexplored frequency band in the electromagnetic spectrum that exhibits a unique combination of properties from its neighbors. As in IR, many materials have characteristic absorption spectra in the terahertz (THz) band, facilitating the spectroscopic "fingerprinting" of compounds such as drugs and explosives. In addition, non-polar dielectric materials such as clothing, paper, and plastic are transparent to THz, just as they are to microwaves and millimeter waves. These factors, combined with sub-millimeter wavelengths and non-ionizing energy levels, make sensing in the THz band uniquely suited for many NDE applications. In a typical nondestructive test, the objective is to detect a feature of interest within the object and provide an accurate estimate of some geometrical property of the feature. Notable examples include the thickness of a pharmaceutical tablet coating layer or the 3D location, size, and shape of a flaw or defect in an integrated circuit. While the material properties of the object under test are often tightly controlled and are generally known a priori, many objects of interest exhibit irregular surface topographies, such as varying degrees of curvature over the extent of their surfaces. Common THz pulsed imaging (TPI) methods originally developed for objects with planar surfaces have been adapted for objects with curved surfaces through mechanical scanning procedures in which measurements are taken at normal incidence over the extent of the surface. While effective, these methods often require expensive robotic arm assemblies, the cost and complexity of which would likely be prohibitive should a large volume of tests need to be carried out on a production line. This work presents a robust and efficient physics-based image processing approach based on the mature field of parabolic equation methods, common to undersea acoustics, seismology

  6. Correlation methods of base-level cycle based on wavelet neural network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The authors discuss a wavelet neural network (WNN) method for the correlation of base-level cycles. A new vectorization method for well-log data is proposed. Through training on a known data set, the WNN memorizes the cycle-pattern characteristics of the well-log curves. The trained WNN then identifies cycle patterns in the vectorized log data, completing the correlation among well cycles. Applications indicate that the method is highly efficient and reliable for base-level cycle correlation.

  7. Real-time fault detection method based on belief rule base for aircraft navigation system

    Institute of Scientific and Technical Information of China (English)

    Zhao Xin; Wang Shicheng; Zhang Jinsheng; Fan Zhiliang; Min Haibo

    2013-01-01

    Real-time and accurate fault detection is essential to enhance the reliability and safety of an aircraft navigation system. Existing detection methods based on analytical models fall short at detecting gradual and sudden faults simultaneously. For this reason, we propose an online detection solution based on a non-analytical model. In this article, the navigation system fault detection model is established on a belief rule base (BRB), where the system measurement residual and its rate of change are the inputs of the BRB model and the fault detection function is the output. To overcome the drawbacks of current parameter optimization algorithms for BRB and to achieve online updating, a recursive parameter estimation algorithm based on the expectation-maximization (EM) algorithm is presented for the online BRB detection model. Furthermore, the proposed method is verified by a navigation experiment. Experimental results show that the proposed method effectively realizes online parameter estimation in the navigation system fault detection model. The output of the detection model tracks the fault state very well, and faults are diagnosed accurately and in real time. In addition, the detection ability, especially the probability of false detection, is superior to that of the offline optimization method, which greatly improves system reliability.

  8. Evaluation of orthodontic treatment need by patient-based methods compared with normative method

    Directory of Open Access Journals (Sweden)

    Imaneh Asgari

    2013-01-01

    Full Text Available Background: A comprehensive system for assessing orthodontic need requires the integration of normative clinical measures with patient-based indicators. This study sought to discover whether an oral health-related quality of life measure or the Aesthetic Component of the Index of Orthodontic Treatment Need (AC-IOTN) could be used as a predictor of orthodontic treatment need. Factors affecting the judgment of patient and dentist about this need are discussed. Materials and Methods: Oral examinations of 597 Iranian students aged 13-18 years were performed to determine the grade of the Dental Health Component (DHC). The Child Oral Health Impact Profile (COHIP) and AC-IOTN were recorded. The diagnostic values of the subjective tests were assessed. Multiple logistic regression was applied to investigate the role of variables in the persons' perceptions. Results: Half of the 570 eligible students did not need orthodontic treatment on either professional or self-assessment; 60% of patients with definite need reported a distinct impact on their quality of life. The specificity of the AC in detecting healthy persons was excellent (0.99) but its sensitivity was low (0.08). The COHIP score gave better sensitivity but its specificity was 50%. Caries experience, quality of life, father's education, and brushing habits were the factors related to agreement between the judgments of persons and dentists (P < 0.02). Conclusion: Given the discrepancies between the two assessment methods, the present instruments did not perform adequately as predictors. Patient-based methods cannot substitute for professional assessment, but identifying the persons with higher impacts would benefit the prioritization process.

  9. New Blocking Artifacts Reduction Method Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    SHI Min; YI Qing-ming

    2007-01-01

    It is well known that a block discrete cosine transform compressed image exhibits visually annoying blocking artifacts at low bit rates. A new post-processing deblocking algorithm in the wavelet domain is proposed. The algorithm exploits the features of blocking artifacts as they appear in the wavelet domain: after the wavelet transform, the energy of blocking artifacts is concentrated into certain lines, forming annoying visual effects. The aim of reducing blocking artifacts is to capture the excessive energy on the block boundaries effectively and reduce it below the visible range. Adaptive operators for different subbands are computed from the wavelet coefficients, making the operators adaptive to different images and to the characteristics of the blocking artifacts. Experimental results show that the proposed method significantly improves visual quality and also increases the peak signal-to-noise ratio (PSNR) of the output image.
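
    The snippet below is a minimal sketch of the general idea, assuming PyWavelets; a single global soft threshold stands in for the paper's per-subband adaptive operators, so it illustrates wavelet-domain deblocking only under those simplifying assumptions.

      import numpy as np
      import pywt

      def deblock(img: np.ndarray, threshold: float = 8.0, wavelet: str = "haar") -> np.ndarray:
          # One-level 2D wavelet decomposition: approximation + 3 detail subbands.
          cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
          # Blocking energy concentrates into lines in the detail subbands;
          # soft-thresholding attenuates it toward the invisible range.
          cH, cV, cD = (pywt.threshold(c, threshold, mode="soft") for c in (cH, cV, cD))
          return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

      # Toy "blocky" image: 8x8 constant blocks mimicking coarse DCT quantization.
      blocky = np.kron(np.random.randint(0, 255, (8, 8)), np.ones((8, 8)))
      smooth = deblock(blocky)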

  10. Inversion method based on stochastic optimization for particle sizing.

    Science.gov (United States)

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  11. NEW METHOD FOR SHAPE RECOGNITION BASED ON DYNAMIC PROGRAMMING

    Directory of Open Access Journals (Sweden)

    NOREDINNE GHERABI

    2011-02-01

    Full Text Available In this paper we present a new method for shape recognition based on dynamic programming. First, each shape contour is represented by a set of points. After alignment and matching between two shapes, the outline of the shape is divided into parts according to N angular and M radial sectors, each sector containing a portion of the contour. This portion is divided at the inflection points into convex and concave sections, and information about the sections is extracted to provide semantic content for the outline shape; this information is then coded and transformed into a string of symbols. Finally, we find the best alignment of the two complete strings and compute the optimal similarity cost, as sketched below. The algorithm has been tested on a large set of shape databases and real images (MPEG-7, natural silhouette database).
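
    As a sketch of the final string-alignment step, the snippet below computes an optimal global alignment cost between two symbol strings by dynamic programming; the symbol alphabet (e.g. 'c' for convex, 'v' for concave sections) and the unit costs are illustrative assumptions, not values from the paper.

      def alignment_cost(s1: str, s2: str, gap: float = 1.0, mismatch: float = 1.0) -> float:
          """Optimal global alignment cost between two symbol strings (Needleman-Wunsch style)."""
          n, m = len(s1), len(s2)
          # dp[i][j] = minimal cost of aligning s1[:i] with s2[:j]
          dp = [[0.0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              dp[i][0] = i * gap
          for j in range(1, m + 1):
              dp[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  sub = 0.0 if s1[i - 1] == s2[j - 1] else mismatch
                  dp[i][j] = min(dp[i - 1][j - 1] + sub,   # match / substitute
                                 dp[i - 1][j] + gap,       # gap in s2
                                 dp[i][j - 1] + gap)       # gap in s1
          return dp[n][m]

      print(alignment_cost("ccvcv", "cvvcv"))  # small cost -> similar outlines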

  12. Note: A manifold ranking based saliency detection method for camera

    Science.gov (United States)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research on salient object regions in natural scenes has attracted much attention in computer vision and has been widely used in applications such as object detection and segmentation. However, accurately focusing on the salient region while photographing real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system that works better through the combined use of a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped into connected areas with compact spatial distributions. Experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.

  13. A Robust Tolerance Design Method Based on Fuzzy Quality Loss

    Institute of Scientific and Technical Information of China (English)

    CAO Yan-long; MAO Jian; YANG Jiang-xin; WU Zhao-tong; WU Li-qun

    2006-01-01

    The traditional tolerance design model ignores the impact of noise factors, so a design may be infeasible due to variations in the design constraints. Based on an analysis of the fuzzy factors in tolerance design and the limitations of the traditional Taguchi squared quality loss function, a fuzzy quality loss function model built on fuzzy theory is introduced. The concepts of fuzzy quality loss and fuzzy quality loss cost are proposed within the model. The characteristics of the new model and its advantages over the traditional Taguchi quality loss function are analyzed, and a robust tolerance design model using the fuzzy quality loss function is proposed. An example illustrates the proposed model. Results and comparisons show that the method is suitable and reliable and makes the conclusions more objective and reasonable.
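
    For context, the snippet below contrasts the traditional Taguchi squared quality loss with one possible fuzzy-weighted variant; the triangular membership function and the constant k are illustrative assumptions, not the authors' model.

      def taguchi_loss(y: float, target: float, k: float = 1.0) -> float:
          """Classical Taguchi quality loss L(y) = k * (y - target)^2."""
          return k * (y - target) ** 2

      def triangular_membership(y: float, target: float, half_width: float) -> float:
          """Degree (0..1) to which y is 'acceptably close' to the target."""
          return max(0.0, 1.0 - abs(y - target) / half_width)

      def fuzzy_loss(y: float, target: float, half_width: float, k: float = 1.0) -> float:
          # Loss is discounted by how strongly y still 'belongs' to the acceptable set.
          return taguchi_loss(y, target, k) * (1.0 - triangular_membership(y, target, half_width))

      print(taguchi_loss(10.2, 10.0), fuzzy_loss(10.2, 10.0, 0.5))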

  14. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Full Text Available Abstract Background In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Comparison with expert manual placement of the leading edge shows complete equivalence of automated and manual leading-edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

  15. Proposed Arabic Text Steganography Method Based on New Coding Technique

    Directory of Open Access Journals (Sweden)

    Assist. prof. Dr. Suhad M. Kadhem

    2016-09-01

    Full Text Available Steganography is one of the important fields of information security; it depends on hiding secret information in a cover medium (video, image, audio, or text) such that an unauthorized person fails to realize its existence. Run-length encoding (RLE) is a lossless data compression technique used for files containing much redundant data. Sometimes the RLE output is expanded rather than compressed, and this is the main problem of RLE. In this paper we use a new coding method whose output contains sequences of ones with few zeros, so the modified RLE proposed here is suitable for compression; finally, we employ the modified RLE output for steganography based on Unicode and non-printing characters to hide the secret information in an Arabic text.
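
    A short illustration of classical run-length encoding and the expansion problem mentioned above; the (symbol, count) output format is a common textbook choice, not necessarily the paper's modified coding.

      def rle_encode(data: str) -> list[tuple[str, int]]:
          """Classical RLE: collapse runs of identical symbols into (symbol, count) pairs."""
          runs: list[tuple[str, int]] = []
          for ch in data:
              if runs and runs[-1][0] == ch:
                  runs[-1] = (ch, runs[-1][1] + 1)
              else:
                  runs.append((ch, 1))
          return runs

      print(rle_encode("11111110000000"))   # compresses well: [('1', 7), ('0', 7)]
      print(rle_encode("10101010"))         # expands: 8 runs of length 1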

  16. A novel classification method based on membership function

    Science.gov (United States)

    Peng, Yaxin; Shen, Chaomin; Wang, Lijia; Zhang, Guixu

    2011-03-01

    We propose a method for medical image classification using membership functions. Our aim is to classify the image into several classes based on prior knowledge. For every point, we calculate its membership function, i.e., the probability that the point belongs to each class. The point is finally labeled as the class with the highest membership value. The classification is reduced to a minimization problem of a functional whose arguments are the membership functions. There are three novelties in our paper. First, bias correction and the Rudin-Osher-Fatemi (ROF) model are applied to the input image to enhance image quality. Second, an unconstrained functional is used: we apply a variable substitution to avoid the constraints that membership functions be positive and sum to one. Third, several techniques are used to speed up the computation. Experimental results on ventricle images show the validity of this approach.

  17. Features-Based Deisotoping Method for Tandem Mass Spectra

    Directory of Open Access Journals (Sweden)

    Zheng Yuan

    2011-01-01

    Full Text Available For high-resolution tandem mass spectra, the determination of monoisotopic masses of fragment ions plays a key role in the subsequent peptide and protein identification. In this paper, we present a new algorithm for deisotoping the bottom-up spectra. Isotopic-cluster graphs are constructed to describe the relationship between all possible isotopic clusters. Based on the relationship in isotopic-cluster graphs, each possible isotopic cluster is assessed with a score function, which is built by combining nonintensity and intensity features of fragment ions. The non-intensity features are used to prevent fragment ions with low intensity from being removed. Dynamic programming is adopted to find the highest score path with the most reliable isotopic clusters. The experimental results have shown that the average Mascot scores and F-scores of identified peptides from spectra processed by our deisotoping method are greater than those by YADA and MS-Deconv software.

  18. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and the choice of descriptor is vital. The performance of moments as powerful descriptors had not previously been discussed in terms of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we investigate the relations between moments and logos under different transforms, identifying which moments fit logos with which transforms. The open datasets are those of the University of Maryland. The moment-based comparisons cover logos with noise, rotation, scaling, and combined rotation and scaling.
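
    As a sketch of one classical moment family that such comparisons typically include, the snippet below computes log-scaled Hu moment invariants with OpenCV and checks their rotation invariance on a toy logo; the paper compares several moment types, so this is illustrative only.

      import cv2
      import numpy as np

      def hu_descriptor(binary_logo: np.ndarray) -> np.ndarray:
          """7 Hu moment invariants, log-scaled so their magnitudes are comparable."""
          hu = cv2.HuMoments(cv2.moments(binary_logo, True)).flatten()
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

      logo = np.zeros((64, 64), np.uint8)
      cv2.rectangle(logo, (10, 20), (50, 40), 255, -1)          # toy logo shape
      rotated = cv2.rotate(logo, cv2.ROTATE_90_CLOCKWISE)
      # Differences close to zero: the descriptor is rotation invariant.
      print(np.abs(hu_descriptor(logo) - hu_descriptor(rotated)))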

  19. Material measurement method based on femtosecond laser plasma shock wave

    Science.gov (United States)

    Zhong, Dong; Li, Zhongming

    2017-03-01

    The acoustic emission signal of the laser plasma shock wave, which comes into being when a femtosecond laser ablates pure Cu, Fe, and Al target materials, has been detected using a fiber Fabry-Perot (F-P) acoustic emission sensing probe. The spectral characteristics of the acoustic emission signals for the three kinds of materials have been analyzed using the Fourier transform. The results show that the frequencies of the acoustic emission signals detected from the three materials are different, while the frequencies are almost identical for the same material under different ablation energies and detection ranges. Moreover, the spectral amplitudes of the three materials show a fixed pattern. The experimental results and methods suggest a potential application to on-line plasma shock wave measurement based on femtosecond laser ablation of a target, using the fiber F-P acoustic emission sensor probe.
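
    A minimal sketch of the spectral-analysis step, assuming the F-P sensor output is available as a sampled waveform; the sampling rate and the synthetic 120 kHz signal are placeholders, not measured values.

      import numpy as np

      fs = 1.0e6                                  # sampling rate, Hz (assumed)
      t = np.arange(0, 1e-3, 1 / fs)
      signal = np.sin(2 * np.pi * 120e3 * t)      # stand-in for the F-P sensor output

      spectrum = np.abs(np.fft.rfft(signal))      # Fourier transform of the emission signal
      freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
      peak = freqs[np.argmax(spectrum)]
      print(f"dominant frequency: {peak / 1e3:.1f} kHz")  # the material's spectral signature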

  20. Harbourscape Aalborg - Design Based Methods in Waterfront Development

    DEFF Research Database (Denmark)

    Kiib, Hans

    2012-01-01

    How can city planners and developers gain knowledge and develop new sustainable concepts for waterfront developments? The waterfront is far too often threatened by new privatisation, lack of public access and bad architecture. And in a time where low growth rates and crises in the building industry are leaving great parts of the harbour as urban voids, planners are in search of new tools for bridging the time gap until new projects can be a reality. This chapter presents the development of waterfront regeneration concepts that resulted from design based workshops, Harbourscape Aalborg in 2005, ... in the regeneration of the harbour area, and a combination of methods and approaches used in order to achieve quality design and ownership from citizens as well as commitment from professionals.

  1. A DIRECT SEARCH FRAME-BASED CONJUGATE GRADIENTS METHOD

    Institute of Scientific and Technical Information of China (English)

    I.D. Coope; C.J. Price

    2004-01-01

    A derivative-free frame-based conjugate gradients algorithm is presented. Convergence is shown for C1 functions, and this is verified in numerical trials. The algorithm is tested on a variety of low-dimensional problems, some of which are ill-conditioned, and is also tested on problems of high dimension. Numerical results show that the algorithm is effective on both classes of problems. The results are compared with those from a discrete quasi-Newton method, showing that the conjugate gradients algorithm is competitive. The algorithm exhibits the conjugate gradients speed-up on problems for which the Hessian at the solution has repeated or clustered eigenvalues. The algorithm is easily parallelizable.

  2. Simultaneous least squares fitter based on the Lagrange multiplier method

    CERN Document Server

    Guan, Yinghui; Zheng, Yangheng; Zhu, Yong-Sheng

    2013-01-01

    We developed a least squares fitter for extracting expected physics parameters from correlated experimental data in high energy physics. The fitter takes the correlations among the observables into account and handles nonlinearity via linearization during the $\chi^2$ minimization. The method naturally extends to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the $D^{0}-\bar{D}^{0}$ mixing parameters as a test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties, and that the approach is credible.
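
    In general form, such a fitter minimizes a correlated $\chi^2$ augmented with Lagrange-multiplier terms for the constraints (shown here in textbook form only; the specific $D^{0}-\bar{D}^{0}$ observables and constraint functions are not spelled out in this record):

      \chi^{2}(\boldsymbol{\theta},\boldsymbol{\lambda})
        = \left[\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\right]^{\mathsf T}
          V^{-1}\left[\mathbf{y}-\mathbf{f}(\boldsymbol{\theta})\right]
        + 2\,\boldsymbol{\lambda}^{\mathsf T}\,\mathbf{g}(\boldsymbol{\theta})

    where $\mathbf{y}$ are the measured observables, $V$ their covariance matrix (carrying the correlations), $\mathbf{f}(\boldsymbol{\theta})$ the expectations, and $\mathbf{g}(\boldsymbol{\theta})=0$ the constraints; $\mathbf{f}$ and $\mathbf{g}$ are linearized at each iteration of the minimization.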

  3. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

    In this article, the local anomalistic blocks in the iris, such as crypts and furrows, are initially used directly as iris features. A novel image segmentation method based on an intersecting cortical model (ICM) neural network is introduced to segment these anomalistic blocks. First, the normalized iris image is put into the ICM neural network after enhancement. Second, the iris features are segmented out and output as a binary image by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network is chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was produced and the Hamming distance between two iris codes was computed to measure the dissimilarity between them. The experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential in iris recognition.
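
    A minimal sketch of the dissimilarity measure, assuming two equal-length binary iris codes stored as numpy arrays; the code length, the number of flipped bits, and the often-quoted decision threshold near 0.32 are illustrative.

      import numpy as np

      def hamming_distance(code1: np.ndarray, code2: np.ndarray) -> float:
          """Fraction of disagreeing bits; 0 = identical codes, ~0.5 = unrelated."""
          return float(np.count_nonzero(code1 != code2)) / code1.size

      a = np.random.randint(0, 2, 2048)
      b = a.copy()
      b[:100] ^= 1                        # flip 100 bits to mimic a noisy re-capture
      print(hamming_distance(a, b))       # ~0.05, well below a typical ~0.32 threshold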

  4. Methods and applications of positron-based medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, H. [Institute of Medicine, Forschungszentrum Juelich, D-52425 Juelich (Germany)]. E-mail: h.herzog@fz-juelich.de

    2007-02-15

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV {gamma}-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with {sup 11}C, {sup 13}N, {sup 15}O, or {sup 18}F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [{sup 18}F]fluorodeoxyglucose (FDG) in oncology where it is most useful to localize primary tumours and their metastases.

  5. Yarn Properties Prediction Based on Machine Learning Method

    Institute of Scientific and Technical Information of China (English)

    YANG Jian-guo; L(U) Zhi-jun; LI Bei-zhi

    2007-01-01

    Although much work has been done to construct prediction models of yarn processing quality, the relation between spinning variables and yarn properties has not been established conclusively so far. Support vector machines (SVMs), based on statistical learning theory, are gaining application in machine learning and pattern recognition because of their high accuracy and good generalization capability. This study briefly introduces SVM regression algorithms and presents an SVM-based system architecture for predicting yarn properties. Model selection, which amounts to a search in hyper-parameter space, is performed with a grid-search method to find suitable parameters. Experimental results have been compared with those of artificial neural network (ANN) models. The investigation indicates that on small data sets and in real-life production, SVM models are capable of maintaining stable predictive accuracy and are more suitable for the noisy and dynamic spinning process.
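
    A minimal sketch of SVM regression with a grid search over hyper-parameters, assuming arrays of spinning variables X and one yarn property y; the synthetic data and the parameter grid are illustrative assumptions.

      import numpy as np
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVR

      X = np.random.rand(80, 5)            # stand-in spinning process variables
      y = X @ np.array([1.0, 0.5, -0.3, 0.2, 0.1]) + 0.05 * np.random.randn(80)

      grid = GridSearchCV(
          SVR(kernel="rbf"),
          param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]},
          cv=5,
      )
      grid.fit(X, y)                       # grid search over hyper-parameter space
      print(grid.best_params_, grid.best_score_)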

  6. Multispectral image filtering method based on image fusion

    Science.gov (United States)

    Zhang, Wei; Chen, Wei

    2015-12-01

    This paper proposes a novel filtering scheme for multispectral images by image fusion based on the nonsubsampled contourlet transform (NSCT). First, an adaptive median filter is proposed, which shows great advantages in speed and weak-edge preservation. Second, the algorithm applies a bilateral filter and the adaptive median filter to the image separately, yielding two denoised images, and performs NSCT multi-scale decomposition on both to obtain detail subbands and approximation subbands. Third, the detail subbands and approximation subbands are fused respectively. Finally, the output image is obtained by the inverse NSCT. Simulation results show that the method adapts well to textural images: it suppresses noise effectively while preserving image details, and it outperforms the standard bilateral and median filters and their improved variants across different noise ratios.

  7. Multi-core Processors based Network Intrusion Detection Method

    Directory of Open Access Journals (Sweden)

    Ziqian Wan

    2012-09-01

    Full Text Available It is becoming increasingly hard to build an intrusion detection system (IDS) because of the higher traffic throughput and the rising sophistication of attacks. Scale will be an important issue to address in the intrusion detection area. For hardware, tomorrow's performance gains will come from multi-core architectures in which a number of CPUs execute concurrently. In this work we take advantage of the full power of multi-core processors for intrusion detection. We present an intrusion detection system based on the open-source Snort IDS that exploits the computational power of the MIPS multi-core architecture to offload the costly pattern matching operations from the CPU, and thus increase the system's processing throughput. A preliminary experiment demonstrates the potential of this system; the results indicate that this method can effectively speed up intrusion detection systems.
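
    A minimal sketch of the offloading idea, with Python's multiprocessing standing in for the MIPS multi-core offload and a toy signature list standing in for Snort rules; nothing here is the paper's actual implementation.

      from multiprocessing import Pool

      SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"<script>"]   # toy rule set

      def match(payload: bytes) -> bool:
          """Costly pattern matching step, run in parallel worker processes."""
          return any(sig in payload for sig in SIGNATURES)

      if __name__ == "__main__":
          payloads = [b"GET /index.html", b"GET /../../etc/passwd"] * 1000
          with Pool() as pool:                       # one worker per core by default
              alerts = pool.map(match, payloads)     # pattern matching offloaded to all cores
          print(sum(alerts), "suspicious packets")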

  8. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various
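
    For reference, the standard single-component working equations of density gradient theory, from which such density-path approximations start (textbook form, not the authors' specific saddle-point construction):

      \sigma = \int_{-\infty}^{\infty} c \left( \frac{d\rho}{dz} \right)^{2} dz
             = \int_{\rho_{v}}^{\rho_{l}} \sqrt{2\, c\, \Delta a(\rho)}\; d\rho,
      \qquad
      \Delta a(\rho) = a_{0}(\rho) - \rho\,\mu_{\mathrm{eq}} + p_{\mathrm{eq}}

    where $c$ is the influence parameter, $a_{0}$ the Helmholtz energy density from the equation of state, and $\Delta a$ the tangent plane distance whose saddle point the proposed density path passes through.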

  9. Submarine Magnetic Field Extrapolation Based on Boundary Element Method

    Institute of Scientific and Technical Information of China (English)

    GAO Jun-ji; LIU Da-ming; YAO Qiong-hui; ZHOU Guo-hua; YAN Hui

    2007-01-01

    In order to characterize the magnetic field distribution of submarines in the air completely and exactly, and to study the magnetic stealth performance of submarines, a mathematical model of submarine magnetic field extrapolation is built based on the boundary element method (BEM). An experiment is designed to measure the three components of the magnetic field on an envelope surface surrounding a model submarine. Data at different heights above the model submarine are obtained using tri-axial magnetometers. Comparison of the extrapolated data with the measured data shows that the extrapolation model has good stability and high accuracy. Moreover, the model reflects the submarine's magnetic field distribution in the air exactly, and is valuable in practical engineering.

  10. Dealing with defaulting suppliers using behavioral based governance methods

    DEFF Research Database (Denmark)

    Prosman, Ernst Johannes; Scholten, Kirstin; Power, Damien

    2016-01-01

    Purpose: The aim of this paper is to explore factors influencing the effectiveness of buyer-initiated Behavioral Based Governance Methods (BBGMs). The ability of BBGMs to improve supplier performance is assessed considering power imbalances and the resource intensiveness of the BBGM. Agency Theory is used as an interpretive lens. Design/methodology/approach: An explorative multiple case study approach is used to collect qualitative and quantitative data from buying companies involved in 13 BBGMs. Findings: Drawing on agency theory, several factors are identified which can explain BBGM effectiveness considering power differences and the resource intensiveness of the BBGM. Our data show that even highly resource-intensive BBGMs can be implemented effectively if there are benefits for a powerful supplier. Cultural influences and uncertainty of the business environment also play a role. Originality

  11. Classification data mining method based on dynamic RBF neural networks

    Science.gov (United States)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the rapid development of the Internet, the capacity to manufacture and collect data using information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining technology has developed rapidly to meet this need. But DM (data mining) often faces data that are noisy, disordered and nonlinear. Fortunately, ANNs (artificial neural networks) are suitable for solving these problems because of their good robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper gives a detailed discussion of ANN methods applied to DM, based on an analysis of the various data mining technologies, and lays particular stress on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment the training dataset is variable, so batch learning algorithms (e.g. OLS), which generate plenty of unnecessary retraining, have lower efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA adaptively adjusts the parameters of RBF networks by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
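
    A minimal sketch in the spirit of the ILA described above: a single gradient-descent update of the RBF output weights per incoming sample, with no retraining on past data. The network size, widths, learning rate, and data stream are illustrative assumptions.

      import numpy as np

      centers = np.random.rand(10, 4)     # fixed RBF centers (10 units, 4-dim input)
      width = 0.5
      w = np.zeros(10)                    # output weights, updated online

      def phi(x: np.ndarray) -> np.ndarray:
          """Gaussian RBF activations for one input vector."""
          return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))

      def ila_step(x: np.ndarray, target: float, lr: float = 0.1) -> float:
          """One on-line update: reduce the squared error on (x, target) only."""
          global w
          h = phi(x)
          err = target - w @ h
          w += lr * err * h               # gradient descent on 0.5*err^2, no batch retraining
          return err

      for _ in range(200):                # stream of samples, e.g. IRIS-style features
          x = np.random.rand(4)
          ila_step(x, float(x.sum() > 2.0))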

  12. Estimating genetic correlations based on phenotypic data: a simulation-based method

    Indian Academy of Sciences (India)

    Elias Zintzaras

    2011-04-01

    Knowledge of genetic correlations is essential to understand the joint evolution of traits through correlated responses to selection, yet estimating them is a difficult task that is seldom accomplished with much precision, even in easy-to-breed species. Here, a simulation-based method to estimate genetic correlations and genetic covariances that relies only on phenotypic measurements is proposed. The method does not require any degree of relatedness in the sampled individuals. Extensive numerical results suggest that the proposed method may provide relatively efficient estimates regardless of sample size and of contributions from common environmental effects.

  13. Towards structural Web Services matching based on Kernel methods

    Institute of Scientific and Technical Information of China (English)

    NAN Kai; YU Jianjun; SU Hao; GUO Shengmin; ZHANG Hui; XU Ke

    2007-01-01

    This paper describes a kernel-methods-based Web Services matching mechanism for Web Services discovery and integration. The matching mechanism tries to exploit the latent semantics in the structure of Web Services. Web Services are represented in WSDL (Web Services Description Language) as tree-structured XML documents, and their matching degree is calculated by our novel algorithm designed for loose tree matching, in contrast to traditional methods. To achieve this, we introduce the concept of path subsequences to model WSDL documents in a vector space. An advanced n-spectrum kernel function is then defined, so that the similarity of two WSDL documents can be computed by applying the kernel function in this space. Using textual similarity and n-spectrum kernel values as low-level and mid-level features, we build a model to estimate the functional similarity between Web Services, whose parameters are learned by a ranking SVM. Finally, a set of experiments was designed to verify the model; the results show that several metrics for the retrieval of Web Services are improved by our approach.
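
    A minimal sketch of an n-spectrum kernel over path subsequences, assuming each WSDL document has already been flattened into a sequence of tag names; the toy documents and n = 2 are illustrative.

      from collections import Counter

      def spectrum_kernel(seq1: list, seq2: list, n: int = 2) -> int:
          """Count shared contiguous n-grams between two symbol sequences."""
          grams1 = Counter(tuple(seq1[i:i + n]) for i in range(len(seq1) - n + 1))
          grams2 = Counter(tuple(seq2[i:i + n]) for i in range(len(seq2) - n + 1))
          return sum(c * grams2[g] for g, c in grams1.items())

      doc_a = ["definitions", "portType", "operation", "input", "output"]
      doc_b = ["definitions", "portType", "operation", "input", "fault"]
      print(spectrum_kernel(doc_a, doc_b, n=2))   # higher value = more similar structure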

  14. A Molecular Selection Index Method Based on Eigenanalysis

    Science.gov (United States)

    Cerón-Rojas, J. Jesús; Castillo-González, Fernando; Sahagún-Castellanos, Jaime; Santacruz-Varela, Amalio; Benítez-Riquelme, Ignacio; Crossa, José

    2008-01-01

    The traditional molecular selection index (MSI) employed in marker-assisted selection maximizes the selection response by combining information on molecular markers linked to quantitative trait loci (QTL) and phenotypic values of the traits of the individuals of interest. This study proposes an MSI based on an eigenanalysis method (molecular eigen selection index method, MESIM), where the first eigenvector is used as a selection index criterion, and its elements determine the proportion of the trait's contribution to the selection index. This article develops the theoretical framework of MESIM. Simulation results show that the genotypic means and the expected selection response from MESIM for each trait are equal to or greater than those from the traditional MSI. When several traits are simultaneously selected, MESIM performs well for traits with relatively low heritability. The main advantages of MESIM over the traditional molecular selection index are that its statistical sampling properties are known and that it does not require economic weights and thus can be used in practical applications when all or some of the traits need to be improved simultaneously. PMID:18716338
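
    As a toy illustration of the eigenanalysis idea (not the full MESIM derivation), the first eigenvector of a trait covariance matrix supplies the index weights, and its elements give each trait's contribution; the covariance values and trait scores below are invented.

      import numpy as np

      G = np.array([[1.0, 0.4, 0.2],        # toy covariance matrix for 3 traits
                    [0.4, 0.8, 0.1],
                    [0.2, 0.1, 0.5]])

      eigvals, eigvecs = np.linalg.eigh(G)   # symmetric eigendecomposition (ascending)
      b = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue = index weights
      traits = np.array([2.1, 0.7, 1.3])     # scores of one candidate
      index = b @ traits                     # selection index value for ranking candidates
      print(b, index)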

  15. Phase extraction based on sinusoidal extreme strip phase shifting method

    Science.gov (United States)

    Hui, Mei; Liu, Ming; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    Multiple-aperture synthetic imaging can enlarge the pupil diameter of an optical system and increase its resolution. It has been a cutting-edge topic and research focus in recent years, with prospective wide application in fields such as astronomical observation and aerospace remote sensing. To achieve good imaging quality, a synthetic aperture imaging system requires phase extraction for each sub-aperture and co-phasing of the whole aperture. In this project, an in-depth study of the basic principles and methods of segment phase extraction was carried out. The study includes applying the sinusoidal extreme strip illumination phase-shifting method to extract the central dividing line and obtain segment phase-extraction information, and using interferometric measurement to obtain the spherical-surface calibration coefficients for aperture phase extraction. The influence of the sinusoidal extreme strip phase shift on phase extraction is also studied and, based on the sinusoidal stripe phase shift of the image reflected from multiple linear light sources, the phase-shift error is analyzed in order to suppress its effect on the extracted phase frame.
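
    For orientation, the snippet below shows the standard four-step phase-shifting calculation as a generic stand-in for fringe-based phase extraction; it is not the authors' sinusoidal extreme-strip variant, and the toy segment phase is invented.

      import numpy as np

      def four_step_phase(I1, I2, I3, I4):
          """Wrapped phase from four fringe frames shifted by 0, pi/2, pi, 3*pi/2."""
          return np.arctan2(I4 - I2, I1 - I3)

      x = np.linspace(0, 4 * np.pi, 512)
      true_phase = 0.3 * np.sin(x)                             # toy segment surface phase
      frames = [1.0 + np.cos(true_phase + k * np.pi / 2) for k in range(4)]
      phase = four_step_phase(*frames)
      print(np.max(np.abs(phase - true_phase)))                # ~0: phase recovered exactly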

  16. Bacteria counting method based on polyaniline/bacteria thin film.

    Science.gov (United States)

    Zhihua, Li; Xuetao, Hu; Jiyong, Shi; Xiaobo, Zou; Xiaowei, Huang; Xucheng, Zhou; Tahir, Haroon Elrasheid; Holmes, Mel; Povey, Malcolm

    2016-07-15

    A simple and rapid bacteria counting method based on a polyaniline (PANI)/bacteria thin film is proposed. Because immobilized bacteria hinder the deposition of PANI on the glassy carbon electrode (GCE), PANI/bacteria thin films containing decreasing amounts of PANI are obtained as the bacteria concentration increases. The prepared PANI/bacteria film was characterized by cyclic voltammetry (CV) to provide a quantitative index for determining the bacteria count, and electrochemical impedance spectroscopy (EIS) was performed to further investigate the differences among the PANI/bacteria films. A good linear relationship between the CV peak currents and the log total count of bacteria (Bacillus subtilis) could be established using the equation Y=-30.413X+272.560 (R(2)=0.982) over the range of 5.3×10(4) to 5.3×10(8) CFU mL(-1), with acceptable stability, reproducibility and switchability. The proposed method is feasible for simple and rapid counting of bacteria. Copyright © 2016 Elsevier B.V. All rights reserved.
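
    A worked use of the calibration reported above, Y = -30.413X + 272.560 with X = log10(total count) and Y the CV peak current; the measured peak-current value plugged in is a made-up example.

      def count_from_peak_current(Y: float) -> float:
          """Invert the reported calibration line to estimate CFU/mL from a CV peak current."""
          log_count = (272.560 - Y) / 30.413
          return 10 ** log_count

      Y_measured = 90.0                      # hypothetical peak current reading
      print(f"{count_from_peak_current(Y_measured):.2e} CFU/mL")   # ~1e6, inside the linear range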

  17. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, in which the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Aircraft counts for each cell and each air route are then computed. By assigning these aircraft counts to the vertices and edges, a weighted graph model is formed, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm that mixes a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm: after the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among sub-graphs. Lastly, the airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity, and minimum distance. A sketch of the partitioning step follows.
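
    A minimal sketch of the partitioning step, assuming the airspace is already modeled as a weighted graph; networkx's Kernighan-Lin bisection stands in for the paper's mixture of cuts, load-balancing and heuristic algorithms, and the toy graph is invented.

      import networkx as nx
      from networkx.algorithms.community import kernighan_lin_bisection

      G = nx.Graph()
      # Vertices = key points carrying aircraft counts; edges = air routes with route weights.
      for v, load in [("A", 12), ("B", 7), ("C", 9), ("D", 14), ("E", 5), ("F", 8)]:
          G.add_node(v, load=load)
      G.add_weighted_edges_from([("A", "B", 4), ("B", "C", 6), ("C", "D", 2),
                                 ("D", "E", 5), ("E", "F", 3), ("F", "A", 1)])

      # Bisect by edge weight, then inspect the resulting per-sector workloads.
      part1, part2 = kernighan_lin_bisection(G, weight="weight")
      for p in (part1, part2):
          print(sorted(p), "workload:", sum(G.nodes[v]["load"] for v in p))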

  18. Dynamic airspace configuration method based on a weighted graph model

    Institute of Scientific and Technical Information of China (English)

    Chen Yangzhou; Zhang Defu

    2014-01-01

    This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, in which the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Aircraft counts for each cell and each air route are then computed. By assigning these aircraft counts to the vertices and edges, a weighted graph model is formed, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm that mixes a general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm: after the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among sub-graphs. Lastly, the airspace configuration is completed by determining the sector boundaries. The simulation result shows that the designed sectors satisfy not only the workload balancing condition, but also constraints such as convexity, connectivity, and the minimum distance constraint.

  19. Trinocular stereo vision method based on mesh candidates

    Science.gov (United States)

    Liu, Bin; Xu, Gang; Li, Haibin

    2010-10-01

    One of the most interesting goals of machine vision is 3D structure recovery of scenes, which has many applications such as object recognition, reverse engineering, automatic cartography, and autonomous robot navigation. To meet the demands of measuring complex prototypes in reverse engineering, a trinocular stereo vision method based on mesh candidates is proposed. After calibration of the cameras, the joint field of view can be defined in the world coordinate system. A mesh grid is established along the coordinate axes, and the mesh nodes are considered potential depth data of the object surface. By measuring the similarity of the correspondence pairs projected from a given group of candidates, the depth data can be obtained readily. With mesh node optimization, the interval between neighboring nodes in the depth direction can be chosen reasonably. Potential ambiguity in correspondence matching can be eliminated efficiently with the constraint of a third camera. The cameras are treated as two independent pairs, left-right and left-centre. Because the correlation values may exhibit multiple peaks, the binocular method alone may not satisfy the accuracy requirements of the measurement; the second image pair is therefore involved whenever the confidence coefficient is below a preset threshold, and the depth is determined by the highest sum of correlations over both camera pairs. The measurement system was simulated using 3DS MAX and Matlab software for reconstructing the surface of the object. The experimental results proved that the trinocular vision system has good performance in depth measurement.

  20. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a research focus. This paper proposes a superpixel density-based cluster-center algorithm for automatic image classification and outlier identification. Pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, superpixel preprocessing is applied: the image is divided into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby images are classified and outliers identified without supervision. Extensive experiments show that our method requires no human intervention and classifies images faster than the density clustering algorithm, achieving effective automated classification and outlier extraction.
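
    A minimal sketch of the density/distance computation the abstract describes, in the style of density-peaks clustering: each point's local density and its distance to the nearest denser point; the (x, y, gray) feature vectors and the cutoff distance are illustrative.

      import numpy as np

      def density_and_delta(features: np.ndarray, dc: float):
          """features: (n, d) array, e.g. (x, y, gray) per superpixel; dc: cutoff distance."""
          d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
          rho = (d < dc).sum(axis=1) - 1          # local density: neighbors within dc
          delta = np.empty(len(features))
          for i in range(len(features)):
              denser = np.where(rho > rho[i])[0]
              # Distance to the nearest denser point; the global max for the densest point.
              delta[i] = d[i, denser].min() if denser.size else d[i].max()
          return rho, delta

      pts = np.vstack([np.random.randn(50, 3) + 5, np.random.randn(50, 3) - 5])
      rho, delta = density_and_delta(pts, dc=1.5)
      # Cluster centers: points where both rho and delta are large; others follow their
      # nearest denser neighbor, and isolated high-delta, low-rho points are outliers.
      print(np.argsort(rho * delta)[-2:])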