WorldWideScience

Sample records for line search method

  1. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    Science.gov (United States)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence properties. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches in given conditions. Theoretically, the proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.
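
    For orientation, the generic CG iteration and the sufficient descent condition referred to in this and several of the following records are reproduced below in their standard textbook form; the NRM1-specific choice of the coefficient is not given in the abstract and is not reproduced here.

```latex
% Generic conjugate gradient iteration for \min_x f(x), with g_k = \nabla f(x_k):
\[
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0 .
\]
% An exact line search takes \alpha_k = \arg\min_{\alpha>0} f(x_k + \alpha d_k);
% an inexact line search only requires conditions such as Armijo or Wolfe to hold.
% Sufficient descent condition (for some constant c > 0 and all k):
\[
  g_k^{T} d_k \le -c\,\|g_k\|^{2} .
\]
```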

  2. Modification of the Armijo line search to satisfy the convergence properties of HS method

    Directory of Open Access Journals (Sweden)

    Mohammed Belloufi

    2013-07-01

    The Hestenes-Stiefel (HS) conjugate gradient algorithm is a useful tool for unconstrained numerical optimization, which has good numerical performance but no global convergence result under traditional line searches. This paper proposes a line search technique that guarantees the global convergence of the Hestenes-Stiefel (HS) conjugate gradient method. Numerical tests are presented to validate the different approaches.
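
    For reference, the standard Armijo condition and the Hestenes-Stiefel coefficient discussed above are shown below; the paper's modified Armijo-type rule itself is not reproduced in the abstract.

```latex
% Armijo condition on the step size \alpha_k, with 0 < \delta < 1:
\[
  f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k\, g_k^{T} d_k .
\]
% Hestenes-Stiefel coefficient, with y_k = g_{k+1} - g_k:
\[
  \beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k} .
\]
```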

  3. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    Science.gov (United States)

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameter β_k, of the DY method and the HS method, and put forward a hybrid DY-HS method. We also propose a hybrid of FR and PRP in the same manner. Additionally, to support the two hybrid methods, we generalize the Wolfe line search used to compute the step size α_k for each of them. With the new Wolfe line search, the two hybrid methods have the descent property, and their global convergence can also be proved.
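
    The four classical coefficients being hybridized are standard; a convex combination of the kind described takes the generic form below (the specific combination and the generalized Wolfe conditions used in the paper are not reproduced here).

```latex
% Classical CG coefficients, with y_k = g_{k+1} - g_k:
\[
  \beta_k^{FR} = \frac{\|g_{k+1}\|^{2}}{\|g_k\|^{2}}, \quad
  \beta_k^{PRP} = \frac{g_{k+1}^{T} y_k}{\|g_k\|^{2}}, \quad
  \beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k}, \quad
  \beta_k^{DY} = \frac{\|g_{k+1}\|^{2}}{d_k^{T} y_k} .
\]
% Generic convex hybrid of two coefficients, with \theta_k \in [0, 1]:
\[
  \beta_k^{hyb} = (1 - \theta_k)\,\beta_k^{DY} + \theta_k\,\beta_k^{HS} .
\]
```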

  4. A three-term conjugate gradient method under the strong-Wolfe line search

    Science.gov (United States)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have focused on conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method that always satisfies the sufficient descent condition, named the Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL) method, is proposed for unconstrained optimization. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong Wolfe line search. Finally, numerical results are provided for the purpose of comparison.

  5. A conjugate gradient method with descent properties under strong Wolfe line search

    Science.gov (United States)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods that are often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to the existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under the strong Wolfe line search. Overall, this new method performs efficiently and is comparable to the other well-known methods.
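
    The kind of comparison described (iteration counts and CPU time under a strong Wolfe line search) can be reproduced with a few lines of code. The sketch below is a generic Fletcher-Reeves CG loop in Python/SciPy rather than the authors' MATLAB code, and the test function and tolerances are arbitrary choices.

```python
# Illustrative sketch (not the authors' code): a minimal nonlinear CG loop whose
# step size is chosen by SciPy's strong-Wolfe line search, with the iteration
# count and CPU time recorded as in the comparison described above.
import time
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def cg_fr(f, grad, x0, tol=1e-6, max_iter=5000):
    """Fletcher-Reeves CG with a strong-Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                 # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x, max_iter

t0 = time.perf_counter()
x_star, iters = cg_fr(rosen, rosen_der, np.zeros(10))
print(f"iterations: {iters}, CPU time: {time.perf_counter() - t0:.3f} s")
```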

  6. An Efficient Hybrid Conjugate Gradient Method with the Strong Wolfe-Powell Line Search

    Directory of Open Access Journals (Sweden)

    Ahmad Alhawarat

    2015-01-01

    The conjugate gradient (CG) method is an interesting tool to solve optimization problems in many fields, such as design, economics, physics, and engineering. In this paper, we present a new hybrid CG method related to the famous Polak-Ribière-Polyak (PRP) formula. It offers a remedy for the PRP case, which is not globally convergent with the strong Wolfe-Powell (SWP) line search. The new formula possesses the sufficient descent condition and the global convergence properties. In addition, we further explain the cases in which the PRP method fails with the SWP line search. Furthermore, we provide numerical computations for the new hybrid CG method, which outperforms other related PRP formulas in both the number of iterations and the CPU time on some standard test functions.

  7. Frequency domain optical tomography using a conjugate gradient method without line search

    International Nuclear Information System (INIS)

    Kim, Hyun Keol; Charette, Andre

    2007-01-01

    A conjugate gradient method without line search (CGMWLS) is presented. This method is used to retrieve the local maps of absorption and scattering coefficients inside a tissue-like test medium from synthetic data. The forward problem is solved with a discrete-ordinates finite-difference method based on the frequency-domain formulation of the radiative transfer equation. The inversion results demonstrate that the CGMWLS can simultaneously retrieve the spatial distributions of the optical properties inside the medium with reasonable accuracy, while reducing cross-talk between the absorption and scattering coefficients.

  8. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    Science.gov (United States)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.

  9. Dai-Kou type conjugate gradient methods with a line search only using gradient.

    Science.gov (United States)

    Huang, Yuanyuan; Liu, Changhe

    2017-01-01

    In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem; they utilize only gradient information and have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which also uses only gradients, show that the methods are efficient.

  10. [The smile line, a literature search].

    Science.gov (United States)

    van der Geld, P A; van Waas, M A

    2003-09-01

    Beautiful teeth, visible when smiling, are in line with the present ideal of beauty. The display of teeth when smiling is determined by the smile line: the projection of the lower border of the upper lip on the maxillary teeth when smiling. On the basis of a literature search, the methods for determining the smile line are discussed, demographic data on the position of the smile line are given, and factors of influence are examined. There is no unequivocal method for determining the position of the smile line. A rough distinction can be made between qualitative and (semi-)quantitative methods. The (semi-)quantitative methods have clear advantages for research purposes, but their reliability is unknown. It was demonstrated that in at least 40% of subjects the maxillary gingiva was not visible when smiling. The mandibular gingiva was not visible when smiling in more than 90% of subjects. Furthermore, it appeared that in women the smile line was on average situated higher than in men, and it has not yet been proven that the smile line becomes situated lower with increasing age.

  11. Simplified automatic on-line document searching

    International Nuclear Information System (INIS)

    Ebinuma, Yukio

    1983-01-01

    The author proposes a searching method for users who do not need comprehensive retrieval, that is, one which automatically provides a flexible number of related documents. A group of technical terms is used as search terms to express an inquiry. Logical sums of the terms, in ascending order of frequency of usage, are prepared sequentially and automatically, and then the search formulas q_m and q_{m-1} which meet certain threshold values are selected, also automatically. Users judge the precision of the search output on up to 20 items retrieved by the formula q_m. If a user wishes more than a 30% recall ratio, the search result should be output by q_m; if less than 30%, it should be output by q_{m-1}. A search by this method using one year's volume of the INIS database (76,600 items) and five inquiries resulted in a 32% recall ratio and a 36% precision ratio on average in the case of q_m. The connection time of a terminal was within 15 minutes per inquiry. This was more efficient than an inexperienced searcher. The method can be applied to on-line searching systems for databases in which natural language only, or natural language and a controlled vocabulary, are used. (author)
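
    A hypothetical sketch of the formula-building step described above is given below: terms are OR-ed together in ascending order of posting frequency, and the formulas q_m and q_{m-1} are selected once an estimated hit count crosses a threshold. All names and the threshold logic are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration of the described idea: build logical sums (ORs) of
# search terms in ascending order of frequency and select q_m / q_{m-1} when an
# estimated number of hits crosses a threshold.  Names and logic are assumptions.
def build_formulas(term_frequencies, threshold):
    """term_frequencies: dict mapping search term -> number of postings in the database."""
    terms_by_rarity = sorted(term_frequencies, key=term_frequencies.get)
    formulas, included, estimated_hits = [], [], 0
    for term in terms_by_rarity:
        included.append(term)
        estimated_hits += term_frequencies[term]   # crude upper bound on the OR result
        formulas.append(" OR ".join(included))
        if estimated_hits >= threshold:
            break
    q_m = formulas[-1]
    q_m_minus_1 = formulas[-2] if len(formulas) > 1 else q_m
    return q_m, q_m_minus_1

q_m, q_prev = build_formulas({"reactor": 5000, "corrosion": 800, "zircaloy": 120}, 1000)
print(q_m)       # broader formula, preferred when higher recall is wanted
print(q_prev)    # narrower formula, preferred when higher precision is wanted
```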

  12. A novel line segment detection algorithm based on graph search

    Science.gov (United States)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To overcome the problem of extracting line segments from an image, a line segment detection method based on a graph search algorithm was proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. For the candidate straight line segments, their adjacency relationships are depicted by a graph model, based on which a depth-first search algorithm is employed to determine which adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).

  13. Adaptive method of lines

    CERN Document Server

    Saucez, Ph

    2001-01-01

    The general Method of Lines (MOL) procedure provides a flexible format for the solution of all the major classes of partial differential equations (PDEs) and is particularly well suited to evolutionary, nonlinear wave PDEs. Despite its utility, however, there are relatively few texts that explore it at a more advanced level and reflect the method's current state of development. Written by distinguished researchers in the field, Adaptive Method of Lines reflects the diversity of techniques and applications related to the MOL. Most of its chapters focus on a particular application but also provide a discussion of underlying philosophy and technique. Particular attention is paid to the concept of both temporal and spatial adaptivity in solving time-dependent PDEs. Many important ideas and methods are introduced, including moving grids and grid refinement, static and dynamic gridding, the equidistribution principle and the concept of a monitor function, the minimization of a functional, and the moving finite element method.
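
    As a concrete illustration of the MOL idea (a minimal sketch, not an example from the book), the one-dimensional heat equation can be semi-discretized in space and the resulting ODE system handed to a standard adaptive integrator:

```python
# Minimal method-of-lines sketch: solve u_t = u_xx on [0, 1] with u = 0 at both
# ends by discretizing x with central differences and integrating the resulting
# ODE system adaptively in time.
import numpy as np
from scipy.integrate import solve_ivp

n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)                                     # initial condition

def rhs(t, u):
    dudt = np.zeros_like(u)
    dudt[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2  # interior points only
    return dudt                                            # boundary values stay at 0

sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-6, atol=1e-8)
print(sol.y[:, -1].max())   # ~ exp(-pi**2 * 0.1), the exact peak value at t = 0.1
```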

  14. Harmony Search for Balancing Two-sided Assembly Lines

    Directory of Open Access Journals (Sweden)

    Hindriyanto Dwi Purnomo

    2012-01-01

    Two-sided assembly line balancing problems are important for large-sized products such as cars and buses, in which task operations can be performed on both sides of the line. In this paper, a Harmony Search algorithm is proposed to solve the two-sided assembly line balancing problem of type I (TALBP-I). The proposed method adopts the COMSOAL heuristic and specific features of the TALBP in the Harmony operators – the harmony memory consideration, random selection, and pitch adjustment – in order to maintain the local and global search. The proposed method is evaluated on 6 benchmark problems that are commonly used for the TALBP. The experimental results show that the proposed method works well and produces better solutions than the heuristic method and a genetic algorithm.
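
    A generic continuous-variable Harmony Search loop is sketched below to show the three operators named above (harmony memory consideration, pitch adjustment, and random selection); the paper's TALBP-specific task-assignment encoding and COMSOAL-based decoding are not reproduced, and all parameter values are illustrative.

```python
# Generic Harmony Search sketch (continuous variables, illustration only; the
# paper applies these operators to a task-assignment encoding for TALBP-I).
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=5000):
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                    # harmony memory consideration
                value = random.choice(memory)[j]
                if random.random() < par:                 # pitch adjustment
                    value += random.uniform(-1.0, 1.0) * bandwidth * (hi - lo)
            else:                                         # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                     # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

best_x, best_f = harmony_search(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 4)
print(best_x, best_f)
```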

  15. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches as it involves a novel non-monotone line search procedure, which is based on the use of standard penalty methods as the merit function for the line search. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented, with capabilities for large-scale application. The case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.
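
    The paper's own scheme, with its discretized penalty parameter and per-magnitude memory lists, is not reproduced in the abstract; for orientation, a classical non-monotone Armijo-type acceptance rule (Grippo-Lampariello-Lucidi) has the form:

```latex
% Non-monotone Armijo-type acceptance rule, with 0 < \delta < 1 and m(k) \le M:
\[
  f(x_k + \alpha_k d_k) \;\le\; \max_{0 \le j \le m(k)} f(x_{k-j})
     \;+\; \delta\,\alpha_k\,\nabla f(x_k)^{T} d_k ,
\]
```

    i.e. the new point is compared against the worst of the last m(k)+1 iterates rather than being required to decrease the merit function monotonically.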

  16. Automated search method for AFM and profilers

    Science.gov (United States)

    Ray, Michael; Martin, Yves C.

    2001-08-01

    A new automation software package creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires a user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high-speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.

  17. Efficient searching in meshfree methods

    Science.gov (United States)

    Olliff, James; Alford, Brad; Simkins, Daniel C.

    2018-04-01

    Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: the increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods; to discuss available techniques for computing the various adjacency graphs; to propose a new search algorithm and data structure; and finally to compare the memory and run-time performance of the methods.
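
    One common way to obtain the particle-adjacency information needed for meshfree shape-function supports is a uniform background grid (cell list); the sketch below is a generic illustration of that idea, not the new algorithm or data structure proposed in the paper.

```python
# Generic cell-list neighbor search (illustration only): bin points into a
# uniform grid with cell size equal to the support radius h, then check only
# the adjacent cells around each query point.
from collections import defaultdict
from itertools import product
import numpy as np

def build_cells(points, h):
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple((p // h).astype(int))].append(i)
    return cells

def neighbors(points, cells, h, i):
    p = points[i]
    key = tuple((p // h).astype(int))
    found = []
    for offset in product((-1, 0, 1), repeat=points.shape[1]):
        for j in cells.get(tuple(k + o for k, o in zip(key, offset)), []):
            if j != i and np.linalg.norm(points[j] - p) <= h:
                found.append(j)
    return found

pts = np.random.rand(1000, 2)      # 2-D particle cloud
h = 0.05                           # support (search) radius
cells = build_cells(pts, h)
print(len(neighbors(pts, cells, h, 0)), "neighbors of point 0")
```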

  18. SPIRES I: on-line search guide

    International Nuclear Information System (INIS)

    Addis, L.

    1975-06-01

    SPIRES I is the first generation of the on-line Stanford Public Information Retrieval System. Designed as a prototype system, SPIRES I was later moved to the SLAC computing facility where it has been routinely available to SLAC users in the field of high-energy physics. The scope and use of the SPIRES I system are described in this manual

  19. Noise propagation in iterative reconstruction algorithms with line searches

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared with our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.

  20. Statistic methods for searching inundated radioactive entities

    International Nuclear Information System (INIS)

    Dubasov, Yu.V.; Krivokhatskij, A.S.; Khramov, N.N.

    1993-01-01

    The problem of searching for a flooded radioactive object in a given area was considered. Various models for plotting the search route are discussed. It is shown that a spiral route through random points from the centre of the examined area is the most efficient one. The conclusion is made that, when searching for flooded radioactive objects, it is advisable to use multidimensional statistical methods of classification.

  1. Analytical Methods in Search Theory

    Science.gov (United States)

    1979-11-01

    …pick g(x,t;ε), ψ(x,t;ε) and find the b necessary to satisfy the search equation. SOLUTION: This is an audience participation problem.

  2. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

    “Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors' own research, which entails a comparative analysis of the Phonetic Search method and includes algorithmic details. This brief is useful for resea...

  3. Search for an in-line nitric acid analyzer

    International Nuclear Information System (INIS)

    Gallagher, K.Y.; Johnson, C.M.

    1980-10-01

    A literature search was conducted to identify possible techniques for measuring mineral acid normality in the presence of heavy metal salts, undissolved solids, and high radiation fields. Of the techniques noted, eight were identified that could be applicable to the objective. The possibilities were reduced to two methods, which warrant further investigation. These two are refractometry and a modified coulometric-polarographic method. All methods are discussed in detail followed by rationale for including or rejecting each for further investigation

  4. Evaluation of a new method for librarian-mediated literature searches for systematic reviews

    NARCIS (Netherlands)

    W.M. Bramer (Wichor); Rethlefsen, M.L. (Melissa L.); F. Mast (Frans); J. Kleijnen (Jos)

    2017-01-01

    Objective: To evaluate and validate the time of completion and results of a new method of searching for systematic reviews, the exhaustive search method (ESM), using a pragmatic comparison. Methods: Single-line search strategies were prepared in a text document. Term completeness was

  5. Numerical Methods for Plate Forming by Line Heating

    DEFF Research Database (Denmark)

    Clausen, Henrik Bisgaard

    2000-01-01

    Few researchers have so far addressed the topic of line heating in the search for better control of the process. Various methods to help understand the mechanics have been used, including beam analysis approximation, equivalent force calculation, and three-dimensional finite element analysis. I consider here finite element methods to model the behaviour and to predict the heating paths.

  6. An introduction to harmony search optimization method

    CERN Document Server

    Wang, Xiaolei; Zenger, Kai

    2014-01-01

    This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants, as well as their drawbacks. The authors show how the weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researchers.

  7. Employed and unemployed job search methods: Australian evidence on search duration, wages and job stability

    OpenAIRE

    Colin Green

    2012-01-01

    This paper examines the use and impact of job search methods of both unemployed and employed job seekers. Informal job search methods are associated with relatively high levels of job exit and shorter search durations. Job exits through the public employment agency (PEA) display positive duration dependence for the unemployed. This may suggest that the PEA is used as a job search method of last resort. Informal job search methods have lower associated search durations and higher wages than th...

  8. The method of search of tendencies

    International Nuclear Information System (INIS)

    Reuss, Paul.

    1981-08-01

    The search for tendencies is an application of the least-squares method. Its objective is the best possible evaluation of the basic data used in the calculations, based on the comparison between measurements of integral characteristics and the corresponding theoretical results. This report presents the minimization which allows the estimation of the basic data and, above all, the methods which are necessary for the critical analysis of the results obtained [fr]

  9. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.

    Science.gov (United States)

    Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo

    2018-04-16

    Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels in a neighborhood of the overlapped area of adjacent images are calculated, and these are used to create an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search for the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and better avoid ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.
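
    The core dynamic-programming step over an energy map can be sketched as follows; this is a generic seam search on a per-pixel energy array, whereas the paper builds its energy from the gray-scale difference, gradient and optical flow and uses an improved, more adaptive aggregation and traversal strategy.

```python
# Generic dynamic-programming seam search over a per-pixel energy map
# (illustrative only; the paper's energy function and aggregation strategy differ).
import numpy as np

def find_seam(energy):
    """Return one column index per row giving a minimum-energy top-to-bottom path."""
    rows, cols = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, rows):
        left = np.r_[np.inf, cost[r - 1, :-1]]
        up = cost[r - 1]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, up), right)   # cheapest predecessor
    seam = np.empty(rows, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(rows - 2, -1, -1):                        # backtrack the path
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam

energy = np.random.rand(480, 640)    # stand-in for the overlap-area energy map
print(find_seam(energy)[:5])
```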

  10. A Nonmonotone Line Search Filter Algorithm for the System of Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    Zhong Jin

    2012-01-01

    We present a new iterative method based on the line search filter method with a nonmonotone strategy to solve systems of nonlinear equations. The equations are divided into two groups: some equations are treated as constraints and the others act as the objective function, and the two groups are updated only at the iterations where this is actually needed. We apply the nonmonotone idea to the sufficient reduction conditions and the filter technique, which leads to flexibility and an acceptance behavior comparable to monotone methods. The new algorithm is shown to be globally convergent, and numerical experiments demonstrate its effectiveness.

  11. Analysis of the sweeped actuator line method

    OpenAIRE

    Nathan Jörn; Masson Christian; Dufresne Louis; Churchfield Matthew

    2015-01-01

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the blade tips, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But the actuator line also comes with temporal and spatial constraints, such as the need for a much smaller time step than with the actuator disk. While the latter only has to obey the Cour...

  12. Harmony Search Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    X. Z. Gao

    2015-01-01

    The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.

  13. Heuristic method for searching global maximum of multimodal unknown function

    Energy Technology Data Exchange (ETDEWEB)

    Kamei, K; Araki, Y; Inoue, K

    1983-06-01

    The method is composed of three kinds of searches. They are called the g (grasping)-mode search, the f (finding)-mode search, and the c (confirming)-mode search. In the g-mode and c-mode searches, a heuristic method is used which was extracted from the search behaviors of human subjects. In the f-mode search, the simplex method is used, which is well known as a search method for unimodal unknown functions. Each mode search and its transitions are shown in the form of a flowchart. The numerical results for one-dimensional through six-dimensional multimodal functions prove the proposed search method to be an effective one. 11 references.

  14. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

    Directory of Open Access Journals (Sweden)

    Guo Bao-long

    2004-09-01

    Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational cost. In this paper, a new fast motion estimation algorithm based on line search is presented, in which the computational complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because a small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, and also produces close performance in terms of motion compensation errors.

  15. A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow

    Directory of Open Access Journals (Sweden)

    Lluís Garrido

    2015-06-01

    We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN), a multiresolution truncated Newton method (MR/LSTN), and a full multigrid truncated Newton method (FMG/LSTN). We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.

  16. A method of searching LDAP directories using XQuery

    International Nuclear Information System (INIS)

    Hesselroth, Ted

    2011-01-01

    A method by which an LDAP directory can be searched using XQuery is described. The strategy behind the tool consists of four steps. First the XQuery script is examined and relevant XPath expressions are extracted, determined to be sufficient to define all information needed to perform the query. Then the XPath expressions are converted into their equivalent LDAP search filters by use of the published LDAP schema of the service, and search requests are made to the LDAP host. The search results are then merged and converted to an XML document that conforms to the hierarchy of the LDAP schema. Finally, the XQuery script is executed on the working XML document by conventional means. Examples are given of application of the tool in the Open Science Grid, which for discovery purposes operates an LDAP server that contains Glue schema-based information on site configuration and authorization policies. The XQuery scripts compactly replace hundreds of lines of custom python code that relied on the unix ldapsearch utility. Installation of the tool is available through the Virtual Data Toolkit.

  17. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    Science.gov (United States)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    A nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. This method is widely used due to its simplicity, and it is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient β_k is presented, obtained by employing the strong Wolfe-Powell inexact line search. The performance of the new β_k is tested in terms of the number of iterations and central processing unit (CPU) time using MATLAB software with an Intel Core i7-3470 CPU. Numerical experimental results show that the new β_k converges rapidly compared with other classical CG methods.
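
    The strong Wolfe-Powell conditions used for the inexact line search in this and several of the preceding records are the standard pair:

```latex
% Strong Wolfe-Powell conditions on the step size \alpha_k, with 0 < c_1 < c_2 < 1:
\[
  f(x_k + \alpha_k d_k) \le f(x_k) + c_1\,\alpha_k\, g_k^{T} d_k ,
  \qquad
  \bigl| g(x_k + \alpha_k d_k)^{T} d_k \bigr| \le c_2\,\bigl| g_k^{T} d_k \bigr| .
\]
```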

  18. Comparison tomography relocation hypocenter grid search and guided grid search method in Java island

    International Nuclear Information System (INIS)

    Nurdian, S. W.; Adu, N.; Palupi, I. R.; Raharjo, W.

    2016-01-01

    The main data in this research are earthquake data recorded from 1952 to 2012, with 9162 P-wave arrivals and 2426 events recorded by 30 stations located around Java island. Hypocenter relocation is performed using the grid search and guided grid search methods. The relocated hypocenters then become input for the pseudo-bending tomographic inversion process, which can be used to identify the velocity distribution in the subsurface. The results of hypocenter relocation by the grid search and guided grid search methods after the tomography process are presented both locally and globally. In the local area, the grid search result is better than the guided grid search with respect to the geology of the research area, but globally the guided grid search method is better for a broad area, because the velocity variation is more diverse and in accordance with the local geological conditions. (paper)

  19. Representation Methods in AI. Searching by Graphs

    Directory of Open Access Journals (Sweden)

    Angel GARRIDO

    2012-12-01

    The historical origin of Artificial Intelligence (AI) is usually placed at the Dartmouth Conference of 1956. But we can find many more arcane origins [1]. Also, we can consider, in more recent times, very great thinkers, such as Janos Neumann (later John von Neumann, after arriving in the USA), Norbert Wiener, Alan Mathison Turing, or Lotfi Zadeh, for instance [6, 7]. AI frequently requires logic. But its classical version shows too many insufficiencies. So, it was necessary to introduce more sophisticated tools, such as fuzzy logic, modal logic, non-monotonic logic, and so on [2]. Among the things that AI needs to represent are: categories, objects, properties, relations between objects, situations, states, time, events, causes and effects, knowledge about knowledge, and so on. The problems in AI can be classified into two general types [3, 4]: search problems and representation problems. In this last “mountain”, there exist different ways to reach its summit. So, we have [3]: logics, rules, frames, associative nets, scripts, and so on, many of them interconnected. We attempt, in this paper, a panoramic vision of the scope of application of such representation methods in AI. The two most disputed questions of both modern philosophy of mind and AI are the Turing Test and the Chinese Room Argument. To elucidate these very difficult questions, see the two final Appendices.

  20. 26 CFR 1.167(b)-1 - Straight line method.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Straight line method. 1.167(b)-1 Section 1.167(b... Straight line method. (a) In general. Under the straight line method the cost or other basis of the... may be reduced to a percentage or fraction. The straight line method may be used in determining a...

  1. Efficient protein structure search using indexing methods.

    Science.gov (United States)

    Kim, Sungchul; Sael, Lee; Yu, Hwanjo

    2013-01-01

    Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, so finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, so it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns the data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.
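
    The reduced-index idea described above (filter candidates using only the first few descriptor attributes, then re-rank the survivors exactly) can be sketched generically as follows; the actual iDistance and iKernel structures are considerably more elaborate, and the descriptor dimension is only an assumption.

```python
# Generic sketch of the reduced-index idea (not the paper's iDistance/iKernel code):
# prune with distances computed on the first few 3DZD attributes, keep 10*k
# candidates, then re-rank those candidates with the full descriptor.
import numpy as np

def top_k_with_reduced_index(db, query, k=10, reduced_dims=12):
    d_reduced = np.linalg.norm(db[:, :reduced_dims] - query[:reduced_dims], axis=1)
    candidates = np.argsort(d_reduced)[: 10 * k]           # coarse filtering step
    d_full = np.linalg.norm(db[candidates] - query, axis=1)
    return candidates[np.argsort(d_full)[:k]]              # exact re-ranking step

db = np.random.rand(50000, 121)     # assumed descriptor length, for illustration
query = np.random.rand(121)
print(top_k_with_reduced_index(db, query, k=5))
```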

  2. The retrieval efficiency test of descriptors and free vocabulary terms in INIS on-line search

    International Nuclear Information System (INIS)

    Ebinuma, Yukio; Takahashi, Satoko

    1981-01-01

    The test was done for 1) search topics with appropriate descriptors, 2) search topics with considerably broader descriptors, and 3) search topics with no appropriate descriptors. As to (1) and (2), the retrieval efficiency was the same on the descriptor system and on the keyword system (descriptors + free terms), and the search formulas were easily constructed. As to (3), the descriptor system ensured the recall ratio but decreased the precision ratio. On the other hand, the keyword system made the construction of search formulas easy and resulted in good retrieval efficiency. A search system which supports both the full-match method on descriptors and the truncation method on keywords is desirable, because either method can be selected according to the searcher's strategy and the search topic. A free-term system seems unnecessary. (author)

  3. Method of lines solution of Richards' equation

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, C.T.; Miller, C.T.; Tocci, M.D.

    1996-12-31

    We consider the method of lines solution of Richards' equation, which models flow through porous media, as an example of a situation in which the method can give incorrect results because of premature termination of the nonlinear corrector iteration. This premature termination arises when the solution has a sharp moving front and the Jacobian is ill-conditioned. While this problem can be solved by tightening the tolerances provided to the ODE or DAE solver used for the temporal integration, it is more efficient to modify the termination criteria of the nonlinear solver and/or recompute the Jacobian more frequently. In this paper we continue previous work on this topic by analyzing the modifications in more detail and giving a strategy for how the modifications can be turned on and off in response to changes in the character of the solution.

  4. A search for spectral lines in gamma-ray bursts using TGRS

    International Nuclear Information System (INIS)

    Kurczynski, P.; Palmer, D.; Seifert, H.; Teegarden, B. J.; Gehrels, N.; Cline, T. L.; Ramaty, R.; Hurley, K.; Madden, N. W.; Pehl, R. H.

    1998-01-01

    We present the results of an ongoing search for narrow spectral lines in gamma-ray burst data. TGRS, the Transient Gamma-Ray Spectrometer aboard the Wind satellite, is a high energy-resolution Ge device. Thus it is uniquely situated among the array of space-based, burst-sensitive instruments to look for line features in gamma-ray burst spectra. Our search strategy adopts a two-tiered approach. An automated 'quick look' scan searches spectra for statistically significant deviations from the continuum. We analyzed all possible time accumulations of spectra as well as individual spectra for each burst. Follow-up analysis of potential line candidates uses model fitting with F-test and χ² tests for statistical significance.

  5. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    Science.gov (United States)

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  6. An automated full-symmetry Patterson search method

    International Nuclear Information System (INIS)

    Rius, J.; Miravitlles, C.

    1987-01-01

    A full-symmetry Patterson search method is presented that performs a molecular coarse rotation search in vector space and orientation refinement using the σ function. The oriented molecule is positioned using the fast translation function τ 0 , which is based on the automated interpretation of τ projections using the sum function. This strategy reduces the number of Patterson-function values to be stored in the rotation search, and the use of the τ 0 function minimizes the required time for the development of all probable rotation search solutions. The application of this method to five representative test examples is shown. (orig.)

  7. Fast radio burst search: cross spectrum vs. auto spectrum method

    Science.gov (United States)

    Liu, Lei; Zheng, Weimin; Yan, Zhen; Zhang, Juan

    2018-06-01

    The search for fast radio bursts (FRBs) is a hot topic in current radio astronomy studies. In this work, we carry out a single pulse search with a very long baseline interferometry (VLBI) pulsar observation data set using both auto spectrum and cross spectrum search methods. The cross spectrum method, first proposed in Liu et al., maximizes the signal power by fully utilizing the fringe phase information of the baseline cross spectrum. The auto spectrum search method is based on the popular pulsar software package PRESTO, which extracts single pulses from the auto spectrum of each station. According to our comparison, the cross spectrum method is able to enhance the signal power and therefore extract single pulses from data contaminated by high levels of radio frequency interference (RFI), which makes it possible to carry out a search for FRBs in regular VLBI observations when RFI is present.

  8. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.

  9. Building maps to search the web: the method Sewcom

    Directory of Open Access Journals (Sweden)

    Corrado Petrucco

    2002-01-01

    Seeking information on the Internet is becoming a necessity at school, at work, and in every social sphere. Unfortunately, the difficulties inherent in the use of search engines, together with unconscious and inefficient cognitive approaches, limit their effectiveness. In this respect, a method called SEWCOM is presented that lets users create conceptual maps through interaction with search engines.

  10. Job Search as Goal-Directed Behavior: Objectives and Methods

    Science.gov (United States)

    Van Hoye, Greet; Saks, Alan M.

    2008-01-01

    This study investigated the relationship between job search objectives (finding a new job/turnover, staying aware of job alternatives, developing a professional network, and obtaining leverage against an employer) and job search methods (looking at job ads, visiting job sites, networking, contacting employment agencies, contacting employers, and…

  11. A new mathematical formulation of the line-by-line method in case of weak line overlapping

    Science.gov (United States)

    Ishov, Alexander G.; Krymova, Natalie V.

    1994-01-01

    A rigorous mathematical proof is presented for the multiline representation of the equivalent width of a molecular band which consists, in the general case, of n overlapping spectral lines. The multiline representation includes a principal term and terms of minor significance. The principal term is the equivalent width of the molecular band consisting of the same n non-overlapping spectral lines. The terms of minor significance take into consideration the overlapping of two, three, and more spectral lines. They are small in the case of weak overlapping of the spectral lines in the molecular band. The multiline representation can easily be generalized to optically inhomogeneous gas media and holds true for combinations of molecular bands. If the band lines overlap weakly, the standard formulation of the line-by-line method becomes too labor-consuming. In this case the multiline representation permits line-by-line calculations to be performed more effectively. Other useful properties of the multiline representation are pointed out.

  12. The Number Density Evolution of Extreme Emission Line Galaxies in 3D-HST: Results from a Novel Automated Line Search Technique for Slitless Spectroscopy

    Science.gov (United States)

    Maseda, Michael V.; van der Wel, Arjen; Rix, Hans-Walter; Momcheva, Ivelina; Brammer, Gabriel B.; Franx, Marijn; Lundgren, Britt F.; Skelton, Rosalind E.; Whitaker, Katherine E.

    2018-02-01

    The multiplexing capability of slitless spectroscopy is a powerful asset in creating large spectroscopic data sets, but issues such as spectral confusion make the interpretation of the data challenging. Here we present a new method to search for emission lines in the slitless spectroscopic data from the 3D-HST survey utilizing the Wide-Field Camera 3 on board the Hubble Space Telescope. Using a novel statistical technique, we can detect compact (extended) emission lines at 90% completeness down to fluxes of 1.5 (3.0) × 10⁻¹⁷ erg s⁻¹ cm⁻², close to the noise level of the grism exposures, for objects detected in the deep ancillary photometric data. Unlike previous methods, the Bayesian nature allows for probabilistic line identifications, namely redshift estimates, based on secondary emission line detections and/or photometric redshift priors. As a first application, we measure the comoving number density of Extreme Emission Line Galaxies (rest-frame [O III] λ5007 equivalent widths in excess of 500 Å). We find that these galaxies are nearly 10× more common above z ∼ 1.5 than at z ≲ 0.5. With upcoming large grism surveys such as Euclid and WFIRST, as well as grisms featured prominently on the NIRISS and NIRCam instruments on the James Webb Space Telescope, methods like the one presented here will be crucial for constructing emission line redshift catalogs in an automated and well-understood manner. This work is based on observations taken by the 3D-HST Treasury Program and the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

  13. Method of Improving Personal Name Search in Academic Information Service

    Directory of Open Access Journals (Sweden)

    Heejun Han

    2012-12-01

    All academic information on the web or elsewhere has its creator, that is, a subject who has created the information. The subject can be an individual, a group, or an institution, and can be a nation depending on the nature of the relevant information. Most information is composed of a title, an author, and contents. An essay, which falls under the academic information category, has metadata including a title, an author, keywords, an abstract, publication data, place of publication, ISSN, and the like. A patent has metadata including the title, an applicant, an inventor, an attorney, the IPC, the application number, and the claims of the invention. Most web-based academic information services enable users to search the information by processing this meta-information. An important element is searching information by using the author field, which corresponds to a personal name. This study suggests a method of efficient indexing and use of an adjacent-operation result-ranking algorithm to which phrase-search-based boosting elements are applied, thus improving the accuracy of personal name search results. It also describes a method for providing the results of searching for co-authors and related researchers when searching personal names. This method can be effectively applied to providing accurate and additional search results in academic information services.

  14. Benchmark models, planes lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  15. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  16. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  17. Efficient algorithm for generating spectra using line-by-line methods

    International Nuclear Information System (INIS)

    Sonnad, V.; Iglesias, C.A.

    2011-01-01

    A method is presented for efficient generation of spectra using line-by-line approaches. The only approximation is replacing the line shape function with an interpolation procedure, which makes the method independent of the line profile functional form. The resulting computational savings for large number of lines is proportional to the number of frequency points in the spectral range. Therefore, for large-scale problems the method can provide speedups of two orders of magnitude or more. A method was presented to generate line-by-line spectra efficiently. The first step was to replace the explicit calculation of the profile by the Newton divided-differences interpolating polynomial. The second step is to accumulate the lines effectively reducing their number to the number of frequency points. The final step is recognizing the resulting expression as a convolution and amenable to FFT methods. The reduction in computational effort for a configuration-to-configuration transition array with large number of lines is proportional to the number of frequency points. The method involves no approximations except for replacing the explicit profile evaluation by interpolation. Specifically, the line accumulation and convolution are exact given the interpolation procedure. Furthermore, the interpolation makes the method independent of the line profile functional form contrary to other schemes using FFT methods to generate line-by-line spectra but relying on the analytic form of the profile Fourier transform. Finally, the method relies on a uniform frequency mesh. For non-uniform frequency meshes, however, the method can be applied by using a suitable temporary uniform mesh and the results interpolated onto the final mesh with little additional cost.
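
    A simplified version of the accumulate-then-convolve idea can be sketched as follows; the paper additionally replaces the profile evaluation by Newton divided-difference interpolation, which is omitted here, and the line list and profile width are synthetic.

```python
# Simplified sketch of the accumulate-then-convolve idea: line strengths are
# accumulated onto a uniform frequency grid and the result is convolved once
# with the line profile via FFT (the interpolation step of the paper is omitted).
import numpy as np
from scipy.signal import fftconvolve

nu = np.linspace(0.0, 100.0, 200001)            # uniform frequency grid
dnu = nu[1] - nu[0]

rng = np.random.default_rng(0)                  # synthetic line list
centers = rng.uniform(5.0, 95.0, size=100000)
strengths = rng.lognormal(mean=0.0, sigma=1.0, size=100000)

# accumulate all lines onto the grid (nearest grid point for simplicity)
accum, _ = np.histogram(centers, bins=len(nu),
                        range=(nu[0] - dnu / 2, nu[-1] + dnu / 2), weights=strengths)

# single Lorentzian profile shared by all lines, sampled with the same spacing
gamma = 0.01
prof_x = np.arange(-5.0, 5.0 + dnu, dnu)
profile = (gamma / np.pi) / (prof_x**2 + gamma**2)

spectrum = fftconvolve(accum, profile, mode="same") * dnu
print(spectrum.max())
```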

  18. Remarks on search methods for stable, massive, elementary particles

    International Nuclear Information System (INIS)

    Perl, Martin L.

    2001-01-01

    This paper was presented at the 69th birthday celebration of Professor Eugene Commins, honoring his research achievements. These remarks are about the experimental techniques used in the search for new stable, massive particles, particles at least as massive as the electron. A variety of experimental methods, such as accelerator experiments, cosmic ray studies, searches for halo particles in the galaxy, and searches for exotic particles in bulk matter, are described. A summary is presented of the measured limits on the existence of new stable, massive particles.

  19. ARSTEC, Nonlinear Optimization Program Using Random Search Method

    International Nuclear Information System (INIS)

    Rasmuson, D. M.; Marshall, N. H.

    1979-01-01

    1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays
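
    A minimal sketch of the adaptive random search idea described under item 2 is given below (re-centering the search region on each improving point); this is not the ARSTEC code, it handles continuous variables only, and the shrinking of the region is an added assumption.

```python
# Minimal adaptive random search sketch (not the ARSTEC code; continuous variables
# only): sample uniformly in a box around the current center and re-center the box
# whenever an improving point is found.
import random

def adaptive_random_search(objective, bounds, radius=1.0, shrink=0.999, iterations=10000):
    center = [random.uniform(lo, hi) for lo, hi in bounds]
    best = objective(center)
    for _ in range(iterations):
        trial = [min(max(c + random.uniform(-radius, radius), lo), hi)
                 for c, (lo, hi) in zip(center, bounds)]
        value = objective(trial)
        if value < best:                 # improving point found: re-center here
            center, best = trial, value
        radius *= shrink                 # optional: slowly tighten the search region
    return center, best

x, f = adaptive_random_search(lambda v: sum(xi * xi for xi in v), [(-10.0, 10.0)] * 5)
print(x, f)
```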

  20. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  1. A Line-Tau Collocation Method for Partial Differential Equations ...

    African Journals Online (AJOL)

    This paper deals with the numerical solution of second order linear partial differential equations with the use of the method of lines coupled with the tau collocation method. The method of lines is used to convert the partial differential equation (PDE) to a sequence of ordinary differential equations (ODEs) which is then ...

  2. Search for gravitational redshifted absorption lines in LMXB Serpens X-1

    Science.gov (United States)

    Yoneda, Hiroki; Done, Chris; Paerels, Frits; Takahashi, Tadayuki; Watanabe, Shin

    2018-04-01

    The equation of state for ultradense matter can be tested from observations of the ratio of mass to radius of neutron stars. This could be measured precisely from the redshift of a narrow line produced on the surface. X-ray bursts have been intensively searched for such features, but so far without detection. Here instead we search for redshifted lines in the persistent emission, where the accretion flow dominates over the surface emission. We discuss the requirements for narrow lines to be produced, and show that narrow absorption lines from highly ionized iron can potentially be observable in accreting low-mass X-ray binaries (LMXBs; low B field) that have either low spin or low inclination so that Doppler broadening is small. This selects Serpens X-1 as the only potential candidate persistent LMXB due to its low inclination. Including surface models in the broad-band accretion flow model predicts that the absorption line from He-like iron at 6.7 keV should be redshifted to ˜5.1-5.7 keV (10-15 km for 1.4 M⊙) and have an equivalent width of 0.8-8 eV for surface temperatures of 7-10 × 10^6 K. We use the high-resolution Chandra grating data to give a firm upper limit of 2-3 eV for an absorption line at ˜5 keV. We discuss possible reasons for this lack of detection (the surface temperature and the geometry of the boundary layer, etc.). Future instruments with better sensitivity are required in order to explore the existence of such features.

  3. Apparatus and Method for Communication over Power Lines

    Science.gov (United States)

    Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor); Greer, III, Lawrence C. (Inventor); Nappier, Jennifer M. (Inventor)

    2017-01-01

    An apparatus and method are provided for communicating over power lines. The apparatus includes a coupling modem that is situated between a power line and a device. The coupling modem is configured to demodulate a signal received from the power line into a sine signal and a cosine signal. The coupling modem is also configured to modulate a communicated bit stream received from the device into a transmitted signal in order to impose the transmitted signal onto the power line.

  4. The Impact of On-line Searching on Document Supply Services at Individual Libraries

    Science.gov (United States)

    Hosono, Kimio; Tanaka, Isao; Fukazawa, Yoshiko

    As the use of on-line searching services has progressed in libraries, requests for primary materials have increased much more than before. For the purpose of clarifying this trend and countermeasures against it, a questionnaire survey was conducted in 1985. The respondents, from a total of 112 libraries, were as follows: 60 industrial libraries, 41 academic libraries and 11 libraries of research institutes and laboratories. It was shown that industrial libraries have received more requests for primary materials, mostly resulting from on-line searching, while the requests have not increased remarkably in academic libraries. Regardless of the type of library, almost all libraries cannot fully meet the requests with their own collections, so industrial libraries have to rely on external information services and academic libraries utilize the interlibrary loan system. Requests are sent on-line from industrial libraries and by mail from academic libraries. In fact, libraries are not likely to review their material-collecting policies. Therefore it is urgent to establish a system which enables greater use of external information services or the interlibrary loan system.

  5. A Generalised Sweep-Line Method for Safety Properties

    DEFF Research Database (Denmark)

    Mailund; Kristensen, Lars Michael

    2002-01-01

    The recently developed sweep-line method exploits progress present in many concurrent systems to explore the full state space of the system while storing only small fragments of the state space in memory at a time. A disadvantage of the sweep-line method is that it relies on a monotone and global...

  6. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
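
    The curvature test that replaces inertia information can be illustrated with a small sketch: solve the (possibly regularised) Newton system, check the directional curvature along the computed step, and increase a convexifying shift when the curvature is not positive. This is only a schematic of the idea; the actual algorithm embeds the test in a filter line-search interior-point method, and the constants and matrices below are arbitrary assumptions.

```python
import numpy as np

def step_with_curvature_test(H, g, delta0=1e-4, growth=10.0, max_tries=30):
    """Solve (H + delta*I) d = -g and accept d only if the directional curvature
    d^T (H + delta*I) d is positive; otherwise increase the convexifying shift
    delta. A schematic of testing curvature along the step instead of relying
    on inertia information."""
    n = H.shape[0]
    delta = 0.0
    for _ in range(max_tries):
        try:
            M = H + delta * np.eye(n)
            d = np.linalg.solve(M, -g)
            if d @ M @ d > 0.0:          # positive curvature along the step
                return d, delta
        except np.linalg.LinAlgError:
            pass                         # singular system: regularise and retry
        delta = delta0 if delta == 0.0 else delta * growth
    raise RuntimeError("could not convexify the system")

# Example with an indefinite Hessian: the first trial step has negative
# curvature, so the loop increases delta until the test passes.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([0.0, 1.0])
d, delta = step_with_curvature_test(H, g)
```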

  7. The search conference as a method in planning community health promotion actions

    Directory of Open Access Journals (Sweden)

    Eva Magnus

    2016-08-01

    Full Text Available Aims: The aim of this article is to describe and discuss how the search conference can be used as a method for planning health promotion actions in local communities. Design and methods: The article draws on experiences with using the method for an innovative project in health promotion in three Norwegian municipalities. The method is described both in general and how it was specifically adopted for the project. Results and conclusions: The search conference as a method was used to develop evidence-based health promotion action plans. With its use of both bottom-up and top-down approaches, this method is a relevant strategy for involving a community in the planning stages of health promotion actions in line with political expectations of participation, ownership, and evidence-based initiatives.

  8. Implementation Of Haversine Formula And Best First Search Method In Searching Of Tsunami Evacuation Route

    Science.gov (United States)

    Anisya; Yoga Swara, Ganda

    2017-12-01

    Padang is one of the cities prone to earthquake and tsunami disasters due to its position at the meeting of two active plates, a source of potentially powerful earthquakes and tsunamis. The central government and most offices are located in the red zone (vulnerable areas), which will also affect the evacuation of the population during an earthquake and tsunami disaster. In this study, the researchers produced a system for searching for the nearest shelter using the best-first search method. This method uses a heuristic function that combines the cost taken so far with an estimated value based on travel time, path length and population density. To calculate the path length, the researchers used the haversine formula. The values obtained from the calculation process are implemented in a web-based system. Alternative paths and some of the closest shelters are displayed in the system.
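
    The two ingredients named above, the haversine formula for path length and a best-first selection driven by a heuristic cost, can be sketched as follows. The heuristic here is just the haversine distance and the coordinates are hypothetical; the actual system also folds travel time and population density into the cost and searches over a road network rather than a flat list of shelters.

```python
import heapq
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres (haversine formula)."""
    R = 6371.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def best_first_shelter(start, shelters, heuristic):
    """Greedy best-first style selection: push all shelters onto a priority queue
    ordered by the heuristic cost and pop the most promising one. A full
    best-first search over a road network would keep expanding from this queue."""
    heap = [(heuristic(start, s), s) for s in shelters]
    heapq.heapify(heap)
    return heapq.heappop(heap)            # (cost, best shelter)

# Hypothetical coordinates near Padang, for illustration only.
start = (-0.9471, 100.4172)
shelters = [(-0.9300, 100.3700), (-0.9500, 100.4600), (-0.9100, 100.4500)]
cost, best = best_first_shelter(start, shelters,
                                lambda a, b: haversine_km(a[0], a[1], b[0], b[1]))
```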

  9. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is a crucial question in the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data of the assembly line balancing problem are identified, and the precedence relations diagram is described. A double-objective optimization model based on takt time and the smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Simulation experiments on examples prove the feasibility and validity of the PSO-based assembly line balancing method.
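
    As a rough illustration of the optimization engine mentioned above, the sketch below shows the generic PSO velocity and position update (inertia plus attraction to personal and global bests) on a continuous test function. The paper's balancing scheme additionally needs an encoding of task-to-station assignments and the takt-time/smoothness objectives, none of which are reproduced here; all parameters below are illustrative assumptions.

```python
import numpy as np

def pso_minimise(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain continuous PSO: inertia-weighted velocity update with personal-best
    and global-best attraction terms."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Illustrative use: minimise the sphere function in 5 dimensions.
best, val = pso_minimise(lambda p: float(np.sum(p ** 2)), dim=5)
```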

  10. The commission errors search and assessment (CESA) method

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V. N

    2007-05-15

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  11. The commission errors search and assessment (CESA) method

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V. N.

    2007-05-01

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  12. Fuzzy Search Method for Hi Education Information Security

    Directory of Open Access Journals (Sweden)

    Grigory Grigorevich Novikov

    2016-03-01

    Full Text Available The main purpose of the research is how to use a fuzzy search method for the information security of higher education or similar purposes. Many sensitive information leaks occur through the legal publishing of non-classified documents. That is why many intelligence services are so fond of the «mosaic» information collection method. This article is about how to prevent it.

  13. Geometrical Fuzzy Search Method for the Business Information Security Systems

    Directory of Open Access Journals (Sweden)

    Grigory Grigorievich Novikov

    2014-12-01

    Full Text Available The main purpose of the article is how to use a new fuzzy search method for the information security of business or other purposes. Many sensitive information leaks occur through the legal publishing of non-classified documents. That is why many intelligence services like to use the “mosaic” information collection method so much. This article is about how to prevent it.

  14. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...

  15. A Compositional Sweep-Line State Space Exploration Method

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Mailund, Thomas

    2002-01-01

    State space exploration is a main approach to verification of finite-state systems. The sweep-line method exploits a certain kind of progress present in many systems to reduce peak memory usage during state space exploration. We present a new sweep-line algorithm for a compositional setting where...

  16. Study on boundary search method for DFM mesh generation

    Directory of Open Access Journals (Sweden)

    Li Ri

    2012-08-01

    Full Text Available The boundary mesh of the casting model was determined by direct calculation on the triangular facets extracted from the STL file of the 3D model. Then the inner and outer grids of the model were identified by an algorithm we named the Inner Seed Grid Method. Finally, a program to automatically generate a 3D FDM mesh was compiled. In the paper, a method named the Triangle Contraction Search Method (TCSM) was put forward to ensure that no boundary grids are lost, and an algorithm to search for inner seed grids to identify the inner/outer grids of the casting model was also presented. The algorithm is simple, clear and easy to implement as a program. Three casting mesh generation examples verified the validity of the program.

  17. The Use of Resistivity Methods in Terrestrial Forensic Searches

    Science.gov (United States)

    Wolf, R. C.; Raisuddin, I.; Bank, C.

    2013-12-01

    The increasing use of near-surface geophysical methods in forensic searches has demonstrated the need for further studies to identify the ideal physical, environmental and temporal settings for each geophysical method. Previous studies using resistivity methods have shown promising results, but additional work is required to more accurately interpret and analyze survey findings. The Ontario Provincial Police's UCRT (Urban Search and Rescue; Chemical, Biological, Radiological, Nuclear and Explosives; Response Team) is collaborating with the University of Toronto and two additional universities in a multi-year study investigating the applications of near-surface geophysical methods to terrestrial forensic searches. In the summer of 2012, on a test site near Bolton, Ontario, the OPP buried weapons, drums and pigs (naked, tarped, and clothed) to simulate clandestine graves and caches. Our study aims to conduct repeat surveys using an IRIS Syscal Junior resistivity meter with a 48-electrode switching system. These surveys will monitor changes in resistivity reflecting decomposition of the object since burial, and identify the strengths and weaknesses of resistivity when used in a rural, clandestine burial setting. Our initial findings indicate the usefulness of this method, as prominent resistivity changes have been observed. We anticipate our results will help to assist law enforcement agencies in determining the type of resistivity results to expect based on time since burial, depth of burial and state of dress of the body.

  18. Exploration of Stellarator Configuration Space with Global Search Methods

    International Nuclear Information System (INIS)

    Mynick, H.E.; Pomphrey, N.; Ethier, S.

    2001-01-01

    An exploration of stellarator configuration space z for quasi-axisymmetric stellarator (QAS) designs is discussed, using methods which provide a more global view of that space. To this end, we have implemented a "differential evolution" (DE) search algorithm in an existing stellarator optimizer, which is much less prone to become trapped in local, suboptimal minima of the cost function chi than the local search methods used previously. This search algorithm is complemented by mapping studies of chi over z aimed at gaining insight into the results of the automated searches. We find that a wide range of the attractive QAS configurations previously found fall into a small number of classes, with each class corresponding to a basin of chi(z). We develop maps on which these earlier stellarators can be placed, the relations among them seen, and understanding gained into the physics differences between them. It is also found that, while still large, the region of z space containing practically realizable QAS configurations is much smaller than earlier supposed.
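
    The differential evolution search mentioned above follows a simple generational loop, sketched below as a generic DE/rand/1/bin minimiser of a cost function chi. The stellarator optimizer wraps this kind of loop around a physics-based cost function over the configuration space z; the bounds, control parameters and Rastrigin-like test function here are illustrative assumptions only.

```python
import numpy as np

def differential_evolution(chi, bounds, pop_size=40, F=0.8, CR=0.9,
                           n_gen=200, seed=0):
    """Generic DE/rand/1/bin: mutate with scaled differences of population
    members, crossover, and keep the trial point if it lowers the cost chi."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    cost = np.array([chi(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(lo.size) < CR
            trial = np.where(cross, mutant, pop[i])
            ct = chi(trial)
            if ct < cost[i]:
                pop[i], cost[i] = trial, ct
    best = int(np.argmin(cost))
    return pop[best], float(cost[best])

# Illustrative use on a multimodal (Rastrigin-like) test function.
z_best, chi_min = differential_evolution(
    lambda p: float(np.sum(p**2 - 10.0 * np.cos(2 * np.pi * p) + 10.0)),
    bounds=[(-5.12, 5.12)] * 4)
```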

  19. Comparison of electric field exposure measurement methods under power lines

    International Nuclear Information System (INIS)

    Korpinen, L.; Kuisti, H.; Tarao, H.; Paeaekkoenen, R.; Elovaara, J.

    2014-01-01

    The object of the study was to investigate extremely low frequency (ELF) electric field exposure measurement methods under power lines. The authors compared two different methods under power lines: in Method A, the sensor was placed on a tripod; and Method B required the measurer to hold the meter horizontally so that the distance from him/her was at least 1.5 m. The study includes 20 measurements in three places under 400 kV power lines. The authors used two commercial three-axis meters, EFA-3 and EFA-300. In statistical analyses, they did not find significant differences between Methods A and B. However, in the future, it is important to take into account that measurement methods can, in some cases, influence ELF electric field measurement results, and it is important to report the methods used so that it is possible to repeat the measurements. (authors)

  20. Intercultural Pedagogy and Story-line as a Method

    DEFF Research Database (Denmark)

    Gregersen, Annette Søndergaard; Tiemensma, Britt Due

    2006-01-01

    The paper focuses on intercultural competence achieved through knowledge of the relationship of identity, culture and language. The theoretical approach will be combined with an outline of story-line as a method....

  1. Assessment of the effectiveness of uranium deposit searching methods

    International Nuclear Information System (INIS)

    Suran, J.

    1998-01-01

    The following groups of uranium deposit searching methods are described: radiometric review of foreign work; aerial radiometric survey; automobile radiometric survey; emanation survey up to 1 m; emanation survey up to 2 m; ground radiometric survey; radiometric survey in pits; deep radiometric survey; combination of the above methods; and other methods (drilling survey). For vein-type deposits, the majority of Czech deposits were discovered in 1945-1965 by radiometric review of foreign work, automobile radiometric survey, and emanation survey up to 1 m. The first significant indications of sandstone-type uranium deposits were observed in the mid-1960s by aerial radiometric survey and confirmed later by drilling. (P.A.)

  2. Bayesian methods in the search for MH370

    CERN Document Server

    Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason

    2016-01-01

    This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.
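
    The particle-filter calculation referred to above can be illustrated with a minimal bootstrap filter: propagate particles through a motion model, weight them by the measurement likelihood, and resample. The actual MH370 analysis uses far richer aircraft-dynamics and satellite-measurement models; the 1-D random-walk example below is purely a stand-in and all of its parameters are assumptions.

```python
import numpy as np

def bootstrap_particle_filter(propagate, likelihood, x0, observations, seed=0):
    """Minimal bootstrap particle filter: predict, weight, resample.
    x0 has shape (n_particles, state_dim)."""
    rng = np.random.default_rng(seed)
    particles = np.array(x0, dtype=float)
    n = particles.shape[0]
    for z in observations:
        particles = propagate(particles, rng)            # prediction: motion model
        w = likelihood(z, particles)                      # measurement update
        w = w / w.sum()
        idx = rng.choice(n, size=n, p=w)                  # multinomial resampling
        particles = particles[idx]
    return particles                                      # approximate posterior samples

# Illustrative use: 1-D random-walk state observed in Gaussian noise.
rng0 = np.random.default_rng(1)
truth = np.cumsum(rng0.normal(size=20))
obs = truth + 0.5 * rng0.normal(size=20)
post = bootstrap_particle_filter(
    propagate=lambda p, rng: p + rng.normal(scale=1.0, size=p.shape),
    likelihood=lambda z, p: np.exp(-0.5 * ((z - p[:, 0]) / 0.5) ** 2) + 1e-300,
    x0=np.zeros((500, 1)),
    observations=obs)
```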

  3. Cumulative query method for influenza surveillance using search engine data.

    Science.gov (United States)

    Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il

    2014-12-16

    Internet search queries have become an important data source in syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine, Daum (approximately 25% market share), and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development Set 2 and 2011/12 for validation Set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development set. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, with n representing the number of combined queries accumulated in descending order of the correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, but only 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, but only 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than the combined queries in both the development and validation sets.
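
    A compact way to picture the cumulative query construction is sketched below: rank queries by their Pearson correlation with the ILI series, keep those at or above the threshold, and build cumulative (summed) series of the top-1, top-2, ... queries, each of which is then correlated with ILI again. The 0.7 threshold mirrors the description above, but the synthetic data and the exact preprocessing of the query volumes are assumptions.

```python
import numpy as np

def cumulative_query_series(query_volumes, ili, threshold=0.7):
    """Select queries correlated with ILI at or above `threshold`, order them by
    correlation, and correlate the cumulative sums of the top-n queries with ILI.
    query_volumes has shape (n_queries, n_weeks)."""
    r = np.array([np.corrcoef(q, ili)[0, 1] for q in query_volumes])
    order = np.argsort(r)[::-1]
    keep = order[r[order] >= threshold]
    cumulative = np.cumsum(query_volumes[keep], axis=0)   # row n-1 = sum of top-n queries
    r_cum = np.array([np.corrcoef(c, ili)[0, 1] for c in cumulative])
    return keep, r_cum

# Illustrative use with synthetic weekly data.
rng = np.random.default_rng(0)
ili = rng.random(52)
queries = ili + 0.3 * rng.random((20, 52))                # noisy copies of the ILI curve
selected, r_cum = cumulative_query_series(queries, ili)
```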

  4. Computation of nonuniform transmission lines using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Miranda, G.C.; Paulino, J.O.S. [Universidade Federal de Minas Gerais, Belo Horizonte, MG (Brazil). School of Engineering

    1997-12-31

    The calculation of lightning overvoltages on transmission lines is described. Lightning-induced overvoltages are of great significance under certain conditions because of the main characteristics of the phenomenon. The lightning channel model is one of the most important parameters essential to obtaining the generated electromagnetic fields. In this study, the nonuniform transmission line equations were solved using the finite difference method with the leap-frog scheme, i.e. the Finite-Difference Time-Domain (FDTD) method. The subroutine was interfaced with the Electromagnetic Transients Program (EMTP). Two models were used to represent the characteristic impedance of the nonuniform lines used to model the transmission line towers and the main lightning channel. The advantages of the FDTD method were the much smaller code and faster processing time. 35 refs., 5 figs.
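
    The leap-frog FDTD scheme for a (possibly nonuniform) line can be sketched with the lossless telegrapher equations, updating voltages and currents on a staggered grid. This is only a schematic of the method described above: the per-cell inductance and capacitance, the source, the terminations and the time step are illustrative assumptions, and the EMTP interfacing and tower/channel impedance models are not shown.

```python
import numpy as np

def fdtd_line(L, C, v_source, nx=200, nt=2000, dx=1.0):
    """Leap-frog FDTD update for the lossless telegrapher equations
        dV/dt = -(1/C) dI/dx ,   dI/dt = -(1/L) dV/dx ,
    with per-cell inductance L(x) and capacitance C(x), so the line may be
    nonuniform. v_source(t) drives the sending end; the far end is shorted."""
    L = np.broadcast_to(L, nx).astype(float)
    C = np.broadcast_to(C, nx).astype(float)
    dt = 0.9 * dx * np.sqrt((L * C).min())        # CFL-limited time step
    V = np.zeros(nx)
    I = np.zeros(nx - 1)                          # currents live on half-grid points
    hist = np.zeros(nt)
    for n in range(nt):
        V[1:-1] -= dt / (C[1:-1] * dx) * (I[1:] - I[:-1])
        V[0] = v_source(n * dt)                   # hard voltage source
        V[-1] = 0.0                               # short-circuit termination
        I -= dt / (0.5 * (L[:-1] + L[1:]) * dx) * (V[1:] - V[:-1])
        hist[n] = V[nx // 2]                      # record mid-line voltage
    return hist

# Illustrative use: Gaussian surge injected into a uniform line.
hist = fdtd_line(L=0.25e-6, C=0.1e-9,
                 v_source=lambda t: np.exp(-((t - 1e-6) / 2e-7) ** 2))
```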

  5. Searches for H2O masers toward narrow-line Seyfert 1 galaxies

    Science.gov (United States)

    Yoshiaki, Hagiwara; Doi, Akihiro; Hachisuka, Kazuya; Horiuchi, Shinji

    2018-05-01

    We present searches for 22 GHz H2O masers toward 36 narrow-line Seyfert 1 galaxies (NLS1s), selected from known NLS1s with vsys ≲ 41000 km s-1. Out of the 36 NLS1s in our sample, 11 were surveyed for the first time in our observations, while the observations of the other NLS1s were previously reported in the literature. In our survey, no new water maser source from NLS1s was detected at the 3σ rms level of 8.4 mJy to 144 mJy, which depends on the observing conditions and the inhomogeneous sensitivities of the individual observations made with three different telescopes. It is likely that the non-detection of new masers in our NLS1 sample is primarily due to the insufficient sensitivities of our observations. Including the five known NLS1 masers, the total detection rate of the H2O maser in NLS1s is not remarkably different from that of type 2 Seyfert galaxies or LINERs. However, more extensive and systematic searches of NLS1s would be required for a statistical discussion of the detection rate of the NLS1 maser, compared with that of type 2 Seyferts or LINERs.

  6. Enhanced Missing Proteins Detection in NCI60 Cell Lines Using an Integrative Search Engine Approach.

    Science.gov (United States)

    Guruceaga, Elizabeth; Garin-Muga, Alba; Prieto, Gorka; Bejarano, Bartolomé; Marcilla, Miguel; Marín-Vicente, Consuelo; Perez-Riverol, Yasset; Casal, J Ignacio; Vizcaíno, Juan Antonio; Corrales, Fernando J; Segura, Victor

    2017-12-01

    The Human Proteome Project (HPP) aims to decipher the complete map of the human proteome. In the past few years, significant efforts of the HPP teams have been dedicated to the experimental detection of the missing proteins, which lack reliable mass spectrometry evidence of their existence. In this endeavor, an in-depth analysis of shotgun experiments might represent a valuable resource for selecting a biological matrix when designing validation experiments. In this work, we used all the proteomic experiments from the NCI60 cell lines and applied an integrative approach based on the results obtained from Comet, Mascot, OMSSA, and X!Tandem. This workflow benefits from the complementarity of these search engines to increase the proteome coverage. Five missing proteins compliant with the C-HPP guidelines were identified, although further validation is needed. Moreover, 165 missing proteins were detected with only one unique peptide, and their functional analysis supported their participation in cellular pathways, as was also proposed in other studies. Finally, we performed a combined analysis of the gene expression levels and the proteomic identifications from the cell lines common to the NCI60 and the CCLE project to suggest alternatives for further validation of missing protein observations.

  7. Evaluation of quinua lines obtained trough mutagenesis and conventional methods

    International Nuclear Information System (INIS)

    De la Cruz, E.; Garcia, J. M; Gonzalez, J.; Brunner, I.; Rubluo, A.; Guadarrama, S

    2001-01-01

    Chenopodium quinoa is an ancient crop that, due to its hardiness, nutritive value, and ability to thrive under marginal conditions, is considered a crop for modern times, given the current demand for highly nutritive products obtained through sustainable agricultural practices. Research on quinua began in Mexico in the early eighties, considering it an alternative crop for peasants living in impoverished regions characterised by marginal soils. Research institutions such as the Instituto de Investigacion Agricola del Estado de Mexico (ICAMEX), Colegio de Postgraduados and Universidad Autonoma Chapingo began to evaluate varieties from South America and eventually established a breeding programme on quinua. One goal of the quinua research programme in Mexico is the reduction of saponin content, so a mutation breeding approach was designed in the early nineties with the involvement of the Instituto Nacional de Investigaciones Nucleares (ININ). In the 1999-2000 period, field trials of low-saponin putative mutants obtained through irradiation of the Barandales variety were performed, also including elite lines supplied by the National Germplasm Bank (NGB) at Chapingo, Mexico. The results from these trials indicate that the low saponin content character of the putative mutants remains in the M5 generation. Also, eleven early-maturing lines were detected. The climatic conditions prevalent in that period demonstrated the resistance of quinua to spring frosts; furthermore, the evaluation of advanced lines from the NGB showed high variability regarding morphological, agronomic and seed quality characters, which will allow us to advance in the search for superior quinua lines.

  8. The sweep-line state space exploration method

    DEFF Research Database (Denmark)

    Jensen, Kurt; Kristensen, Lars M.; Mailund, Thomas

    2012-01-01

    The contribution of this paper is twofold. First, we provide a coherent presentation of the sweep-line theory and the many variants of the method that have been developed over the past 10 years since the basic idea of the method was conceived. Second, we survey a selection of case studies where the sweep...

  9. An Operator Perturbation Method of Polarized Line Transfer V ...

    Indian Academy of Sciences (India)

    tribpo

    imate Lambda Iteration) method to the resonance scattering in spectral lines formed in the presence of weak magnetic fields. The method is based on an operator perturbation approach, and can efficiently give solutions for oriented vector magnetic fields in the solar atmosphere. Key words. ... 1999 for observational.

  10. A Fast Radio Burst Search Method for VLBI Observation

    Science.gov (United States)

    Liu, Lei; Tong, Fengxian; Zheng, Weimin; Zhang, Juan; Tong, Li

    2018-02-01

    We introduce the cross-spectrum-based fast radio burst (FRB) search method for Very Long Baseline Interferometer (VLBI) observations. This method optimizes the fringe fitting scheme in geodetic VLBI data post-processing, which fully utilizes the cross-spectrum fringe phase information and therefore maximizes the power of single-pulse signals. Working with the cross-spectrum greatly reduces the effect of radio frequency interference compared with using the auto-power spectrum. Single-pulse detection confidence increases by cross-identifying detections from multiple baselines. By combining the power of multiple baselines, we may improve the detection sensitivity. Our method is similar to coherent beam forming, but without the computational expense of forming a great number of beams to cover the whole field of view of our telescopes. The data processing pipeline designed for this method is easy to implement and parallelize, and can be deployed in various kinds of VLBI observations. In particular, we point out that VGOS observations are very suitable for FRB searches.
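
    The multi-baseline combination step can be pictured with the sketch below: normalise each baseline's de-dispersed power time series, sum them so a pulse common to all baselines grows in signal-to-noise, and flag samples above a threshold. The cross-spectrum fringe fitting that produces each baseline's power series is assumed to have been done already, and the threshold and synthetic data are arbitrary.

```python
import numpy as np

def combine_baselines(power, threshold=7.0):
    """Normalise each baseline's power series to zero mean and unit variance,
    sum across baselines (dividing by sqrt(N) keeps unit noise variance while a
    common pulse gains SNR ~ sqrt(N)), and flag samples above the threshold.
    power has shape (n_baselines, n_samples)."""
    z = (power - power.mean(axis=1, keepdims=True)) / power.std(axis=1, keepdims=True)
    combined = z.sum(axis=0) / np.sqrt(power.shape[0])
    return np.flatnonzero(combined > threshold), combined

# Illustrative use: a weak pulse hidden in three noisy baselines.
rng = np.random.default_rng(1)
p = rng.normal(size=(3, 10_000))
p[:, 5000] += 5.0                                        # same pulse on every baseline
hits, snr = combine_baselines(p)
```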

  11. New method of interpretation of the solar spectrum lines

    International Nuclear Information System (INIS)

    Sitnik, G.F.

    1975-01-01

    Close triplet lines, which appear from a common low level or from low sublevels with small energy difference are used. It is supposed that for such lines both the function of the source in line and the frequency dependence of the selective coefficient of absorption are the same over any depth of the region of line production in the atmosphere and change with the depth equally. The residual intensities of some lines of the triplet, which are measured for every line at a number of profile points (at the same distance from its center and at different distances from the solar limb center), serve as initial material. The intensities are expressed in terms of the intensity of the continuous spectrum in the center of the solar limb for the average wave length of the triplet interval. Such observations avoid the usual assumption about the source function constancy in line inside the line. On the contrary, the frequency dependence of this function is clarified by the use of this method. Calculations make it possible to determine the source function in line and its long-wave dependence inside the line; to derive a solar atmosphere model and to determine the source function in the continuous spectrum in accordance with observations at profile points in its middle part and in the wing; to find the relation between the selective and continuous absorption coefficients in the dependence on the optical depth in the continuous spectrum; and to find the long-wave dependences of coefficients for both the selective absorption and the selective emission at different optical depths

  12. Numerical methods for two-phase flow with contact lines

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Clauido

    2012-07-01

    This thesis focuses on numerical methods for two-phase flows, and especially flows with a moving contact line. Moving contact lines occur where the interface between two fluids is in contact with a solid wall. At the location where both fluids and the wall meet, the common continuum descriptions for fluids are no longer valid, since the dynamics around such a contact line are governed by interactions at the molecular level. Therefore the standard numerical continuum models have to be adjusted to handle moving contact lines. In the main part of the thesis, a method to manipulate the position and the velocity of a contact line in a two-phase solver is described. The Navier-Stokes equations are discretized using an explicit finite difference method on a staggered grid. The position of the interface is tracked with the level set method and the discontinuities at the interface are treated in a sharp manner with the ghost fluid method. The contact line is tracked explicitly and its dynamics can be described by an arbitrary function. The key part of the procedure is to enforce a coupling between the contact line and the Navier-Stokes equations as well as the level set method. Results for different contact line models are presented and it is demonstrated that they are in agreement with analytical solutions or results reported in the literature. The presented Navier-Stokes solver is applied as part of a multiscale method to simulate capillary-driven flows. A relation between the contact angle and the contact line velocity is computed by a phase field model resolving the micro-scale dynamics in the region around the contact line. The relation of the microscale model is then used to prescribe the dynamics of the contact line in the macro-scale solver. This approach makes it possible to exploit the scale separation between the contact line dynamics and the bulk flow. Therefore coarser meshes can be applied for the macro-scale flow solver compared to global phase field simulations.

  13. IMPROVING NEAREST NEIGHBOUR SEARCH IN 3D SPATIAL ACCESS METHOD

    Directory of Open Access Journals (Sweden)

    A. Suhaibaha

    2016-10-01

    Full Text Available Nearest Neighbour (NN) search is one of the important queries and analyses for spatial applications. In normal practice, a spatial access method structure is used during Nearest Neighbour query execution to retrieve information from the database. However, most spatial access method structures still face unresolved issues such as overlapping among nodes and repetitive data entries. This situation leads to excessive Input/Output (IO) operations, which is inefficient for data retrieval. The situation becomes even more critical when dealing with 3D data. The size of 3D data is usually large due to its detailed geometry and other attached information. In this research, a clustered 3D hierarchical structure is introduced as a 3D spatial access method structure. The structure is expected to improve the retrieval of Nearest Neighbour information for 3D objects. Several tests are performed answering single Nearest Neighbour search and k Nearest Neighbour (kNN) search. The tests indicate that the clustered hierarchical structure is efficient in handling Nearest Neighbour queries compared to its competitor. From the results, the clustered hierarchical structure reduced repetitive data entries and the number of accessed pages. The proposed structure also produced minimal Input/Output operations. The query response time also outperformed that of the competitor. As a future outlook of this research, several possible applications are discussed and summarized.
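
    For reference, the baseline operation being optimised, single-NN and kNN queries over 3D points, looks as follows with a standard kd-tree. The clustered 3D hierarchical structure proposed above replaces this kind of index; its implementation is not reproduced here, and the point cloud is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

# Baseline illustration only: a standard kd-tree answering single-NN and kNN
# queries over 3D points (e.g. hypothetical building centroids).
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1000.0, size=(100_000, 3))
tree = cKDTree(points)

query = np.array([500.0, 500.0, 50.0])
dist_nn, idx_nn = tree.query(query, k=1)       # single nearest neighbour
dists_knn, idx_knn = tree.query(query, k=10)   # k nearest neighbours
```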

  14. New procedure for criticality search using coarse mesh nodal methods

    International Nuclear Information System (INIS)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S.

    2011-01-01

    The primary goal of coarse mesh nodal methods is to calculate the neutron flux inside the reactor core. Many computer systems use a specific form of calculation, which is called the nodal method. In classical computing systems, the criticality search is made after the complete convergence of the iterative process of calculating the neutron flux. In this paper, we propose a new method in which the criticality condition is adjusted during the iterative process of calculating the neutron flux itself. Thus, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)
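
    A toy sketch of the idea, adjusting the criticality condition while the flux iteration is still running rather than after full convergence, is given below for a small matrix model: each outer iteration performs one power-iteration update of the flux and k-eff and immediately nudges a control parameter toward k-eff = 1. The operators, the proportional gain and the control model are all ad hoc assumptions; the paper's coarse-mesh nodal operators and actual search procedure are not reproduced.

```python
import numpy as np

def concurrent_criticality_search(F, M_base, dM_dc, c0=0.0, gain=2.0,
                                  n_outer=200, tol=1e-8):
    """Toy illustration: one power-iteration update of flux and k-eff per outer
    iteration, with the control parameter c (e.g. soluble boron) nudged toward
    k-eff = 1 at the same time, instead of waiting for full flux convergence.
    Model: M(c) phi = (1/k) F phi with M(c) = M_base + c * dM_dc."""
    phi = np.ones(F.shape[0])
    k, c = 1.0, c0
    for _ in range(n_outer):
        M = M_base + c * dM_dc
        phi_new = np.linalg.solve(M, F @ phi / k)
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        c += gain * (k_new - 1.0)            # ad hoc proportional control move
        converged = abs(k_new - k) < tol and abs(k_new - 1.0) < 1e-6
        phi, k = phi_new, k_new
        if converged:
            break
    return phi / np.linalg.norm(phi), k, c

# Small random operators, scaled so the uncontrolled core is near critical.
rng = np.random.default_rng(2)
A = rng.random((4, 4))
M_base = A @ A.T + 4.0 * np.eye(4)           # removal/leakage-like operator
F = 2.5 * (rng.random((4, 4)) + np.eye(4))   # fission-production-like operator
dM_dc = np.eye(4)                            # control absorber adds uniform absorption
phi, keff, c = concurrent_criticality_search(F, M_base, dM_dc)
```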

  15. New procedure for criticality search using coarse mesh nodal methods

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S., E-mail: wneto@con.ufrj.b, E-mail: fernando@con.ufrj.b, E-mail: Aquilino@lmp.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    The primary goal of coarse mesh nodal methods is to calculate the neutron flux inside the reactor core. Many computer systems use a specific form of calculation, which is called the nodal method. In classical computing systems, the criticality search is made after the complete convergence of the iterative process of calculating the neutron flux. In this paper, we propose a new method in which the criticality condition is adjusted during the iterative process of calculating the neutron flux itself. Thus, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)

  16. Exact extraction method for road rutting laser lines

    Science.gov (United States)

    Hong, Zhiming

    2018-02-01

    This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration. The shortcomings of existing rutting detection methods are presented, and a new rutting line-laser extraction method based on the peak intensity characteristic and peak continuity is proposed. The peak intensity characteristic is enhanced by a purpose-designed transverse mean filter, and an intensity map of the peak characteristic, computed over the whole road image, is obtained to determine the seed point of the rutting laser line. Taking the seed point as the starting point, the light points of the rutting line-laser are extracted based on peak continuity, providing exact basic data for the subsequent calculation of pavement rutting depths.
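
    The peak-intensity step can be pictured with the simplified sketch below: smooth each image row with a transverse mean filter, record the per-row peak, and take the strongest peak as the seed point from which the laser line is traced. The continuity checks of the actual method are reduced here to a single global maximum, and the window size and synthetic image are assumptions.

```python
import numpy as np

def laser_line_seed(image, win=15):
    """Smooth each row with a mean filter, find the per-row peak column, and
    return the row/column of the overall strongest smoothed peak as the seed
    point for tracing the laser line."""
    kernel = np.ones(win) / win
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                                   1, image)
    peak_cols = smoothed.argmax(axis=1)          # column of the peak in every row
    peak_vals = smoothed.max(axis=1)
    seed_row = int(peak_vals.argmax())
    return seed_row, int(peak_cols[seed_row])

# Illustrative use on a synthetic image with a bright vertical stripe.
rng = np.random.default_rng(4)
img = rng.random((480, 640))
img[:, 300:305] += 2.0
seed = laser_line_seed(img)
```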

  17. Two Methods for Antialiased Wireframe Drawing with Hidden Line Removal

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Munk-Lund, Steen; Gjøl, Mikkel

    2008-01-01

    Two novel and robust techniques for wireframe drawing are proposed. Neither suffer from the well-known artifacts associated with the standard two pass, offset based techniques for wireframe drawing. Both methods draw prefiltered lines and produce high-quality antialiased results without super...

  18. Moduli Dark Matter and the Search for Its Decay Line using Suzaku X-Ray Telescope

    Science.gov (United States)

    Kusenko, Alexander; Loewenstein, Michael; Yanagida, Tsutomu T.

    2013-01-01

    Light scalar fields called moduli arise from a variety of different models involving supersymmetry and/or string theory; thus their existence is a generic prediction of leading theories for physics beyond the standard model. They also present a formidable, long-standing problem for cosmology. We argue that an anthropic solution to the moduli problem exists in the case of small moduli masses and that it automatically leads to dark matter in the form of moduli. The recent discovery of the 125 GeV Higgs boson implies a lower bound on the moduli mass of about a keV. This form of dark matter is consistent with the observed properties of structure formation, and it is amenable to detection with the help of x-ray telescopes. We present the results of a search for such dark matter particles using spectra extracted from the first deep x-ray observations of the Draco and Ursa Minor dwarf spheroidal galaxies, which are dark-matter-dominated systems with extreme mass-to-light ratios and low intrinsic backgrounds. No emission line is positively detected, and we set new constraints on the relevant new physics.

  19. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first-order model) and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  20. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) feature of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and other differences between two images in the panoramic image mosaic process. This method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with the SIFT feature, and matches those directed segments to acquire rough point matches. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results of experiments based on four pairs of images show that the method is robust to resolution, lighting, rotation, and scaling.

  1. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
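
    The harmony search loop itself, harmony memory, HMCR, PAR and bandwidth BW, can be sketched generically as below for continuous variables. The mapping from these variables to element densities, the compliance objective and the filtering/averaging schemes used in the topology optimization above are not reproduced; the bounds and test function are arbitrary assumptions.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   n_iter=5000, seed=0):
    """Generic harmony search: build a new harmony from memory with probability
    HMCR, pitch-adjust it with probability PAR and bandwidth BW, and replace the
    worst memory member when the new harmony is better."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    hm = rng.uniform(lo, hi, (hms, dim))                  # harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(n_iter):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                       # pick from memory ...
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                    # ... and maybe pitch-adjust
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1.0, 1.0)
            else:                                         # or sample at random
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        fn = f(new)
        worst = int(np.argmax(cost))
        if fn < cost[worst]:
            hm[worst], cost[worst] = new, fn
    best = int(np.argmin(cost))
    return hm[best], float(cost[best])

# Illustrative use: minimise a shifted quadratic.
x_best, f_best = harmony_search(lambda x: float(np.sum((x - 0.7) ** 2)),
                                bounds=[(0.0, 1.0)] * 6)
```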

  2. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  3. Development of Pulsar Detection Methods for a Galactic Center Search

    Science.gov (United States)

    Thornton, Stephen; Wharton, Robert; Cordes, James; Chatterjee, Shami

    2018-01-01

    Finding pulsars within the inner parsec of the galactic center would be incredibly beneficial: for pulsars sufficiently close to Sagittarius A*, extremely precise tests of general relativity in the strong field regime could be performed through measurement of post-Keplerian parameters. Binary pulsar systems with sufficiently short orbital periods could provide the same laboratories with which to test existing theories. Fast and efficient methods are needed to parse large sets of time-domain data from different telescopes to search for periodicity in signals and differentiate radio frequency interference (RFI) from pulsar signals. Here we demonstrate several techniques to reduce red noise (low-frequency interference), generate signals from pulsars in binary orbits, and create plots that allow for fast detection of both RFI and pulsars.

  4. The Search Conference as a Method in Planning Community Health Promotion Actions

    Science.gov (United States)

    Magnus, Eva; Knudtsen, Margunn Skjei; Wist, Guri; Weiss, Daniel; Lillefjell, Monica

    2016-01-01

    Aims: The aim of this article is to describe and discuss how the search conference can be used as a method for planning health promotion actions in local communities. Design and methods: The article draws on experiences with using the method for an innovative project in health promotion in three Norwegian municipalities. The method is described both in general and how it was specifically adopted for the project. Results and conclusions: The search conference as a method was used to develop evidence-based health promotion action plans. With its use of both bottom-up and top-down approaches, this method is a relevant strategy for involving a community in the planning stages of health promotion actions in line with political expectations of participation, ownership, and evidence-based initiatives. Significance for public health: This article describes and discusses how the search conference can be used as a method when working with knowledge-based health promotion actions in local communities. The article describes the sequences of the conference and shows how these have been adapted when planning and prioritizing health promotion actions in three Norwegian municipalities. The significance of the article is that it shows how central elements in the planning of health promotion actions, such as participation and involvement as well as evidence, were fundamental to how the conference was carried out. The article continues by discussing how the method functions as both a top-down and a bottom-up strategy, and in what way working evidence-based can conflict with a bottom-up strategy. The experiences described can be used as guidance for planning knowledge-based health promotion actions in communities. PMID:27747199

  5. Analytical methods for study of transmission line lightning protection

    International Nuclear Information System (INIS)

    Pettersson, Per.

    1993-04-01

    Transmission line lightning performance is studied by analytical methods. The elements of shielding failure flashovers and back-flashovers are analysed as functions of incidence, response and insulation. Closed-form approximate expressions are sought to enhance understanding of the phenomena. Probabilistic and wave propagation aspects are particularly studied. The electrogeometric model of lightning attraction to structures is used in combination with the log-normal probability distribution of lightning to ground currents. The log-normality is found to be retained for the currents collected by mast-type as well as line-type structures, but with a change of scale. For both types, exceedingly simple formulas for the number of hits are derived. Simple closed-form expressions for the line outage rates from back-flashovers and shielding failure flashovers are derived in a uniform way as functions of the critical currents. The expressions involve the standardized normal distribution function. System response is analysed by use of Laplace transforms in combination with text-book transmission-line theory. Inversion into time domain is accomplished by an approximate asymptotic method producing closed-form results. The back-flashover problem is analysed in particular. Approximate, image-type expressions are derived for shunt admittance of wires above, on and under ground for analyses of fast transients. The derivation parallels that for series impedance, now well-known. 3 refs, 5 figs

  6. Numerical Methods for Plate Forming by Line Heating

    DEFF Research Database (Denmark)

    Clausen, Henrik Bisgaard

    2000-01-01

    Line heating is the process of forming originally flat plates into a desired shape by means of heat treatment. Parameter studies are carried out on a finite element model to provide knowledge of how the process behaves with varying heating conditions. For verification purposes, experiments are carried out; one set of experiments investigates the actual heat flux distribution from a gas torch and another verifies the validity of the FE calculations. Finally, a method to predict the heating pattern is described.

  7. The Method of Lines for Ternary Diffusion Problems

    Directory of Open Access Journals (Sweden)

    Henryk Leszczyński

    2014-01-01

    Full Text Available The method of lines (MOL) for diffusion equations with Neumann boundary conditions is considered. These equations are transformed by a discretization in space variables into systems of ordinary differential equations. The proposed ODEs satisfy the mass conservation law. The stability of solutions of these ODEs with respect to discrete L2 norms and discrete W1,∞ norms is investigated. Numerical examples confirm the parabolic behaviour of this model and very regular dynamics.
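
    A minimal method-of-lines sketch for a single diffusion equation with zero-flux (Neumann) ends is given below: central differences in space turn the PDE into an ODE system that a standard stiff integrator advances in time. The ternary-diffusion model above couples several such equations through composition-dependent fluxes; only the basic MOL mechanics are shown, and the grid, diffusivity and initial profile are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mol_diffusion(u0, D, L, t_span):
    """Method of lines for u_t = D u_xx on [0, L] with zero-flux (Neumann)
    boundary conditions: central differences in space give an ODE system
    du/dt = f(u) that a standard stiff integrator advances in time."""
    u0 = np.asarray(u0, dtype=float)
    dx = L / (u0.size - 1)

    def rhs(t, u):
        dudt = np.empty_like(u)
        dudt[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        dudt[0] = D * 2.0 * (u[1] - u[0]) / dx**2      # reflecting (zero-flux) ends
        dudt[-1] = D * 2.0 * (u[-2] - u[-1]) / dx**2
        return dudt

    return solve_ivp(rhs, t_span, u0, method="BDF")    # stiff-friendly integrator

# Illustrative use: a step profile relaxing toward a uniform state,
# consistent with the zero-flux ends.
x = np.linspace(0.0, 1.0, 101)
u0 = np.where(x < 0.5, 1.0, 0.0)
sol = mol_diffusion(u0, D=1e-3, L=1.0, t_span=(0.0, 50.0))
```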

  8. Search for gamma-ray spectral lines with the Fermi Large Area Telescope and dark matter implications

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Bloom, E. D.; Bonamente, E.; Bottacini, E.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chaves, R. C. G.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; D’Ammando, F.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Di Venere, L.; Drell, P. S.; Drlica-Wagner, A.; Essig, R.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Grenier, I. A.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hayashida, M.; Hill, A. B.; Horan, D.; Hou, X.; Hughes, R. E.; Inoue, Y.; Izaguirre, E.; Jogler, T.; Kamae, T.; Knödlseder, J.; Kuss, M.; Lande, J.; Larsson, S.; Latronico, L.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Malyshev, D.; Mayer, M.; Mazziotta, M. N.; McEnery, J. E.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nakamori, T.; Nemmen, R.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Paneque, D.; Perkins, J. S.; Pesce-Rollins, M.; Piron, F.; Pivato, G.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Romani, R. W.; Sánchez-Conde, M.; Schulz, A.; Sgrò, C.; Siegal-Gaskins, J.; Siskind, E. J.; Snyder, A.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Takahashi, H.; Thayer, J. G.; Thayer, J. B.; Tibaldo, L.; Tinivella, M.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.; Zaharijas, G.; Zimmer, S.

    2013-10-22

    Weakly interacting massive particles (WIMPs) are a theoretical class of particles that are excellent dark matter candidates. WIMP annihilation or decay may produce essentially monochromatic γ rays detectable by the Fermi Large Area Telescope (LAT) against the astrophysical γ -ray emission of the Galaxy. We have searched for spectral lines in the energy range 5–300 GeV using 3.7 years of data, reprocessed with updated instrument calibrations and an improved energy dispersion model compared to the previous Fermi-LAT Collaboration line searches. We searched in five regions selected to optimize sensitivity to different theoretically motivated dark matter density distributions. We did not find any globally significant lines in our a priori search regions and present 95% confidence limits for annihilation cross sections of self-conjugate WIMPs and decay lifetimes. Our most significant fit occurred at 133 GeV in our smallest search region and had a local significance of 3.3 standard deviations, which translates to a global significance of 1.5 standard deviations. We discuss potential systematic effects in this search, and examine the feature at 133 GeV in detail. We find that the use both of reprocessed data and of additional information in the energy dispersion model contributes to the reduction in significance of the linelike feature near 130 GeV relative to significances reported in other works. We also find that the feature is narrower than the LAT energy resolution at the level of 2 to 3 standard deviations, which somewhat disfavors the interpretation of the 133 GeV feature as a real WIMP signal.

  9. In search of new methods. Qigong in stuttering therapy

    Directory of Open Access Journals (Sweden)

    Paweł Półrola

    2013-10-01

    Full Text Available Introduction : Even though stuttering is probably as old a phenomenon as the human speech itself, the stuttering therapy is still a challenge for the therapist and requires constant searching for new methods. Qigong may prove to be one of them. Aim of the research: The research paper presents the results of an experimental investigation evaluating the usefulness of qigong practice in stuttering therapy. Material and methods: Two groups of stuttering adults underwent 6-month therapy. In group I – the experimental one (n = 11 – the therapy consisted of speech fluency training, psychotherapy and qigong practice. In group II – the control one (n = 12 – it included speech fluency training and psychotherapy. In both groups 2-hour sessions of speech fluency training and psychotherapy were conducted twice a week. Two-hour qigong sessions took place once a week. Results: After 6 months the therapy results were compared with regard to the basic stuttering parameters, such as the degree of speech disfluency, the level of logophobia and speech disfluency symptoms. Improvement was observed in both groups, the beneficial effects, however, being more prominent in the qigong-practising group. Conclusions : Qigong exercises used in the therapy of stuttering people along with speech fluency training and psychotherapy give beneficial effects.

  10. On-line reconstruction of in-core power distribution by harmonics expansion method

    International Nuclear Information System (INIS)

    Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping

    2011-01-01

    Highlights: → A harmonics expansion method for the on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable for real-time response to in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded in terms of the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. To reconstruct the in-core power distribution on-line, the two closest reference cases are selected from the harmonics data library and the harmonics used in the expansion are obtained by interpolation between them. The Unit 1 reactor of DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
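
    The core of the reconstruction step described above is a small linear fit: the power distribution is expanded in a few precomputed harmonics and the expansion coefficients are determined from the fixed-detector signals. The sketch below shows that least-squares step only, with synthetic harmonics and detector positions; the harmonics library, the interpolation between reference cases and the detector response model of the COMS code are not reproduced.

```python
# Simplified sketch of harmonics-expansion reconstruction: fit expansion
# coefficients to fixed in-core detector signals by least squares. The
# harmonics, detector positions and response model are toy placeholders.
import numpy as np

def reconstruct_power(harmonics, detector_rows, signals, n_modes=5):
    """harmonics     : (n_modes_total, n_nodes) precomputed harmonic shapes
       detector_rows : indices of nodes instrumented with fixed detectors
       signals       : measured detector readings (same length as detector_rows)
    """
    phi = harmonics[:n_modes]                    # truncate the expansion
    A = phi[:, detector_rows].T                  # detector response of each mode
    coeff, *_ = np.linalg.lstsq(A, signals, rcond=None)
    return phi.T @ coeff                         # reconstructed nodal power

# Toy usage: 3 synthetic "harmonics" on 50 nodes, 8 detectors.
rng = np.random.default_rng(0)
harmonics = np.array([np.sin((k + 1) * np.pi * np.linspace(0, 1, 50)) for k in range(3)])
true_power = harmonics.T @ np.array([1.0, 0.3, -0.1])
detectors = np.arange(5, 50, 6)
signals = true_power[detectors] + rng.normal(0, 0.01, detectors.size)
print(np.max(np.abs(reconstruct_power(harmonics, detectors, signals, n_modes=3) - true_power)))
```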

  11. Design of an On-Line Query Language for Full Text Patent Search.

    Science.gov (United States)

    Glantz, Richard S.

    The design of an English-like query language and an interactive computer environment for searching the full text of the U.S. patent collection are discussed. Special attention is paid to achieving a transparent user interface, to providing extremely broad search capabilities (including nested substitution classes, Kleene star events, and domain…

  12. Encounters with the OPAC: On-Line Searching in Public Libraries.

    Science.gov (United States)

    Slone, Deborah J.

    2000-01-01

    Reports on a qualitative study that explored strategies and behaviors of public library users during interaction with an online public access catalog, and users' confidence in finding needed information online. Discusses results of questionnaires, interviews, and observations that examined unknown-item searches, area searches, and known-item…

  13. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    OpenAIRE

    Zheluk, Andrey; Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's a...

  14. A search for the millimetre lines of HCN in Comets Wilson 1987 VII and Machholz 1988 XV

    Science.gov (United States)

    Crouvisier, J.; Despois, D.; Bockelee-Morvan, D.; Gerard, E.; Paubert, G.; Johansson, L. E. B.; Ekelund, L.; Winnberg, A.; Ge, W.; Irvine, W. M.; Kinzel, W. M.; Schloerb, F. P.

    1990-08-01

    The J(1-0) lines of HCN at 89 GHz were searched for in Comet Wilson 1987 VII, with the FCRAO, the SEST and the IRAM radio telescopes between February and June 1987. There was no firm detection, but significant upper limits were obtained, which put severe constraints on the HCN production rate in that comet. A direct comparison with the observations of P/Halley suggests that the HCN abundance relative to water might be smaller in Comet Wilson by at least a factor of two. The J(1-0) and J(3-2) lines of HCN at 89 and 266 GHz were searched for in Comet Machholz 1988 XV when it was close to perihelion at 0.17 AU from the sun. There was no detection. At that moment, the comet was probably no longer active.

  15. Convergence of method of lines approximations to partial differential equations

    International Nuclear Information System (INIS)

    Verwer, J.G.; Sanz-Serna, J.M.

    1984-01-01

    Many existing numerical schemes for evolutionary problems in partial differential equations (PDEs) can be viewed as method of lines (MOL) schemes. This paper treats the convergence of one-step MOL schemes. The main purpose is to set up a general framework for a convergence analysis applicable to nonlinear problems. The stability materials for this framework are taken from the field of nonlinear stiff ODEs. In this connection, important concepts are the logarithmic matrix norm and C-stability. A nonlinear parabolic equation and the cubic Schroedinger equation are used for illustrating the ideas. (Auth.)
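
    For reference, the logarithmic matrix norm mentioned in the abstract has the standard textbook definition (not specific to this paper), together with the bound it yields:

```latex
% Logarithmic norm used in stiff-ODE stability analysis (textbook form).
\[
  \mu(A) \;=\; \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h},
  \qquad
  \lVert e^{tA} \rVert \;\le\; e^{\mu(A)\,t}, \quad t \ge 0 .
\]
```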

  16. A modified harmony search based method for optimal rural radial ...

    African Journals Online (AJOL)

    Published in the International Journal of Engineering, Science and Technology, Vol. 2, No. 3 (2010); full text available to registered users.

  17. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    Science.gov (United States)

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), "Bychkov" (r_s = 0.78, P < .001) and "Khimki" (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and

  18. Searching for truth: internet search patterns as a method of investigating online responses to a Russian illicit drug policy debate.

    Science.gov (United States)

    Zheluk, Andrey; Gillespie, James A; Quinn, Casey

    2012-12-13

    This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r(s) = 0.88, P < .001), "Bychkov" (r(s) = .78, P < .001) and "Khimki"(r(s) = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and 48,084 for "Egor Bychkov", compared to 53
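
    The validation step shared by the three records describing this study is a Spearman rank correlation between the Google and Yandex monthly search-volume series. A minimal sketch with placeholder series (not the actual Google Insights for Search or Yandex WordStat counts) is given below.

```python
# Minimal sketch of the validation step: Spearman rank correlation between two
# monthly search-volume series (placeholder data, not the study's counts).
from scipy.stats import spearmanr

google_series = [120, 340, 2900, 860, 400, 310, 280, 250, 230, 220, 210, 200]
yandex_series = [150, 420, 3500, 990, 460, 350, 300, 260, 250, 230, 220, 210]

rho, p_value = spearmanr(google_series, yandex_series)
print(f"r_s = {rho:.2f}, P = {p_value:.3g}")
```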

  19. FBILI method for multi-level line transfer

    Science.gov (United States)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.

  20. A human-machine interface evaluation method: A difficulty evaluation method in information searching (DEMIS)

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2009-01-01

    A human-machine interface (HMI) evaluation method, named the 'difficulty evaluation method in information searching (DEMIS)', is proposed and demonstrated with an experimental study. The DEMIS is based on a human performance model and two measures of attentional-resource effectiveness in monitoring and detection tasks in nuclear power plants (NPPs). Operator competence and HMI design are modeled as the most significant factors affecting human performance. One of the two effectiveness measures is the fixation-to-importance ratio (FIR), which represents the attentional resource (eye fixations) spent on an information source relative to the importance of that information source. The other measure is selective attention effectiveness (SAE), which incorporates the FIRs for all information sources. The underlying principle of the measures is that an information source should be selectively attended to according to its informational importance. In this study, poor performance in information searching tasks is modeled as being coupled with difficulties caused by poor mental models of operators and/or poor HMI design. Human performance in information searching tasks is evaluated by analyzing the FIR and the SAE. Operator mental models are evaluated by a questionnaire-based method. Difficulties caused by a poor HMI design are then evaluated by a focused interview based on the FIR evaluation, and root causes leading to poor performance are identified in a systematic way.
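
    The abstract does not give the exact formulas for the FIR and SAE, so the sketch below only illustrates one plausible reading: the FIR as the share of fixations on a source divided by its normalized importance, and the SAE as a summary of how far the FIRs deviate from the ideal value of one. Both definitions, and all numbers used, are assumptions for illustration.

```python
# Hedged sketch of the two DEMIS measures. The exact formulas are not given in
# the abstract; FIR is assumed to be (share of fixations on a source) /
# (normalized importance of that source), and SAE is assumed to summarise how
# far the FIRs deviate from the ideal value of 1 across all sources.
import numpy as np

def fir(fixations, importance):
    fix_share = np.asarray(fixations, float) / np.sum(fixations)
    imp_share = np.asarray(importance, float) / np.sum(importance)
    return fix_share / imp_share          # 1.0 = attention matches importance

def sae(fixations, importance):
    return 1.0 - np.mean(np.abs(fir(fixations, importance) - 1.0))

fixations = [40, 25, 20, 15]              # eye fixations per information source
importance = [0.5, 0.2, 0.2, 0.1]         # informational importance weights
print(fir(fixations, importance), sae(fixations, importance))
```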

  1. Inverse atmospheric radiative transfer problems - A nonlinear minimization search method of solution. [aerosol pollution monitoring

    Science.gov (United States)

    Fymat, A. L.

    1976-01-01

    The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.

  2. On-line biomedical databases-the best source for quick search of the scientific information in the biomedicine.

    Science.gov (United States)

    Masic, Izet; Milinovic, Katarina

    2012-06-01

    Most medical journals now have an electronic version available over public networks. Although printed and electronic versions often exist in parallel, one form need not be published simultaneously with the other. The electronic version of a journal can be published a few weeks before the printed form and need not have identical content. The electronic form of a journal may have extensions that the printed form does not contain, such as animations, 3D displays, etc., and may offer the full text, mostly in PDF or XML format, or just the table of contents or a summary. Access to the full text is usually not free and can be obtained only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (6 months or a year). Such journals can be found through network archives such as High Wire Press and Free Medical Journals.com. Special mention should be made of PubMed and PubMed Central, the first public digital archives that collect available medical literature without restriction, which operate within the system of the National Library of Medicine in Bethesda (USA). There are also so-called on-line medical journals published only in electronic form. These can be searched through on-line databases. In this paper the authors briefly describe about 30 databases and give short instructions on how to access and search for papers published in indexed medical journals.

  3. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available In this paper a search method based on the figurative indexation of folksonomic characteristics of graphical files is described. The method takes extralinguistic information into account and is based on a model of human figurative thinking. The paper presents the development of a method for searching image files based on their formal, including folksonomic, clues.

  4. Searching methods for biometric identification systems: Fundamental limits

    NARCIS (Netherlands)

    Willems, F.M.J.

    2009-01-01

    We study two-stage search procedures for biometric identification systems in an information-theoretical setting. Our main conclusion is that clustering based on vector-quantization achieves the optimum trade-off between the number of clusters (cluster rate) and the number of individuals within a

  5. Searching for the 3.5 keV Line in the Stacked Suzaku Observations of Galaxy Clusters

    Science.gov (United States)

    Bulbul, Esra; Markevitch, Maxim; Foster, Adam; Miller, Eric; Bautz, Mark; Lowenstein, Mike; Randall, Scott W.; Smith, Randall K.

    2016-01-01

    We perform a detailed study of the stacked Suzaku observations of 47 galaxy clusters, spanning a redshift range of 0.01-0.45, to search for the unidentified 3.5 keV line. This sample provides an independent test for the previously detected line. We detect a 2σ-significant spectral feature at 3.5 keV in the spectrum of the full sample. When the sample is divided into two subsamples (cool-core and non-cool core clusters), the cool-core subsample shows no statistically significant positive residuals at the line energy. A very weak (approx. 2σ confidence) spectral feature at 3.5 keV is permitted by the data from the non-cool-core clusters sample. The upper limit on a neutrino decay mixing angle of sin²(2θ) = 6.1 × 10⁻¹¹ from the full Suzaku sample is consistent with the previous detections in the stacked XMM-Newton sample of galaxy clusters (which had a higher statistical sensitivity to faint lines), M31, and Galactic center, at a 90% confidence level. However, the constraint from the present sample, which does not include the Perseus cluster, is in tension with previously reported line flux observed in the core of the Perseus cluster with XMM-Newton and Suzaku.

  6. Interferometric measurement of line shifts in flames in connection with the interpretation of the line absorption method in atomic absorption spectroscopy

    International Nuclear Information System (INIS)

    L'vov, B.V.; Polzik, L.K.; Katskov, D.A.; Kruglikova, L.P.

    1975-01-01

    This paper is concerned with the interferometric measurement of line shifts in flames in view of the interpretation of absorption lines in atomic absorption spectroscopy. The newly measured line shifts were compared with known data on the Lorentz broadening of the same lines obtained by methods free of systematic errors. The resonant lines of the alkaline earth elements (Sr, Ca, Ba) were investigated. To reduce self-absorption in the flame, solutions with minimum concentrations of the elements were used. The computation scheme includes the spectrometer apparatus width and the line broadening due to self-absorption. Formulae are given for computing the quantities studied. Good agreement was observed between the computed and experimental results. An error analysis was performed. It was concluded that the line shifts in the hydrocarbon flames were correctly taken into account in the absolute computations of absorption.

  7. Searching for the 3.5 keV Line in the Deep Fields with Chandra: The 10 Ms Observations

    Science.gov (United States)

    Cappelluti, Nico; Bulbul, Esra; Foster, Adam; Natarajan, Priyamvada; Urry, Megan C.; Bautz, Mark W.; Civano, Francesca; Miller, Eric; Smith, Randall K.

    2018-02-01

    We report a systematic search for an emission line around 3.5 keV in the spectrum of the cosmic X-ray background using a total of ∼10 Ms Chandra observations toward the COSMOS Legacy and Extended Chandra Deep Field South survey fields. We find marginal evidence of a feature at an energy of ∼3.51 keV with a significance of 2.5–3σ, depending on the choice of statistical treatment. The line intensity is best fit at (8.8 ± 2.9) × 10⁻⁷ ph cm⁻² s⁻¹ when using a simple Δχ², or 10.2 (+0.2/−0.4) × 10⁻⁷ ph cm⁻² s⁻¹ when Markov chain Monte Carlo is used. Based on our knowledge of Chandra and the reported detection of the line by other instruments, an instrumental origin for the line remains unlikely. We cannot, however, rule out a statistical fluctuation, and in that case our results provide a 3σ upper limit at 1.85 × 10⁻⁶ ph cm⁻² s⁻¹. We discuss the interpretation of this observed line in terms of the iron line background, S XVI charge exchange, as well as potentially being from sterile neutrino decay. We note that our detection is consistent with previous measurements of this line toward the Galactic center and can be modeled as the result of sterile neutrino decay from the Milky Way for the dark matter distribution modeled as a Navarro–Frenk–White profile. For this case, we estimate a mass m_ν ∼ 7.01 keV and a mixing angle sin²(2θ) = (0.83–2.75) × 10⁻¹⁰. These derived values are in agreement with independent estimates from galaxy clusters, the Galactic center, and M31.

  8. Fermi LAT Search for Dark Matter in Gamma-Ray Lines and the Inclusive Photon Spectrum

    Science.gov (United States)

    Ackermann, M.; Ajello, M.; Albert, A.; Baldini, L.; Barbiellini, G.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.

    2012-01-01

    Dark matter particle annihilation or decay can produce monochromatic gamma-ray lines and contribute to the diffuse gamma-ray background. Flux upper limits are presented for gamma-ray spectral lines from 7 to 200 GeV and for the diffuse gamma-ray background from 4.8 GeV to 264 GeV obtained from two years of Fermi Large Area Telescope data integrated over most of the sky. We give cross section upper limits and decay lifetime lower limits for dark matter models that produce gamma-ray lines or contribute to the diffuse spectrum, including models proposed as explanations of the PAMELA and Fermi cosmic-ray data.

  9. Evidence for methane in orion KL: A search for the 4.6 Gigahertz line

    International Nuclear Information System (INIS)

    Wilson, T.L.; Snyder, L.E.

    1985-01-01

    A sensitive search for the J = 11 E(2)-E(1) transition of interstellar methane (CH 4 ) has resulted in a peak upper limit which is much less than the value reported by Fox and Jennings in 1978. When combined with the negative results reported by Ellder et al. in 1980, these data rule out the detection of CH 4 in Orion KL previously claimed by Fox and Jennings

  10. Evidence for methane in Orion KL - a search for the 4.6 gigahertz line

    International Nuclear Information System (INIS)

    Wilson, T.L.; Snyder, L.E.

    1985-01-01

    A sensitive search for J = 11 E(2)-E(1) transition of interstellar methane (CH4) has resulted in a peak upper limit which is much less than the value reported by Fox and Jennings (1978). When combined with the negative results reported by Ellder et al. (1980), these data rule out the detection of CH4 in Orion KL previously claimed by Fox and Jennings. 7 references

  11. A study of certain Monte Carlo search and optimisation methods

    International Nuclear Information System (INIS)

    Budd, C.

    1984-11-01

    Studies are described which might lead to the development of a search and optimisation facility for the Monte Carlo criticality code MONK. The facility envisaged could be used to maximise a function of k-effective with respect to certain parameters of the system or, alternatively, to find the system (in a given range of systems) for which that function takes a given value. (UK)

  12. The method of lines solution of discrete ordinates method for non-grey media

    International Nuclear Information System (INIS)

    Cayan, Fatma Nihan; Selcuk, Nevin

    2007-01-01

    A radiation code based on method of lines (MOL) solution of discrete ordinates method (DOM) for radiative heat transfer in non-grey absorbing-emitting media was developed by incorporation of a gas spectral radiative property model, namely wide band correlated-k (WBCK) model, which is compatible with MOL solution of DOM. Predictive accuracy of the code was evaluated by applying it to 1-D parallel plate and 2-D axisymmetric cylindrical enclosure problems containing absorbing-emitting medium and benchmarking its predictions against line-by-line solutions available in the literature. Comparisons reveal that MOL solution of DOM with WBCK model produces accurate results for radiative heat fluxes and source terms and can be used with confidence in conjunction with computational fluid dynamics codes based on the same approach

  13. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Full Text Available Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences, and the urban road network is decomposed effectively into multiple layers. Utilizing generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. Therefore, this method represents a substantial advance in route guidance for urban road networks.
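
    As a minimal illustration of route search with a generalized travel time as the edge impedance, the sketch below runs a plain Dijkstra search over a toy network; the multilayer decomposition, the parallel Pallottino algorithm and the traveler-preference model of the paper are not reproduced, and the cost weights are arbitrary assumptions.

```python
# Minimal illustration only: shortest-path search with a "generalized travel
# time" edge impedance using Dijkstra (networkx). The multilayer decomposition
# and preference model of the paper are not reproduced here.
import networkx as nx

def generalized_time(travel_time_min, toll_cost, value_of_time_per_min=0.5):
    # Impedance combining time and monetary cost (weights are assumptions).
    return travel_time_min + toll_cost / value_of_time_per_min

G = nx.DiGraph()
edges = [("A", "B", 10, 0.0), ("B", "D", 12, 0.0),
         ("A", "C", 6, 2.0), ("C", "D", 7, 2.0)]
for u, v, t, toll in edges:
    G.add_edge(u, v, weight=generalized_time(t, toll))

print(nx.dijkstra_path(G, "A", "D", weight="weight"))
```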

  14. Distinguishing transient signals and instrumental disturbances in semi-coherent searches for continuous gravitational waves with line-robust statistics

    International Nuclear Information System (INIS)

    Keitel, David

    2016-01-01

    Non-axisymmetries in rotating neutron stars emit quasi-monochromatic gravitational waves. These long-duration ‘continuous wave’ signals are among the main search targets of ground-based interferometric detectors. However, standard detection methods are susceptible to false alarms from instrumental artefacts that resemble a continuous-wave signal. Past work [Keitel, Prix, Papa, Leaci and Siddiqi 2014, Phys. Rev. D 89 064023] showed that a Bayesian approach, based on an explicit model of persistent single-detector disturbances, improves robustness against such artefacts. Since many strong outliers in semi-coherent searches of LIGO data are caused by transient disturbances that last only a few hours or days, I describe in a recent paper [Keitel D 2015, LIGO-P1500159] how to extend this approach to cover transient disturbances, and demonstrate increased sensitivity in realistic simulated data. Additionally, neutron stars could emit transient signals which, for a limited time, also follow the continuous-wave signal model. As a pragmatic alternative to specialized transient searches, I demonstrate how to make standard semi-coherent continuous-wave searches more sensitive to transient signals. Focusing on the time-scale of a single segment in the semi-coherent search, Bayesian model selection yields a simple detection statistic without a significant increase in computational cost. This proceedings contribution gives a brief overview of both works. (paper)

  15. Six Sigma methods applied to cryogenic coolers assembly line

    Science.gov (United States)

    Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René

    2009-05-01

    Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler: RM2. The name of the project is NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline following five stages: Define, Measure, Analyse, Improve, Control. The objective has been set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team has been gathered involving people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) has been applied to the test bench, and results after the R&R gage study show that measurement is one of the root causes for variability in the RM2 process. Two more root causes have been identified by the team after process mapping analysis: regenerator filling factor and cleaning procedure. Causes for measurement variability have been identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been established after a new calibration process for the test bench, a new filling procedure for the regenerator and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and kept for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvement in process capability has enabled the introduction of a sample testing procedure before delivery.

  16. Axial and Centrifugal Compressor Mean Line Flow Analysis Method

    Science.gov (United States)

    Veres, Joseph P.

    2009-01-01

    This paper describes a method to estimate key aerodynamic parameters of single and multistage axial and centrifugal compressors. This mean-line compressor code COMDES provides the capability of sizing single and multistage compressors quickly during the conceptual design process. Based on the compressible fluid flow equations and the Euler equation, the code can estimate rotor inlet and exit blade angles when run in the design mode. The design point rotor efficiency and stator losses are inputs to the code, and are modeled at off design. When run in the off-design analysis mode, it can be used to generate performance maps based on simple models for losses due to rotor incidence and inlet guide vane reset angle. The code can provide an improved understanding of basic aerodynamic parameters such as diffusion factor, loading levels and incidence, when matching multistage compressor blade rows at design and at part-speed operation. Rotor loading levels and relative velocity ratio are correlated to the onset of compressor surge. NASA Stage 37 and the three-stage NASA 74-A axial compressors were analyzed and the results compared to test data. The code has been used to generate the performance map for the NASA 76-B three-stage axial compressor featuring variable geometry. The compressor stages were aerodynamically matched at off-design speeds by adjusting the variable inlet guide vane and variable stator geometry angles to control the rotor diffusion factor and incidence angles.

  17. A Method for Efficient Searching at Online Shopping

    Science.gov (United States)

    Sanjo, Tomomi; Nagata, Moiro

    In recent years, online shopping has become widespread. However, users cannot efficiently find their items at on-line markets. This paper proposes an engine for finding items easily at an online market. The engine has the following facilities. First, it presents information in a fixed format. Second, the user can find items by selected keywords. Third, it presents only necessary information by using his/her history. Finally, it has a customization function for each user. Moreover, the system asks the users to download a page of recommended items. We show the effectiveness of our proposal with some experiments.

  18. In vitro detection of circulating tumor cells compared by the CytoTrack and CellSearch methods

    DEFF Research Database (Denmark)

    Hillig, T.; Horn, P.; Nygaard, Ann-Britt

    2015-01-01

    Comparison of two methods to detect circulating tumor cells (CTC), CytoTrack and CellSearch, through recovery of MCF-7 breast cancer cells spiked into blood collected from healthy donors. Spiking of a fixed number of EpCAM and pan-cytokeratin positive MCF-7 cells into 7.5 mL donor blood was performed by FACSAria flow sorting. The samples were shipped to either CytoTrack or CellSearch research facilities within 48 h, where evaluation of MCF-7 recovery was performed. CytoTrack and CellSearch analyses were performed simultaneously. Recoveries of MCF-7 single cells, cells in clusters, and clusters … (p = 0.23/p = 0.09). Overall, the recovery of CytoTrack and CellSearch was 68.8 ± 3.9%/71.1 ± 2.9%, respectively (p = 0.58). In spite of different methodologies, CytoTrack and CellSearch found a similar number of CTCs when spiking was performed with the EpCAM and pan cytokeratin-positive cell line MCF-7.

  19. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges.

    Science.gov (United States)

    Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James

    2017-09-01

    Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
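
    One of the techniques reflected on above, term frequency-inverse document frequency, can be sketched in a few lines: rank candidate search terms by their average TF-IDF weight across a set of known-relevant abstracts. The snippet below is such a sketch with placeholder abstracts; Termine and the clustering step discussed in the paper are not shown.

```python
# Small sketch of one technique mentioned above: ranking candidate search terms
# from a set of known-relevant abstracts by TF-IDF (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer

relevant_abstracts = [
    "Line search strategies for conjugate gradient optimization methods.",
    "Inexact line search conditions and global convergence of descent methods.",
    "Wolfe conditions and step size selection in unconstrained optimization.",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(relevant_abstracts)

# Average TF-IDF weight of each term across the relevant set, highest first.
scores = tfidf.mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda x: -x[1])[:10]:
    print(f"{term:25s} {score:.3f}")
```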

  20. Search for new and improved radiolabeling methods for monoclonal antibodies

    International Nuclear Information System (INIS)

    Hiltunen, J.V.

    1993-01-01

    In this review the selection of different radioisotopes is discussed as well as the various traditional or newer methods to introduce the radiolabel into the antibody structure. Labeling methods for radiohalogens, for technetium and rhenium isotopes, and for 3-valent cation radiometals are reviewed. Some of the newer methods offer simplified labeling procedures, but usually the new methods are more complicated than the earlier ones. However, new labeling methods are available for almost any radioelement group, and they may result in better preservation of the original nature of the antibody and lead to better clinical results. (orig./MG)

  1. Modeling of hydrogen Stark line shapes with kinetic theory methods

    Science.gov (United States)

    Rosato, J.; Capes, H.; Stamm, R.

    2012-12-01

    The unified formalism for Stark line shapes is revisited and extended to non-binary interactions between an emitter and the surrounding perturbers. The accuracy of this theory is examined through comparisons with ab initio numerical simulations.

  2. BATSE gamma-ray burst line search. 2: Bayesian consistency methodology

    Science.gov (United States)

    Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.

    1994-01-01

    We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.

  3. Search for Efficient Foreground Subtraction Method in 21cm Cosmology

    Science.gov (United States)

    Datta, Abhirup; Choudhury, Madhurima; Chakraborty, Arnab

    2017-06-01

    Observations of the HI 21 cm transition line promise to be an important probe into the cosmic Dark Ages and Epoch of Reionization. Detection of this redshifted 21 cm signal is one of the key science goals for several upcoming and future low-frequency radio telescopes such as the Hydrogen Epoch of Reionization Array (HERA), the Square Kilometer Array (SKA) and the Dark Ages Radio Explorer (DARE). One of the challenges for the detection of this signal is the accuracy of the foreground source removal. Several novel techniques have already been explored to remove bright foregrounds from both interferometric and total power experiments. Here, we present preliminary results from our investigation of the application of Artificial Neural Networks to detecting the faint 21cm global signal amidst the sea of bright Galactic foregrounds.

  4. Program for searching for semiempirical parameters by the MNDO method

    International Nuclear Information System (INIS)

    Bliznyuk, A.A.; Voityuk, A.A.

    1987-01-01

    The authors describe a program for optimizing atomic models constructed using the MNDO method, which allows variation not only of the parameters but also, through simple changes, of the calculation scheme. The target function is defined in terms of properties such as formation enthalpies, dipole moments, ionization potentials, and geometrical parameters. The software used to minimize the target function is based on the Nelder-Mead simplex method and on the Fletcher variable-metric method. The program is written in FORTRAN IV and implemented on the ES computer.
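
    The original program is FORTRAN IV, but the underlying idea, minimizing a weighted target function over reference properties with the Nelder-Mead simplex, can be sketched with modern tools. In the sketch below the property model, reference values and weights are placeholders, not an actual MNDO parameterisation.

```python
# Modern sketch of the same idea: minimise a weighted target function over
# reference properties with the Nelder-Mead simplex. The property model and
# reference data are placeholders, not an actual semiempirical calculation.
import numpy as np
from scipy.optimize import minimize

reference = {"formation_enthalpy": -17.9, "dipole_moment": 1.85, "ionization": 12.6}
weights = {"formation_enthalpy": 1.0, "dipole_moment": 20.0, "ionization": 5.0}

def predicted_properties(params):
    # Placeholder for a property calculation driven by the parameters.
    a, b, c = params
    return {"formation_enthalpy": -20.0 * a + b,
            "dipole_moment": 2.0 * a * c,
            "ionization": 10.0 + 3.0 * b - c}

def target(params):
    pred = predicted_properties(params)
    return sum(weights[k] * (pred[k] - reference[k]) ** 2 for k in reference)

result = minimize(target, x0=np.ones(3), method="Nelder-Mead")
print(result.x, result.fun)
```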

  5. Search for γ -Ray Line Signals from Dark Matter Annihilations in the Inner Galactic Halo from 10 Years of Observations with H.E.S.S.

    Science.gov (United States)

    Abdallah, H.; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Angüner, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernlöhr, K.; Blackwell, R.; Böttcher, M.; Boisson, C.; Bolmont, J.; Bonnefoy, S.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Büchele, M.; Bulik, T.; Capasso, M.; Caroff, S.; Carosi, A.; Carr, J.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Colafrancesco, S.; Condon, B.; Conrad, J.; Davids, I. D.; Decock, J.; Deil, C.; Devin, J.; deWilt, P.; Dirson, L.; Djannati-Ataï, A.; Domainko, W.; Donath, A.; Drury, L. O'C.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Emery, G.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Förster, A.; Funk, S.; Füßling, M.; Gabici, S.; Gallant, Y. A.; Garrigoux, T.; Gaté, F.; Giavitto, G.; Giebels, B.; Glawion, D.; Glicenstein, J. F.; Gottschall, D.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holch, T. L.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Janiak, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzyński, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khélifi, B.; King, J.; Klepser, S.; Klochkov, D.; Kluźniak, W.; Komin, Nu.; Kosack, K.; Krakau, S.; Kraus, M.; Krüger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lemière, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Liu, R.; Lohse, T.; Lorentz, M.; López-Coto, R.; Lypova, I.; Malyshev, D.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Meyer, M.; Mitchell, A. M. W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Morâ, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Ndiyavala, H.; Niederwanger, F.; Niemiec, J.; Oakes, L.; O'Brien, P.; Odaka, H.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poireau, V.; Poon, H.; Prokhorov, D.; Prokoph, H.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Rauth, R.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Rinchiuso, L.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Saito, S.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schandri, M.; Schlickeiser, R.; Schüssler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Shiningayamwe, K.; Simoni, R.; Sol, H.; Spanier, F.; Spir-Jacob, M.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Steppa, C.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsirou, M.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der Walt, J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Völk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Wörnlein, A.; Wouters, D.; Yang, R.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. A.; Zech, A.; Zefi, F.; Ziegler, A.; Zorn, J.; Żywucka, N.; H. E. S. S. Collaboration

    2018-05-01

    Spectral lines are among the most powerful signatures for dark matter (DM) annihilation searches in very-high-energy γ rays. The central region of the Milky Way halo is one of the most promising targets given its large amount of DM and proximity to Earth. We report on a search for a monoenergetic spectral line from self-annihilations of DM particles in the energy range from 300 GeV to 70 TeV using a two-dimensional maximum likelihood method taking advantage of both the spectral and spatial features of the signal versus background. The analysis makes use of Galactic center observations accumulated over ten years (2004-2014) with the H.E.S.S. array of ground-based Cherenkov telescopes. No significant γ-ray excess above the background is found. We derive upper limits on the annihilation cross section ⟨σv⟩ for monoenergetic DM lines at the level of 4 × 10⁻²⁸ cm³ s⁻¹ at 1 TeV, assuming an Einasto DM profile for the Milky Way halo. For a DM mass of 1 TeV, they improve over the previous ones by a factor of 6. The present constraints are the strongest obtained so far for DM particles in the mass range 300 GeV-70 TeV. Ground-based γ-ray observations have reached sufficient sensitivity to explore relevant velocity-averaged cross sections for DM annihilation into two γ-ray photons at the level expected from the thermal relic density for TeV DM particles.

  6. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai; Chen, Zonghai; Liu, Chengfei; Huang, He; Zhang, Xiangliang

    2016-01-01

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate

  7. Search and foraging behaviors from movement data: A comparison of methods.

    Science.gov (United States)

    Bennison, Ashley; Bearhop, Stuart; Bodey, Thomas W; Votier, Stephen C; Grecian, W James; Wakefield, Ewan D; Hamer, Keith C; Jessopp, Mark

    2018-01-01

    Search behavior is often used as a proxy for foraging effort within studies of animal movement, despite it being only one part of the foraging process, which also includes prey capture. While methods for validating prey capture exist, many studies rely solely on behavioral annotation of animal movement data to identify search and infer prey capture attempts. However, the degree to which search correlates with prey capture is largely untested. This study applied seven behavioral annotation methods to identify search behavior from GPS tracks of northern gannets (Morus bassanus), and compared outputs to the occurrence of dives recorded by simultaneously deployed time-depth recorders. We tested how behavioral annotation methods vary in their ability to identify search behavior leading to dive events. There was considerable variation in the number of dives occurring within search areas across methods. Hidden Markov models proved to be the most successful, with 81% of all dives occurring within areas identified as search. k-means clustering and first passage time had the highest rates of dives occurring outside identified search behavior. First passage time and hidden Markov models had the lowest rates of false positives, identifying fewer search areas with no dives. All behavioral annotation methods had advantages and drawbacks in terms of the complexity of analysis and ability to reflect prey capture events while minimizing the number of false positives and false negatives. We used these results, with consideration of analytical difficulty, to provide advice on the most appropriate methods for use where prey capture behavior is not available. This study highlights a need to critically assess and carefully choose a behavioral annotation method suitable for the research question being addressed, or resulting species management frameworks established.
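
    The best-performing approach in this comparison, a hidden Markov model over movement features, can be sketched as follows: fit a two-state Gaussian HMM to step lengths and turning angles and read the state with the smaller mean step length as "search". The feature construction and model choice below are simplified assumptions, not the paper's analysis.

```python
# Sketch of a two-state hidden Markov model over simple movement features
# (step length, turning angle) using hmmlearn. The state with the smaller
# mean step length is interpreted as "search".
import numpy as np
from hmmlearn.hmm import GaussianHMM

def annotate_track(xy):
    steps = np.diff(xy, axis=0)
    step_len = np.hypot(steps[:, 0], steps[:, 1])
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turn = np.abs(np.diff(headings))
    X = np.column_stack([step_len[1:], turn])       # one row per move

    model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200)
    model.fit(X)
    states = model.predict(X)
    search_state = np.argmin(model.means_[:, 0])    # smaller steps -> search
    return states == search_state

rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(size=(500, 2)), axis=0)  # toy GPS track
print(annotate_track(track).mean())                   # fraction labelled search
```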

  8. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis.

    Science.gov (United States)

    Huang, Yan Xin; Bao, Yong Li; Guo, Shu Yan; Wang, Yan; Zhou, Chun Guang; Li, Yu Xin

    2008-12-16

    The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope analysis based methods can predict not only linear but also conformational epitopes and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search to predict epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from a Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used. The predictive performance of Pep-3D-Search was demonstrated to be superior to that of other similar programs

  9. Short Term Gain, Long Term Pain: Informal Job Search Methods and Post-Displacement Outcomes

    OpenAIRE

    Green, Colin

    2012-01-01

    This paper examines the role of informal job search methods on the labour market outcomes of displaced workers. Informal job search methods could alleviate short-term labour market difficulties of displaced workers by providing information on job opportunities, allowing them to signal their productivity and may mitigate wage losses through better post-displacement job matching. However if displacement results from reductions in demand for specific sectors/skills, the use of informal job searc...

  10. Wide Binaries in TGAS: Search Method and First Results

    Science.gov (United States)

    Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.

    2018-04-01

    Half of all stars reside in binary systems, many of which have orbital separations in excess of 1000 AU. Such binaries are typically identified in astrometric catalogs by matching the proper motion vectors of close stellar pairs. We present a fully Bayesian method that properly takes into account positions, proper motions, parallaxes, and their correlated uncertainties to identify widely separated stellar binaries. After applying our method to the >2 × 10⁶ stars in the Tycho-Gaia astrometric solution from Gaia DR1, we identify over 6000 candidate wide binaries. For those pairs with separations less than 40,000 AU, we determine the contamination rate to be ~5%. This sample has an orbital separation (a) distribution that is roughly flat in log space for separations less than ~5000 AU and follows a power law of a^(−1.6) at larger separations.
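
    A much-simplified, frequentist stand-in for the paper's fully Bayesian matching is sketched below: flag a pair as a candidate when its proper-motion difference is consistent with zero given the quoted uncertainties. Parallaxes, correlated errors and orbital motion, all of which the paper's method accounts for, are ignored here, and the threshold is an arbitrary assumption.

```python
# Simplified stand-in for Bayesian pair matching: flag pairs whose proper-motion
# difference is consistent with zero given quoted uncertainties. Parallaxes,
# correlated errors and orbital motion are ignored in this sketch.
import numpy as np

def is_candidate_pair(pm1, pm1_err, pm2, pm2_err, max_sigma=2.0):
    """pm* = (pm_ra, pm_dec) in mas/yr; pm*_err their 1-sigma uncertainties."""
    diff = np.asarray(pm1) - np.asarray(pm2)
    err = np.hypot(np.asarray(pm1_err), np.asarray(pm2_err))
    chi2 = np.sum((diff / err) ** 2)                 # 2 degrees of freedom
    return chi2 < max_sigma ** 2 * 2

print(is_candidate_pair((10.2, -4.1), (0.3, 0.3), (10.5, -3.8), (0.4, 0.4)))
```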

  11. LINES

    Directory of Open Access Journals (Sweden)

    Minas Bakalchev

    2015-10-01

    Full Text Available The perception of elements in a system often creates their interdependence, interconditionality, and suppression. The lines from a basic geometrical element have become the model of a reductive world based on isolation according to certain criteria such as function, structure, and social organization. Their traces are experienced in the contemporary world as fragments or ruins of a system of domination of an assumed hierarchical unity. How can one release oneself from such dependence or determinism? How can the lines become less “systematic” and forms more autonomous, and less reductive? How is a form released from modernistic determinism on the new controversial ground? How can these elements or forms of representation become forms of action in the present complex world? In this paper, the meaning of lines through the ideas of Le Corbusier, Leonidov, Picasso, and Hitchcock is presented. Spatial research was made through a series of examples arising from the projects of the architectural studio “Residential Transformations”, which was a backbone for mapping the possibilities ranging from playfulness to exactness, as tactics of transformation in the different contexts of the contemporary world.

  12. Searching for Suicide Methods: Accessibility of Information About Helium as a Method of Suicide on the Internet.

    Science.gov (United States)

    Gunnell, David; Derges, Jane; Chang, Shu-Sen; Biddle, Lucy

    2015-01-01

    Helium gas suicides have increased in England and Wales; easy-to-access descriptions of this method on the Internet may have contributed to this rise. To investigate the availability of information on using helium as a method of suicide and trends in searching about this method on the Internet. We analyzed trends in (a) Google searching (2004-2014) and (b) hits on a Wikipedia article describing helium as a method of suicide (2013-2014). We also investigated the extent to which helium was described as a method of suicide on web pages and discussion forums identified via Google. We found no evidence of rises in Internet searching about suicide using helium. News stories about helium suicides were associated with increased search activity. The Wikipedia article may have been temporarily altered to increase awareness of suicide using helium around the time of a celebrity suicide. Approximately one third of the links retrieved using Google searches for suicide methods mentioned helium. Information about helium as a suicide method is readily available on the Internet; the Wikipedia article describing its use was highly accessed following celebrity suicides. Availability of online information about this method may contribute to rises in helium suicides.

  13. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
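
    The general idea, deriving compact signatures from mask projections and keeping only signature matches as candidate pairs, can be illustrated with sign signatures under rows of a Hadamard matrix, as below. This is an illustration of the idea only, not the claimed method; the mask count, dimensions and matching rule are assumptions.

```python
# Illustration only: build sign signatures of vectors under masks taken from a
# Hadamard matrix and treat exact signature matches as candidate pairs for a
# full comparison.
import numpy as np
from scipy.linalg import hadamard

def signature(vec, masks):
    return tuple((masks @ vec) > 0)          # one sign bit per mask

dim, n_masks = 16, 8
masks = hadamard(dim)[:n_masks]              # Hadamard rows as +/-1 masks

rng = np.random.default_rng(2)
data = rng.normal(size=(1000, dim))
query = data[42] + rng.normal(scale=0.01, size=dim)   # near-duplicate of row 42

q_sig = signature(query, masks)
candidates = [i for i, row in enumerate(data) if signature(row, masks) == q_sig]
print(42 in candidates, len(candidates))
```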

  14. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai

    2016-09-09

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate suggestions individually and return the top-k best of them. However, the top-k suggestions have high redundancy with respect to the topics. To provide informative suggestions, the returned k suggestions are expected to be diverse, i.e., maximizing the relevance to the user query and the diversity with respect to topics that the user might be interested in simultaneously. In this paper, an objective function considering both factors is defined for evaluating a suggestion set. We show that maximizing the objective function is a submodular function maximization problem subject to n matroid constraints, which is an NP-hard problem. A greedy approximate algorithm with an approximation ratio O((Formula presented.)) is also proposed. Experimental results show that our suggestion method outperforms other methods on providing relevant and diverse suggestions. © 2016 Springer Science+Business Media New York
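
    The relevance-plus-diversity trade-off described above lends itself to a small greedy sketch. The following is a minimal illustration only, assuming made-up relevance scores, topic labels, and a trade-off weight lam; it is not the objective function or the approximation algorithm defined in the paper.

```python
# Minimal sketch of greedy selection of k suggestions that balances
# relevance to the query with topic diversity. The relevance scores,
# topic labels, and trade-off weight `lam` are illustrative assumptions,
# not the objective defined in the paper.

def greedy_diverse_suggestions(candidates, k, lam=0.5):
    """candidates: list of (suggestion, relevance, topic) tuples."""
    selected, covered_topics = [], set()

    def gain(cand):
        _, relevance, topic = cand
        # Marginal gain: relevance plus a bonus for covering a new topic.
        diversity_bonus = 1.0 if topic not in covered_topics else 0.0
        return (1 - lam) * relevance + lam * diversity_bonus

    pool = list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=gain)
        selected.append(best[0])
        covered_topics.add(best[2])
        pool.remove(best)
    return selected

if __name__ == "__main__":
    cands = [("database keyword search", 0.9, "search"),
             ("keyword query suggestion", 0.8, "search"),
             ("topic diversification", 0.6, "diversity"),
             ("query expansion", 0.7, "expansion")]
    print(greedy_diverse_suggestions(cands, k=3))
```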

  15. Optimal generation and reserve dispatch in a multi-area competitive market using a hybrid direct search method

    International Nuclear Information System (INIS)

    Chen, C.-L.

    2005-01-01

    With restructuring of the power industry, competitive bidding for energy and ancillary services are increasingly recognized as an important part of electricity markets. It is desirable to optimize not only the generator's bid prices for energy and for providing minimized ancillary services but also the transmission congestion costs. In this paper, a hybrid approach of combining sequential dispatch with a direct search method is developed to deal with the multi-product and multi-area electricity market dispatch problem. The hybrid direct search method (HDSM) incorporates sequential dispatch into the direct search method to facilitate economic sharing of generation and reserve across areas and to minimize the total market cost in a multi-area competitive electricity market. The effects of tie line congestion and area spinning reserve requirement are also consistently reflected in the marginal price in each area. Numerical experiments are included to understand the various constraints in the market cost analysis and to provide valuable information for market participants in a pool oriented electricity market

  16. Optimal generation and reserve dispatch in a multi-area competitive market using a hybrid direct search method

    International Nuclear Information System (INIS)

    Chun Lung Chen

    2005-01-01

    With restructuring of the power industry, competitive bidding for energy and ancillary services are increasingly recognized as an important part of electricity markets. It is desirable to optimize not only the generator's bid prices for energy and for providing minimized ancillary services but also the transmission congestion costs. In this paper, a hybrid approach of combining sequential dispatch with a direct search method is developed to deal with the multi-product and multi-area electricity market dispatch problem. The hybrid direct search method (HDSM) incorporates sequential dispatch into the direct search method to facilitate economic sharing of generation and reserve across areas and to minimize the total market cost in a multi-area competitive electricity market. The effects of tie line congestion and area spinning reserve requirement are also consistently reflected in the marginal price in each area. Numerical experiments are included to understand the various constraints in the market cost analysis and to provide valuable information for market participants in a pool oriented electricity market. (author)

  17. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    International Nuclear Information System (INIS)

    Rocha, Humberto; Dias, Joana M; Ferreira, Brígida C; Lopes, Maria C

    2013-01-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem. (paper)
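
    As a rough illustration of the search-step/poll-step structure described above, the sketch below implements a plain pattern search with an opportunistic poll on a mesh. The objective, starting point, and mesh parameters are assumptions, and the dosimetric ordering of poll directions used in the paper is omitted.

```python
import numpy as np

# Minimal sketch of a pattern search with a poll step on a mesh.
# The objective, starting point, and mesh parameters are illustrative;
# the paper additionally orders poll directions by dosimetric scores.

def pattern_search(f, x0, mesh=1.0, mesh_min=1e-3, max_iter=200):
    x, fx = np.asarray(x0, dtype=float), f(x0)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])  # positive spanning set
    for _ in range(max_iter):
        improved = False
        for d in directions:                     # poll step
            trial = x + mesh * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break                            # opportunistic poll
        if not improved:
            mesh *= 0.5                          # refine the mesh
            if mesh < mesh_min:
                break
        else:
            mesh *= 2.0                          # expand after success
    return x, fx

if __name__ == "__main__":
    rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
    print(pattern_search(rosenbrock, [-1.0, 1.5]))
```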

  18. Buffer lining manufacturing method for radioactive waste container

    International Nuclear Information System (INIS)

    Kawakami, Susumu; Sugino, Hiroyuki

    1998-01-01

    A recessed portion is formed on the upper surface of a filler layer of buffer powder filled into a container main body, and the upper portion of the container main body is closed by a shrinkable liquid-tight film. It is placed in a pressurizing container and pressed to mold a buffer lining base material integrated with the container main body. A flat upper surface and a containing space are then formed by shaving to complete the buffer lining. A disposal vessel containing radioactive wastes is inserted into the containing space, and the containing space is closed by a buffer block. The upper surface is sealed by a lid. With such a constitution, since a buffer lining integrated with the container main body can be formed easily inside it, the disposal vessel can be contained easily and stably, surrounded by the buffer, without laying or piling up a large quantity of buffer blocks. (T.M.)

  19. Projection methods for line radiative transfer in spherical media.

    Science.gov (United States)

    Anusha, L. S.; Nagendra, K. N.

    An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space R^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate compared to contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over Relaxation (SOR).
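
    A hedged usage sketch of a preconditioned BiCGSTAB solve on a generic sparse system, using SciPy's Krylov solver with a simple Jacobi (diagonal) preconditioner; the tridiagonal test matrix is a stand-in, not the discretized radiative transfer operator.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, LinearOperator

# Minimal sketch: solve A x = b with BiCGSTAB and a Jacobi (diagonal)
# preconditioner. The tridiagonal test matrix stands in for the discretized
# transfer operator, which is not reproduced here.
n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)  # preconditioner

x, info = bicgstab(A, b, M=M, maxiter=500)
print("converged" if info == 0 else f"info = {info}",
      "| residual =", np.linalg.norm(A @ x - b))
```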

  20. Robust Control Methods for On-Line Statistical Learning

    Directory of Open Access Journals (Sweden)

    Capobianco Enrico

    2001-01-01

    Full Text Available The issue of ensuring that the results of data processing in an experiment are not affected by the presence of outliers is relevant for statistical control and learning studies. Learning schemes should thus be tested for their capacity to handle outliers in the observed training set so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state space representations and apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.
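
    One common way to obtain the robustness the abstract refers to is to replace the squared-error criterion with a bounded-influence loss such as the Huber loss. The sketch below is a generic stochastic-approximation (on-line) update under that assumption, not the paper's Kalman-filter formulation; the data, step size, and threshold delta are made up.

```python
import numpy as np

# Minimal sketch of on-line (stochastic approximation) learning of a linear
# model with the Huber loss, whose bounded influence limits the effect of
# outliers. The data, step size, and threshold `delta` are assumptions.

def huber_grad(residual, delta=1.0):
    # Derivative of the Huber loss with respect to the residual.
    return residual if abs(residual) <= delta else delta * np.sign(residual)

def online_huber_regression(stream, dim, lr=0.01, delta=1.0):
    w = np.zeros(dim)
    for x, y in stream:
        r = y - w @ x
        w += lr * huber_grad(r, delta) * x   # bounded-influence update
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    X = rng.normal(size=(2000, 2))
    y = X @ w_true + 0.1 * rng.normal(size=2000)
    y[::50] += 50.0                           # inject gross outliers
    print(online_huber_regression(zip(X, y), dim=2))
```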

  1. Adaptive calibration method with on-line growing complexity

    Directory of Open Access Journals (Sweden)

    Šika Z.

    2011-12-01

    Full Text Available This paper describes a modified variant of a kinematical calibration algorithm. In the beginning, a brief review of the calibration algorithm and its simple modification is given. As the described calibration modification uses some ideas from the Lolimot algorithm, that algorithm is also described and explained. The main topic of this paper is the description of a synthesis of the Lolimot-based calibration that leads to an adaptive algorithm with an on-line growing complexity. The paper contains a comparison of results on simple examples and a discussion. A note about future research topics is also included.

  2. A review of the scientific rationale and methods used in the search for other planetary systems

    Science.gov (United States)

    Black, D. C.

    1985-01-01

    Planetary systems appear to be one of the crucial links in the chain leading from simple molecules to living systems, particularly complex (intelligent?) living systems. Although there is currently no observational proof of the existence of any planetary system other than our own, techniques are now being developed which will permit a comprehensive search for other planetary systems. The scientific rationale for and methods used in such a search effort are reviewed here.

  3. Emission-Line Galaxies from the PEARS Hubble Ultra Deep Field: A 2-D Detection Method and First Results

    Science.gov (United States)

    Gardner, J. P.; Straughn, Amber N.; Meurer, Gerhardt R.; Pirzkal, Norbert; Cohen, Seth H.; Malhotra, Sangeeta; Rhoads, James; Windhorst, Rogier A.; Gardner, Jonathan P.; Hathi, Nimish P.; et al.

    2007-01-01

    The Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) grism PEARS (Probing Evolution And Reionization Spectroscopically) survey provides a large dataset of low-resolution spectra from thousands of galaxies in the GOODS North and South fields. One important subset of objects in these data is emission-line galaxies (ELGs), and we have investigated several different methods aimed at systematically selecting these galaxies. Here we present a new methodology and results of a search for these ELGs in the PEARS observations of the Hubble Ultra Deep Field (HUDF) using a 2D detection method that utilizes the observation that many emission lines originate from clumpy knots within galaxies. This 2D line-finding method proves to be useful in detecting emission lines from compact knots within galaxies that might not otherwise be detected using more traditional 1D line-finding techniques. We find in total 96 emission lines in the HUDF, originating from 81 distinct "knots" within 63 individual galaxies. We find in general that [O III] emitters are the most common, comprising 44% of the sample, and on average have high equivalent widths (70% of [O III] emitters having rest-frame EW > 100 Å). There are 12 galaxies with multiple emitting knots; several show evidence of variations in H-alpha flux in the knots, suggesting that the differing star formation properties across a single galaxy can in general be probed at redshifts greater than approximately 0.2-0.4. The most prevalent morphologies are large face-on spirals and clumpy interacting systems, many being unique detections owing to the 2D method described here, thus highlighting the strength of this technique.

  4. Low-Mode Conformational Search Method with Semiempirical Quantum Mechanical Calculations: Application to Enantioselective Organocatalysis.

    Science.gov (United States)

    Kamachi, Takashi; Yoshizawa, Kazunari

    2016-02-22

    A conformational search program for finding low-energy conformations of large noncovalent complexes has been developed. A quantitatively reliable semiempirical quantum mechanical PM6-DH+ method, which is able to accurately describe noncovalent interactions at a low computational cost, was employed, in contrast to conventional conformational search programs in which molecular mechanical methods are usually adopted. Our approach is based on the low-mode method, whereby an initial structure is perturbed along one of its low-mode eigenvectors to generate new conformations. This method was applied to determine the most stable conformation of the transition state for enantioselective alkylation by the Maruoka and cinchona alkaloid catalysts and for Hantzsch ester hydrogenation of imines by chiral phosphoric acid. Besides successfully reproducing the previously reported most stable DFT conformations, the conformational search with the semiempirical quantum mechanical calculations newly discovered a more stable conformation at a low computational cost.

  5. A cross-correlation method to search for gravitational wave bursts with AURIGA and Virgo

    NARCIS (Netherlands)

    Bignotto, M.; Bonaldi, M.; Camarda, M.; Cerdonio, M.; Conti, L.; Drago, M.; Falferi, P.; Liguori, N.; Longo, S.; Mezzena, R.; Mion, A.; Ortolan, A.; Prodi, G. A.; Re, V.; Salemi, F.; Taffarello, L.; Vedovato, G.; Vinante, A.; Vitale, S.; Zendri, J. -P.; Acernese, F.; Alshourbagy, Mohamed; Amico, Paolo; Antonucci, Federica; Aoudia, S.; Astone, P.; Avino, Saverio; Baggio, L.; Ballardin, G.; Barone, F.; Barsotti, L.; Barsuglia, M.; Bauer, Th. S.; Bigotta, Stefano; Birindelli, Simona; Boccara, Albert-Claude; Bondu, F.; Bosi, Leone; Braccini, Stefano; Bradaschia, C.; Brillet, A.; Brisson, V.; Buskulic, D.; Cagnoli, G.; Calloni, E.; Campagna, Enrico; Carbognani, F.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cesarini, E.; Chassande-Mottin, E.; Clapson, A-C; Cleva, F.; Coccia, E.; Corda, C.; Corsi, A.; Cottone, F.; Coulon, J. -P.; Cuoco, E.; D'Antonio, S.; Dari, A.; Dattilo, V.; Davier, M.; Rosa, R.; Del Prete, M.; Di Fiore, L.; Di Lieto, A.; Emilio, M. Di Paolo; Di Virgilio, A.; Evans, M.; Fafone, V.; Ferrante, I.; Fidecaro, F.; Fiori, I.; Flaminio, R.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Gammaitoni, L.; Garufi, F.; Genin, E.; Gennai, A.; Giazotto, A.; Giordano, L.; Granata, V.; Greverie, C.; Grosjean, D.; Guidi, G.; Hamdani, S.U.; Hebri, S.; Heitmann, H.; Hello, P.; Huet, D.; Kreckelbergh, S.; La Penna, P.; Laval, M.; Leroy, N.; Letendre, N.; Lopez, B.; Lorenzini, M.; Loriette, V.; Losurdo, G.; Mackowski, J. -M.; Majorana, E.; Man, C. N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marque, J.; Martelli, F.; Masserot, A.; Menzinger, F.; Milano, L.; Minenkov, Y.; Moins, C.; Moreau, J.; Morgado, N.; Mosca, S.; Mours, B.; Neri, I.; Nocera, F.; Pagliaroli, G.; Palomba, C.; Paoletti, F.; Pardi, S.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Piergiovanni, F.; Pinard, L.; Poggiani, R.; Punturo, M.; Puppo, P.; Rapagnani, P.; Regimbau, T.; Remillieux, A.; Ricci, F.; Ricciardi, I.; Rocchi, A.; Rolland, L.; Romano, R.; Ruggi, P.; Russo, G.; Solimeno, S.; Spallicci, A.; Swinkels, B. L.; Tarallo, M.; Terenzi, R.; Toncelli, A.; Tonelli, M.; Tournefier, E.; Travasso, F.; Vajente, G.; van den Brand, J. F. J.; van der Putten, S.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinet, J. -Y.; Vocca, H.; Yvert, M.

    2008-01-01

    We present a method to search for transient gravitational waves using a network of detectors with different spectral and directional sensitivities: the interferometer Virgo and the bar detector AURIGA. The data analysis method is based on the measurements of the correlated energy in the network by
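
    The abstract is truncated in the source record, but the underlying idea of a correlated-energy statistic between two detectors can be sketched generically. The time series, window length, lag range, and injected test burst below are assumptions, not the AURIGA/Virgo analysis pipeline.

```python
import numpy as np

# Minimal sketch: cross-correlate two detector time series over sliding
# windows and report a correlated-energy statistic per window. Sampling,
# window length, lag range, and the injected burst are illustrative only.

def correlated_energy(x, y, window, step, max_lag):
    stats = []
    for start in range(0, len(x) - window, step):
        xs = x[start:start + window] - np.mean(x[start:start + window])
        ys = y[start:start + window] - np.mean(y[start:start + window])
        norm = np.sqrt(np.sum(xs**2) * np.sum(ys**2)) + 1e-12
        # Correlation over a small range of relative time lags (circular shift).
        cc = [np.dot(np.roll(xs, lag), ys) / norm
              for lag in range(-max_lag, max_lag + 1)]
        stats.append(max(np.abs(cc)))
    return np.array(stats)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 20000
    x, y = rng.normal(size=n), rng.normal(size=n)
    burst = np.hanning(200)                 # common transient in both streams
    x[10000:10200] += 5 * burst
    y[10000:10200] += 5 * burst
    stats = correlated_energy(x, y, window=1000, step=500, max_lag=20)
    print("max statistic:", stats.max(), "at window", stats.argmax())
```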

  6. Dual-mode nested search method for categorical uncertain multi-objective optimization

    Science.gov (United States)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  7. Hybrid Multistarting GA-Tabu Search Method for the Placement of BtB Converters for Korean Metropolitan Ring Grid

    Directory of Open Access Journals (Sweden)

    Remund J. Labios

    2016-01-01

    Full Text Available This paper presents a method to determine the optimal locations for installing back-to-back (BtB) converters in a power grid as a countermeasure to reduce fault current levels. The installation of BtB converters can be regarded as network reconfiguration. For this purpose, a hybrid multistarting GA-tabu search method was used to determine the best locations from a preselected list of candidate locations. The constraints used in determining the best locations include circuit breaker fault current limits, proximity of proposed locations, and the capability of the solution to reach power flow convergence. A simple power injection model after applying line-opening on selected branches was used to compute power flows with BtB converters. Kron reduction was also applied as a method of network reduction for fast evaluation of fault currents with a given topology. Simulations of the search method were performed on the Korean power system, particularly the Seoul metropolitan area.

  8. An ME-PC Enhanced HDMR Method for Efficient Statistical Analysis of Multiconductor Transmission Line Networks

    KAUST Repository

    Yucel, Abdulkadir C.; Bagci, Hakan; Michielssen, Eric

    2015-01-01

    An efficient method for statistically characterizing multiconductor transmission line (MTL) networks subject to a large number of manufacturing uncertainties is presented. The proposed method achieves its efficiency by leveraging a high

  9. New Internet search volume-based weighting method for integrating various environmental impacts

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-01-15

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
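
    A minimal sketch of the core idea, assuming hypothetical categories and relative search volumes: normalize the volumes of terms mapped to each impact category into weighting factors that sum to one. The numbers are placeholders, not data from the paper.

```python
# Minimal sketch: turn relative search volumes per impact category into
# normalized weighting factors. The categories and volumes are placeholders,
# not the values used in the paper.

def weighting_factors(search_volume_by_category):
    total = sum(search_volume_by_category.values())
    return {cat: vol / total for cat, vol in search_volume_by_category.items()}

if __name__ == "__main__":
    volumes = {            # hypothetical relative search volumes
        "global warming": 62,
        "ozone depletion": 7,
        "acidification": 5,
        "eutrophication": 4,
        "photochemical smog": 3,
        "resource depletion": 19,
    }
    for cat, w in weighting_factors(volumes).items():
        print(f"{cat:20s} {w:.3f}")
```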

  10. New Internet search volume-based weighting method for integrating various environmental impacts

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.

  11. Surfing for suicide methods and help: content analysis of websites retrieved with search engines in Austria and the United States.

    Science.gov (United States)

    Till, Benedikt; Niederkrotenthaler, Thomas

    2014-08-01

    The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P search engines generally had more protective characteristics (P search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.

  12. A Sweep-Line Method for State Space Exploration

    DEFF Research Database (Denmark)

    Christensen, Søren; Kristensen, Lars Michael; Mailund, Thomas

    2001-01-01

    generation, since these states can never be reached again. This in turn reduces the memory used for state space storage during the task of verification. Examples of progress measures are sequence numbers in communication protocols and time in certain models with time. We illustrate the application...... of the method on a number of Coloured Petri Net models, and give a first evaluation of its practicality by means of an implementation based on the Design/CPN state space tool. Our experiments show significant reductions in both space and time used during state space exploration. The method is not specific...... to Coloured Petri Nets but applicable to a wide range of modelling languages....
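
    As a rough illustration of the sweep-line idea, the sketch below explores a toy state space in nondecreasing order of a monotone progress measure and deletes stored states whose progress lies strictly behind the sweep front, since with a monotone measure they can never be reached again. The toy transition system and progress measure are assumptions, not a Coloured Petri Net model.

```python
import heapq

# Minimal sketch of sweep-line state space exploration with a monotone
# progress measure: states are explored in nondecreasing progress order,
# and stored states with progress strictly behind the sweep front are
# deleted, because with a monotone measure they cannot be reached again.

def sweep_line_explore(initial, successors, progress):
    frontier = [(progress(initial), initial)]
    visited = {initial}
    peak_stored, explored = 1, 0

    while frontier:
        front = frontier[0][0]                    # current sweep position
        # Garbage-collect states strictly behind the sweep front.
        visited = {s for s in visited if progress(s) >= front}
        _, state = heapq.heappop(frontier)
        explored += 1
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (progress(nxt), nxt))
        peak_stored = max(peak_stored, len(visited))
    return explored, peak_stored

if __name__ == "__main__":
    # Toy model: (sequence_number, flag); the sequence number only increases,
    # so it serves as a monotone progress measure.
    def successors(s):
        seq, flag = s
        if seq >= 50:
            return []
        return [(seq + 1, flag), (seq + 1, 1 - flag)]

    print(sweep_line_explore((0, 0), successors, progress=lambda s: s[0]))
```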

  13. Ultrasound line-by-line scanning method of spatial-temporal active cavitation mapping for high-intensity focused ultrasound.

    Science.gov (United States)

    Ding, Ting; Zhang, Siyuan; Fu, Quanyou; Xu, Zhian; Wan, Mingxi

    2014-01-01

    This paper presents an ultrasound line-by-line scanning method of spatial-temporal active cavitation mapping applicable in a liquid or a liquid-filled tissue cavity exposed to high-intensity focused ultrasound (HIFU). Scattered signals from cavitation bubbles were obtained in a scan line immediately after one HIFU exposure, followed by a waiting time of 2 s, long enough to let the liquid return to its original state. As this pattern was repeated, an image was built up by sequentially measuring a series of such lines. The acquisition of the beamformed radiofrequency (RF) signals for a scan line was synchronized with the HIFU exposure. The duration of HIFU exposure, as well as the delay of the interrogating pulse relative to the moment when HIFU was turned off, could vary from microseconds to seconds. The feasibility of this method was demonstrated in tap water and a tap-water-filled cavity in a tissue-mimicking gelatin-agar phantom, where it was capable of observing the temporal evolution of the cavitation bubble cloud with a temporal resolution of several microseconds and lateral and axial resolutions of 0.50 mm and 0.29 mm, respectively. The dissolution process of the cavitation bubble cloud and the spatial distribution affected by previously generated cavitation were also investigated. Although the application is limited by the requirement for a gassy fluid (e.g., tap water) that allows replenishment of nuclei between HIFU exposures, the technique may be a useful tool for spatial-temporal cavitation mapping for HIFU with high precision and resolution, providing a reference for clinical therapy. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets and to improve the search efficiency of UAVs, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the prediction probability of moving target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative searching problem is thereby transformed into a central optimization problem. To improve computational efficiency, the distributed model predictive control method is presented, and thus the control command of each UAV can be obtained. The simulation results verify that the proposed method can better avoid blind searching by the UAVs and effectively improve the overall efficiency of the team.
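
    A minimal sketch of the Bayesian target probability map update after one sensor report, followed by a prediction step that diffuses probability to neighbouring cells to account for target motion. The grid size, detection and false-alarm probabilities, and motion kernel are assumptions rather than the paper's exact Gaussian transition model.

```python
import numpy as np

# Minimal sketch: Bayesian update of a target probability map after one
# sensor observation of a single cell, then a prediction step that spreads
# probability to neighbouring cells as a crude model of target motion.
# Grid size, p_d (detection), p_f (false alarm), and the kernel are assumptions.

def measurement_update(prob_map, cell, detected, p_d=0.9, p_f=0.1):
    # Likelihood of the observation under "target is in cell x" for every x.
    likelihood = np.full(prob_map.shape, p_f if detected else 1.0 - p_f)
    likelihood[cell] = p_d if detected else 1.0 - p_d
    posterior = likelihood * prob_map
    return posterior / posterior.sum()

def prediction_step(prob_map, stay=0.6, move=0.1):
    # Diffuse probability to the four neighbours to model unknown motion.
    padded = np.pad(prob_map, 1)
    pred = (stay * prob_map
            + move * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                      + padded[1:-1, :-2] + padded[1:-1, 2:]))
    return pred / pred.sum()

if __name__ == "__main__":
    grid = np.full((20, 20), 1.0 / 400)            # uniform prior
    grid = measurement_update(grid, cell=(5, 7), detected=True)
    grid = prediction_step(grid)
    print("most likely cell:", np.unravel_index(grid.argmax(), grid.shape))
```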

  15. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subjects, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use the HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate the snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster the snippets. Extensive experimental results show that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.

  16. Calculation of large ion densities under HVdc transmission lines by the finite difference method

    International Nuclear Information System (INIS)

    Suda, Tomotaka; Sunaga, Yoshitaka

    1995-01-01

    A calculation method for large ion densities (charged aerosols) under HVdc transmission lines was developed considering both the charging mechanism of aerosols by small ions and the drifting process by wind. Large ion densities calculated by this method agreed well with the ones measured under the Shiobara HVdc test line on the lateral profiles at ground level up to about 70m downwind from the line. Measured values decreased more quickly than calculated ones farther downwind from the line. Considering the effect of point discharge from ground cover (earth corona) improved the agreement in the farther downwind region

  17. A new greedy search method for the design of digital IIR filter

    Directory of Open Access Journals (Sweden)

    Ranjit Kaur

    2015-07-01

    Full Text Available A new greedy search method is applied in this paper to design optimal digital infinite impulse response (IIR) filters. The greedy search method is based on binary successive approximation (BSA) and evolutionary search (ES). The suggested greedy search method optimizes the magnitude response and the phase response simultaneously and also finds the lowest order of the filter. The order of the filter is controlled by a control gene whose value is optimized along with the filter coefficients to obtain the optimum order of the designed IIR filter. The stability constraints of the IIR filter are taken care of during the design procedure. To determine the trade-off relationship between conflicting objectives in the non-inferior domain, the weighting method is exploited. The proposed approach is effectively applied to solve the multiobjective optimization problems of designing digital low-pass (LP), high-pass (HP), bandpass (BP), and bandstop (BS) filters. It has been demonstrated that this technique not only fulfills all types of filter performance requirements, but also finds the lowest order of the filter. The computational experiments show that the proposed approach gives better digital IIR filters than the existing evolutionary algorithm (EA) based methods.

  18. An image overall complexity evaluation method based on LSD line detection

    Science.gov (United States)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    In the artificial world, both urban traffic roads and engineered buildings contain a large number of linear features. Research on the image complexity of linear information has therefore become an important direction in the digital image processing field. In this paper, straight-line information is detected in the image and used as a parameter index to establish a quantitative and accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect straight lines, and classify the detected lines using an expert consultation strategy. A neural network is then used to train the weights and obtain the weight coefficient of each index. The image complexity is calculated by the complexity calculation model. The experimental results show that the proposed method is effective. The number of straight lines in the image, their degree of dispersion, their uniformity, and so on all affect the complexity of the image.
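
    A rough sketch of such a pipeline: detect line segments and derive simple indices (count, mean length, length dispersion) that could feed a complexity score. It uses scikit-image's Canny edges plus a probabilistic Hough transform as a stand-in for the LSD detector, and the index weights are placeholders for the neural-network-trained coefficients.

```python
import numpy as np
from skimage.draw import line as draw_line
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

# Rough sketch: detect straight segments and compute simple indices
# (count, mean length, length dispersion) feeding a complexity score.
# Canny + probabilistic Hough stand in for the LSD detector, and the
# weights below are placeholders for neural-network-trained coefficients.

def line_complexity(image, weights=(0.5, 0.3, 0.2)):
    edges = canny(image, sigma=2.0)
    segments = probabilistic_hough_line(edges, threshold=10,
                                        line_length=20, line_gap=3)
    if not segments:
        return 0.0, []
    lengths = np.array([np.hypot(x1 - x0, y1 - y0)
                        for (x0, y0), (x1, y1) in segments])
    w1, w2, w3 = weights
    score = w1 * len(lengths) + w2 * lengths.mean() + w3 * lengths.std()
    return score, segments

if __name__ == "__main__":
    img = np.zeros((200, 200))
    for (r0, c0, r1, c1) in [(20, 20, 20, 180), (40, 10, 180, 150),
                             (100, 0, 100, 199), (10, 170, 190, 170)]:
        rr, cc = draw_line(r0, c0, r1, c1)
        img[rr, cc] = 1.0
    score, segs = line_complexity(img)
    print(f"{len(segs)} segments, complexity score = {score:.1f}")
```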

  19. Introducing PALETTE: an iterative method for conducting a literature search for a review in palliative care.

    Science.gov (United States)

    Zwakman, Marieke; Verberne, Lisa M; Kars, Marijke C; Hooft, Lotty; van Delden, Johannes J M; Spijker, René

    2018-06-02

    In the rapidly developing specialty of palliative care, literature reviews have become increasingly important to inform and improve the field. When applying widely used methods for literature reviews developed for intervention studies to palliative care, challenges are encountered, such as the heterogeneity of palliative care in practice (wide range of domains in patient characteristics, stages of illness and stakeholders), the explorative character of review questions, and the poorly defined keywords and concepts. To overcome these challenges and to provide guidance for researchers conducting a literature search for a review in palliative care, the Palliative cAre Literature rEview iTeraTive mEthod (PALETTE), a pragmatic framework, was developed. We assessed PALETTE with a detailed description. PALETTE consists of four phases: developing the review question, building the search strategy, validating the search strategy and performing the search. The framework incorporates different information retrieval techniques: contacting experts, pearl growing, citation tracking and Boolean searching in a transparent way to maximize the retrieval of literature relevant to the topic of interest. The different components and techniques are repeated until no new articles qualify for inclusion. The phases within PALETTE are interconnected by a recurrent process of validation on 'golden bullets' (articles that undoubtedly should be part of the review), citation tracking and concept terminology reflecting the review question. To give insight into the value of PALETTE, we compared PALETTE with the recommended search method for reviews of intervention studies. By using PALETTE on two palliative care literature reviews, we were able to improve our review questions and search strategies. Moreover, in comparison with the recommended search for intervention reviews, the number of articles that needed to be screened decreased, whereas more relevant articles were retrieved. Overall, PALETTE

  20. Systems, methods and apparatus for developing and maintaining evolving systems with software product lines

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Pena, Joaquin (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which an evolutionary system is managed and viewed as a software product line. In some embodiments, the core architecture is a relatively unchanging part of the system, and each version of the system is viewed as a product from the product line. Each software product is generated from the core architecture with some agent-based additions. The result may be a multi-agent system software product line.

  1. SMM observations of gamma-ray transients. 2: A search for gamma-ray lines between 400 and 600 keV from the Crab Nebula

    Science.gov (United States)

    Harris, Michael J.; Share, Gerald H.; Leising, Mark D.

    1994-01-01

    We have searched spectra obtained by the Solar Maximum Mission Gamma-Ray Spectrometer during 1981-1988 for evidence of transient gamma-ray lines from the Crab Nebula which have been reported by previous experiments at energies 400-460 keV and 539 keV. We find no evidence for significant emission in any of these lines on time scales between approximately 1 day and approximately 1 yr. Our 3 sigma upper limits on the transient flux during 1 d intervals are approximately 2.2 x 10^-3 photons/sq cm/s for narrow lines at any energy, and approximately 2.9 x 10^-3 photons/sq cm/s for the 539 keV line if it is as broad as 42 keV Full Width at Half Maximum (FWHM). We also searched our data during the approximately 5 hr period on 1981 June 6 during which Owens, Myers, & Thompson (1985) reported a strong line at 405 keV. We detected no line down to a 3 sigma upper limit of 3.3 x 10^-3 photons/sq cm/s, in disagreement with the flux of 7.2 +/- 2.1 x 10^-3 photons/sq cm/s measured by Owens et al.

  2. An Advanced Actuator Line Method for Wind Energy Applications and Beyond: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Churchfield, Matthew; Schreck, Scott; Martinez-Tossas, Luis A.; Meneveau, Charles; Spalart, Philippe R.

    2017-03-24

    The actuator line method to represent rotor aerodynamics within computational fluid dynamics has been in use for over a decade. This method applies a body force to the flow field along rotating lines corresponding to the individual rotor blades and employs tabular airfoil data to compute the force distribution. The actuator line method is attractive because compared to blade-resolved simulations, the required mesh is much simpler and the computational cost is lower. This work proposes a higher fidelity variant of the actuator line method meant to fill the space between current actuator line and blade-resolved simulations. It contains modifications in two key areas. The first is that of freestream velocity vector estimation along the line, which is necessary to compute the lift and drag along the line using tabular airfoil data. Most current methods rely on point sampling in which the location of sampling is ambiguous. Here we test a velocity sampling method that uses a properly weighted integral over space, removing this ambiguity. The second area of improvement is the function used to project the one-dimensional actuator line force onto the three-dimensional fluid mesh as a body force. We propose and test a projection function that spreads the force over a region that looks something like a real blade with the hope that it will produce the blade local and near wake flow features with more accuracy and higher fidelity. Our goal is that between these two improvements, not only will the flow field predictions be enhanced, but also the spanwise loading will be made more accurate. We refer to this combination of improvements as the advanced actuator line method. We apply these improvements to two different wind turbine cases. Although there is a strong wind energy motivation in our work, there is no reason these advanced actuator line ideas cannot be used in other applications, such as helicopter rotors.
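
    The baseline that this advanced variant generalizes is the isotropic Gaussian regularization kernel commonly used to spread actuator line forces onto the fluid mesh. The sketch below projects a single point force with that kernel; the grid, force, and smoothing width eps are assumptions, and the paper's blade-shaped (anisotropic) projection is not reproduced.

```python
import numpy as np

# Minimal sketch: project a point force from an actuator line element onto a
# 3-D fluid mesh with the standard isotropic Gaussian regularization kernel
#     eta(d) = 1 / (eps^3 * pi^(3/2)) * exp(-(d / eps)^2).
# Grid spacing, the actuator point, its force, and eps are assumptions; the
# advanced method replaces this isotropic kernel with a blade-like footprint.

def gaussian_projection(grid_x, grid_y, grid_z, point, force, eps):
    dx = grid_x - point[0]
    dy = grid_y - point[1]
    dz = grid_z - point[2]
    d2 = dx**2 + dy**2 + dz**2
    eta = np.exp(-d2 / eps**2) / (eps**3 * np.pi**1.5)   # kernel, units 1/m^3
    # Body-force density field (N/m^3) for each force component.
    return [f * eta for f in force]

if __name__ == "__main__":
    h = 2.0                                   # grid spacing [m]
    x = y = z = np.arange(-30, 30 + h, h)
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    fx, fy, fz = gaussian_projection(X, Y, Z, point=(0.0, 0.0, 0.0),
                                     force=(0.0, 1.0e3, 0.0), eps=2 * h)
    # The volume integral of the projected force should recover the input.
    print("recovered Fy ~", fy.sum() * h**3)
```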

  3. An Advanced Actuator Line Method for Wind Energy Applications and Beyond

    Energy Technology Data Exchange (ETDEWEB)

    Churchfield, Matthew J.; Schreck, Scott; Martinez-Tossas, Luis A.; Meneveau, Charles; Spalart, Philippe R.

    2017-01-09

    The actuator line method to represent rotor aerodynamics within computational fluid dynamics has been in use for over a decade. This method applies a body force to the flow field along rotating lines corresponding to the individual rotor blades and employs tabular airfoil data to compute the force distribution. The actuator line method is attractive because compared to blade-resolved simulations, the required mesh is much simpler and the computational cost is lower. This work proposes a higher fidelity variant of the actuator line method meant to fill the space between current actuator line and blade-resolved simulations. It contains modifications in two key areas. The first is that of freestream velocity vector estimation along the line, which is necessary to compute the lift and drag along the line using tabular airfoil data. Most current methods rely on point sampling in which the location of sampling is ambiguous. Here we test a velocity sampling method that uses a properly weighted integral over space, removing this ambiguity. The second area of improvement is the function used to project the one-dimensional actuator line force onto the three-dimensional fluid mesh as a body force. We propose and test a projection function that spreads the force over a region that looks something like a real blade with the hope that it will produce the blade local and near wake flow features with more accuracy and higher fidelity. Our goal is that between these two improvements, not only will the flow field predictions be enhanced, but also the spanwise loading will be made more accurate. We refer to this combination of improvements as the advanced actuator line method. We apply these improvements to two different wind turbine cases. Although there is a strong wind energy motivation in our work, there is no reason these advanced actuator line ideas cannot be used in other applications, such as helicopter rotors.

  4. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    Science.gov (United States)

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  5. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  6. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  7. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, this programming requires more suitable explanations for students with different major backgrounds. By supposing sample sequences and using a simple storage system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
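
    Since the record concerns teaching the step from exhaustive search to dynamic programming, a compact reference implementation of the Needleman-Wunsch recurrence may be helpful; the scoring scheme (match=+1, mismatch=-1, gap=-2) is a common textbook choice, not necessarily the one used in the article.

```python
# Minimal sketch of global alignment with the Needleman-Wunsch dynamic
# programming recurrence. The scoring scheme is a common textbook choice.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)

    # Traceback to recover one optimal alignment.
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + \
                (match if a[i - 1] == b[j - 1] else mismatch):
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            ai.append(a[i - 1]); bi.append("-"); i -= 1
        else:
            ai.append("-"); bi.append(b[j - 1]); j -= 1
    return score[n][m], "".join(reversed(ai)), "".join(reversed(bi))

if __name__ == "__main__":
    print(needleman_wunsch("GATTACA", "GCATGCU"))
```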

  8. EXTRACTION OF ROOF LINES FROM HIGH-RESOLUTION IMAGES BY A GROUPING METHOD

    Directory of Open Access Journals (Sweden)

    A. P. Dal Poz

    2016-06-01

    Full Text Available This paper proposes a method for extracting groups of straight lines that represent roof boundaries and roof ridgelines from high-resolution aerial images using corresponding Airborne Laser Scanner (ALS) roof polyhedrons as initial approximations. The proposed method is based on two main steps. First, straight lines that are candidates to represent roof ridgelines and roof boundaries of a building are extracted from the aerial image. Second, a group of straight lines that represent roof boundaries and roof ridgelines of a selected building is obtained through the optimization of a Markov Random Field (MRF)-based energy function using the genetic algorithm optimization method. The formulation of this energy function considers several attributes, such as the proximity of the extracted straight lines to the corresponding projected ALS-derived roof polyhedron and rectangularity (extracted straight lines that intersect at nearly 90°). Experimental results are presented and discussed in this paper.

  9. Neural Activity Associated with Visual Search for Line Drawings on AAC Displays: An Exploration of the Use of fMRI.

    Science.gov (United States)

    Wilkinson, Krista M; Dennis, Nancy A; Webb, Christina E; Therrien, Mari; Stradtman, Megan; Farmer, Jacquelyn; Leach, Raevynn; Warrenfeltz, Megan; Zeuner, Courtney

    2015-01-01

    Visual aided augmentative and alternative communication (AAC) consists of books or technologies that contain visual symbols to supplement spoken language. A common observation concerning some forms of aided AAC is that message preparation can be frustratingly slow. We explored the uses of fMRI to examine the neural correlates of visual search for line drawings on AAC displays in 18 college students under two experimental conditions. Under one condition, the location of the icons remained stable and participants were able to learn the spatial layout of the display. Under the other condition, constant shuffling of the locations of the icons prevented participants from learning the layout, impeding rapid search. Brain activation was contrasted under these conditions. Rapid search in the stable display was associated with greater activation of cortical and subcortical regions associated with memory, motor learning, and dorsal visual pathways compared to the search in the unpredictable display. Rapid search for line drawings on stable AAC displays involves not just the conceptual knowledge of the symbol meaning but also the integration of motor, memory, and visual-spatial knowledge about the display layout. Further research must study individuals who use AAC, as well as the functional effect of interventions that promote knowledge about array layout.

  10. Determination of anions with an on-line capillary electrophoresis method; Anionien on-line maeaeritys kapillaarielektroforeesilla - MPKT 10

    Energy Technology Data Exchange (ETDEWEB)

    Siren, H; Saerme, T; Kotiaho, T; Hiissa, T; Savolahti, P; Komppa, V [VTT Chemical Technology, Espoo (Finland)

    1999-12-31

    The aim of the study was to set up an on-line capillary electrophoresis method for the determination of anions in process waters of the pulp and paper industry, with the results exported to the process control system of the mill. The quantification is important, since it gives information about the possible causes of precipitation. In recent years, capillary electrophoresis (CE), due to its high separation efficiency, has been shown to be a method worth considering when analyzing chemical species ranging from small inorganic anions to different macromolecules. Many compounds are not easily detected in their native state, which is why analysis methods must be developed to improve their detection. In particular, small inorganic and organic anions which do not have chromophores are not sensitive enough for direct-UV detection. In such analyses the anions are mostly detected with the indirect-UV technique. Capillary electrophoresis instruments are usually used to analyze samples off-line, which seldom represents the situation in the process. Therefore, on-line instrument technology with autoanalyzing settings will be needed in quality control. The development of a fully automatic capillary electrophoresis system is underway in co-operation with KCL (The Finnish Pulp and Paper Research Institute). In our research, we have first concentrated on the determination of sulphate in waters of the paper industry. The method used for the detection of sulphate is based on indirect-UV detection with CE, where the background electrolyte (BGE) is an absorbing mixture of secondary amines. The whole procedure for the quantification of sulphate is performed within 15 minutes, after which a new sample is analyzed automatically. The only sample pretreatment is filtration, which is necessary before analysis. The concentrations of sulphate in the process waters tested were between 300 and 800 ppm. Our tests show that a simultaneous determination of chloride, sulphate, nitrate, nitrite, sulphite, carbonate and oxalate is also

  11. Determination of anions with an on-line capillary electrophoresis method; Anionien on-line maeaeritys kapillaarielektroforeesilla - MPKT 10

    Energy Technology Data Exchange (ETDEWEB)

    Siren, H.; Saerme, T.; Kotiaho, T.; Hiissa, T.; Savolahti, P.; Komppa, V. [VTT Chemical Technology, Espoo (Finland)

    1998-12-31

    The aim of the study was to set up an on-line capillary electrophoresis method for the determination of anions in process waters of the pulp and paper industry, with the results exported to the process control system of the mill. The quantification is important, since it gives information about the possible causes of precipitation. In recent years, capillary electrophoresis (CE), due to its high separation efficiency, has been shown to be a method worth considering when analyzing chemical species ranging from small inorganic anions to different macromolecules. Many compounds are not easily detected in their native state, which is why analysis methods must be developed to improve their detection. In particular, small inorganic and organic anions which do not have chromophores are not sensitive enough for direct-UV detection. In such analyses the anions are mostly detected with the indirect-UV technique. Capillary electrophoresis instruments are usually used to analyze samples off-line, which seldom represents the situation in the process. Therefore, on-line instrument technology with autoanalyzing settings will be needed in quality control. The development of a fully automatic capillary electrophoresis system is underway in co-operation with KCL (The Finnish Pulp and Paper Research Institute). In our research, we have first concentrated on the determination of sulphate in waters of the paper industry. The method used for the detection of sulphate is based on indirect-UV detection with CE, where the background electrolyte (BGE) is an absorbing mixture of secondary amines. The whole procedure for the quantification of sulphate is performed within 15 minutes, after which a new sample is analyzed automatically. The only sample pretreatment is filtration, which is necessary before analysis. The concentrations of sulphate in the process waters tested were between 300 and 800 ppm. Our tests show that a simultaneous determination of chloride, sulphate, nitrate, nitrite, sulphite, carbonate and oxalate is also

  12. Mathematical programming models for solving in equal-sized facilities layout problems. A genetic search method

    International Nuclear Information System (INIS)

    Tavakkoli-Moghaddam, R.

    1999-01-01

    This paper presents unequal-sized facilities layout solutions generated by a genetic search program (Layout Design using a Genetic Algorithm). The generalized quadratic assignment problem, requiring pre-determined distance and material flow matrices as the input data, and the continuous plane model, employing a dynamic distance measure and a material flow matrix, are discussed. Computational results on test problems are reported and compared with layout solutions generated by the branch-and-bound algorithm, a hybrid method merging simulated annealing and local search techniques, and an optimization process of an enveloped block.
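
    For the quadratic assignment formulation mentioned above, the objective is the flow-weighted sum of distances between assigned locations. The sketch below evaluates that objective and improves a layout with a simple pairwise-swap local search, as a stand-in for the genetic search program, whose operators are not reproduced; the flow and distance matrices are made up.

```python
import itertools
import numpy as np

# Minimal sketch: evaluate the quadratic-assignment layout cost
#     cost(p) = sum_{i,j} flow[i, j] * dist[p[i], p[j]]
# and improve a layout by pairwise-swap local search. The flow and distance
# matrices are made up; the paper's genetic operators are not reproduced.

def layout_cost(perm, flow, dist):
    p = np.asarray(perm)
    return float((flow * dist[np.ix_(p, p)]).sum())

def swap_local_search(perm, flow, dist):
    perm = list(perm)
    best = layout_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            cost = layout_cost(perm, flow, dist)
            if cost < best:
                best, improved = cost, True
            else:
                perm[i], perm[j] = perm[j], perm[i]    # undo the swap
    return perm, best

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 8
    flow = rng.integers(0, 10, size=(n, n)); np.fill_diagonal(flow, 0)
    dist = rng.integers(1, 20, size=(n, n)); np.fill_diagonal(dist, 0)
    print(swap_local_search(list(range(n)), flow, dist))
```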

  13. Suitability of the line intersect method for sampling hardwood logging residues

    Science.gov (United States)

    A. Jeff Martin

    1976-01-01

    The line intersect method of sampling logging residues was tested in Appalachian hardwoods and was found to provide unbiased estimates of the volume of residue in cubic feet per acre. Thirty-two chains of sample line were established on each of sixteen 1-acre plots on cutover areas in a variety of conditions. Estimates from these samples were then compared to actual...
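
    The standard line-intersect estimator (often attributed to Van Wagner) converts piece diameters measured where they cross the transect into volume per unit area, V = pi^2 * sum(d_i^2) / (8L) in consistent units. The sketch below applies it with assumed unit conversions (inches to feet, chains to feet, square feet to acres) and made-up diameters; it illustrates the estimator, not the study's data.

```python
import math

# Minimal sketch of the line-intersect estimator:
#     volume per unit area = pi^2 * sum(d_i^2) / (8 * L),
# where d_i are piece diameters where they cross the transect and L is the
# total transect length, in consistent units. The sample diameters below
# are made up, not data from the study.

FEET_PER_CHAIN = 66.0
SQFT_PER_ACRE = 43560.0

def residue_volume_cuft_per_acre(diameters_in, transect_chains):
    diameters_ft = [d / 12.0 for d in diameters_in]
    length_ft = transect_chains * FEET_PER_CHAIN
    vol_per_sqft = math.pi**2 * sum(d**2 for d in diameters_ft) / (8.0 * length_ft)
    return vol_per_sqft * SQFT_PER_ACRE

if __name__ == "__main__":
    # Hypothetical intersected piece diameters (inches) along 32 chains of line.
    diameters = [3.5, 6.0, 4.2, 10.0, 2.8, 5.5, 7.1]
    print(round(residue_volume_cuft_per_acre(diameters, 32), 1),
          "cubic feet per acre")
```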

  14. A New Method of On-line Grid Impedance Estimation for PV Inverter

    DEFF Research Database (Denmark)

    Teodorescu, Remus; Asiminoaei, Lucian; Blaabjerg, Frede

    2004-01-01

    for on-line measuring the grid impedance is presented. The presented method requires no extra hardware being accommodated by typical PV inverters, sensors and CPU, to provide a fast and low cost approach of on-line impedance measurement. By injecting a non-characteristic harmonic current and measuring...

  15. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  16. A peak value searching method of the MCA based on digital logic devices

    International Nuclear Information System (INIS)

    Sang Ziru; Huang Shanshan; Chen Lian; Jin Ge

    2010-01-01

    Digital multi-channel analyzers play an increasingly important role in multi-channel pulse-height analysis. The trend toward digitalization is characterized by powerful pulse-processing ability, high throughput, and improved stability and flexibility. This paper introduces a method for searching the peak value of a waveform based on digital logic in an FPGA. This method reduces the dead time, and offline data correction can then improve the non-linearity of the MCA. The α energy spectrum of 241Am is given as an example. (authors)
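
    A software analogue of the FPGA peak-value search may help illustrate the idea: detect each pulse by a threshold crossing, record its maximum sample, and histogram the heights into channels to form the spectrum. The pulse shape, threshold, and channel count below are assumptions.

```python
import numpy as np

# Minimal software analogue of the FPGA peak-value search: find pulses by a
# threshold crossing, take the maximum sample of each pulse as its height,
# and histogram the heights into MCA channels. Pulse shape, threshold, and
# channel count are illustrative assumptions.

def peak_heights(samples, threshold):
    heights, in_pulse, current_max = [], False, 0.0
    for s in samples:
        if s > threshold:
            current_max = s if not in_pulse else max(current_max, s)
            in_pulse = True
        elif in_pulse:                     # falling edge: pulse finished
            heights.append(current_max)
            in_pulse = False
    return heights

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    trace = rng.normal(0.0, 0.01, size=50000)          # baseline noise
    pulse = np.exp(-np.arange(80) / 20.0)              # simple decay shape
    for start in rng.integers(0, 49000, size=300):     # random pulse arrivals
        trace[start:start + 80] += rng.normal(1.0, 0.05) * pulse
    heights = peak_heights(trace, threshold=0.2)
    spectrum, _ = np.histogram(heights, bins=256, range=(0.0, 2.0))
    print(len(heights), "pulses; hottest channel:", spectrum.argmax())
```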

  17. Local Path Planning of Driverless Car Navigation Based on Jump Point Search Method Under Urban Environment

    Directory of Open Access Journals (Sweden)

    Kaijun Zhou

    2017-09-01

    Full Text Available The Jump Point Search (JPS) algorithm is adopted for local path planning of a driverless car in an urban environment; it is a fast search method applied in path planning. Firstly, a vector Geographic Information System (GIS) map, including Global Positioning System (GPS) position, direction, and lane information, is built for global path planning. Secondly, the GIS map database is utilized in global path planning for the driverless car. Then, the JPS algorithm is adopted to avoid the obstacle ahead and to find an optimal local path for the driverless car in the urban environment. Finally, 125 different simulation experiments in the urban environment demonstrate that JPS can find the optimal and safe path successfully, and meanwhile it has a lower time complexity compared with the Vector Field Histogram (VFH), the Rapidly Exploring Random Tree (RRT), A*, and the Probabilistic Roadmaps (PRM) algorithms. Furthermore, JPS is validated as useful in the structured urban environment.

  18. An adaptive bin framework search method for a beta-sheet protein homopolymer model

    Directory of Open Access Journals (Sweden)

    Hoos Holger H

    2007-04-01

    Full Text Available Background: The problem of protein structure prediction consists of predicting the functional or native structure of a protein given its linear sequence of amino acids. This problem has played a prominent role in the fields of biomolecular physics and algorithm design for over 50 years. Additionally, its importance increases continually as a result of an exponential growth over time in the number of known protein sequences in contrast to a linear increase in the number of determined structures. Our work focuses on the problem of searching an exponentially large space of possible conformations as efficiently as possible, with the goal of finding a global optimum with respect to a given energy function. This problem plays an important role in the analysis of systems with complex search landscapes, and particularly in the context of ab initio protein structure prediction. Results: In this work, we introduce a novel approach for solving this conformation search problem based on the use of a bin framework for adaptively storing and retrieving promising locally optimal solutions. Our approach provides a rich and general framework within which a broad range of adaptive or reactive search strategies can be realized. Here, we introduce adaptive mechanisms for choosing which conformations should be stored, based on the set of conformations already stored in memory, and for biasing choices when retrieving conformations from memory in order to overcome search stagnation. Conclusion: We show that our bin framework combined with a widely used optimization method, Monte Carlo search, achieves significantly better performance than state-of-the-art generalized ensemble methods for a well-known protein-like homopolymer model on the face-centered cubic lattice.

  19. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    Science.gov (United States)

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

    This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA). Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval {CI} 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and more unique hits than when searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks such as being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine, retrieved on average fewer references and fewer irrelevant results.

  20. Efficiency estimation method of three-wired AC to DC line transfer

    Science.gov (United States)

    Solovev, S. V.; Bardanov, A. I.

    2018-05-01

    The development of power semiconductor converter technology expands the scope of its application to medium voltage distribution networks (6-35 kV). In particular, rectifiers and inverters of appropriate power capacity complement the topology of such networks with DC links and lines. The article presents a coefficient that takes into account the increase of transmission line capacity depending on the line parameters. The application of the coefficient is illustrated by the example of transferring a three-wired AC line to DC by various methods. Dependences of the change in capacity on the load power factor of the line and on the reactive component of the transmission line resistance are obtained. Conclusions are drawn about the most efficient ways of converting a three-wired AC line to direct current.

  1. The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability

    Science.gov (United States)

    Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, zhanqing

    2018-01-01

    Lightning protection of power systems focuses on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines and without analysing the effect on power grid reliability. As a result, the lightning protection design of ordinary transmission lines tends to be surplus while that of key lines is insufficient. In order to solve this problem, an analysis method for lightning strikes on transmission lines oriented towards power grid reliability is given. Full-wave process theory is used to analyze lightning back-striking, and the leader propagation model is used to describe the process of shielding failure of transmission lines. An index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.

  2. Security warning method and system for worker safety during live-line working

    Science.gov (United States)

    Jiang, Chilong; Zou, Dehua; Long, Chenhai; Yang, Miao; Zhang, Zhanlong; Mei, Daojun

    2017-09-01

    Live-line working is an essential part of the operations in an electric power system. Live-line workers are required to wear shielding clothing. Shielding clothing, however, acts as a closed environment for the human body, and working in a closed environment for a long time can change the physiological responses of the body and even endanger personal safety. Based on the typical conditions of live-line working, this study combines environmental factors related to the shielding clothing with physiological factors of the body to establish the heart rate variability index RMSSD and the comprehensive security warning index SWI. On the basis of both indices, this paper proposes a security warning method and system for the safety of live-line workers. The system can monitor the real-time status of workers during live-line working to provide security warnings and facilitate effective safety supervision by the live operation center during actual live-line working.
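
    The record does not give the formulas behind the indices; RMSSD is the standard root mean square of successive differences of RR intervals, computed as in the sketch below, while the SWI shown here is only a hypothetical weighted combination of a normalised heart-rate-variability term and environmental readings, since the paper's actual weighting is not reproduced.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def security_warning_index(rmssd_ms, temp_c, humidity, weights=(0.5, 0.3, 0.2)):
    """Hypothetical SWI: weighted sum of roughly normalised risk terms
    (lower HRV, higher temperature and humidity => higher risk)."""
    hrv_risk = max(0.0, 1.0 - rmssd_ms / 50.0)            # 50 ms as a nominal HRV reference
    heat_risk = min(1.0, max(0.0, (temp_c - 25.0) / 15.0))  # assumed comfort-to-danger range
    humid_risk = min(1.0, max(0.0, humidity))               # relative humidity in [0, 1]
    w1, w2, w3 = weights
    return w1 * hrv_risk + w2 * heat_risk + w3 * humid_risk

rr = [812, 790, 805, 778, 820, 795]                         # RR intervals in milliseconds
print(rmssd(rr), security_warning_index(rmssd(rr), temp_c=36.0, humidity=0.8))
```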

  3. Engineering Mathematical Analysis Method for Productivity Rate in Linear Arrangement Serial Structure Automated Flow Assembly Line

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2015-01-01

    Full Text Available Productivity rate (Q), or production rate, is one of the important indicator criteria for industrial engineers to improve the system and the finished-goods output of a production or assembly line. Mathematical and statistical analysis methods are required to give industry a clear overview of the failure factors and of further improvements within the production line, especially for automated flow lines, since these are complicated. A mathematical model of the productivity rate in a linear-arrangement serial-structure automated flow line with different failure rates and bottleneck machining time parameters is the basic model for this productivity analysis. This paper presents the engineering mathematical analysis method as applied in an automotive company in Malaysia that operates an automated flow assembly line in its final assembly line to produce motorcycles. The DCAS engineering and mathematical analysis method, which consists of four stages known as data collection, calculation and comparison, analysis, and sustainable improvement, is used to analyze productivity in the automated flow assembly line based on the particular mathematical model. The variety of failure rates that cause loss of productivity and the bottleneck machining time are shown explicitly in mathematical form, and a sustainable solution for productivity improvement of this final assembly automated flow line is presented.

  4. Risk Analysis Method Based on FMEA for Transmission Line in Lightning Hazards

    Directory of Open Access Journals (Sweden)

    You-Yuan WANG

    2014-05-01

    Full Text Available The failure rate of transmission lines and the reliability of the power system are significantly affected by the lightning meteorological factor. In view of the complexity and variability of lightning meteorological factors, this paper presents a lightning trip-out rate model for transmission lines that considers the distribution of ground flash density and lightning-day hours. It also presents a failure rate model of the transmission line under different conditions, and a risk analysis method for transmission lines that considers multiple risk factors based on risk quantification. The method takes the lightning meteorological factor as the main evaluation criterion and establishes a risk evaluation system for transmission lines that includes five further criteria. Risk indicators are put forward by quantifying the risk factors based on service experience data of transmission lines. A comprehensive evaluation based on the risk indices is then conducted, yielding results closer to practice and providing a basis for transmission line risk warning and maintenance strategy. Through the risk analysis of a 220 kV transmission line in a certain power supply bureau, the effectiveness of the proposed method is validated.

  5. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    Science.gov (United States)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into the optimum redundancy configuration for design optimization, in which reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of a critical aircraft system are computed. The results show that this method is convenient and workable, and is applicable to the redundancy configuration and optimization of various designs with appropriate modifications. The method therefore has good practical value.
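
    No equations are given in the record, so the following is a generic sketch of a direct, coordinate-wise search over integer redundancy levels: starting from one unit per subsystem, it adds a redundant unit wherever an assumed series-parallel reliability improves most per unit cost, subject to cost and weight budgets. The reliabilities, costs and weights are made-up example data.

```python
def system_reliability(unit_rel, redundancy):
    """Series system of parallel-redundant subsystems."""
    r = 1.0
    for p, n in zip(unit_rel, redundancy):
        r *= 1.0 - (1.0 - p) ** n
    return r

def direct_search_redundancy(unit_rel, cost, weight, max_cost, max_weight):
    n_sub = len(unit_rel)
    redundancy = [1] * n_sub
    while True:
        best_gain, best_i = 0.0, None
        base = system_reliability(unit_rel, redundancy)
        for i in range(n_sub):
            trial = redundancy[:]
            trial[i] += 1
            if (sum(c * k for c, k in zip(cost, trial)) <= max_cost and
                    sum(w * k for w, k in zip(weight, trial)) <= max_weight):
                gain = (system_reliability(unit_rel, trial) - base) / cost[i]
                if gain > best_gain:
                    best_gain, best_i = gain, i
        if best_i is None:          # no feasible move improves the objective
            return redundancy, base
        redundancy[best_i] += 1

# Example data (hypothetical): three subsystems with different unit reliabilities
print(direct_search_redundancy(unit_rel=[0.90, 0.95, 0.85],
                               cost=[2.0, 3.0, 1.5], weight=[4.0, 2.0, 5.0],
                               max_cost=15.0, max_weight=25.0))
```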

  6. A novel method to alleviate flash-line defects in coining process

    KAUST Repository

    Xu, Jiangping; Khan, Kamran; El Sayed, Tamer S.

    2013-01-01

    with the reported experimental results. We also propose a novel method of redesigning the rim geometry of the workpiece to alleviate the flash-line defects which also shows good agreement with experiments. © 2012 Elsevier Inc. All rights reserved.

  7. A fast tomographic method for searching the minimum free energy path

    International Nuclear Information System (INIS)

    Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

    2014-01-01

    The Minimum Free Energy Path (MFEP) provides a lot of important information about a chemical reaction, such as the free energy barrier, the location of the transition state, and the relative stability of reactant and product. With the MFEP, one can study the mechanism of the reaction in an efficient way. Due to the large number of degrees of freedom, searching for the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is then optimized iteratively in the collective variable space. Applications of the present method to model systems show that it is practical and can be an alternative approach for finding the state-to-state MFEP.
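
    The tomographic details (free-energy surfaces on hyperplanes) are not reproduced above; the sketch below shows a generic path refinement in the same spirit on an assumed 2-D analytic surface: a discretised path between two minima is relaxed along the gradient component perpendicular to the path and then redistributed to keep the nodes evenly spaced. The surface, step sizes and node count are placeholders.

```python
import numpy as np

def grad(energy, p, h=1e-5):
    """Central-difference gradient of a 2-D energy surface at point p."""
    gx = (energy(p[0] + h, p[1]) - energy(p[0] - h, p[1])) / (2 * h)
    gy = (energy(p[0], p[1] + h) - energy(p[0], p[1] - h)) / (2 * h)
    return np.array([gx, gy])

def refine_path(energy, start, end, n_nodes=20, n_iter=500, step=0.01):
    """Relax a discretised path towards a minimum-energy path (string-method style)."""
    path = np.linspace(start, end, n_nodes)
    for _ in range(n_iter):
        for i in range(1, n_nodes - 1):
            t = path[i + 1] - path[i - 1]
            t /= np.linalg.norm(t)                     # unit tangent along the path
            g = grad(energy, path[i])
            g_perp = g - np.dot(g, t) * t              # keep only the perpendicular force
            path[i] -= step * g_perp
        # Reparametrise: redistribute the nodes uniformly in arc length
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        s = np.concatenate(([0.0], np.cumsum(seg)))
        new_s = np.linspace(0.0, s[-1], n_nodes)
        path = np.column_stack([np.interp(new_s, s, path[:, k]) for k in range(2)])
    return path

# Toy double-well surface with two minima near (-1, 0) and (1, 0)
E = lambda x, y: (x ** 2 - 1.0) ** 2 + 2.0 * y ** 2
mep = refine_path(E, start=np.array([-1.0, 0.1]), end=np.array([1.0, 0.1]))
print(mep[len(mep) // 2])   # interior node near the saddle region around (0, 0)
```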

  8. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some of them being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method underestimates, and is thus biased against, certain families. Information about the sampling techniques, indicating which would be more appropriate to detect or find a particular family, is provided.

  9. Non-Viral Transfection Methods Optimized for Gene Delivery to a Lung Cancer Cell Line

    OpenAIRE

    Salimzadeh, Loghman; Jaberipour, Mansooreh; Hosseini, Ahmad; Ghaderi, Abbas

    2013-01-01

    Background Mehr-80 is a newly established adherent human large cell lung cancer cell line that has not been transfected until now. This study aims to define the optimal transfection conditions and the effects of some critical elements for enhancing gene delivery to this cell line by utilizing different non-viral transfection procedures. Methods In the current study, calcium phosphate (CaP), DEAE-dextran, superfect, electroporation and lipofection transfection methods were used to optimize deliver...

  10. Method of selecting optimum cross arm lengths for a 750 kV transmission line

    Energy Technology Data Exchange (ETDEWEB)

    Aleksandrov, G N; Olorokov, V P

    1965-01-01

    A method is presented, based on both technical and economic considerations, for selecting the cross arm lengths of intermediate poles of power transmission lines according to the effects of internal overvoltages, employing methods from probability theory and mathematical statistics. The problem of optimum pole size is considered in terms of the effect of internal overvoltages for a prescribed maximum level of 2.1 PU, currently used in the USSR for the design of 750 kV lines.

  11. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
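
    The record describes the strategy in words only; the sketch below illustrates the core ranking step under assumed data structures: given the reference lists of a set of citing papers, it counts how often each candidate article is co-cited with the 'known' articles and keeps those whose score passes a threshold. All identifiers are hypothetical.

```python
from collections import Counter

def rank_by_cocitation(citing_papers, known_articles, threshold=2):
    """Score candidate articles by how often they are co-cited with the
    known articles, i.e. appear in the same reference lists."""
    known = set(known_articles)
    scores = Counter()
    for references in citing_papers.values():
        refs = set(references)
        if refs & known:                       # this paper cites at least one known article
            for article in refs - known:
                scores[article] += 1
    return [(a, s) for a, s in scores.most_common() if s >= threshold]

# Hypothetical reference lists of papers that cite articles on the topic
citing_papers = {
    "paper1": ["known_A", "cand_X", "cand_Y"],
    "paper2": ["known_B", "cand_X"],
    "paper3": ["known_A", "known_B", "cand_X", "cand_Z"],
    "paper4": ["cand_Y", "cand_Z"],            # does not cite any known article
}
print(rank_by_cocitation(citing_papers, ["known_A", "known_B"]))
# -> [('cand_X', 3)] with the default threshold of 2
```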

  12. Integrated QSAR study for inhibitors of Hedgehog Signal Pathway against multiple cell lines: a collaborative filtering method.

    Science.gov (United States)

    Gao, Jun; Che, Dongsheng; Zheng, Vincent W; Zhu, Ruixin; Liu, Qi

    2012-07-31

    The Hedgehog Signaling Pathway is one of the signaling pathways that are very important to embryonic development. Inhibitors acting on the Hedgehog Signaling Pathway can control cell growth and death, and the search for novel inhibitors of the pathway is in great demand. In fact, effective inhibitors could provide efficient therapies for a wide range of malignancies, and targeting this pathway in cells represents a promising new paradigm for the control of cell growth and death. Current research mainly focuses on the synthesis of cyclopamine-derivative inhibitors, which bind specifically to the Smo protein and can be used for cancer therapy. While quantitative structure-activity relationship (QSAR) studies have been performed for these compounds on different cell lines, none of them have achieved acceptable results in predicting the activity values of new compounds. In this study, we propose a novel collaborative QSAR model for inhibitors of the Hedgehog Signaling Pathway that integrates information from multiple cell lines. Such a model is expected to substantially improve on single-cell-line QSAR models and to provide useful clues for developing clinically effective inhibitors and modifications of parent lead compounds targeting the Hedgehog Signaling Pathway. In this study, we have presented: (1) a collaborative QSAR model, which integrates information among multiple cell lines to boost the QSAR results rather than modelling a single cell line only; our experiments have shown that the performance of our model is significantly better than single-cell-line QSAR methods; and (2) an efficient feature selection strategy in this collaborative environment, which can derive the features that are important across all the given cell lines while simultaneously showing their specific contributions to a particular cell line. Based on the feature selection results, we have proposed several

  13. Fast optimization of binary clusters using a novel dynamic lattice searching method

    International Nuclear Information System (INIS)

    Wu, Xia; Cheng, Wen

    2014-01-01

    Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. To address the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, the binary DLS (BDLS) method, is developed. However, it was found that the BDLS can only be utilized for the optimization of binary clusters with small sizes, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on the BDLS method and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fit parameters of the Gupta potential.
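
    The DLS/BDLS construction is not reproduced in the record; the sketch below shows only the iterated-local-search layer with the atom-exchange (swap) perturbation that addresses the homotop problem, using a toy pair-energy model on a fixed contact graph and a trivial local search. The interaction values and cluster geometry are placeholders, not a Gupta or Lennard-Jones potential.

```python
import random

def pair_energy(a, b):
    """Toy pair interaction: mixed A-B contacts are favoured (hypothetical values)."""
    return -1.2 if a != b else -1.0

def cluster_energy(types, contacts):
    return sum(pair_energy(types[i], types[j]) for i, j in contacts)

def swap(types):
    """Atom-exchange move: swap one A atom with one B atom (homotop move)."""
    t = types[:]
    i = random.choice([k for k, x in enumerate(t) if x == "A"])
    j = random.choice([k for k, x in enumerate(t) if x == "B"])
    t[i], t[j] = t[j], t[i]
    return t

def local_search(types, contacts, tries=50):
    best, e_best = types, cluster_energy(types, contacts)
    for _ in range(tries):
        cand = swap(best)
        e = cluster_energy(cand, contacts)
        if e < e_best:
            best, e_best = cand, e
    return best, e_best

def ils(types, contacts, restarts=20):
    best, e_best = local_search(types, contacts)
    for _ in range(restarts):
        perturbed = swap(swap(best))                   # stronger kick: two swaps
        cand, e = local_search(perturbed, contacts)
        if e < e_best:
            best, e_best = cand, e
    return best, e_best

# Hypothetical 6-atom binary cluster on a fixed contact graph
contacts = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
print(ils(["A", "A", "A", "B", "B", "B"], contacts))
```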

  14. A METEOROLOGICAL RISK ASSESSMENT METHOD FOR POWER LINES BASED ON GIS AND MULTI-SENSOR INTEGRATION

    Directory of Open Access Journals (Sweden)

    Z. Lin

    2016-06-01

    Full Text Available Power lines, exposed to the natural environment, are vulnerable to various kinds of meteorological factors. Traditional research mainly deals with the influence of a single meteorological condition on the power line and lacks a comprehensive evaluation and analysis of the combined effects of multiple meteorological factors. In this paper, we use multiple meteorological monitoring data obtained by multi-sensors to implement meteorological risk assessment and early warning for power lines. Firstly, we generate a meteorological raster map from discrete meteorological monitoring data using spatial interpolation. Secondly, the expert-scoring-based analytic hierarchy process is used to compute the power line risk index for all kinds of meteorological conditions and to establish the mathematical model of meteorological risk. By applying this model in the raster calculator of ArcGIS, we obtain a raster map showing the overall meteorological risk for the power line. Finally, by overlaying the power line buffer layer on that raster map, we obtain the exact risk index around a certain part of the power line, which provides significant guidance for power line risk management. In the experiment, based on five kinds of observation data gathered from meteorological stations in Guizhou Province of China, including wind, lightning, rain, ice and temperature, we carry out the meteorological risk analysis for real power lines, and the experimental results prove the feasibility and validity of the proposed method.
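
    The paper's AHP weights and scoring scales are not reproduced above; the sketch below only illustrates the overlay step under assumed numbers: each interpolated meteorological raster is normalised to [0, 1] and combined with hypothetical expert weights into an overall risk raster, which could then be sampled along a power-line buffer.

```python
import numpy as np

def normalise(raster):
    """Scale a raster to [0, 1]; constant rasters map to zero risk."""
    lo, hi = raster.min(), raster.max()
    return np.zeros_like(raster, dtype=float) if hi == lo else (raster - lo) / (hi - lo)

def overall_risk(layers, weights):
    """Weighted overlay of normalised meteorological layers (AHP-style weights)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    risk = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, raster in layers.items():
        risk += weights[name] * normalise(raster)
    return risk

# Hypothetical 4x4 interpolated rasters and made-up expert weights
rng = np.random.default_rng(0)
layers = {name: rng.random((4, 4)) for name in ["wind", "lightning", "rain", "ice", "temperature"]}
weights = {"wind": 0.25, "lightning": 0.30, "rain": 0.15, "ice": 0.20, "temperature": 0.10}
risk_map = overall_risk(layers, weights)
print(risk_map.round(2))   # risk index per cell; sample it along the line buffer
```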

  15. A Novel Fault Line Selection Method Based on Improved Oscillator System of Power Distribution Network

    Directory of Open Access Journals (Sweden)

    Xiaowei Wang

    2014-01-01

    Full Text Available A novel method of fault line selection based on the improved oscillator system (IOS) is presented. Firstly, the IOS is established using a mathematical model in which the TZSC signal replaces the built-in signal of the Duffing chaotic oscillator by selecting appropriate parameters. Then, each line's TZSC is decomposed by a db10 wavelet packet to obtain the CFB according to the maximum-energy principle, and the CFB is processed by the IOS. Finally, the maximum chaotic distance and the average chaotic distance on the phase trajectory are used to identify the fault line. Simulation results show that the proposed method can accurately distinguish the fault line from healthy lines against a strong noise background. The non-detection zones of the proposed method are also elaborated.

  16. Search for brown dwarfs by gravitational microlensing effect with the pixels method. Analysis of AGAPE and EROS collaborations data

    International Nuclear Information System (INIS)

    Melchior, Anne-Laure

    1995-01-01

    This work is concerned with the search for baryonic dark matter in galactic halos. An important collection of observational data has been initiated to test the hypothesis that this dark mass is made of compact objects such as brown dwarfs or low-mass stars. The gravitational microlensing effect makes it possible to probe the distribution of this type of mass along the line of sight towards nearby galaxies such as the Large Magellanic Cloud. A new way to detect these microlensing events has been proposed by P. Baillon et al.: the pixel method. The aim is to detect the amplification of stars which are unresolved or too faint to be seen by classical analysis. First, we present this method and the simulations which establish its feasibility. Then, we describe the pixel analysis of the 91-92 EROS data on the Large Magellanic Cloud. The selection of luminosity variations with a shape compatible with microlensing events allows us to study the sensitivity of this analysis. We show how these results validate the pixel method applied to a large volume of data, and demonstrate the possibility of finding luminosity variations which escape classical analyses. Strengthened by these results, we finally describe the analysis of the AGAPE 94 data on the Andromeda galaxy, which uses the same pixel method. Being ten times farther away than the Large Magellanic Cloud, the Andromeda galaxy has very few resolved stars, making the pixel method the only way of looking for microlensing events. (author)

  17. Non-contact method of search and analysis of pulsating vessels

    Science.gov (United States)

    Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.

    2018-04-01

    Despite the variety of existing methods for recording the human pulse and the solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods, based on advanced image processing, has caused a new wave of interest in this issue. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with a laser light source, which made it possible to use other methods of searching for the proposed pulsation zone. During the testing of the method, several series of experiments were carried out both with artificial oscillating objects and with the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.
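
    The record does not detail the signal processing; a common i-PPG baseline, sketched here under assumed parameters, averages the pixel intensity of a skin region in each video frame, removes the slow trend, and reads the pulse rate off the dominant spectral peak in a plausible heart-rate band. The frame rate, band limits and synthetic test signal are assumptions.

```python
import numpy as np

def pulse_rate_bpm(frames, fps=30.0):
    """Estimate pulse rate from a stack of ROI frames (T x H x W intensities)."""
    signal = frames.reshape(len(frames), -1).mean(axis=1)   # mean ROI intensity per frame
    trend = np.convolve(signal, np.ones(int(fps)) / fps, mode="same")
    signal = signal - trend                                  # remove the slow baseline drift
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)                   # plausible heart-rate band (42-180 bpm)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic test: a 1.2 Hz (72 bpm) pulsation plus noise over 10 s of "video"
t = np.arange(0, 10, 1 / 30.0)
roi = 100 + 2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] + np.random.randn(len(t), 8, 8)
print(pulse_rate_bpm(roi))   # approximately 72
```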

  18. An Efficient Method to Search Real-Time Bulk Data for an Information Processing System

    International Nuclear Information System (INIS)

    Kim, Seong Jin; Kim, Jong Myung; Suh, Yong Suk; Keum, Jong Yong; Park, Heui Youn

    2005-01-01

    The Man Machine Interface System (MMIS) of the System-integrated Modular Advanced ReacTor (SMART) is designed with fully digitalized features. The Information Processing System (IPS) of the MMIS acquires and processes plant data from other systems. In addition, the IPS provides plant operation information to operators in the control room. The IPS is required to process bulky data in real time, so it is necessary to consider a special processing method with regard to flexibility and performance, because more than a few thousand plant information points converge on the IPS. Among other things, the processing time for searching for data within the bulk data is much longer than the other processing times. Thus, this paper explores an efficient method for the search and examines its feasibility.

  19. Neural Based Tabu Search method for solving unit commitment problem with cooling-banking constraints

    Directory of Open Access Journals (Sweden)

    Rajan Asir Christober Gnanakkan Charles

    2009-01-01

    Full Text Available This paper presents a new approach to solving the short-term unit commitment problem (UCP) using Neural Based Tabu Search (NBTS) with cooling and banking constraints. The objective of this paper is to find a generation schedule such that the total operating cost is minimized when subjected to a variety of constraints; in other words, it is desirable to find the optimal unit commitment in the power system for the next H hours. A 7-unit utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 10, 26 and 34 units. Numerical results are shown comparing the cost solutions obtained using the proposed approach with those of the Tabu Search (TS), Dynamic Programming (DP) and Lagrangian Relaxation (LR) methods in reaching a proper unit commitment.
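
    The NBTS formulation itself is not reproduced in the record; the sketch below is only a generic tabu-search skeleton over a binary unit-commitment schedule, with a placeholder cost function, a single-bit-flip neighbourhood and no cooling/banking constraint handling. All parameters are hypothetical.

```python
import random

def tabu_search(cost, n_units, n_hours, iters=300, tenure=7):
    """Generic tabu search over a binary unit-commitment schedule."""
    schedule = [[random.randint(0, 1) for _ in range(n_hours)] for _ in range(n_units)]
    best, best_cost = [row[:] for row in schedule], cost(schedule)
    tabu = {}                                    # (unit, hour) -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for _ in range(30):                      # sample single-bit-flip neighbours
            u, h = random.randrange(n_units), random.randrange(n_hours)
            if tabu.get((u, h), -1) >= it:
                continue                         # move is tabu
            schedule[u][h] ^= 1
            candidates.append((cost(schedule), u, h))
            schedule[u][h] ^= 1                  # undo the trial flip
        if not candidates:
            continue
        c, u, h = min(candidates)
        schedule[u][h] ^= 1                      # apply the best non-tabu move
        tabu[(u, h)] = it + tenure
        if c < best_cost:
            best, best_cost = [row[:] for row in schedule], c
    return best, best_cost

# Placeholder cost: running cost of committed units plus a demand mismatch penalty
demand = [3, 4, 5, 4, 3, 2]
toy_cost = lambda s: sum(map(sum, s)) + sum(abs(sum(col) - d) * 10 for col, d in zip(zip(*s), demand))
print(tabu_search(toy_cost, n_units=6, n_hours=len(demand)))
```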

  20. Study of the hydration of globular proteins by broad NMR lines method

    Energy Technology Data Exchange (ETDEWEB)

    Blicharska, B [Uniwersytet Jagiellonski, Krakow (Poland). Instytut Fizyki

    1973-01-01

    Spectra of proteins and polypeptides obtained by means of an NMR broad-line spectrometer consist of broad and narrow lines, attributed to the proteins and to the water absorbed on the surfaces of the proteins, respectively. The behaviour of the narrow line in the spectra of lyophilized egg-white albumin has been studied in the temperature range from -42 to 20 °C. The amount of water has been determined by the simple method of weighing and equals about 7% of the total weight. It has been found that the water absorbed on the surface of the lyophilized proteins gives a narrower line than the water absorbed on protein molecules in aqueous solutions, and that the correlation time is about 10³ times greater.

  1. Engineering Bacteria to Search for Specific Concentrations of Molecules by a Systematic Synthetic Biology Design Method.

    Science.gov (United States)

    Tien, Shin-Ming; Hsu, Chih-Yuan; Chen, Bor-Sen

    2016-01-01

    Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.

  2. Engineering Bacteria to Search for Specific Concentrations of Molecules by a Systematic Synthetic Biology Design Method.

    Directory of Open Access Journals (Sweden)

    Shin-Ming Tien

    Full Text Available Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.

  3. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    Science.gov (United States)

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computation, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the

  4. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  5. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  6. An adaptive large neighborhood search heuristic for the pickup and delivery problem with time Windows and scheduled lines

    NARCIS (Netherlands)

    Ghilas, V.; Demir, E.; van Woensel, T.

    2016-01-01

    The Pickup and Delivery Problem with Time Windows and Scheduled Lines (PDPTW-SL) concerns scheduling a set of vehicles to serve freight requests such that a part of the journey can be carried out on a scheduled public transportation line. Due to the complexity of the problem, which is NP-hard, we

  7. Ultrafast method of calculating the dynamic spectral line shapes for integrated modelling of plasmas

    International Nuclear Information System (INIS)

    Lisitsa, V.S.

    2009-01-01

    An ultrafast code for spectral line shape calculations is presented, to be used in the integrated modelling of plasmas. The code is based on the close analogy between two mechanisms: (i) Dicke narrowing of Doppler-broadened spectral lines and (ii) the transition from the static to the impact regime in Stark broadening. The analogy makes it possible to describe the dynamic Stark broadening in terms of an analytical functional of the static line shape. A comparison of the new method with the widely used Frequency Fluctuating Method (FFM) developed by the Marseille University group (B. Talin, R. Stamm, et al.) shows good agreement, with the new method being faster than the standard FFM by nearly two orders of magnitude. The proposed method may significantly simplify radiation transport modelling and opens new possibilities for integrated modelling of the edge and divertor plasma in tokamaks. (author)

  8. On-line monitoring of water amount in fresh concrete by radioactive-wave method

    International Nuclear Information System (INIS)

    Kemi, T.; Arai, M.; Enomoto, S.; Suzki, K.; Kumahara, Y.

    2003-01-01

    The committee on nondestructive inspection for steel-reinforced concrete structures of the Federation of Construction Materials Industries, Japan, has published a proposed standard for on-line monitoring of the water amount in fresh concrete by the radioactive-wave method. By applying a neutron technique, the water amount in fresh concrete is estimated continuously from the energy loss of neutrons due to hydrogen. The standard is discussed along with the results of verification tests, and on-line monitoring of the water amount is thus proposed.

  9. Integrated Circuit Interconnect Lines on Lossy Silicon Substrate with Finite Element Method

    OpenAIRE

    Sarhan M. Musa; Matthew N. O. Sadiku

    2014-01-01

    The silicon substrate has a significant effect on the inductance parameter of a lossy interconnect line on an integrated circuit. It is essential to take this into account when determining the transmission line electrical parameters. In this paper, a new quasi-TEM capacitance and inductance analysis of multiconductor multilayer interconnects is successfully demonstrated using the finite element method (FEM). We specifically illustrate the electrostatic modeling of single and coupled in...

  10. An efficient search method for finding the critical slip surface using the compositional Monte Carlo technique

    International Nuclear Information System (INIS)

    Goshtasbi, K.; Ahmadi, M; Naeimi, Y.

    2008-01-01

    Locating the critical slip surface and the associated minimum factor of safety are two complementary parts of a slope stability analysis. A large number of computer programs exist to solve slope stability problems. Most of these programs, however, have used inefficient and unreliable search procedures to locate the global minimum factor of safety. This paper presents an efficient and reliable method to determine the global minimum factor of safety, coupled with a modified version of the Monte Carlo technique. Examples are presented to illustrate the reliability of the proposed method.
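
    The modified Monte Carlo procedure is not detailed in the record; the sketch below only illustrates the basic idea under assumed geometry: circular trial slip surfaces are generated by randomly sampling the circle centre and radius within bounds, each is scored by a placeholder factor-of-safety function, and the surface with the global minimum factor of safety is retained.

```python
import random

def factor_of_safety(xc, yc, radius):
    """Placeholder FoS model: a real implementation would use a method of
    slices (e.g. Bishop) on the actual slope geometry and soil parameters."""
    return 1.0 + 0.002 * (xc - 12.0) ** 2 + 0.004 * (yc - 18.0) ** 2 + 0.003 * (radius - 20.0) ** 2

def monte_carlo_slip_search(n_trials=20000, seed=1):
    random.seed(seed)
    best = None
    for _ in range(n_trials):
        xc = random.uniform(0.0, 30.0)       # circle centre x (m), assumed bounds
        yc = random.uniform(10.0, 40.0)      # circle centre y (m), assumed bounds
        r = random.uniform(5.0, 40.0)        # circle radius (m), assumed bounds
        fos = factor_of_safety(xc, yc, r)
        if best is None or fos < best[0]:
            best = (fos, xc, yc, r)
    return best

print(monte_carlo_slip_search())   # minimum FoS and the corresponding trial circle
```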

  11. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    Science.gov (United States)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
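
    The conforming poll directions of the paper are not reproduced here; the sketch below is a plain compass/pattern search (poll the ± coordinate directions, halve the mesh when the poll fails), which is the unconstrained core that GPS methods build on. The test function and parameters are placeholders.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Basic GPS/compass search: poll +/- coordinate directions, halve the
    mesh size whenever no polled point improves the incumbent."""
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                y = x[:]
                y[i] += sign * step
                fy = f(y)
                if fy < fx:                 # successful poll: move and keep the mesh
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # unsuccessful poll: refine the mesh
            if step < tol:
                break
    return x, fx

# Example on a nonsmooth function with a kink along x = y
f = lambda p: abs(p[0] - p[1]) + (p[0] - 1.0) ** 2
print(pattern_search(f, x0=[4.0, -3.0]))    # converges near (1, 1)
```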

  12. A novel RNA sequencing data analysis method for cell line authentication.

    Directory of Open Access Journals (Sweden)

    Erik Fasterius

    Full Text Available We have developed a novel analysis method that can interrogate the authenticity of biological samples used for generation of transcriptome profiles in public data repositories. The method uses RNA sequencing information to reveal mutations in expressed transcripts and subsequently confirms the identity of analysed cells by comparison with publicly available cell-specific mutational profiles. Cell lines constitute key model systems widely used within cancer research, but their identity needs to be confirmed in order to minimise the influence of cell contaminations and genetic drift on the analysis. Using both public and novel data, we demonstrate the use of RNA-sequencing data analysis for cell line authentication by examining the validity of COLO205, DLD1, HCT15, HCT116, HKE3, HT29 and RKO colorectal cancer cell lines. We successfully authenticate the studied cell lines and validate previous reports indicating that DLD1 and HCT15 are synonymous. We also show that the analysed HKE3 cells harbour an unexpected KRAS-G13D mutation and confirm that this cell line is a genuine KRAS dosage mutant, rather than a true isogenic derivative of HCT116 expressing only the wild type KRAS. This authentication method could be used to revisit the numerous cell line based RNA sequencing experiments available in public data repositories, analyse new experiments where whole genome sequencing is not available, as well as facilitate comparisons of data from different experiments, platforms and laboratories.

  13. Detection of the power lines in UAV remote sensed images using spectral-spatial methods.

    Science.gov (United States)

    Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham

    2018-01-15

    In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using the Kmeans and Expectation Maximization (EM) algorithms to classify the pixels into power-line and non-power-line classes. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote-sensed image is clustered into the number of clusters determined by the DBI, and the k-cluster image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power-line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that the EM with spatial segmentation (EM-Seg) performed better than the Kmeans with spatial segmentation (Kmeans-Seg) on most of the UAV images. Copyright © 2017 Elsevier Ltd. All rights reserved.
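
    The full pipeline (EM clustering, DBI selection, geometric filtering) is not reproduced above; the sketch below shows a reduced version of the spectral-spatial idea on a synthetic grayscale image: pixels are clustered into two intensity classes with a tiny k-means, and a morphological opening along the line direction removes small non-line blobs. The image, cluster count and structuring element are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_opening

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Tiny k-means on pixel intensities (1-D feature)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Synthetic image: dark background, a bright diagonal "power line" and isolated bright specks
img = np.zeros((60, 60))
for i in range(60):
    img[i, i] = 1.0                        # the line
img[10, 40] = img[45, 5] = 1.0             # noise specks
img += 0.05 * np.random.default_rng(1).random(img.shape)

labels, centers = kmeans_1d(img.ravel(), k=2)
line_cluster = int(np.argmax(centers))     # the brighter cluster is the line candidate
mask = (labels == line_cluster).reshape(img.shape)

# Spatial step: a small opening along the diagonal removes the isolated specks
cleaned = binary_opening(mask, structure=np.eye(3, dtype=bool))
print(mask.sum(), cleaned.sum())           # the speck pixels are removed by the opening
```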

  14. 3D scan line method for identifying void fabric of granular materials

    Directory of Open Access Journals (Sweden)

    Theocharis Alexandros I.

    2017-01-01

    Full Text Available Among the processes for measuring the void phase of porous or fractured media, the scan line approach is a simplified "graphical" method, mainly used in image-processing-related procedures. In soil mechanics, the application of the scan line method is related to the soil fabric, which is important in characterizing the anisotropic mechanical response of soils. Void fabric is of particular interest, since graphical approaches are well defined experimentally and most of them, like the scan line method, can also be easily used in numerical experiments. This is in contrast to the definition of fabric based on contact normal vectors, which are extremely difficult to determine, especially in physical experiments. The scan line method was proposed by Oda et al. [1] and implemented again by Ghedia and O'Sullivan [2]. A modified method based on DEM analysis instead of image measurements of fabric has previously been proposed and implemented by the authors in a 2D scheme [3-4]. In this work, a 3D extension of the modified scan line definition is presented using PFC 3D®. The results clearly show trends similar to the 2D case, and the same behaviour of fabric anisotropy is observed.

  15. Searching in the Context of a Task: A Review of Methods and Tools

    Directory of Open Access Journals (Sweden)

    Ana Maguitman

    2018-04-01

    Full Text Available Contextual information extracted from the user task can help to better target retrieval to task-relevant content. In particular, topical context can be exploited to identify the subject of the information needs, contributing to reduce the information overload problem. A great number of methods exist to extract raw context data and contextual interaction patterns from the user task and to model this information using higher-level representations. Context can then be used as a source for automatic query generation, or as a means to refine or disambiguate user-generated queries. It can also be used to filter and rank results as well as to select domain-specific search engines with better capabilities to satisfy specific information requests. This article reviews methods that have been applied to deal with the problem of reflecting the current and long-term interests of a user in the search process. It discusses major difficulties encountered in the research area of context-based information retrieval and presents an overview of tools proposed since the mid-nineties to deal with the problem of context-based search.

  16. Search method for long-duration gravitational-wave transients from neutron stars

    International Nuclear Information System (INIS)

    Prix, R.; Giampanis, S.; Messenger, C.

    2011-01-01

    We introduce a search method for a new class of gravitational-wave signals, namely, long-duration O(hours-weeks) transients from spinning neutron stars. We discuss the astrophysical motivation from glitch relaxation models and we derive a rough estimate for the maximal expected signal strength based on the superfluid excess rotational energy. The transient signal model considered here extends the traditional class of infinite-duration continuous-wave signals by a finite start-time and duration. We derive a multidetector Bayes factor for these signals in Gaussian noise using F-statistic amplitude priors, which simplifies the detection statistic and allows for an efficient implementation. We consider both a fully coherent statistic, which is computationally limited to directed searches for known pulsars, and a cheaper semicoherent variant, suitable for wide parameter-space searches for transients from unknown neutron stars. We have tested our method by Monte-Carlo simulation, and we find that it outperforms orthodox maximum-likelihood approaches both in sensitivity and in parameter-estimation quality.

  17. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis.

    Science.gov (United States)

    Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane

    2018-05-01

    This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criterion of meaningfulness, appropriateness, feasibility, and implementation. Questions related to complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Utilizing mixed methods research in analyzing Iranian researchers' information search behaviour in the Web and presenting current pattern

    Directory of Open Access Journals (Sweden)

    Maryam Asadi

    2015-12-01

    Full Text Available Using a mixed methods research design, the current study analyzed Iranian researchers' information searching behaviour on the Web; based on the extracted concepts, a model of their information searching behaviour was then derived. Forty-four participants, including academic staff from universities and research centers, were recruited for this study by purposive sampling. Data were gathered from a questionnaire comprising ten questions and from semi-structured interviews. Each participant's memos were analyzed using grounded theory methods adapted from Strauss & Corbin (1998). Results showed that the main objectives of the subjects in using the Web were doing research, writing a paper, studying, doing assignments, downloading files and acquiring general information. The most important ways of learning how to search and retrieve information were trial and error and getting help from friends. Information resources are identified through information sources (e.g. search engines, references in papers, and online databases…), communication facilities and tools (e.g. contact with colleagues, seminars and workshops, social networking…), and information services (e.g. RSS, alerting, and SDI). Findings also indicated that searching with search engines, reviewing references, searching in online databases, contacting colleagues and studying the latest issues of electronic journals were the most important means of searching. The most important strategies were the use of search engines and scientific tools such as Google Scholar. In addition, the simple (quick) search method was the most common among the subjects. Topic, keywords and paper title were the most important elements used for retrieving information. Analysis of the interviews showed that there were nine stages in the researchers' information searching behaviour: topic selection, initiating search, formulating search query, information retrieval, access to information

  19. (Re)interpreting LHC New Physics Search Results : Tools and Methods, 3rd Workshop

    CERN Document Server

    The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...

  20. Electricity price forecast using Combinatorial Neural Network trained by a new stochastic search method

    International Nuclear Information System (INIS)

    Abedinia, O.; Amjady, N.; Shafie-khah, M.; Catalão, J.P.S.

    2015-01-01

    Highlights: • Presenting a Combinatorial Neural Network. • Suggesting a new stochastic search method. • Adapting the suggested method as a training mechanism. • Proposing a new forecast strategy. • Testing the proposed strategy on real-world electricity markets. - Abstract: Electricity price forecast is key information for successful operation of electricity market participants. However, the time series of electricity price has nonlinear, non-stationary and volatile behaviour and so its forecast method should have high learning capability to extract the complex input/output mapping function of electricity price. In this paper, a Combinatorial Neural Network (CNN) based forecasting engine is proposed to predict the future values of price data. The CNN-based forecasting engine is equipped with a new training mechanism for optimizing the weights of the CNN. This training mechanism is based on an efficient stochastic search method, which is a modified version of chemical reaction optimization algorithm, giving high learning ability to the CNN. The proposed price forecast strategy is tested on the real-world electricity markets of Pennsylvania–New Jersey–Maryland (PJM) and mainland Spain and its obtained results are extensively compared with the results obtained from several other forecast methods. These comparisons illustrate effectiveness of the proposed strategy.

  1. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available In this paper, a new method for robustly estimating multiple-view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy from RANSAC to generate putative solutions. Under the new mechanism, at each iteration new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
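
    The record's method targets homographies; to keep the sketch short, the same idea is illustrated here on robust 2-D line fitting, with a harmony-memory-style sampler replacing RANSAC's uniform sampling: candidate point-pair indices are either recombined from a memory of the best samples so far (with small "pitch adjustments") or drawn at random, and each candidate is scored by its inlier count. Rates, thresholds and data are hypothetical.

```python
import random

def fit_line(p, q):
    """Line a*x + b*y + c = 0 through two points, normalised."""
    (x1, y1), (x2, y2) = p, q
    a, b = y1 - y2, x2 - x1
    c = x1 * y2 - x2 * y1
    norm = (a * a + b * b) ** 0.5
    return a / norm, b / norm, c / norm

def inliers(points, line, tol=0.1):
    a, b, c = line
    return [i for i, (x, y) in enumerate(points) if abs(a * x + b * y + c) <= tol]

def hs_ransac(points, iters=300, memory_size=10, hmcr=0.8, seed=3):
    random.seed(seed)
    n = len(points)
    memory = []                               # (inlier_count, (i, j)) of the best samples so far
    best = (0, None)
    for _ in range(iters):
        if memory and random.random() < hmcr:
            # Harmony-memory consideration: recombine stored indices, then pitch-adjust
            i = random.choice(memory)[1][random.randint(0, 1)]
            j = random.choice(memory)[1][random.randint(0, 1)]
            i = (i + random.randint(-1, 1)) % n
            j = (j + random.randint(-1, 1)) % n
        else:
            i, j = random.sample(range(n), 2)  # plain RANSAC-style random sample
        if i == j:
            continue
        line = fit_line(points[i], points[j])
        score = len(inliers(points, line))
        if score > best[0]:
            best = (score, line)
        memory.append((score, (i, j)))
        memory = sorted(memory, reverse=True)[:memory_size]
    return best

# Roughly 70% of the points lie on y = 2x + 1; the rest are gross outliers
pts = [(x, 2 * x + 1 + random.uniform(-0.05, 0.05)) for x in range(20)]
pts += [(random.uniform(0, 20), random.uniform(-10, 50)) for _ in range(8)]
print(hs_ransac(pts))   # best inlier count and the fitted line coefficients
```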

  2. Search method optimization technique for thermal design of high power RFQ structure

    International Nuclear Information System (INIS)

    Sharma, N.K.; Joshi, S.C.

    2009-01-01

    RRCAT has taken up the development of a 3 MeV RFQ structure for the low-energy part of a 100 MeV H- ion injector linac. The RFQ is a precision-machined resonating structure designed for a high rf duty factor, and its structural stability during high-rf-power operation is an important design issue. The thermal analysis of the RFQ has been performed using the ANSYS finite element analysis software, and optimization of various parameters is attempted using the Search Method optimization technique. This is an effective optimization technique for systems governed by a large number of independent variables. The method involves examining a number of combinations of values of the independent variables and drawing conclusions from the magnitude of the objective function at these combinations. In these methods there is a continuous improvement in the objective function throughout the course of the search, and hence they are very efficient. The method has been employed in the optimization of various parameters (called independent variables) involved in the RFQ thermal design, such as the cooling water flow rate, cooling water inlet temperatures, cavity thickness, etc. The temperature rise within the RFQ structure is the objective function during the thermal design. Using the ANSYS Parametric Design Language (APDL), multiple iterative programs are written and the analyses are performed to minimize the objective function. The dependency of the objective function on the various independent variables is established and the optimum values of the parameters are evaluated. The results of the analysis are presented in the paper. (author)

  3. Sliding surface searching method for slopes containing a potential weak structural surface

    Directory of Open Access Journals (Sweden)

    Aijun Yao

    2014-06-01

    Full Text Available A weak structural surface is one of the key factors controlling the stability of slopes. The stability of rock slopes is in general governed by sets of discontinuities. However, in soft rocks, failure can occur along surfaces approaching a circular failure surface. To better understand the position of the potential sliding surface, a new method called the simplex-finite stochastic tracking method is proposed. This method divides the sliding surface into two parts: one is described by a smooth curve obtained by random searching; the other is a polyline formed by the weak structural surface. Single or multiple sliding surfaces can be considered, and consequently several types of combined sliding surfaces can be simulated. The paper adopts the arc-polyline to simulate the potential sliding surface and analyzes the searching process of the sliding surface. Accordingly, software for slope stability analysis using this method was developed and applied to real cases. The results show that, using the simplex-finite stochastic tracking method, it is possible to locate the position of a potential sliding surface in the slope.

  4. Searching for rigour in the reporting of mixed methods population health research: a methodological review.

    Science.gov (United States)

    Brown, K M; Elliott, S J; Leatherdale, S T; Robertson-Wilson, J

    2015-12-01

    The environments in which population health interventions occur shape both their implementation and outcomes. Hence, when evaluating these interventions, we must explore both intervention content and context. Mixed methods (integrating quantitative and qualitative methods) provide this opportunity. However, although criteria exist for establishing rigour in quantitative and qualitative research, there is poor consensus regarding rigour in mixed methods. Using the empirical example of school-based obesity interventions, this methodological review examined how mixed methods have been used and reported, and how rigour has been addressed. Twenty-three peer-reviewed mixed methods studies were identified through a systematic search of five databases and appraised using the guidelines for Good Reporting of a Mixed Methods Study. In general, more detailed description of data collection and analysis, integration, inferences and justifying the use of mixed methods is needed. Additionally, improved reporting of methodological rigour is required. This review calls for increased discussion of practical techniques for establishing rigour in mixed methods research, beyond those for quantitative and qualitative criteria individually. A guide for reporting mixed methods research in population health should be developed to improve the reporting quality of mixed methods studies. Through improved reporting, mixed methods can provide strong evidence to inform policy and practice.

  5. Preferred Methods of Learning for Nursing Students in an On-Line Degree Program.

    Science.gov (United States)

    Hampton, Debra; Pearce, Patricia F; Moser, Debra K

    Investigators have demonstrated that on-line courses result in effective learning outcomes, but limited information has been published related to preferred teaching strategies. Delivery of on-line courses requires various teaching methods to facilitate interaction between students, content, and technology. The purposes of this study were to understand student teaching/learning preferences in on-line courses to include (a) differences in preferred teaching/learning methods for on-line nursing students across generations and (b) which teaching strategies students found to be most engaging and effective. Participants were recruited from 2 accredited, private school nursing programs (N=944) that admit students from across the United States and deliver courses on-line. Participants provided implied consent, and 217 (23%) students completed the on-line survey. Thirty-two percent of the students were from the Baby Boomer generation (1946-1964), 48% from Generation X (1965-1980), and 20% from the Millennial Generation (born after 1980). The preferred teaching/learning methods for students were videos or narrated PowerPoint presentations, followed by synchronous Adobe Connect educations sessions, assigned journal article reading, and e-mail dialog with the instructor. The top 2 methods identified by participants as the most energizing/engaging and most effective for learning were videos or narrated PowerPoint presentations and case studies. The teaching/learning method least preferred by participants and that was the least energizing/engaging was group collaborative projects with other students; the method that was the least effective for learning was wikis. Baby Boomers and Generation X participants had a significantly greater preference for discussion board (PBaby Boomer and Generation X students and rated on-line games as significantly more energizing/engaging and more effective for learning (PBaby Boomer and Generation X students. In conclusion, the results of this

  6. Determination of the ultimate load in concrete slabs by the yield line finite element method

    International Nuclear Information System (INIS)

    Vaz, L.E.; Feijo, B.; Martha, L.F.R.; Lopes, M.M.

    1984-01-01

    A method for calculating the ultimate load in reinforced concrete slabs is proposed. The method follows the finite element approach, representing the continuum slab as an assembly of rigid triangular plates connected along their sides through yield line elements. This approach leads to the definition of the displacement configuration of the plate only as a function of the transversal displacements at the nodes of the mesh (1 DOF per node), significantly reducing the number of DOFs in relation to the conventional formulation by means of the finite element method (minimum of 3 DOF per node). The nonlinear behaviour of the reinforced concrete section is considered in the definition of the moment-rotation curve of the yield lines. The effect of the in-plane forces acting in the middle surface of the plate is also taken into account. The validity of the model is verified by comparing the numerical solutions with the results of the classical yield line theory. (Author) [pt

  7. SOLVING OPTIMAL ASSEMBLY LINE CONFIGURATION TASK BY MULTIOBJECTIVE DECISION MAKING METHODS

    Directory of Open Access Journals (Sweden)

    Ján ČABALA

    2017-06-01

    Full Text Available This paper deals with finding the optimal configuration of an automated assembly line model placed within the Department of Cybernetics and Artificial Intelligence (DCAI). In order to solve this problem, a Stateflow model of each configuration was created to simulate the behaviour of that particular assembly line configuration. Outputs from these models were used as inputs to the multi-objective decision-making process. Multi-objective decision-making methods were subsequently used to find the optimal configuration of the assembly line. The paper describes the whole process of solving this task, from building the models to choosing the best configuration. Specifically, the problem was resolved using the experts' evaluation method for evaluating the weights of every decision-making criterion, while the ELECTRE III, TOPSIS and AGREPREF methods were used for ordering the possible solutions from the most to the least suitable alternative. The obtained results were compared and the final solution of this multi-objective decision-making problem was chosen.
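
    Of the ranking methods named above, TOPSIS is the simplest to sketch: alternatives are scored by their closeness to an ideal solution in a weighted, normalized criteria space. The decision matrix, weights and criteria below are invented placeholders, not the values used for the DCAI assembly line.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : alternatives x criteria scores
    weights : criteria weights (sum to 1), e.g. from expert evaluation
    benefit : True if higher is better for that criterion, else False
    """
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector-normalize columns
    v = m * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    closeness = d_worst / (d_best + d_worst)         # 1 = best possible
    return np.argsort(-closeness), closeness

# Hypothetical scores for 4 line configurations on 3 criteria:
# throughput (benefit), cycle time (cost), energy use (cost).
scores = np.array([[120, 42, 5.1],
                   [135, 40, 6.0],
                   [110, 38, 4.8],
                   [128, 45, 5.5]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])
ranking, c = topsis(scores, weights, benefit=np.array([True, False, False]))
print("configurations ranked best to worst:", ranking)
```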

  8. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    Science.gov (United States)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods owing to its efficiency and its ability to handle the ambiguity of images. However, the success of FCM cannot be guaranteed because it is easily trapped in local optima. Cuckoo search (CS) is a novel evolutionary algorithm, which has been tested on several optimization problems and proved to be highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in the paper. Further, the proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper.
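
    A minimal sketch of the idea — using a cuckoo-search-style global search to place FCM cluster centres, with the fuzzy C-means objective J_m as the fitness — is given below for 1-D grey-level data (e.g. flattened image intensities). The step size, abandonment fraction and Lévy exponent are generic textbook choices, not the paper's settings.

```python
import numpy as np

def fcm_objective(data, centers, m=2.0):
    """Fuzzy C-means objective J_m for 1-D data and given cluster centres."""
    d = np.abs(data[:, None] - centers[None, :]) + 1e-12      # point-centre distances
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return np.sum((u ** m) * d ** 2)

def levy_step(size, rng, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_fcm(data, n_clusters=3, n_nests=15, iters=100, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = data.min(), data.max()
    nests = rng.uniform(lo, hi, (n_nests, n_clusters))
    fit = np.array([fcm_objective(data, n) for n in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # Levy flight around the current best nest
        new = np.clip(nests + 0.01 * levy_step(nests.shape, rng) * (nests - best), lo, hi)
        new_fit = np.array([fcm_objective(data, n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # abandon a fraction pa of the nests and rebuild them at random
        abandon = rng.random(n_nests) < pa
        nests[abandon] = rng.uniform(lo, hi, (abandon.sum(), n_clusters))
        fit[abandon] = [fcm_objective(data, n) for n in nests[abandon]]
    return np.sort(nests[np.argmin(fit)])

# usage on fake grey-level data drawn from three intensity modes
pixels = np.concatenate([np.random.default_rng(1).normal(m, 8, 500) for m in (40, 120, 200)])
print(cuckoo_fcm(pixels))
```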

  9. Metrological characterization methods for confocal chromatic line sensors and optical topography sensors

    Science.gov (United States)

    Seppä, Jeremias; Niemelä, Karri; Lassila, Antti

    2018-05-01

    The increasing use of chromatic confocal technology for, e.g. fast, in-line optical topography, and measuring thickness, roughness and profiles implies a need for the characterization of various aspects of the sensors. Single-point, line and matrix versions of chromatic confocal technology, encoding depth information into wavelength, have been developed. Of these, line sensors are particularly suitable for in-line process measurement. Metrological characterization and development of practical methods for calibration and checking is needed for new optical methods and devices. Compared to, e.g. tactile methods, optical topography measurement techniques have limitations related to light wavelength and coherence, optical properties of the sample including reflectivity, specularity, roughness and colour, and definition of optical versus mechanical surfaces. In this work, metrological characterization methods for optical line sensors were developed for scale magnification and linearity, sensitivity to sample properties, and dynamic characteristics. An accurate depth scale calibration method using a single prototype groove depth sample was developed for a line sensor and validated with laser-interferometric sample tracking, attaining (sub)micrometre level or better than 0.1% scale accuracy. Furthermore, the effect of different surfaces and materials on the measurement and depth scale was studied, in particular slope angle, specularity and colour. In addition, dynamic performance, noise, lateral scale and resolution were measured using the developed methods. In the case of the LCI1200 sensor used in this study, which has a 11.3 mm  ×  2.8 mm measurement range, the instrument depth scale was found to depend only minimally on sample colour, whereas measuring steeply sloped specular surfaces in the peripheral measurement area, in the worst case, caused a somewhat larger relative sample-dependent change (1%) in scale.

  10. EXAMINATION OF THE VISUAL ACUITY ON THE LCD OPTOTYPE WITH WHOLE-LINE AND INTERPOLATION METHOD

    OpenAIRE

    Zajíček Tomáš; Veselý Petr; Synek Svatopluk

    2012-01-01

    The goal of this work is to show the possibility of using the LCD optotype in common optometric practice. Furthermore, two commonly used methods for measuring visual acuity are compared. 69 respondents were used for the measurements. The respondents were divided into two groups according to the measured LCD optotype. The visual acuity was measured using the whole-line method on modified Snellen charts as well as the interpolation method on ETDRS charts. Measurements were taken on the S...

  11. A comparison of methods for gravitational wave burst searches from LIGO and Virgo

    International Nuclear Information System (INIS)

    Beauville, F; Buskulic, D; Grosjean, D; Bizouard, M-A; Cavalier, F; Clapson, A-C; Hello, P; Blackburn, L; Katsavounidis, E; Bosi, L; Brocco, L; Brown, D A; Chatterji, S; Christensen, N; Knight, M; Fairhurst, S; Guidi, G; Heng, S; Hewitson, M; Klimenko, S

    2008-01-01

    The search procedure for burst gravitational waves has been studied using 24 h of simulated data in a network of three interferometers (Hanford 4 km, Livingston 4 km and Virgo 3 km are the example interferometers). Several methods to detect burst events developed in the LIGO Scientific Collaboration (LSC) and Virgo Collaboration have been studied and compared. We have performed coincidence analysis of the triggers obtained in the different interferometers with and without simulated signals added to the data. The benefits of having multiple interferometers of similar sensitivity are demonstrated by comparing the detection performance of the joint coincidence analysis with LSC and Virgo only burst searches. Adding Virgo to the LIGO detector network can increase by 50% the detection efficiency for this search. Another advantage of a joint LIGO-Virgo network is the ability to reconstruct the source sky position. The reconstruction accuracy depends on the timing measurement accuracy of the events in each interferometer, and is displayed in this paper with a fixed source position example

  12. A comparison of methods for gravitational wave burst searches from LIGO and Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Beauville, F; Buskulic, D; Grosjean, D [Laboratoire d' Annecy-le-Vieux de Physique des Particules, Chemin de Bellevue, BP 110, 74941 Annecy-le-Vieux Cedex (France); Bizouard, M-A; Cavalier, F; Clapson, A-C; Hello, P [Laboratoire de l' Accelerateur Lineaire, IN2P3/CNRS-Universite de Paris XI, BP 34, 91898 Orsay Cedex (France); Blackburn, L; Katsavounidis, E [LIGO-Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Bosi, L [INFN Sezione di Perugia and/or Universita di Perugia, Via A Pascoli, I-06123 Perugia (Italy); Brocco, L [INFN Sezione di Roma and/or Universita ' La Sapienza' , P le A Moro 2, I-00185 Roma (Italy); Brown, D A; Chatterji, S [LIGO-California Institute of Technology, Pasadena, CA 91125 (United States); Christensen, N; Knight, M [Carleton College, Northfield, MN 55057 (United States); Fairhurst, S [University of Wisconsin-Milwaukee, Milwaukee, WI 53201 (United States); Guidi, G [INFN Sezione Firenze/Urbino Via G Sansone 1, I-50019 Sesto Fiorentino (Italy); and/or Universita di Firenze, Largo E Fermi 2, I-50125 Firenze and/or Universita di Urbino, Via S Chiara 27, I-61029 Urbino (Italy); Heng, S; Hewitson, M [University of Glasgow, Glasgow, G12 8QQ (United Kingdom); Klimenko, S [University of Florida-Gainesville, FL 32611 (United States)] (and others)

    2008-02-21

    The search procedure for burst gravitational waves has been studied using 24 h of simulated data in a network of three interferometers (Hanford 4 km, Livingston 4 km and Virgo 3 km are the example interferometers). Several methods to detect burst events developed in the LIGO Scientific Collaboration (LSC) and Virgo Collaboration have been studied and compared. We have performed coincidence analysis of the triggers obtained in the different interferometers with and without simulated signals added to the data. The benefits of having multiple interferometers of similar sensitivity are demonstrated by comparing the detection performance of the joint coincidence analysis with LSC and Virgo only burst searches. Adding Virgo to the LIGO detector network can increase by 50% the detection efficiency for this search. Another advantage of a joint LIGO-Virgo network is the ability to reconstruct the source sky position. The reconstruction accuracy depends on the timing measurement accuracy of the events in each interferometer, and is displayed in this paper with a fixed source position example.

  13. Phase boundary estimation in electrical impedance tomography using the Hooke and Jeeves pattern search method

    International Nuclear Information System (INIS)

    Khambampati, Anil Kumar; Kim, Kyung Youn; Ijaz, Umer Zeeshan; Lee, Jeong Seong; Kim, Sin

    2010-01-01

    In industrial processes, monitoring of heterogeneous phases is crucial to the safety and operation of engineering structures; in particular, the visualization of voids and air bubbles is advantageous. As a result, many studies have appeared in the literature that offer varying degrees of functionality. Electrical impedance tomography (EIT) has already proved to be a hallmark for process monitoring and offers not only the visualization of the resistivity profile for a given flow mixture but is also used for the detection of phase boundaries. Iterative image reconstruction algorithms, such as the modified Newton–Raphson (mNR) method, are commonly used as inverse solvers. However, their utility is problematic in the sense that they require an initial solution in close proximity to the ground truth. Furthermore, they also rely on the gradient information of the objective function to be minimized. Therefore, in this paper, we address all these issues by employing a direct search algorithm, namely the Hooke and Jeeves pattern search method, to estimate the phase boundaries; it directly minimizes the cost function and does not require gradient information. It is assumed that the resistivity profile is known a priori, and therefore the unknown information is the size and location of the object. The boundary coefficients are parameterized using a truncated Fourier series and are estimated using the relationship between the measured voltages and injected currents. Through extensive simulation and experimental results and by comparison with mNR, we show that the Hooke and Jeeves pattern search method offers a promising prospect for process monitoring.
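
    Hooke and Jeeves pattern search alternates exploratory moves along each coordinate with a pattern move in the direction of improvement, shrinking the step when no move helps — no gradients required. The sketch below shows a generic minimization loop; in the paper the cost would be the misfit between measured and computed EIT voltages over the Fourier boundary coefficients, which is replaced here by a simple quadratic stand-in.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Derivative-free Hooke and Jeeves pattern search (minimization)."""
    def explore(base, s):
        x = base.copy()
        for i in range(len(x)):
            for d in (+s, -s):            # probe each coordinate both ways
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = np.asarray(x0, dtype=float)
    while step > tol and max_iter > 0:
        max_iter -= 1
        y = explore(x, step)
        if f(y) < f(x):
            # pattern move: jump in the promising direction, then re-explore
            x, y = y, explore(y + (y - x), step)
            if f(y) < f(x):
                x = y
        else:
            step *= shrink                # no improvement: refine the mesh
    return x

# Example: recover two "boundary coefficients" minimizing a quadratic misfit.
target = np.array([1.3, -0.7])
cost = lambda c: np.sum((c - target) ** 2)
print(hooke_jeeves(cost, x0=[0.0, 0.0]))
```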

  14. Applications of the Warren-Averbach method of X-ray diffraction line profile analysis

    International Nuclear Information System (INIS)

    Ichikawa, Rodrigo Uchida

    2013-01-01

    The objective of this work was to develop and implement a methodology of X-ray Line Profile Analysis (XLPA) for the study and determination of the mean crystallite sizes and microstrains in materials. A computer program was developed to speed up the treatment of diffraction peaks and perform the deconvolution using the Stokes method to correct for the instrumental contribution in the X-ray diffraction measurements. The XLPA methods used were the Scherrer, Williamson-Hall and Single-Line methods, which can be called real-space methods, and the Fourier-space method of Warren-Averbach. Furthermore, by considering a mathematical model it was possible to calculate the crystallite size distribution, assuming a log-normal distribution and spherical crystallites. It was demonstrated that the proposed approach can provide reliable results by evaluating a dispersion parameter. The methodologies described above were applied to two distinct materials: the alloy Zircaloy-4 and ZnO. (author)
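
    The real-space side of this analysis is compact enough to sketch. Below, a Scherrer estimate from a single peak and a Williamson-Hall fit (beta·cos(theta) = K·lambda/D + 4·epsilon·sin(theta)) that separates size and microstrain are shown for an illustrative, already instrument-corrected peak list; the numbers are not from the thesis.

```python
import numpy as np

K, LAM = 0.9, 1.5406e-10   # shape factor and Cu K-alpha wavelength in metres

def scherrer_size(fwhm_rad, two_theta_rad):
    """Volume-weighted crystallite size from a single peak (Scherrer)."""
    return K * LAM / (fwhm_rad * np.cos(two_theta_rad / 2))

def williamson_hall(fwhm_rad, two_theta_rad):
    """Fit beta*cos(theta) = K*lambda/D + 4*eps*sin(theta); returns (D, eps)."""
    theta = np.asarray(two_theta_rad) / 2
    y = np.asarray(fwhm_rad) * np.cos(theta)
    x = 4 * np.sin(theta)
    slope, intercept = np.polyfit(x, y, 1)
    return K * LAM / intercept, slope     # size in metres, dimensionless strain

# Illustrative peak list: (2-theta, FWHM) in degrees, instrument-corrected.
peaks_deg = np.array([[31.8, 0.20], [34.4, 0.22], [36.3, 0.23], [56.6, 0.30]])
tt = np.radians(peaks_deg[:, 0])
bw = np.radians(peaks_deg[:, 1])
print("Scherrer size of first peak (nm):", scherrer_size(bw[0], tt[0]) * 1e9)
D, eps = williamson_hall(bw, tt)
print("Williamson-Hall size (nm) and strain:", D * 1e9, eps)
```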

  15. A Sea-Sky Line Detection Method for Unmanned Surface Vehicles Based on Gradient Saliency.

    Science.gov (United States)

    Wang, Bo; Su, Yumin; Wan, Lei

    2016-04-15

    Special features in real marine environments such as cloud clutter, sea glint and weather conditions always result in various kinds of interference in optical images, which make it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem a saliency-based SSL detection method is proposed. Through the computation of gradient saliency the line features of SSL are enhanced effectively, while other interference factors are relatively suppressed, and line support regions are obtained by a region growing method on gradient orientation. The SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of SSL detection is implemented by introducing a cubature Kalman filter (CKF). In the end, the proposed method is tested on a benchmark dataset from the "XL" USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and its accuracy and stability are effectively improved by the CKF.

  16. A Sea-Sky Line Detection Method for Unmanned Surface Vehicles Based on Gradient Saliency

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2016-04-01

    Full Text Available Special features in real marine environments such as cloud clutter, sea glint and weather conditions always result in various kinds of interference in optical images, which make it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem a saliency-based SSL detection method is proposed. Through the computation of gradient saliency the line features of SSL are enhanced effectively, while other interference factors are relatively suppressed, and line support regions are obtained by a region growing method on gradient orientation. The SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of SSL detection is implemented by introducing a cubature Kalman filter (CKF). In the end, the proposed method is tested on a benchmark dataset from the “XL” USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and its accuracy and stability are effectively improved by the CKF.

  17. All roads lead to Rome - New search methods for the optimal triangulation problem

    Czech Academy of Sciences Publication Activity Database

    Ottosen, T. J.; Vomlel, Jiří

    2012-01-01

    Roč. 53, č. 9 (2012), s. 1350-1366 ISSN 0888-613X R&D Projects: GA MŠk 1M0572; GA ČR GEICC/08/E010; GA ČR GA201/09/1891 Grant - others:GA MŠk(CZ) 2C06019 Institutional support: RVO:67985556 Keywords : Bayesian networks * Optimal triangulation * Probabilistic inference * Cliques in a graph Subject RIV: BD - Theory of Information Impact factor: 1.729, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/vomlel-all roads lead to rome - new search methods for the optimal triangulation problem.pdf

  18. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    Science.gov (United States)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

    Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
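
    The two-stage idea — a coarse genetic algorithm to land near the global optimum, followed by a local gradient-based polish — is sketched below on a toy multimodal cost. The delivery_cost function, bounds and GA settings are placeholders, not the Warren basin model.

```python
import numpy as np
from scipy.optimize import minimize

def delivery_cost(x):
    # Placeholder for a water-delivery cost model: a multimodal function of
    # two decision variables (e.g. a recharge rate and a pumping rate).
    return (x[0] - 3) ** 2 + (x[1] - 2) ** 2 + 2 * np.sin(3 * x[0]) * np.cos(3 * x[1])

def genetic_search(f, bounds, pop=40, gens=60, seed=0):
    """Very small GA: truncation selection plus Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (pop, len(bounds)))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, x)
        parents = x[np.argsort(fit)[: pop // 2]]
        kids = parents[rng.integers(len(parents), size=pop - len(parents))]
        kids = np.clip(kids + rng.normal(0, 0.2, kids.shape), lo, hi)
        x = np.vstack([parents, kids])
    return x[np.argmin(np.apply_along_axis(f, 1, x))]

bounds = [(0.0, 6.0), (0.0, 6.0)]
near_opt = genetic_search(delivery_cost, bounds)              # global, coarse
polished = minimize(delivery_cost, near_opt, bounds=bounds)   # local, gradient-based
print("GA estimate:", near_opt, "-> polished optimum:", polished.x)
```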

  19. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which System Normalization, PCA, and Multilevel Grid Search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of the SVM. Sensitivity, Specificity, Precision, ROC curves, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has relatively better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
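
    The general pipeline described — normalization, PCA, then a grid search over SVM hyperparameters refined level by level — can be sketched with scikit-learn. This is not the authors' PMSVM code; the dataset, the grids and the two-level refinement below are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("norm", StandardScaler()),   # system normalization
                 ("pca", PCA(n_components=10)),
                 ("svm", SVC())])

# Level 1: coarse grid over C and gamma.
coarse = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                             "svm__gamma": [1e-3, 1e-2, 1e-1, 1]}, cv=5)
coarse.fit(X_tr, y_tr)
C0, g0 = coarse.best_params_["svm__C"], coarse.best_params_["svm__gamma"]

# Level 2: finer grid centred on the level-1 optimum (the "multilevel" idea).
fine = GridSearchCV(pipe, {"svm__C": [C0 / 2, C0, C0 * 2],
                           "svm__gamma": [g0 / 2, g0, g0 * 2]}, cv=5)
fine.fit(X_tr, y_tr)
print("best params:", fine.best_params_, "test accuracy:", fine.score(X_te, y_te))
```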

  20. Evaluation of the monitor cursor-line method for measuring pulmonary artery and central venous pressures.

    Science.gov (United States)

    Pasion, Editha; Good, Levell; Tizon, Jisebelle; Krieger, Staci; O'Kier, Catherine; Taylor, Nicole; Johnson, Jennifer; Horton, Carrie M; Peterson, Mary

    2010-11-01

    To determine if the monitor cursor-line feature on bedside monitors is accurate for measuring central venous and pulmonary artery pressures in cardiac surgery patients. Central venous and pulmonary artery pressures were measured via 3 methods (end-expiratory graphic recording, monitor cursor-line display, and monitor digital display) in a convenience sample of postoperative cardiac surgery patients. Pressures were measured twice during both mechanical ventilation and spontaneous breathing. Analysis of variance was used to determine differences between measurement methods and the percentage of monitor pressures that differed by 4 mm Hg or more from the measurement obtained from the graphic recording. Significance level was set at P less than .05. Twenty-five patients were studied during mechanical ventilation (50 measurements) and 21 patients during spontaneous breathing (42 measurements). Measurements obtained via the 3 methods did not differ significantly for either type of pressure (P > .05). Graphically recorded pressures and measurements obtained via the monitor cursor-line or digital display methods differed by 4 mm Hg or more in 4% and 6% of measurements, respectively, during mechanical ventilation and 4% and 11%, respectively, during spontaneous breathing. The monitor cursor-line method for measuring central venous and pulmonary artery pressures may be a reasonable alternative to the end-expiratory graphic recording method in hemodynamically stable, postoperative cardiac surgery patients. Use of the digital display on the bedside monitor may result in larger discrepancies from the graphically recorded pressures than when the cursor-line method is used, particularly in spontaneously breathing patients.

  1. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel’s Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing k_eff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been applied to optimize the reactor core loading pattern. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions. It uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, as a first step, the GSA method is compared with other meta-heuristic algorithms on Shekel’s Foxholes problem. In the second step, for finding the best core, the GSA algorithm has been performed for three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increasing the multiplication factor (k_eff), decreasing the power peaking factor (PPF) and flattening the power density. It is notable that for the neutronic calculations, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
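
    A compact, generic GSA sketch is given below: agent masses are derived from normalized fitness, a decaying gravitational constant scales the pairwise attractive forces, and velocities and positions are updated accordingly. It is a minimal minimization demo on a standard multimodal test function, not the authors' reactor-loading implementation (which couples the search to the PARCS neutronics code).

```python
import numpy as np

def gsa(f, dim, bounds, n_agents=30, iters=200, g0=100.0, alpha=20.0, seed=0):
    """Minimal Gravitational Search Algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_agents, dim))
    v = np.zeros_like(x)
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        best, worst = fit.min(), fit.max()
        m = (worst - fit) / (worst - best + 1e-12)   # best agent gets the largest mass
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / iters)          # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = np.sum(rng.random((n_agents, 1)) * g * m[:, None] * diff / dist[:, None], axis=0)
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    fit = np.apply_along_axis(f, 1, x)
    return x[np.argmin(fit)], fit.min()

# Example: minimize a simple multimodal function in 5 dimensions.
rastrigin = lambda z: 10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
print(gsa(rastrigin, dim=5, bounds=(-5.12, 5.12)))
```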

  2. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens CO. (EBSCO), which has presented different search methods that can be used based on the needs of the users. Visual search and MeSH-controlled search methods are among the most common methods. The goal of this research was to compare the precision of the retrieved sources in the EBSCO Medline database using the MeSH-controlled and visual search methods. This research was a semi-empirical study. By holding training workshops, 70 students of higher education in different educational departments of Kashan University of Medical Sciences were taught the MeSH-controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on the Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent-sample t test, and the three precisions obtained with the three precision formulas were studied for the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and their mean precisions were significantly different. Fifty-three percent of the participants in the research also mentioned that the use of the combination of the two methods produced better results. For users, it is more appropriate to use a natural-language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.

  3. A high speed electrohydrodynamic (EHD) jet printing method for line printing

    International Nuclear Information System (INIS)

    Phung, Thanh Huy; Kim, Seora; Kwon, Kye-Si

    2017-01-01

    Electrohydrodynamic (EHD) jet printing has drawn attention due to its capability to produce smaller dots and patterns with finer lines when compared to those obtained from using conventional inkjet printing. Previous studies have suggested that drop-on-demand EHD-patterning applications should be limited to very slow printing cases with speeds far less than 10 mm s⁻¹ due to the small dot size and limited jetting frequency. In this study, a new EHD printing method is proposed to significantly increase the line-patterning printing speed by modifying the ink and thereby changing the relic shape. The proposed method has the additional advantage of reducing the line-pattern width. The results of the experiment show that the pattern width could be reduced from 20 µm to 4 µm by increasing the printing speed from 10 mm s⁻¹ to 50 mm s⁻¹, respectively. (paper)

  4. Synthesis method based on solution regions for planar four bar straight line linkages

    International Nuclear Information System (INIS)

    Lai Rong, Yin; Cong, Mao; Jian you, Han; Tong, Yang; Juan, Huang

    2012-01-01

    An analytical method for synthesizing and selecting desired four-bar straight line mechanisms based on solution regions is presented. Given two fixed pivots, the point position and direction of the target straight line, an infinite number of mechanism solutions can be produced by employing this method, both in the general case and all three special cases. Unifying the straight line direction and the displacement from the given point to the instant center into the same form with different angles as parameters, infinite mechanism solutions can be expressed with different solution region charts. The mechanism property graphs have been computed to enable the designers to find out the involved mechanism information more intuitively and avoid aimlessness in selecting optimal mechanisms

  5. A dynamic lattice searching method with rotation operation for optimization of large clusters

    International Nuclear Information System (INIS)

    Wu Xia; Cai Wensheng; Shao Xueguang

    2009-01-01

    Global optimization of large clusters has been a difficult task, though much effort has been spent and many efficient methods have been proposed. In this work, a rotation operation (RO) is designed to realize the structural transformation from decahedra to icosahedra for the optimization of large clusters, by rotating the atoms below the center atom through a definite angle around the fivefold axis. Based on the RO, a development of the previous dynamic lattice searching with constructed core (DLSc), named DLSc-RO, is presented. With an investigation of the method for the optimization of Lennard-Jones (LJ) clusters, i.e., LJ500, LJ561, LJ600, LJ665–667, LJ670, LJ685, and LJ923, Morse clusters, silver clusters with the Gupta potential, and aluminum clusters with the NP-B potential, it was found that the global minima with both icosahedral and decahedral motifs can be obtained, and the method is proved to be efficient and universal.

  6. MRS algorithm: a new method for searching myocardial region in SPECT myocardial perfusion images.

    Science.gov (United States)

    He, Yuan-Lie; Tian, Lian-Fang; Chen, Ping; Li, Bin; Mao, Zhong-Yuan

    2005-10-01

    First, the necessity of automatically segmenting the myocardium from myocardial SPECT images is discussed in Section 1. To eliminate the influence of the background, the optimal threshold segmentation method modified for the MRS algorithm is explained in Section 2. Then, the image erosion structure is applied to identify the myocardium region and the liver region. The contour tracing method is introduced to extract the myocardial contour. To locate the centroid of the myocardium, the myocardial centroid searching method is developed. The protocol of the MRS algorithm is summarized in Section 6. The performance of the MRS algorithm is investigated and the conclusion is drawn in Section 7. Finally, the importance of the MRS algorithm and its possible improvements are discussed.

  7. A FISH-based method for assessment of HER-2 amplification status in breast cancer circulating tumor cells following CellSearch isolation

    Directory of Open Access Journals (Sweden)

    Frithiof H

    2016-11-01

    Full Text Available Henrik Frithiof,1 Kristina Aaltonen,1 Lisa Rydén2,3 1Division of Oncology and Pathology, 2Division of Surgery, Department of Clinical Sciences Lund, Lund University, Lund, 3Department of Surgery, Skåne University Hospital, Malmö, Sweden Introduction: Amplification of the HER-2/neu (HER-2) proto-oncogene occurs in 10%–15% of primary breast cancer, leading to an activated HER-2 receptor, augmenting growth of cancer cells. Tumor classification is determined in primary tumor tissue and metastatic biopsies. However, malignant cells tend to alter their phenotype during disease progression. Circulating tumor cell (CTC) analysis may serve as an alternative to repeated biopsies. The Food and Drug Administration-approved CellSearch system allows determination of the HER-2 protein, but not of the HER-2 gene. The aim of this study was to optimize a fluorescence in situ hybridization (FISH)-based method to quantitatively determine HER-2 amplification in breast cancer CTCs following CellSearch-based isolation and verify the method in patient samples. Methods: Using healthy donor blood spiked with human epidermal growth factor receptor 2 (HER-2)-positive breast cancer cell lines, SKBr-3 and BT-474, and a corresponding negative control (the HER-2-negative MCF-7 cell line), an in vitro CTC model system was designed. Following isolation in the CellSearch system, CTC samples were further enriched and fixed on microscope slides. Immunocytochemical staining with cytokeratin and 4',6-diamidino-2'-phenylindole dihydrochloride identified CTCs under a fluorescence microscope. A FISH-based procedure was optimized by applying the HER2 IQFISH pharmDx assay for assessment of HER-2 amplification status in breast cancer CTCs. Results: A method for defining the presence of HER-2 amplification in single breast cancer CTCs after CellSearch isolation was established using cell lines as positive and negative controls. The method was validated in blood from breast cancer patients

  8. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

    International Nuclear Information System (INIS)

    Fan, J Y; Wang, F G; W, Y; Zhang, Y L

    2006-01-01

    Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data, one is interpolation of the data on the stripes, and the other is interpolation of data between the stripes. Trend interpolation method is applied to the data on the stripes, and Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove regional interpolation method feasible and practical, and it also promotes the speed and precision

  9. Unevenness of Sliding Surface of Overhead Rigid Conductor Lines and Method for Reducing Unevenness

    Science.gov (United States)

    Aboshi, Mitsuo; Shimizu, Masashi

    Rigid conductor lines are used in many subways, because the use of such conductor lines reduces the risk of accidents and because less space is required for their installation. However, as the unevenness of the sliding surface of the rigid conductor lines significantly influences the fluctuations in the contact force between pantographs and contact lines, it is necessary to decrease the unevenness at the construction as well as the maintenance stages. In order to investigate the installation accuracy of overhead rigid conductor lines, we have developed a device that accurately and continuously measures the unevenness of the sliding surface. By using this measuring device, we have confirmed that the unevenness of the sliding surface depends on various factors such as the sag between the support points, the deformation of the aluminum base or the conductive rail in the case of a long wavelength, the slight sagging unevenness between the bolts of the long ear, the undulating wear etc. This paper describes the actual unevenness conditions and the technical methods for decreasing the unevenness of the sliding surface of overhead rigid conductor lines.

  10. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Full Text Available Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there is some dispute in academia over whether K-line patterns have predictive power. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to research the predictive power of K-line patterns. A similarity match model and a nearest neighbor-clustering algorithm are proposed for solving the problems of similarity match and clustering of K-line series, respectively. The experiment includes testing the predictive power of the Three Inside Up pattern and the Three Inside Down pattern on a testing dataset of the K-line series data of Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal for different shapes and (2) each of the existing K-line patterns requires further classification based on the shape feature for improving the prediction performance.
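
    A minimal sketch of the two building blocks named above — matching a query K-line (candlestick) window against historical windows with a normalized distance, and grouping matches with a simple nearest-neighbour clustering rule — is given below. The OHLC data, window length and distance threshold are toy assumptions; the paper's similarity model is more elaborate.

```python
import numpy as np

def normalize(window):
    """Scale an OHLC window by its first close so patterns are comparable."""
    return window / window[0, 3]

def pattern_distance(a, b):
    return np.linalg.norm(normalize(a) - normalize(b))

def find_similar(series, query, top_k=5):
    """Slide a window the length of the query over an OHLC series and
    return the start indices of the top_k most similar windows."""
    w = len(query)
    d = np.array([pattern_distance(series[i:i + w], query)
                  for i in range(len(series) - w + 1)])
    return np.argsort(d)[:top_k], d

def nn_cluster(patterns, threshold):
    """Nearest-neighbour clustering: a pattern joins the first cluster whose
    representative is closer than `threshold`, otherwise starts a new cluster."""
    clusters = []
    for p in patterns:
        for c in clusters:
            if pattern_distance(p, c[0]) < threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# toy OHLC data: columns are open, high, low, close
rng = np.random.default_rng(0)
closes = np.cumsum(rng.normal(0, 1, 300)) + 100
ohlc = np.column_stack([closes, closes + 1, closes - 1, closes + rng.normal(0, 0.3, 300)])
idx, _ = find_similar(ohlc, query=ohlc[50:53])   # a 3-candle query pattern
print("most similar 3-candle windows start at:", idx)
clusters = nn_cluster([ohlc[i:i + 3] for i in idx], threshold=0.05)
print("clusters among the top matches:", len(clusters))
```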

  11. In vitro toxicity test and searching the possibility of cancer cell line extermination by magnetic heating with using Fe3O4 magnetic fluid

    International Nuclear Information System (INIS)

    Pham Hoai Linh; Nguyen Chi Thuan; Nguyen Anh Tuan; Pham Van Thach; Nguyen Xuan Phuc; Le Van Hong; Tran Cong Yen; Nguyen Thi Quy; Hoang Thi My Nhung; Phi Thi Xuyen

    2009-01-01

    A Fe3O4-based magnetic fluid with different concentrations ranging from 0.15 ng/cell to 10 ng/cell (nanograms per cell) was used in an in vitro toxicity test on several cancer cell lines, Sarcoma 180, HeLa and H358. It shows that the fluid with a Fe3O4 concentration below 1.2 ng/cell is completely non-toxic for these cell lines. Even at the highest concentration of 10 ng/cell, the cell viability still reaches more than 60%. The magnetic fluid with a Fe3O4 concentration of about 0.1 ng/cell was also used to explore ex vivo the possibility of Sarcoma 180 extermination by magnetic heating with an AC field of 120 Oe and 184 kHz. The result shows that after a heat treatment for 30 min, 40% of the Sarcoma 180 cells were killed.

  12. Analysing Infinite-State Systems by Combining Equivalence Reduction and the Sweep-Line Method

    DEFF Research Database (Denmark)

    Mailund, Thomas

    2002-01-01

    The sweep-line method is a state space exploration method for on-the-fly verification aimed at systems exhibiting progress. Presence of progress in the system makes it possible to delete certain states during state space generation, which reduces the memory used for storing the states. Unfortunately, the same progress that is used to improve memory performance in state space exploration often leads to an infinite state space: the progress in the system is carried over to the states, resulting in infinitely many states distinguished only through the progress. A finite state space can … property essential for the sweep-line method. We evaluate the new method on two case studies, showing significant improvements in performance, and we briefly discuss the new method in the context of Timed Coloured Petri Nets, where the “increasing global time” semantics can be exploited for more efficient…

  13. A non-iterative twin image elimination method with two in-line digital holograms

    Science.gov (United States)

    Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young

    2018-02-01

    We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.

  14. A comparison of alternative methods for estimating the self-thinning boundary line

    Science.gov (United States)

    Lianjun Zhang; Huiquan Bi; Jeffrey H. Gove; Linda S. Heath

    2005-01-01

    The fundamental validity of the self-thinning "law" has been debated over the last three decades. A long-standing concern centers on how to objectively select data points for fitting the self-thinning line and on the most appropriate regression method for estimating the two coefficients. Using data from an even-aged Pinus strobus L. stand as an...

  15. Validation of the actuator line method using near wake measurements of the MEXICO rotor

    DEFF Research Database (Denmark)

    Nilsson, Karl; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2015-01-01

    The purpose of the present work is to validate the capability of the actuator line method to compute vortex structures in the near wake behind the MEXICO experimental wind turbine rotor. In the MEXICO project/MexNext Annex, particle image velocimetry measurements have made it possible to determine...

  16. Theory of direct-interband-transition line shapes based on Mori's method

    International Nuclear Information System (INIS)

    Sam Nyung Yi; Jai Yon Ryu; Ok Hee Chung; Joung Young Sug; Sang Don Choi; Yeon Choon Chung

    1987-01-01

    A theory of direct interband optical transition in the electron-phonon system is introduced on the basis of the Kubo formalism and by using Mori's method of calculation. The line shape functions are introduced in two different ways and are compared with those obtained by Choi and Chung based on Argyres and Sigel's projection technique

  17. An R-peak detection method that uses an SVD filter and a search back system.

    Science.gov (United States)

    Jung, Woo-Hyuk; Lee, Sang-Goog

    2012-12-01

    In this paper, we present a method for detecting the R-peak of an ECG signal by using a singular value decomposition (SVD) filter and a search back system. The ECG signal was processed in two phases: the pre-processing phase and the decision phase. The pre-processing phase consisted of the stages for the SVD filter, Butterworth high-pass filter (HPF), moving average (MA), and squaring, whereas the decision phase consisted of a single stage that detected the R-peak. In the pre-processing phase, the SVD filter removed noise while the Butterworth HPF eliminated baseline wander. The MA removed the remaining noise of the signal that had gone through the SVD filter to make the signal smooth, and squaring played a role in strengthening the signal. In the decision phase, a threshold was used to set the interval before detecting the R-peak. When the latest R-R interval (RRI), as suggested by Hamilton et al., was greater than 150% of the previous RRI, the criterion for detecting the R-peak in such an interval was modified to 150% of the smallest of the two most recent RRIs. When the modified search back system was used, the error rate of the peak detection decreased to 0.29%, compared to 1.34% when the modified search back system was not used. Consequently, the sensitivity was 99.47%, the positive predictivity was 99.47%, and the detection error was 1.05%. Furthermore, the quality of the signal in data with a substantial amount of noise was improved, and thus, the R-peak was detected effectively.
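
    The decision-phase logic — threshold-based peak picking followed by a search-back pass when the latest R-R interval is abnormally long — is sketched below on a synthetic impulse train; the SVD/Butterworth/MA preprocessing is assumed to have already produced the squared signal, and the thresholds and refractory period are generic choices rather than the paper's values.

```python
import numpy as np

def detect_r_peaks(sig, fs, refractory=0.2, thr_frac=0.5):
    """Threshold-based peak picking on a preprocessed (squared) ECG signal,
    followed by a search-back pass over abnormally long R-R intervals."""
    min_gap = int(refractory * fs)
    thr = thr_frac * np.max(sig)
    peaks = []
    for i in range(1, len(sig) - 1):
        if sig[i] > thr and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
            if not peaks or i - peaks[-1] > min_gap:
                peaks.append(i)

    # Search back: if an R-R interval is > 150% of the previous one, re-scan
    # that interval with a lowered threshold for a missed beat.
    final = list(peaks)
    for k in range(2, len(peaks)):
        rri, prev_rri = peaks[k] - peaks[k - 1], peaks[k - 1] - peaks[k - 2]
        if rri > 1.5 * prev_rri:
            seg = sig[peaks[k - 1] + min_gap : peaks[k] - min_gap]
            if len(seg) and seg.max() > 0.5 * thr:
                final.append(peaks[k - 1] + min_gap + int(np.argmax(seg)))
    return np.array(sorted(final))

# toy example: a synthetic impulse train with one weak (initially missed) beat
fs = 250
sig = np.zeros(10 * fs)
for t in np.arange(0.5, 9.5, 0.8):
    sig[int(t * fs)] = 1.0
sig[int(4.5 * fs)] *= 0.4          # weak beat recovered by the search-back pass
print(detect_r_peaks(sig, fs))
```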

  18. Multi-line split DNA synthesis: a novel combinatorial method to make high quality peptide libraries

    Directory of Open Access Journals (Sweden)

    Ueno Shingo

    2004-09-01

    Full Text Available Abstract Background We developed a method to make various high-quality random peptide libraries for evolutionary protein engineering based on combinatorial DNA synthesis. Results A split synthesis in codon units was performed with mixtures of bases optimally designed by using a Genetic Algorithm program. It required only standard DNA synthesis reagents and standard DNA synthesizers in three lines. This multi-line split DNA synthesis (MLSDS) is simply realized by adding a mix-and-split process to the normal DNA synthesis protocol. The superiority of the MLSDS method over other methods was shown. We demonstrated the synthesis of oligonucleotide libraries with a diversity of 10^16, and the construction of a library with random sequences coding for 120 amino acids containing few stop codons. Conclusions Owing to the flexibility of the MLSDS method, it will be possible to design various "rational" libraries by using bioinformatics databases.

  19. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization investigation of complex systems are based on developing and updating mathematical models of the systems by solving the appropriate inverse problems. The input data required for the solution are obtained from the analysis of experimentally determined characteristics of a system or a process. The sought causal characteristics include the equation coefficients of the object's mathematical model, boundary conditions, etc. The optimization approach is one of the main approaches to solving inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnosis, as well as in optimal control, computed tomography, image restoration, training of neural networks, and other intelligent technologies. The increasingly complicated systems observed during the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extremum problems significantly more difficult. Many practical applications have problem conditions that can restrict modeling. As a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating the derivatives is difficult and unreliable, which results in the use of optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extremum problem. When the number of variables is large, stochastic global optimization algorithms are used. However, stochastic algorithms can yield expensive solutions, and this drawback restricts their application. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search

  20. SMM observations of gamma-ray transients. 3: A search for a broadened, redshifted positron annihilation line from the direction of the Galactic center

    Science.gov (United States)

    Harris, Michael J.; Share, Gerald H.; Leising, Mark D.

    1994-01-01

    We have searched 1980-1988 Solar Maximum Mission gamma-ray spectrometer data for transient emission, on timescales from hours to approximately 12 days, of broad gamma-ray lines at energies of approximately 400 keV, which were reported by the High Energy Astronomy Observatory (HEAO) 1 and SIGMA experiments from two sources lying toward the Galactic center. The lines have been interpreted as the product of the annihilation of positrons in pair plasmas surrounding the black hole candidate 1E 1740.7-2942 and the X-ray binary 1H 1822-371. Our results from a combined exposure of approximately 1.5 x 10^7 s provide no convincing evidence for transient emission of this line on any timescale between approximately 9 hr and approximately 1 yr. Our 3 sigma upper limits on the line flux during approximately 12 day intervals are characteristically 4.8 x 10^-3 photon/sq cm/s, while for approximately 1 day intervals our 3 sigma upper limits are characteristically 4.9 x 10^-3 photon/sq cm/s. These results imply a duty cycle of less than 1.3% for the transient line measured from 1H 1822-371 during an approximately 3 week interval in 1977 by HEAO 1, and a duty cycle of less than or equal to 0.8% for the transient line detected in 1990 and 1992 from 1E 1740.7-2942 on approximately 1 day timescales by SIGMA.

  1. Methods to filter out spurious disturbances in continuous-wave searches from gravitational-wave detectors

    International Nuclear Information System (INIS)

    Leaci, Paola

    2015-01-01

    Semicoherent all-sky searches over year-long observation times for continuous gravitational wave signals produce various thousands of potential periodic source candidates. Efficient methods able to discard false candidate events are crucial in order to put all the efforts into a computationally intensive follow-up analysis for the remaining most promising candidates (Shaltev et al 2014 Phys. Rev. D 89 124030). In this paper we present a set of techniques able to fulfill such requirements, identifying and eliminating false candidate events, reducing thus the bulk of candidate sets that need to be further investigated. Some of these techniques were also used to streamline the candidate sets returned by the Einstein@Home hierarchical searches presented in (Aasi J et al (The LIGO Scientific Collaboration and the Virgo Collaboration) 2013 Phys. Rev. D 87 042001). These powerful methods and the benefits originating from their application to both simulated and on detector data from the fifth LIGO science run are illustrated and discussed. (paper)

  2. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; matousek, V.

    1996-01-01

    An algorithm for on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  3. Reporting Quality of Search Methods in Systematic Reviews of HIV Behavioral Interventions (2000–2010): Are the Searches Clearly Explained, Systematic and Reproducible?

    Science.gov (United States)

    Mullins, Mary M.; DeLuca, Julia B.; Crepaz, Nicole; Lyles, Cynthia M.

    2018-01-01

    Systematic reviews are an essential tool for researchers, prevention providers and policy makers who want to remain current with the evidence in the field. Systematic review must adhere to strict standards, as the results can provide a more objective appraisal of evidence for making scientific decisions than traditional narrative reviews. An integral component of a systematic review is the development and execution of a comprehensive systematic search to collect available and relevant information. A number of reporting guidelines have been developed to ensure quality publications of systematic reviews. These guidelines provide the essential elements to include in the review process and report in the final publication for complete transparency. We identified the common elements of reporting guidelines and examined the reporting quality of search methods in HIV behavioral intervention literature. Consistent with the findings from previous evaluations of reporting search methods of systematic reviews in other fields, our review shows a lack of full and transparent reporting within systematic reviews even though a plethora of guidelines exist. This review underscores the need for promoting the completeness of and adherence to transparent systematic search reporting within systematic reviews. PMID:26052651

  4. Literature search for methods for hazard analyses of air carrier operations.

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Guridi, G.; Samanta, P.

    2002-07-01

    Representatives of the Federal Aviation Administration (FAA) and several air carriers under Title 14 of the Code of Federal Regulations (CFR) Part 121 developed a system-engineering model of the functions of air-carrier operations. Their analyses form the foundation or basic architecture upon which other task areas are based: hazard analyses, performance measures, and risk indicator design. To carry out these other tasks, models may need to be developed using the basic architecture of the Air Carrier Operations System Model (ACOSM). Since ACOSM encompasses various areas of air-carrier operations and can be used to address different task areas with differing but interrelated objectives, the modeling needs are broad. A literature search was conducted to identify and analyze the existing models that may be applicable for pursuing the task areas in ACOSM. The intent of the literature search was not necessarily to identify a specific model that can be directly used, but rather to identify relevant ones that have similarities with the processes and activities defined within ACOSM. Such models may provide useful inputs and insights in structuring ACOSM models. ACOSM simulates processes and activities in air-carrier operation, but, in a general framework, it has similarities with other industries where attention also has been paid to hazard analyses, emphasizing risk management, and in designing risk indicators. To assure that efforts in other industries are adequately considered, the literature search includes publications from other industries, e.g., chemical, nuclear, and process industries. This report discusses the literature search, the relevant methods identified and provides a preliminary assessment of their use in developing the models needed for the ACOSM task areas. A detailed assessment of the models has not been made. Defining those applicable for ACOSM will need further analyses of both the models and tools identified. The report is organized in four chapters

  5. Automated local line rolling forming and simplified deformation simulation method for complex curvature plate of ships

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2017-06-01

    Full Text Available Local line rolling forming is a common forming approach for the complex curvature plate of ships. However, the processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of the automated local line rolling forming system for producing complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying strain was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, and this could result in a substantial reduction in calculation time. Thus, the application of the simplified deformation simulation method was further explored in the case of multiple rolling loading paths. Moreover, it was also utilized to calculate the local line rolling forming for the typical complex curvature plate of ships. Research findings indicated that the simplified deformation simulation method was an effective tool for rapidly obtaining relationships between the forming shape, processing path, and process parameters.

  6. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    Science.gov (United States)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Therefore, continuous sampling control of product completeness, based on statistical methods, remains an important element in managing the assembly quality of multi-element mass products on automatic rotor lines. A feature of continuous sampling control of multi-element product completeness during assembly is that the inspection is destructive: component parts cannot be returned to the process stream after sampling control, which reduces the actual productivity of the assembly equipment. The use of statistical procedures for continuous sampling control of product completeness during assembly on automatic rotor lines therefore requires sampling plans that ensure a minimum control sample size. Comparison of the limit values of the average output defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that ACSP-1 provides lower limit values for the average output defect level. The average sample size when using the ACSP-1 plan is also smaller than when using the CSP-1 plan. Thus, applying statistical methods to the assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, makes it possible to automate the sampling control procedures and to ensure the required quality level of assembled products while minimizing the sample size.
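
    The classical continuous sampling plan referred to above (Dodge's CSP-1) is simple enough to simulate. The sketch below shows how the fraction of units inspected and the average outgoing defect level emerge from the plan's two parameters; the defect rate, clearance number i and sampling fraction f are illustrative, and the paper's automated ACSP-1 variant is not reproduced here.

```python
import random

# Simulation of the classical Dodge CSP-1 continuous sampling plan:
# inspect 100% of units until i consecutive conforming units are found,
# then inspect only a random fraction f, reverting to 100% inspection on
# the first defect found.  Inspected defectives are assumed to be removed.

def simulate_csp1(n_units=200_000, p_defect=0.01, i=40, f=0.1, seed=7):
    rng = random.Random(seed)
    full_inspection, run_length = True, 0
    inspected = passed_defects = 0
    for _ in range(n_units):
        defective = rng.random() < p_defect
        inspect = full_inspection or rng.random() < f
        if inspect:
            inspected += 1
            if defective:
                full_inspection, run_length = True, 0   # defect found: back to 100%
            elif full_inspection:
                run_length += 1
                if run_length >= i:
                    full_inspection = False             # switch to sampling mode
        elif defective:
            passed_defects += 1                         # escaped the sampling inspection
    return inspected / n_units, passed_defects / n_units

frac_inspected, outgoing_defect_rate = simulate_csp1()
print(f"fraction inspected: {frac_inspected:.3f}, "
      f"average outgoing defect level: {outgoing_defect_rate:.4f}")
```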

  7. Electron density profile measurements from hydrogen line intensity ratio method in Versatile Experimental Spherical Torus

    Energy Technology Data Exchange (ETDEWEB)

    Kim, YooSung; Shi, Yue-Jiang, E-mail: yjshi@snu.ac.kr; Yang, Jeong-hun; Kim, SeongCheol; Kim, Young-Gi; Dang, Jeong-Jeung; Yang, Seongmoo; Jo, Jungmin; Chung, Kyoung-Jae [Department of Nuclear Engineering, Seoul National University, Seoul 151-744 (Korea, Republic of); Oh, Soo-Ghee [Division of Energy Systems Research, Ajou University, Suwon 442-749 (Korea, Republic of); Hwang, Y. S. [Department of Nuclear Engineering, Seoul National University, Seoul 151-744 (Korea, Republic of); Center for Advanced Research in Fusion Reactor Engineering, Seoul National University, Seoul 151-744 (Korea, Republic of)

    2016-11-15

    Electron density profiles of versatile experiment spherical torus plasmas are measured by using a hydrogen line intensity ratio method. A fast-frame visible camera with appropriate bandpass filters is used to detect images of Balmer line intensities. The unique optical system makes it possible to take images of Hα and Hβ radiation simultaneously, with only one camera. The frame rate is 1000 fps and the spatial resolution of the system is about 0.5 cm. One-dimensional local emissivity profiles have been obtained from the toroidal line of sight with viewing dumps. An initial result for the electron density profile is presented and is in reasonable agreement with values measured by a triple Langmuir probe.

  8. Analysis of secretome of breast cancer cell line with an optimized semi-shotgun method

    International Nuclear Information System (INIS)

    Tang Xiaorong; Yao Ling; Chen Keying; Hu Xiaofang; Xu Lisa; Fan Chunhai

    2009-01-01

    Secretome, the totality of secreted proteins, is viewed as a promising pool of candidate cancer biomarkers. Simple and reliable methods for identifying secreted proteins are highly desired. We used an optimized semi-shotgun liquid chromatography followed by tandem mass spectrometry (LC-MS/MS) method to analyze the secretome of breast cancer cell line MDA-MB-231. A total of 464 proteins were identified. About 63% of the proteins were classified as secreted proteins, including many promising breast cancer biomarkers, which were thought to be correlated with tumorigenesis, tumor development and metastasis. These results suggest that the optimized method may be a powerful strategy for cell line secretome profiling, and can be used to find potential cancer biomarkers with great clinical significance. (authors)

  9. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, Daniel, E-mail: daniel.simmons@nottingham.ac.uk; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications. - Graphical abstract:.

  10. The Method of Lines Solution of the Regularized Long-Wave Equation Using Runge-Kutta Time Discretization Method

    Directory of Open Access Journals (Sweden)

    H. O. Bakodah

    2013-01-01

    Full Text Available A method of lines approach to the numerical solution of nonlinear wave equations typified by the regularized long wave (RLW) equation is presented. The method uses a finite difference discretization in space. The resulting system is solved by applying the fourth-order Runge-Kutta time discretization method. Using von Neumann stability analysis, it is shown that the proposed method is marginally stable. To test the accuracy of the method, some numerical experiments on test problems are presented, including solitary wave motion, the interaction of two solitary waves, and the temporal evolution of a Maxwellian initial pulse. The accuracy of the method is assessed with the L2 and L∞ error norms and the conservation properties of mass, energy, and momentum under the RLW equation.
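
    As a rough illustration of this approach, the sketch below semi-discretizes an RLW-type equation in space with central finite differences on a periodic grid and advances the resulting ODE system with the classical fourth-order Runge-Kutta scheme. The grid, time step, dispersion parameter and sech²-shaped initial pulse are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

# Method-of-lines sketch for an RLW-type equation
#   u_t + u_x + u*u_x - mu*u_xxt = 0
# The mixed-derivative term is handled by solving
#   (I - mu*D2) u_t = -d/dx (u + u^2/2)
# at every stage of a classical RK4 step.

N, L, mu = 400, 100.0, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]

eye = np.eye(N)
shift_fwd = np.roll(eye, 1, axis=1)     # (shift_fwd @ u)[i] = u[i+1]  (periodic)
shift_bwd = np.roll(eye, -1, axis=1)    # (shift_bwd @ u)[i] = u[i-1]
D1 = (shift_fwd - shift_bwd) / (2.0 * dx)
D2 = (shift_fwd - 2.0 * eye + shift_bwd) / dx**2
A_inv = np.linalg.inv(eye - mu * D2)    # pre-inverted operator acting on u_t

def rhs(u):
    return A_inv @ (-D1 @ (u + 0.5 * u**2))

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Single solitary-wave initial condition (sech^2 pulse; parameters illustrative).
c = 0.3
k = 0.5 * np.sqrt(c / (mu * (1.0 + c)))
u = 3.0 * c / np.cosh(k * (x - 30.0))**2

dt, n_steps = 0.1, 200
mass0 = dx * u.sum()
for _ in range(n_steps):
    u = rk4_step(u, dt)

# The discrete mass invariant should be (nearly) conserved, as in the paper's tests.
print(f"relative change in mass after {n_steps} steps: {abs(dx * u.sum() - mass0) / mass0:.2e}")
```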

  11. A novel method to alleviate flash-line defects in coining process

    KAUST Repository

    Xu, Jiangping

    2013-04-01

    We employ a finite element framework based on a dynamic explicit algorithm to predict the flash-line defects in the coining process. The distribution of the flash-line is obtained by building a radial friction work model at the element level. The elasto-plastic behavior of porous materials undergoing large deformations is considered where the constitutive level updates are the result of a local variational minimization problem. We study the material flow at different strokes of the die across the entire coining process and observe that the change in the flow direction of the material in the rim region may contribute to the flash lines. Our proposed framework shows that a part of the rim region in which the flash-line defects appear is consistent with the reported experimental results. We also propose a novel method of redesigning the rim geometry of the workpiece to alleviate the flash-line defects which also shows good agreement with experiments. © 2012 Elsevier Inc. All rights reserved.

  12. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

  13. Uncertainty characterization of particle depth measurement using digital in-line holography and the hybrid method.

    Science.gov (United States)

    Gao, Jian; Guildenbecher, Daniel R; Reu, Phillip L; Chen, Jun

    2013-11-04

    In the detection of particles using digital in-line holography, measurement accuracy is substantially influenced by the hologram processing method. In particular, a number of methods have been proposed to determine the out-of-plane particle depth (z location). However, due to the lack of consistent uncertainty characterization, it has been unclear which method is best suited to a given measurement problem. In this work, depth determination accuracies of seven particle detection methods, including a recently proposed hybrid method, are systematically investigated in terms of relative depth measurement errors and uncertainties. Both synthetic and experimental holograms of particle fields are considered at conditions relevant to particle sizing and tracking. While all methods display a range of particle conditions where they are most accurate, in general the hybrid method is shown to be the most robust with depth uncertainty less than twice the particle diameter over a wide range of particle field conditions.

  14. Search methods that people use to find owners of lost pets.

    Science.gov (United States)

    Lord, Linda K; Wittum, Thomas E; Ferketich, Amy K; Funk, Julie A; Rajala-Schultz, Päivi J

    2007-06-15

    Objective: To characterize the process by which people who find lost pets search for the owners. Design: Cross-sectional study. Sample Population: 188 individuals who found a lost pet in Dayton, Ohio, between March 1 and June 30, 2006. Procedures: Potential participants were identified as a result of contact with a local animal agency or placement of an advertisement in the local newspaper. A telephone survey was conducted to identify methods participants used to find the pets' owners. Results: 156 of 188 (83%) individuals completed the survey. Fifty-nine of the 156 (38%) pets were reunited with their owners; median time to reunification was 2 days (range, 0.5 to 45 days). Only 1 (3%) cat owner was found, compared with 58 (46%) dog owners. Pet owners were found as a result of information provided by an animal agency (25%), placement of a newspaper advertisement (24%), walking the neighborhood (19%), signs in the neighborhood (15%), information on a pet tag (10%), and other methods (7%). Most finders (87%) considered it extremely important to find the owner, yet only 13 (8%) initially surrendered the found pet to an animal agency. The primary reason people did not surrender found pets was fear of euthanasia (57%). Only 97 (62%) individuals were aware they could run a found-pet advertisement in the newspaper at no charge, and only 1 person who was unaware of the no-charge policy placed an advertisement. Conclusions: Veterinarians and shelters can help educate people who find lost pets about methods to search for the pets' owners.

  15. Is the 130 GeV line real? A search for systematics in the Fermi-LAT data

    Energy Technology Data Exchange (ETDEWEB)

    Finkbeiner, Douglas P.; Su, Meng [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States); Weniger, Christoph, E-mail: dfinkbeiner@cfa.harvard.edu, E-mail: mengsu@cfa.harvard.edu, E-mail: weniger@mpp.mpg.de [Max-Planck-Institut für Physik, Föhringer Ring 6, 80805 München (Germany)

    2013-01-01

    Our recent claims of a Galactic center feature in Fermi-LAT data at approximately 130 GeV have motivated a large number of papers proposing explanations ranging from dark matter annihilation to monoenergetic pulsar winds. Because of the importance of such interpretations for physics and astrophysics, a discovery will require not only additional data, but a thorough investigation of possible LAT systematics. While we do not have access to the details of each event reconstruction, we do have information about each event from the public event lists and spacecraft parameter files. These data allow us to search for suspicious trends that could indicate a spurious signal. We consider several hypotheses that might make an instrumental artifact more apparent at the Galactic center, and find them implausible. We also search for an instrumental signature in the Earth limb photons, which provide a smooth reference spectrum for null tests. We find no significant 130 GeV feature in the Earth limb sample. However, we do find a marginally significant 130 GeV feature in Earth limb photons with a limited range of detector incidence angles. This raises concerns about the 130 GeV Galactic center feature, even though we can think of no plausible model of instrumental behavior that connects the two. A modest amount of additional limb data would tell us if the limb feature is a statistical fluke. If the limb feature persists, it would raise doubts about the Pass 7 processing of E > 100 GeV events. At present we find no instrumental systematics that could plausibly explain the excess Galactic center emission at 130 GeV.

  16. Exploring genomic dark matter: A critical assessment of the performance of homology search methods on noncoding RNA

    DEFF Research Database (Denmark)

    Freyhult, E.; Bollback, J. P.; Gardner, P. P.

    2006-01-01

    Homology search is one of the most ubiquitous bioinformatic tasks, yet it is unknown how effective the currently available tools are for identifying noncoding RNAs (ncRNAs). In this work, we use reliable ncRNA data sets to assess the effectiveness of methods such as BLAST, FASTA, HMMer, and Infernal. Surprisingly, the most popular homology search methods are often the least accurate. As a result, many studies have used inappropriate tools for their analyses. On the basis of our results, we suggest homology search strategies using the currently available tools and some directions for future...

  17. Location of Faults in Power Transmission Lines Using the ARIMA Method

    Directory of Open Access Journals (Sweden)

    Danilo Pinto Moreira de Souza

    2017-10-01

    Full Text Available One of the major problems in transmission lines is the occurrence of failures that affect the quality of the electric power supplied, as the exact localization of the fault must be known for correction. In order to streamline the work of maintenance teams and standardize services, this paper proposes a method of locating faults in power transmission lines by analyzing the voltage oscillographic signals extracted at the line monitoring terminals. The developed method relates time series models obtained specifically for each failure pattern. The parameters of the autoregressive integrated moving average (ARIMA) model are estimated in order to adjust the voltage curves and calculate the distance from the initial fault localization to the terminals. Simulations of the failures are performed through the ATPDraw® (5.5) software and the analyses were completed using the RStudio® (1.0.143) software. The results obtained with respect to the failures, which did not involve earth return, were satisfactory when compared with widely used techniques in the literature, particularly when the fault distance became larger in relation to the beginning of the transmission line.
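
    A hedged sketch of the model-fitting step only: an ARIMA model is fitted to a windowed terminal-voltage signal with statsmodels and compared against a hypothetical library of per-pattern reference models. The signal, the ARIMA order and the reference library are placeholders; the paper's actual mapping from fitted parameters to fault distance is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Fit an ARIMA model to a (synthetic) voltage oscillography window and pick
# the closest stored failure pattern by comparing parameter vectors.

rng = np.random.default_rng(0)
voltage_window = np.sin(np.linspace(0, 40 * np.pi, 800)) + 0.05 * rng.standard_normal(800)

fit = ARIMA(voltage_window, order=(2, 0, 1)).fit()
print("fitted ARIMA(2,0,1) parameters:", np.round(fit.params, 3))
print("AIC:", round(fit.aic, 1))

# Hypothetical comparison against per-pattern reference parameter vectors.
reference_patterns = {
    "fault_at_20pct_of_line": rng.standard_normal(fit.params.size),
    "fault_at_60pct_of_line": rng.standard_normal(fit.params.size),
}
best = min(reference_patterns,
           key=lambda k: np.linalg.norm(fit.params - reference_patterns[k]))
print("closest stored pattern:", best)
```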

  18. Runge-Kutta Integration of the Equal Width Wave Equation Using the Method of Lines

    Directory of Open Access Journals (Sweden)

    M. A. Banaja

    2015-01-01

    Full Text Available The equal width (EW equation governs nonlinear wave phenomena like waves in shallow water. Numerical solution of the (EW equation is obtained by using the method of lines (MOL based on Runge-Kutta integration. Using von Neumann stability analysis, the scheme is found to be unconditionally stable. Solitary wave motion and interaction of two solitary waves are studied using the proposed method. The three invariants of the motion are evaluated to determine the conservation properties of the generated scheme. Accuracy of the proposed method is discussed by computing the L2 and L∞ error norms. The results are found in good agreement with exact solution.

  19. A new image reconstruction method for 3-D PET based upon pairs of near-missing lines of response

    Energy Technology Data Exchange (ETDEWEB)

    Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456-8611 (Japan) and Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474-8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644-0023 (Japan)

    2007-02-01

    We formerly introduced a new image reconstruction method for three-dimensional positron emission tomography, which is based upon pairs of near-missing lines of response. This method uses an elementary geometric property of lines of response, namely that two lines of response which originate from radioactive isotopes located within a sufficiently small voxel, will lie within a few millimeters of each other. The effectiveness of this method was verified by performing a simulation using GATE software and a digital Hoffman phantom.

  20. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
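
    A toy version of the branch-and-bound idea described above is sketched below: boxes of parameter space receive a guaranteed lower bound from naive interval arithmetic, box midpoints supply upper bounds, and any box whose lower bound exceeds the best upper bound is eliminated. The objective function, search box and plain interval bounds are illustrative; production codes rely on much tighter Taylor-model underestimators.

```python
import heapq

# Rigorous branch-and-bound on boxes using naive interval arithmetic.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def _coerce(self, o):
        return o if isinstance(o, Interval) else Interval(o, o)
    def __add__(self, o):
        o = self._coerce(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = self._coerce(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = self._coerce(o)
        c = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(c), max(c))

def f(x, y):
    # Works both on plain floats and on Interval objects.
    return (x * x - 2.0) * (x * x - 2.0) + (y - 1.0) * (y - 1.0)

def lower_bound(box):
    return f(*[Interval(lo, hi) for lo, hi in box]).lo

def midpoint_value(box):
    return f(*[0.5 * (lo + hi) for lo, hi in box])

def branch_and_bound(box, tol=1e-3, max_steps=20000):
    best_ub = midpoint_value(box)
    heap = [(lower_bound(box), box)]
    for _ in range(max_steps):
        if not heap:
            break
        lb, box = heapq.heappop(heap)
        if lb > best_ub:            # rigorous elimination of the whole region
            continue
        if best_ub - lb < tol:      # enclosure of the minimum is tight enough
            break
        i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
        lo, hi = box[i]
        mid = 0.5 * (lo + hi)
        for half in ((lo, mid), (mid, hi)):
            child = list(box)
            child[i] = half
            best_ub = min(best_ub, midpoint_value(child))
            clb = lower_bound(child)
            if clb <= best_ub:
                heapq.heappush(heap, (clb, child))
    return best_ub

print("verified upper bound on the global minimum:",
      round(branch_and_bound([(-3.0, 3.0), (-3.0, 3.0)]), 5))
```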

  1. A meta-heuristic method for solving scheduling problem: crow search algorithm

    Science.gov (United States)

    Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi

    2018-04-01

    Scheduling is one of the most important processes in industry, both in manufacturing and in services. The scheduling process consists of selecting resources to perform operations on tasks; resources can be machines, people, tasks, jobs, or operations. Selecting the optimum sequence of jobs from the set of permutations is an essential issue in scheduling research, since the optimum sequence constitutes the optimum solution of the scheduling problem. The problem becomes NP-hard because the number of jobs in the sequence quickly exceeds what an exact algorithm can process. To obtain optimum results, a method is therefore needed that can solve complex scheduling problems within an acceptable time, and meta-heuristics are commonly used for this purpose. The recently published Crow Search Algorithm (CSA) is adopted in this research to solve the scheduling problem. CSA is an evolutionary meta-heuristic based on the flocking behavior of crows. The results of CSA for the scheduling problem are compared with other algorithms, and the comparison shows that CSA performs better in terms of solution quality and computation time.
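
    For reference, the canonical (continuous) Crow Search Algorithm is compact enough to sketch in a few lines. The scheduling application in the paper uses a permutation encoding of jobs; the version below instead minimizes a simple continuous benchmark, and the flock size, flight length (fl) and awareness probability (AP) are illustrative values.

```python
import numpy as np

# Canonical Crow Search Algorithm: each crow either follows the remembered
# cache of a randomly chosen crow or, if that crow is "aware", moves randomly.

def crow_search(obj, bounds, n_crows=20, n_iter=200, fl=2.0, ap=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_crows, dim))      # current positions
    mem = x.copy()                                    # best position each crow remembers
    mem_val = np.array([obj(v) for v in mem])
    for _ in range(n_iter):
        followed = rng.integers(n_crows, size=n_crows)   # crow j followed by crow i
        aware = rng.random(n_crows) < ap
        r = rng.random((n_crows, 1))
        new_x = np.where(aware[:, None],
                         rng.uniform(lo, hi, size=(n_crows, dim)),   # j noticed: random move
                         x + r * fl * (mem[followed] - x))           # otherwise follow j's cache
        new_x = np.clip(new_x, lo, hi)
        new_val = np.array([obj(v) for v in new_x])
        improved = new_val < mem_val
        mem[improved], mem_val[improved] = new_x[improved], new_val[improved]
        x = new_x
    best = np.argmin(mem_val)
    return mem[best], mem_val[best]

sphere = lambda v: float(np.sum(v**2))
best_x, best_f = crow_search(sphere, (np.full(5, -10.0), np.full(5, 10.0)))
print("best objective found:", round(best_f, 6))
```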

  2. Adjusting the Parameters of Metal Oxide Gapless Surge Arresters’ Equivalent Circuits Using the Harmony Search Method

    Directory of Open Access Journals (Sweden)

    Christos A. Christodoulou

    2017-12-01

    Full Text Available The appropriate circuit modeling of metal oxide gapless surge arresters is critical for insulation coordination studies. Metal oxide arresters present a dynamic behavior for fast front surges; namely, their residual voltage depends on the peak value as well as the duration of the injected impulse current, and they should therefore not be represented by non-linear elements alone. The aim of the current work is to adjust the parameters of the most frequently used surge arrester circuit models by considering the magnitude of the residual voltage, as well as the dissipated energy for given pulses. To this end, the harmony search method is implemented to adjust the parameter values of the arrester equivalent circuit models. The method works by minimizing a defined objective function that compares the simulation outcomes with the manufacturer’s data and with the results obtained from previous methodologies.
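
    A generic harmony search loop of the kind used for such parameter fitting is sketched below: each harmony is a candidate parameter vector, and the objective compares a model prediction with reference data. The toy power-law model, bounds, HMCR, PAR and bandwidth values are illustrative and are not taken from the paper.

```python
import numpy as np

# Plain Harmony Search: improvise a new harmony from memory (HMCR), optionally
# pitch-adjust it (PAR, bandwidth bw), otherwise draw it at random, and replace
# the worst harmony in memory whenever the new one is better.

def harmony_search(obj, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   n_iter=2000, seed=1):
    rng = np.random.default_rng(seed)
    dim = lo.size
    hm = rng.uniform(lo, hi, size=(hms, dim))          # harmony memory
    cost = np.array([obj(h) for h in hm])
    for _ in range(n_iter):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                    # pick from memory ...
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                 # ... and maybe pitch-adjust it
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                      # otherwise draw at random
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        c = obj(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                            # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = np.argmin(cost)
    return hm[best], cost[best]

# Toy objective: recover the parameters of y = a*x**b from reference data.
x_ref = np.linspace(1.0, 10.0, 20)
y_ref = 2.5 * x_ref**0.3
obj = lambda p: float(np.sum((p[0] * x_ref**p[1] - y_ref)**2))
best, err = harmony_search(obj, lo=np.array([0.1, 0.0]), hi=np.array([5.0, 1.0]))
print("recovered parameters:", np.round(best, 3), "error:", round(err, 6))
```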

  3. A method in search of a theory: peer education and health promotion.

    Science.gov (United States)

    Turner, G; Shepherd, J

    1999-04-01

    Peer education has grown in popularity and practice in recent years in the field of health promotion. However, advocates of peer education rarely make reference to theories in their rationale for particular projects. In this paper the authors review a selection of commonly cited theories, and examine to what extent they have value and relevance to peer education in health promotion. Beginning from an identification of 10 claims made for peer education, each theory is examined in terms of the scope of the theory and evidence to support it in practice. The authors conclude that, whilst most theories have something to offer towards an explanation of why peer education might be effective, most theories are limited in scope and there is little empirical evidence in health promotion practice to support them. Peer education would seem to be a method in search of a theory rather than the application of theory to practice.

  4. A gravitational wave burst search method based on the S transform

    International Nuclear Information System (INIS)

    Clapson, Andre-Claude; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelberg, Stephane; Varvella, Monica

    2005-01-01

    The detection of burst-type events in the output of ground gravitational wave observatories is particularly challenging due to the expected variety of astrophysical waveforms and the issue of discriminating them from instrumental noise. Robust methods, that achieve reasonable detection performances over a wide range of signals, would be most useful. We present a burst-detection pipeline based on a time-frequency transform, the S transform. This transform offers good time-frequency localization of energy without requiring prior knowledge of the event structure. We set a simple (and robust) event extraction chain. Results are provided for a variety of signals injected in simulated Gaussian statistics data (from the LIGO-Virgo joint working group). Indications are that detection is robust with respect to event type and that efficiency compares reasonably with reference methods. The time-frequency representation is shown to be affected by spectral features such as resonant lines. This emphasizes the role of pre-processing

  5. A gravitational wave burst search method based on the S transform

    Energy Technology Data Exchange (ETDEWEB)

    Clapson, Andre-Claude; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelberg, Stephane; Varvella, Monica [Groupe Virgo, LAL, Universite Paris-Sud, Batiment 208, BP 34, F-91898 Orsay Cedex (France)

    2005-09-21

    The detection of burst-type events in the output of ground gravitational wave observatories is particularly challenging due to the expected variety of astrophysical waveforms and the issue of discriminating them from instrumental noise. Robust methods, that achieve reasonable detection performances over a wide range of signals, would be most useful. We present a burst-detection pipeline based on a time-frequency transform, the S transform. This transform offers good time-frequency localization of energy without requiring prior knowledge of the event structure. We set a simple (and robust) event extraction chain. Results are provided for a variety of signals injected in simulated Gaussian statistics data (from the LIGO-Virgo joint working group). Indications are that detection is robust with respect to event type and that efficiency compares reasonably with reference methods. The time-frequency representation is shown to be affected by spectral features such as resonant lines. This emphasizes the role of pre-processing.
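
    The time-frequency map underlying the pipeline described in the two records above is the S (Stockwell) transform. A compact frequency-domain implementation is sketched below; the test signal and parameters are illustrative, and a real burst pipeline adds whitening, clustering and thresholding stages on top of this map.

```python
import numpy as np

# Discrete S transform computed in the frequency domain: for each frequency
# row n, the FFT of the signal is shifted by n bins, multiplied by a Gaussian
# window, and inverse-transformed back to the time axis.

def s_transform(h):
    """Return the S transform of a real 1-D signal as an (N//2+1, N) complex array."""
    N = h.size
    H = np.fft.fft(h)
    m = np.fft.fftfreq(N) * N                      # symmetric integer frequencies
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(h)                              # zero-frequency row: signal mean
    for n in range(1, N // 2 + 1):
        gaussian = np.exp(-2.0 * np.pi**2 * m**2 / n**2)
        S[n] = np.fft.ifft(np.roll(H, -n) * gaussian)
    return S

fs, T = 1024, 1.0
t = np.arange(0, T, 1.0 / fs)
signal = (np.sin(2 * np.pi * 80 * t)
          + np.exp(-((t - 0.6) / 0.01)**2) * np.sin(2 * np.pi * 300 * t))
S = s_transform(signal)
energy = np.abs(S)**2
peak_row, peak_col = np.unravel_index(np.argmax(energy[1:]), energy[1:].shape)
print(f"energy peak near {(peak_row + 1) / T:.0f} Hz at t = {t[peak_col]:.2f} s")
```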

  6. Application of active neutronic interrogation method to the line analysis in reprocessing plant

    International Nuclear Information System (INIS)

    Passard, C.

    1993-01-01

    In a plant reprocessing irradiated spent fuels, real-time (line analysis) knowledge of the uranium and plutonium quantities present in the solutions is an extremely important parameter for process control and for the safety of the apparatus. Active neutron interrogation provides a nondestructive, non-intrusive, and rapid measurement of these concentrations. The method consists of inducing fissions in the nuclides with a neutron source and then detecting the particles which come from

  7. A Modified Generalized Laguerre Spectral Method for Fractional Differential Equations on the Half Line

    Directory of Open Access Journals (Sweden)

    D. Baleanu

    2013-01-01

    fractional derivatives is based on the modified generalized Laguerre polynomials L_i^(α,β)(x), with x ∈ Λ = (0, ∞), α > −1, and β > 0, where i is the polynomial degree. We implement and develop the modified generalized Laguerre collocation method based on the modified generalized Laguerre-Gauss points, which are used as collocation nodes for solving nonlinear multiterm FDEs on the half line.

  8. Structural evaluation method study and procedure development for pressurizer surge line subjected to thermal stratification phenomenon

    International Nuclear Information System (INIS)

    Zhang Yixiong; Yu Xiaofei; Ai Honglei

    2014-01-01

    The thermal stratification phenomenon in a pressurizer surge line can pose a potential threat to plant safety. Based on the mechanism of thermal stratification occurrence, the Froude (Fr) number is used to judge whether stratification occurs, and the method of calculating the heat transfer coefficient is investigated. Theoretically, the three-dimensional thermal stress induced by thermal stratification is decoupled into a one-dimensional global stress and a two-dimensional local stress, so that the complex three-dimensional problem is simplified into a combination of one- and two-dimensional stress computations. In compliance with the RCC-M criterion, the complete structural integrity evaluation is accomplished by combining the stress produced by thermal stratification with the stresses produced by the other loadings. To support this combined analysis method, the codes SYSTUS and ROCOCO are developed. By means of the above evaluation method and the corresponding analysis programs, surge line thermal stratification of the Qinshan Phase II Extension project is investigated in this paper. The results show that the structural integrity of the pressurizer surge line affected by thermal stratification still satisfies the RCC-M criterion. (authors)

  9. Non-Viral Transfection Methods Optimized for Gene Delivery to a Lung Cancer Cell Line

    Science.gov (United States)

    Salimzadeh, Loghman; Jaberipour, Mansooreh; Hosseini, Ahmad; Ghaderi, Abbas

    2013-01-01

    Background: Mehr-80 is a newly established adherent human large cell lung cancer cell line that has not been transfected until now. This study aims to define the optimal transfection conditions and the effects of some critical elements for enhancing gene delivery to this cell line by utilizing different non-viral transfection procedures. Methods: In the current study, calcium phosphate (CaP), DEAE-dextran, superfect, electroporation and lipofection transfection methods were used to optimize delivery of a plasmid construct that expressed Green Fluorescent Protein (GFP). Transgene expression was detected by fluorescent microscopy and flow cytometry. Toxicities of the methods were estimated by trypan blue staining. In order to evaluate the density of the transfected gene, we used a plasmid construct that expressed the Stromal cell-Derived Factor-1 (SDF-1) gene and measured its expression by real-time PCR. Results: Mean levels of GFP-expressing cells 48 hr after transfection were 8.4% (CaP), 8.2% (DEAE-dextran), 4.9% (superfect), 34.1% (electroporation), and 40.1% (lipofection). Lipofection showed the most intense SDF-1 expression of the analyzed methods. Conclusion: This study has shown that the lipofection and electroporation methods were more efficient at gene delivery to Mehr-80 cells. The quantity of DNA per transfection, reagent concentration, and incubation time were identified as essential factors for successful transfection in all of the studied methods. PMID:23799175

  10. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  11. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2016-02-01

    Full Text Available Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
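
    The underlying matrix-transformation idea, making one orthonormal triad of characteristic directions coincide with another, can be illustrated in a few lines: if each coordinate system is represented by an origin plus an orthonormal axis matrix, the rotation R = B·Aᵀ and translation t = o_B − R·o_A carry one frame onto the other. The frames below are invented for the example; the paper's step-by-step sequence of rotations and translations is not reproduced.

```python
import numpy as np

# Rigid motion carrying frame A (origin_a, axes_a) onto frame B (origin_b, axes_b).
# The axes matrices have orthonormal columns expressed in world coordinates.

def frame_transform(origin_a, axes_a, origin_b, axes_b):
    """Return (R, t) with R @ a_i = b_i and R @ origin_a + t = origin_b."""
    R = axes_b @ axes_a.T
    t = origin_b - R @ origin_a
    return R, t

# Frame A: world axes at the origin.  Frame B: rotated 30 deg about z and shifted.
theta = np.deg2rad(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
origin_a, axes_a = np.zeros(3), np.eye(3)
origin_b, axes_b = np.array([1.0, 2.0, 0.5]), Rz

R, t = frame_transform(origin_a, axes_a, origin_b, axes_b)
p_attached = np.array([0.3, -0.2, 1.0])      # a point rigidly attached to frame A
p_moved = R @ p_attached + t                 # where it lands after A is carried onto B
print(np.round(R, 3), np.round(t, 3), np.round(p_moved, 3), sep="\n")
```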

  12. Searching for Effective Training Solutions for Firefighting: The Analysis of Emergency Responses and Line of Duty Death Reports for Low Frequency, High Risk Events

    Science.gov (United States)

    2017-09-01

    From the more common problems of pre-existing hazardous structures to the rare events of temperature inversion creating a dangerous environment ... the Academy’s burn building. These training scenarios allow firefighters to practice search methods while simultaneously dealing with the heat, smoke ... fire behavior (RFB), secondary collapse (SC), standard operating procedures training (SOP), temperature inversion (TI), and unrestricted flow.

  13. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. - Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: • Integration of first-principles methods and database mining. • Minor structural families with desirable functional properties. • Survey of polar entries in the Inorganic Crystal Structural Database.

  14. In situ baking method for degassing of a kicker magnet in accelerator beam line

    International Nuclear Information System (INIS)

    Kamiya, Junichiro; Ogiwara, Norio; Yanagibashi, Toru; Kinsho, Michikazu; Yasuda, Yuichi

    2016-01-01

    In this study, the authors propose a new in situ degassing method by which only kicker magnets in the accelerator beam line are baked out without raising the temperature of the vacuum chamber to prevent unwanted thermal expansion of the chamber. By simply installing the heater and thermal radiation shield plates between the kicker magnet and the chamber wall, most of the heat flux from the heater directs toward the kicker magnet. The result of the verification test showed that each part of the kicker magnet was heated to above the target temperature with a small rise in the vacuum chamber temperature. A graphite heater was selected in this application to bake-out the kicker magnet in the beam line to ensure reliability and easy maintainability of the heater. The vacuum characteristics of graphite were suitable for heater operation in the beam line. A preliminary heat-up test conducted in the accelerator beam line also showed that each part of the kicker magnet was successfully heated and that thermal expansion of the chamber was negligibly small

  15. In situ baking method for degassing of a kicker magnet in accelerator beam line

    Energy Technology Data Exchange (ETDEWEB)

    Kamiya, Junichiro, E-mail: kamiya.junichiro@jaea.go.jp; Ogiwara, Norio; Yanagibashi, Toru; Kinsho, Michikazu [Japan Atomic Energy Agency, J-PARC Center, Ooaza Shirakata 2-4, Tokai, Naka, Ibaraki 319-1195 (Japan); Yasuda, Yuichi [SAKAGUCHI E.H VOC CORP., Sakura Dai-san Kogyodanchi 1-8-6, Osaku, Sakura, Chiba 285-0802 (Japan)

    2016-03-15

    In this study, the authors propose a new in situ degassing method by which only kicker magnets in the accelerator beam line are baked out without raising the temperature of the vacuum chamber to prevent unwanted thermal expansion of the chamber. By simply installing the heater and thermal radiation shield plates between the kicker magnet and the chamber wall, most of the heat flux from the heater directs toward the kicker magnet. The result of the verification test showed that each part of the kicker magnet was heated to above the target temperature with a small rise in the vacuum chamber temperature. A graphite heater was selected in this application to bake-out the kicker magnet in the beam line to ensure reliability and easy maintainability of the heater. The vacuum characteristics of graphite were suitable for heater operation in the beam line. A preliminary heat-up test conducted in the accelerator beam line also showed that each part of the kicker magnet was successfully heated and that thermal expansion of the chamber was negligibly small.

  16. On-line method to identify control rod drops in Pressurized Water Reactors

    International Nuclear Information System (INIS)

    Souza, T.J.; Martinez, A.S.; Medeiros, J.A.C.C.; Palma, D.A.P.; Gonçalves, A.C.

    2014-01-01

    Highlights: • On-line method to identify control rod drops in PWR reactors. • Identification method based on the readings of the ex-core detectors. • Recognition of patterns in the ex-core detector responses. - Abstract: A control rod drop event in PWR reactors leads to an unsafe operating condition. It is important to quickly identify the dropped rod to minimise undesirable effects in such a scenario. The goal of this work is to develop an online method to identify control rod drops in PWR reactors. The method entails the construction of a tool based on ex-core detector responses: it recognizes patterns in the neutron ex-core detector responses and thus allows online identification of a control rod drop in the core during reactor operation. The results of the study, as well as the behaviour of the detector responses, demonstrate the feasibility of this method.

  17. Singularity Processing Method of Microstrip Line Edge Based on LOD-FDTD

    Directory of Open Access Journals (Sweden)

    Lei Li

    2014-01-01

    Full Text Available In order to improve the accuracy and efficiency of analyzing microstrip structures, a singularity processing method is proposed theoretically and experimentally, based on the fundamental locally one-dimensional finite difference time domain (LOD-FDTD) scheme with second-order temporal accuracy (denoted FLOD2-FDTD). By means of a coordinate transformation, the proposed method greatly improves the performance of FLOD2-FDTD even when the conductor is embedded into more than half of a cell. The experimental results show that the proposed method achieves higher accuracy when the time step size is up to 5 times that allowed by the Courant-Friedrichs-Lewy (CFL) condition. In comparison with previously reported methods, the proposed method for calculating the electromagnetic field near the microstrip line edge not only improves the efficiency, but also provides higher accuracy.

  18. A study for the practical methods and evaluations for the on-line maintenance

    International Nuclear Information System (INIS)

    Chigusa, Naoki; Ogami, Takahiro

    2010-01-01

    In planning and implementing On-Line Maintenance (OLM) in the near future, a reasonable balance must be kept between the degree of risk increase and the equalization of task volume. In this paper, the current status of studies on introducing OLM at Japanese utilities is described. The planning approach for OLM is examined on the assumption that planned OLM can be carried out. Practical methods and evaluations for OLM are described, together with a trial evaluation of the method. (author)

  19. Evolutionary Policy Transfer and Search Methods for Boosting Behavior Quality: RoboCup Keep-Away Case Study

    Directory of Open Access Journals (Sweden)

    Geoff Nitschke

    2017-11-01

    Full Text Available This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.

  20. Non-viral transfection methods optimized for gene delivery to a lung cancer cell line.

    Science.gov (United States)

    Salimzadeh, Loghman; Jaberipour, Mansooreh; Hosseini, Ahmad; Ghaderi, Abbas

    2013-04-01

    Mehr-80 is a newly established adherent human large cell lung cancer cell line that has not been transfected until now. This study aims to define the optimal transfection conditions and the effects of some critical elements for enhancing gene delivery to this cell line by utilizing different non-viral transfection procedures. In the current study, calcium phosphate (CaP), DEAE-dextran, superfect, electroporation and lipofection transfection methods were used to optimize delivery of a plasmid construct that expressed Green Fluorescent Protein (GFP). Transgene expression was detected by fluorescent microscopy and flow cytometry. Toxicities of the methods were estimated by trypan blue staining. In order to evaluate the density of the transfected gene, we used a plasmid construct that expressed the Stromal cell-Derived Factor-1 (SDF-1) gene and measured its expression by real-time PCR. Mean levels of GFP-expressing cells 48 hr after transfection were 8.4% (CaP), 8.2% (DEAE-dextran), 4.9% (superfect), 34.1% (electroporation), and 40.1% (lipofection). Lipofection showed the most intense SDF-1 expression of the analyzed methods. This study has shown that the lipofection and electroporation methods were more efficient at gene delivery to Mehr-80 cells. The quantity of DNA per transfection, reagent concentration, and incubation time were identified as essential factors for successful transfection in all of the studied methods.

  1. STATCONT: A statistical continuum level determination method for line-rich sources

    Science.gov (United States)

    Sánchez-Monge, Á.; Schilke, P.; Ginsburg, A.; Cesaroni, R.; Schmiedeke, A.

    2018-01-01

    STATCONT is a python-based tool designed to determine the continuum emission level in spectral data, in particular for sources with a line-rich spectrum. The tool inspects the intensity distribution of a given spectrum and automatically determines the continuum level by using different statistical approaches. The different methods included in STATCONT are tested against synthetic data. We conclude that the sigma-clipping algorithm provides the most accurate continuum level determination, together with information on the uncertainty in its determination. This uncertainty can be used to correct the final continuum emission level, resulting in what is here called the 'corrected sigma-clipping method' (c-SCM). The c-SCM has been tested against more than 750 different synthetic spectra reproducing typical conditions found towards astronomical sources. The continuum level is determined with a discrepancy of less than 1% in 50% of the cases, and less than 5% in 90% of the cases, provided at least 10% of the channels are line free. The main products of STATCONT are the continuum emission level, together with a conservative value of its uncertainty, and datacubes containing only spectral line emission, i.e., continuum-subtracted datacubes. STATCONT also includes the option to estimate the spectral index, when different files covering different frequency ranges are provided.
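
    A bare-bones version of the sigma-clipping idea is easy to sketch: iteratively discard channels deviating from the running median by more than k·σ, and report the median and scatter of the surviving channels as the continuum level and its uncertainty. The synthetic spectrum and threshold below are illustrative, and STATCONT's corrected variant (c-SCM) adds a correction term that is not reproduced here.

```python
import numpy as np

# Iterative sigma clipping of a single spectrum: line channels are gradually
# rejected and the continuum is taken as the median of what survives.

def sigma_clip_continuum(spectrum, k=2.0, max_iter=25):
    values = np.asarray(spectrum, dtype=float)
    for _ in range(max_iter):
        med, std = np.median(values), np.std(values)
        keep = np.abs(values - med) < k * std
        if keep.all() or keep.sum() < 3:
            break
        values = values[keep]
    return np.median(values), np.std(values)

rng = np.random.default_rng(42)
channels = 500
continuum = 1.2
spectrum = continuum + 0.05 * rng.standard_normal(channels)
line_channels = rng.choice(channels, size=120, replace=False)   # 24% of channels carry lines
spectrum[line_channels] += rng.uniform(0.3, 3.0, size=line_channels.size)

level, sigma = sigma_clip_continuum(spectrum)
print(f"estimated continuum: {level:.3f} +/- {sigma:.3f} (true value 1.2)")
```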

  2. The method of diagnosis and classification of the gingival line defects of the teeth hard tissues

    Directory of Open Access Journals (Sweden)

    Olena Bulbuk

    2017-06-01

    Full Text Available In solving the problem of diagnosing and treating hard tissue defects, an important role belongs to the choice of tactics for the dental treatment of defects located in the gingival part of any tooth. This work addresses the problems of diagnosis and classification of gingival line defects of the teeth hard tissues, which will contribute to objective, differentiated diagnostic and therapeutic approaches to the dental treatment of the various clinical variants of these defect localizations. The objective of the study is to develop an anatomical-functional classification for the differentiated assessment of hard tissue defects in the gingival part, as the basis for applying differentiated diagnostic-therapeutic approaches to the dental treatment of hard tissue defects located in the gingival part of any tooth. Materials and methods: 48 patients with hard tissue defects located in the gingival part of a tooth were examined. To assess the extent of gingival line destruction, a periodontal probe and X-ray examination were used. Results. As a result of the research, a classification of the gingival line defects of the hard tissues is proposed, based on a numerical indicator. The value of this indicator is an integer equal to the distance, in millimeters, from the epithelial attachment to the bottom of the defect cavity. Conclusions. The proposed classification fills an obvious gap in academic understanding of hard tissue defects located in the gingival part of any tooth, and it offers the prospect of consensus on differentiated diagnostic-therapeutic approaches in the different clinical variants of localization. The classification builds a methodological “bridge of continuity” between therapeutic and prosthetic dentistry in the field of treatment of the gingival line defects of dental hard tissues.

  3. Investigations on search methods for speech recognition using weighted finite state transducers

    OpenAIRE

    Rybach, David

    2014-01-01

    The search problem in the statistical approach to speech recognition is to find the most likely word sequence for an observed speech signal using a combination of knowledge sources, i.e. the language model, the pronunciation model, and the acoustic models of phones. The resulting search space is enormous. Therefore, an efficient search strategy is required to compute the result with a feasible amount of time and memory. The structured statistical models as well as their combination, the searc...

  4. Elliptical tiling method to generate a 2-dimensional set of templates for gravitational wave search

    International Nuclear Information System (INIS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-01-01

    Searching for a signal depending on unknown parameters in a noisy background with matched filtering techniques always requires an analysis of the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method that allows one to cover the corresponding physical parameter space by a set of ellipses, each of them being associated with a given template. After the description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search of damped sine signals. Such waveforms are expected to be produced during the deexcitation phase of black holes - the so-called 'ringdown' signals - and are also encountered in some numerically computed supernova signals. First, the number of templates N computed by the method is similar to its analytical estimation, despite the overlaps between neighbor templates and the border effects. Moreover, N is small enough to test for the first time the performances of the set of templates for different choices of the minimal match MM, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than MM, which dramatically decreases the mean percentage of false dismissals. Indeed, it goes well below its estimated value of 1 − MM³ used as input of the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to reduce the constraint on the value of MM - and indeed the number of templates and the computing power - without losing as many events as expected on average. This should be of great

  5. MHOs toward HMOs: A Search for Molecular Hydrogen Emission-Line Objects toward High-mass Outflows

    Energy Technology Data Exchange (ETDEWEB)

    Wolf-Chase, Grace [Astronomy Department Adler Planetarium 1300 S. Lake Shore Drive Chicago, IL 60605 (United States); Arvidsson, Kim [Trull School of Sciences and Mathematics Schreiner University 2100 Memorial Blvd. Kerrville, TX 78028 (United States); Smutko, Michael, E-mail: gwolfchase@adlerplanetarium.org [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), and Dept. of Physics and Astronomy, Northwestern University, 2131 Tech Drive, Evanston, IL 60208 (United States)

    2017-07-20

    We present the results of a narrow-band near-infrared imaging survey for Molecular Hydrogen emission-line Objects (MHOs) toward 26 regions containing high-mass protostellar candidates and massive molecular outflows. We have detected a total of 236 MHOs, 156 of which are new detections, in 22 out of the 26 regions. We use H₂ 2.12 μm / H₂ 2.25 μm flux ratios, together with morphology, to separate the signatures of fluorescence associated with photo-dissociation regions (PDRs) from shocks associated with outflows in order to identify the MHOs. PDRs have typical low flux ratios of ∼1.5–3, while the vast majority of MHOs display flux ratios typical of C-type shocks (∼6–20). A few MHOs exhibit flux ratios consistent with expected values for J-type shocks (∼3–4), but these are located in regions that may be contaminated with fluorescent emission. Some previously reported MHOs have low flux ratios, and are likely parts of PDRs rather than shocks indicative of outflows. We identify a total of 36 outflows across the 22 target regions where MHOs were detected. In over half these regions, MHO arrangements and fluorescent structures trace features present in CO outflow maps, suggesting that the CO emission traces a combination of dynamical effects, which may include gas entrained in expanding PDRs as well as bipolar outflows. Where possible, we link MHO complexes to distinct outflows and identify candidate driving sources.

  6. Obtaining macroscopic quantities for the contact line problem from Density Functional Theory using asymptotic methods

    Science.gov (United States)

    Sibley, David; Nold, Andreas; Kalliadasis, Serafim

    2015-11-01

    Density Functional Theory (DFT), a statistical mechanics of fluids approach, captures microscopic details of the fluid density structure in the vicinity of contact lines, as seen in computations in our recent study. Contact lines describe the location where interfaces between two fluids meet solid substrates, and have stimulated a wealth of research due to both their ubiquity in nature and technological applications and also due to their rich multiscale behaviour. Whilst progress can be made computationally to capture the microscopic to mesoscopic structure from DFT, complete analytical results to fully bridge to the macroscale are lacking. In this work, we describe our efforts to bring asymptotic methods to DFT to obtain results for contact angles and other macroscopic quantities in various parameter regimes. We acknowledge financial support from European Research Council via Advanced Grant No. 247031.

  7. Apparatus and method of measuring fluctuations of excavated mud amount in a slurry line

    International Nuclear Information System (INIS)

    Yamazaki, H.; Kubota, R.; Uchida, Y.; Kasuya, T.; Seki, N.

    1976-01-01

    An apparatus and method for measuring fluctuations in the amount of soil in a slurry (soil-containing fluid) line are described. Each system of the slurry line - the one feeding the slurry to the tunneling face and the one draining it therefrom - is provided with a gamma-ray densimeter and an electromagnetic flow-meter, so that the amount of soil only (the dry-soil amount) in the slurry flowing through each system can be obtained from the respective outputs of these meters. The amount actually excavated from the ground layer by a shielded excavator at the tunneling face is then measured as the difference between the respective dry-soil amounts. The excavator is advanced depending on this measured amount.
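
    The measurement principle above is a mass balance between the feed and drain lines. The sketch below illustrates the idea with a simple two-component mixing rule; the densities and flow rates are hypothetical, and the actual apparatus may apply different calibrations.

```python
def dry_soil_rate(q_m3_per_h, slurry_density, water_density=1.00, soil_density=2.65):
    """Volumetric dry-soil rate (m^3/h) in a slurry line, from the mixing rule
    rho_mix = f * rho_soil + (1 - f) * rho_water (densities in t/m^3)."""
    soil_fraction = (slurry_density - water_density) / (soil_density - water_density)
    return q_m3_per_h * soil_fraction

# Hypothetical densimeter/flow-meter readings for the feed and drain lines
feed = dry_soil_rate(q_m3_per_h=300.0, slurry_density=1.05)
drain = dry_soil_rate(q_m3_per_h=310.0, slurry_density=1.18)
print(f"estimated excavation rate: {drain - feed:.1f} m^3/h of dry soil")
```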

  8. Evolution of longshore beach contour lines determined by E.O.F. method

    Directory of Open Access Journals (Sweden)

    Juan J. Muñoz-Pérez

    2001-12-01

    Detailed topo-bathymetric levellings were performed biannually for four years at Victoria Beach (Cadiz, Spain) after a beach renourishment carried out in Spring 1991. The subsequent time series were analysed using the Empirical Orthogonal Functions (EOF) method. The evolution of some characteristic longshore contour lines, such as the Highest High Water Level and the Lowest Low Water Level, is studied. The mean coastal line is related to the first spatial EOF mode. Furthermore, an objective criterion for distinguishing between a generalised recession and cyclic accretion-erosion processes due to seasonal sea-swell changes is described, and a uniformly clockwise turn of the shoreline to minimise longshore transport is identified.
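
    As a rough sketch of the EOF decomposition mentioned above, the snippet below applies a singular value decomposition to a synthetic (survey x alongshore position) anomaly matrix; the data are random placeholders, not the Victoria Beach levellings.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(8, 50))           # 8 synthetic surveys, 50 alongshore points
anomaly = data - data.mean(axis=0)        # remove the time-mean contour position
u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
explained = s**2 / np.sum(s**2)           # variance fraction carried by each EOF mode
spatial_mode_1 = vt[0]                    # first spatial EOF
temporal_amp_1 = u[:, 0] * s[0]           # its amplitude through time
print("variance explained by mode 1:", round(float(explained[0]), 3))
```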

  9. Application of a heuristic search method for generation of fuel reload configurations

    International Nuclear Information System (INIS)

    Galperin, A.; Nissan, E.

    1988-01-01

    A computerized heuristic search method for the generation and optimization of fuel reload configurations is proposed and investigated. The heuristic knowledge is expressed modularly in the form of ''IF-THEN'' production rules. The method was implemented in a program coded in the Franz LISP programming language and executed under the UNIX operating system. A test problem was formulated, based on a typical light water reactor reload problem with a few simplifications assumed, in order to allow formulation of the reload strategy into a relatively small number of rules. A computer run of the problem was performed on a VAX-780 machine. A set of 312 solutions was generated in about 20 min of execution time. Testing of a few arbitrarily chosen configurations demonstrated reasonably good performance for the computer-generated solutions. A computerized generator of reload configurations may be used for the fast generation or modification of reload patterns and as a tool for the formulation, tuning, and testing of the heuristic knowledge rules used by an ''expert'' fuel manager.

  10. Gravity Search Algorithm hybridized Recursive Least Square method for power system harmonic estimation

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Singh

    2017-06-01

    This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Squares (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton's law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton's laws of gravity and motion. The basic GSA strategy is combined sequentially with the RLS algorithm in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are carried out by testing the proposed algorithm on real-time data obtained from a heavy paper industry. The performance of the proposed algorithm is compared with other recently reported algorithms, such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence and computational time.
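
    The RLS half of the hybrid lends itself to a compact sketch. The code below tracks the sine/cosine amplitudes of known harmonic frequencies with a forgetting-factor RLS update on a synthetic noisy signal; it is not the GSA-RLS hybrid itself, and the sampling rate, forgetting factor and harmonics are invented for illustration.

```python
import numpy as np

def rls_harmonic_tracker(samples, t, freqs_hz, lam=0.98):
    """Minimal RLS sketch: weights are [A1, B1, A2, B2, ...] for sin/cos pairs."""
    n = 2 * len(freqs_hz)
    w = np.zeros(n)
    P = np.eye(n) * 1e3                               # large initial covariance
    for y, tk in zip(samples, t):
        phi = np.concatenate([[np.sin(2 * np.pi * f * tk),
                               np.cos(2 * np.pi * f * tk)] for f in freqs_hz])
        k = P @ phi / (lam + phi @ P @ phi)           # gain vector
        e = y - phi @ w                               # a-priori estimation error
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return w

fs, f0 = 5000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2*np.pi*f0*t) + 0.2*np.cos(2*np.pi*3*f0*t) + 0.01*rng.standard_normal(t.size)
print(rls_harmonic_tracker(signal, t, [f0, 3 * f0]).round(3))
```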

  11. Search for the top quark at D0 using multivariate methods

    International Nuclear Information System (INIS)

    Bhat, P.C.

    1995-07-01

    We report on the search for the top quark in pp̄ collisions at the Fermilab Tevatron (√s = 1.8 TeV) in the di-lepton and lepton+jets channels using multivariate methods. An H-matrix analysis of the eμ data corresponding to an integrated luminosity of 13.5 ± 1.6 pb⁻¹ yields one event whose likelihood to be a top quark event, assuming m_top = 180 GeV/c², is ten times more than that of WW and eighteen times more than that of Z → ττ. A neural network analysis of the e+jets channel using a data sample corresponding to an integrated luminosity of 47.9 ± 5.7 pb⁻¹ shows an excess of events in the signal region and yields a cross-section for tt̄ production of 6.7 ± 2.3 (stat.) pb, assuming a top mass of 200 GeV/c². An analysis of the e+jets data using the probability density estimation method yields a cross-section that is consistent with the above result.

  12. Validation of a search strategy to identify nutrition trials in PubMed using the relative recall method.

    Science.gov (United States)

    Durão, Solange; Kredo, Tamara; Volmink, Jimmy

    2015-06-01

    To develop, assess, and maximize the sensitivity of a search strategy to identify diet and nutrition trials in PubMed using relative recall. We developed a search strategy to identify diet and nutrition trials in PubMed. We then constructed a gold standard reference set to validate the identified trials using the relative recall method. Relative recall was calculated by dividing the number of references from the gold standard our search strategy identified by the total number of references in the gold standard. Our gold standard comprised 298 trials, derived from 16 included systematic reviews. The initial search strategy identified 242 of 298 references, with a relative recall of 81.2% [95% confidence interval (CI): 76.3%, 85.5%]. We analyzed titles and abstracts of the 56 missed references for possible additional terms. We then modified the search strategy accordingly. The relative recall of the final search strategy was 88.6% (95% CI: 84.4%, 91.9%). We developed a search strategy to identify diet and nutrition trials in PubMed with a high relative recall (sensitivity). This could be useful for establishing a nutrition trials register to support the conduct of future research, including systematic reviews. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Advantages and Disadvantages of Possible Methods of Managing the Production on the Assembly Lines in the Given Company

    OpenAIRE

    Václavková, Eva

    2014-01-01

    The bachelor thesis deals with methods of lean management, more precisely with production planning and the management of assembly-line production. The thesis describes the production planning system, kanban cards and heijunka. The aim of the thesis is to describe methods of managing assembly-line production and to suggest more effective management.

  14. An Exploration of Retrieval-Enhancing Methods for Integrated Search in a Digital Library

    DEFF Research Database (Denmark)

    Sørensen, Diana Ransgaard; Bogers, Toine; Larsen, Birger

    2012-01-01

    Integrated search is defined as searching across different document types and representations simultaneously, with the goal of presenting the user with a single ranked result list containing the optimal mix of document types. In this paper, we compare various approaches to integrating three diffe...

  15. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  16. The BIOSIS data base: Evaluation of its indexes and the STRATBLDR, CHEMFILE, STAIRS and DIALOG systems for on-line searching

    Science.gov (United States)

    Nees, M.; Green, H. O.

    1977-01-01

    An IBM-developed program, STAIRS, was selected for performing a search on the BIOSIS file. The evaluation of the hardware and search systems and the strategies used are discussed. The searches are analyzed by type of end user.

  17. A conjugate gradient method for solving the non-LTE line radiation transfer problem

    Science.gov (United States)

    Paletou, F.; Anterrieu, E.

    2009-12-01

    This study concerns the fast and accurate solution of the line radiation transfer problem, under non-LTE conditions. We propose and evaluate an alternative iterative scheme to the classical ALI-Jacobi method, and to the more recently proposed Gauss-Seidel and successive over-relaxation (GS/SOR) schemes. Our study is indeed based on applying a preconditioned bi-conjugate gradient method (BiCG-P). Standard tests, in 1D plane parallel geometry and in the frame of the two-level atom model with monochromatic scattering are discussed. Rates of convergence between the previously mentioned iterative schemes are compared, as are their respective timing properties. The smoothing capability of the BiCG-P method is also demonstrated.
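
    For readers unfamiliar with preconditioned Krylov solvers, the sketch below solves a small sparse system with SciPy's BiCGSTAB and a simple Jacobi preconditioner. The matrix is a generic tridiagonal stand-in, not the discretized transfer operator of the paper, and BiCGSTAB is used here merely as a readily available bi-conjugate-gradient-type solver.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 200
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()                       # Jacobi (diagonal) preconditioner
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

x, info = bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info = {info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```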

  18. A License Plate Locating Method Based on Tophat-bothat Changing and Line Scanning

    International Nuclear Information System (INIS)

    Hou, P G; Zhao, J; Liu, M

    2006-01-01

    Automatic license plate recognition is an important technique for obtaining traffic information; it combines computer vision, image processing and pattern recognition techniques, and it is an important component of intelligent traffic systems. In a vehicle license plate recognition system, plate region detection is the key step before the final recognition. This article describes the whole process of plate region detection. Top-hat and bot-hat transformations are used to enhance the image contrast, a wavelet threshold method is used as the image filter, and an improved line scanning is finally used for plate region detection. This method has strong practicability. The experimental results demonstrate that the method introduced in this paper is effective.
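
    The contrast-enhancement step can be sketched with standard morphological operators. The snippet below builds a synthetic plate-like image, applies top-hat and bot-hat transforms, and runs a crude row-wise transition count as a stand-in for the line scanning; the kernel size, thresholds and image are all assumptions, not the paper's settings.

```python
import cv2
import numpy as np

# Synthetic grey image with a bright, striped, plate-like block
gray = np.zeros((240, 320), dtype=np.uint8)
gray[150:180, 100:220] = 200
gray[150:180, 100:220:8] = 30

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
bothat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
enhanced = cv2.subtract(cv2.add(gray, tophat), bothat)   # gray + tophat - bothat

# Crude "line scanning": rows with many edge transitions are plate candidates
edges = cv2.Canny(enhanced, 100, 200)
transitions = (np.diff(edges.astype(np.int16), axis=1) != 0).sum(axis=1)
candidate_rows = np.where(transitions > transitions.mean() + 2 * transitions.std())[0]
print("candidate plate rows:", candidate_rows[:10])
```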

  19. A Straight-Line Method for Analyzing Residual Drawdowns at an Observation Well

    Directory of Open Access Journals (Sweden)

    Mesut Çimen

    2015-01-01

    Determination of the hydraulic parameters (transmissivity and storage coefficients) of a confined aquifer is important for effective groundwater resource management. For this purpose, residual drawdowns have been used to estimate the aquifer parameters by the classical Theis recovery method. The method proposed in this paper relies on a straight line drawn through the field data and helps to calculate the parameters quickly, without any need for long-term pumping data. It is based on the expansion series of the Theis well function with three terms retained, and this approach is valid for the dimensionless time factor u′ = S′r²/(4Tt′) ≤ 0.2. The method can be applied reliably to extensive and homogeneous confined aquifers, allowing for different storage coefficients during the pumping and recovery periods (S ≠ S′). It provides a sound methodology for estimating the parameters from residual drawdown data in groundwater engineering practice.
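
    For comparison, the classical Theis recovery straight line can be fitted in a few lines: plot the residual drawdown against log10(t/t′), take the slope per log cycle Δs, and use T = 2.303 Q / (4 π Δs). The data and pumping rate below are hypothetical, and this is the classical fit rather than the modified three-term method proposed in the paper.

```python
import numpy as np

Q = 500.0                                              # pumping rate, m^3/day (hypothetical)
t_over_tp = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0, 35.0, 60.0])
s_resid = np.array([0.21, 0.33, 0.48, 0.62, 0.74, 0.89, 1.05, 1.21])   # residual drawdown, m

# Straight-line fit of s' versus log10(t/t'); the slope is the drawdown per log cycle
slope, intercept = np.polyfit(np.log10(t_over_tp), s_resid, 1)
T = 2.303 * Q / (4.0 * np.pi * slope)                  # transmissivity, m^2/day
print(f"drawdown per log cycle: {slope:.3f} m -> T ~ {T:.1f} m^2/day")
```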

  20. Numerical method of lines for the relaxational dynamics of nematic liquid crystals.

    Science.gov (United States)

    Bhattacharjee, A K; Menon, Gautam I; Adhikari, R

    2008-08-01

    We propose an efficient numerical scheme, based on the method of lines, for solving the Landau-de Gennes equations describing the relaxational dynamics of nematic liquid crystals. Our method is computationally easy to implement, balancing requirements of efficiency and accuracy. We benchmark our method through the study of the following problems: the isotropic-nematic interface, growth of nematic droplets in the isotropic phase, and the kinetics of coarsening following a quench into the nematic phase. Our results, obtained through solutions of the full coarse-grained equations of motion with no approximations, provide a stringent test of the de Gennes ansatz for the isotropic-nematic interface, illustrate the anisotropic character of droplets in the nucleation regime, and validate dynamical scaling in the coarsening regime.
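
    The essence of the method of lines is to discretize space and hand the resulting ODE system to a time integrator. The toy below does this for a one-dimensional Allen-Cahn-type relaxational equation, which stands in only loosely for the full Landau-de Gennes dynamics treated in the paper; the grid size, coefficients and boundary handling are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, L, D = 200, 20.0, 1.0
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

def rhs(t, u):
    """Spatially discretized du/dt = D u_xx - (u^3 - u) with crude zero-flux ends."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return D * lap - (u**3 - u)

u0 = np.tanh(x - L / 2.0)                      # interface-like initial profile
sol = solve_ivp(rhs, (0.0, 5.0), u0, method="BDF", t_eval=[5.0])
print("profile value at the midpoint, t = 5:", float(sol.y[N // 2, -1]))
```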

  1. A Simplified Control Method for Tie-Line Power of DC Micro-Grid

    Directory of Open Access Journals (Sweden)

    Yanbo Che

    2018-04-01

    Compared with the AC micro-grid, the DC micro-grid has low energy loss and no frequency-stability issues, which makes it more accessible for distributed energy resources. Thus, the DC micro-grid has good potential for development. A variety of renewable energy sources are included in the DC micro-grid; they are easily affected by the environment, causing fluctuation of the DC voltage. For a grid-connected DC micro-grid with a droop control strategy, the tie-line power is affected by fluctuations in the DC voltage, which sets higher requirements for coordinated control of the DC micro-grid. This paper presents a simplified control method to maintain a constant tie-line power that is suitable for the DC micro-grid with the droop control strategy. By coordinating the designs of the droop control characteristics of generators, energy storage units and the grid-connected inverter, a dead band is introduced into the droop control to improve the system performance. The tie-line power in the steady state is constant. When a large disturbance occurs, the AC power grid can provide timely power support to the micro-grid. A simulation example verifies the effectiveness of the proposed control strategy.
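
    The dead-band droop idea can be illustrated with a few lines of code. The rule below keeps the tie-line power at its set-point while the DC voltage stays inside the band and only departs from it under larger excursions; the voltage levels, band width, droop gain and sign convention are all invented for illustration and are not the paper's design values.

```python
def tie_line_power(v_dc, v_nom=400.0, deadband=10.0, droop_kw_per_v=5.0, p_set_kw=50.0):
    """Hypothetical dead-band droop rule for the grid-connected inverter."""
    error = v_dc - v_nom
    if abs(error) <= deadband:
        return p_set_kw                          # steady state: constant tie-line power
    sign = 1.0 if error > 0 else -1.0            # large disturbance: grid support kicks in
    return p_set_kw + droop_kw_per_v * (error - sign * deadband)

for v in (395.0, 404.0, 385.0, 420.0):
    print(f"{v:.0f} V -> {tie_line_power(v):.1f} kW")
```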

  2. Stochastic field-line wandering in magnetic turbulence with shear. II. Decorrelation trajectory method

    Science.gov (United States)

    Negrea, M.; Petrisor, I.; Shalchi, A.

    2017-11-01

    We study the diffusion of magnetic field lines in turbulence with magnetic shear. In the first part of the series, we developed a quasi-linear theory for this type of scenario. In this article, we employ the so-called DeCorrelation Trajectory method in order to compute the diffusion coefficients of stochastic magnetic field lines. The magnetic field configuration used here contains fluctuating terms which are described by the dimensionless functions b_i(X, Y, Z), i = (x, y); they are assumed to be Gaussian processes and are perpendicular with respect to the main magnetic field B₀. Furthermore, there is also a z-component of the magnetic field depending on the radial coordinate x (representing the gradient of the magnetic field) and a poloidal average component. We calculate the diffusion coefficients of magnetic field lines for different values of the magnetic Kubo number K, the dimensionless inhomogeneous parallel and perpendicular magnetic Kubo numbers K_B∥ and K_B⊥, as well as K_av = b_y,av K_B∥/K_B⊥.

  3. An On-Line Method for Thermal Diffusivity Detection of Thin Films Using Infrared Video

    Directory of Open Access Journals (Sweden)

    Dong Huilong

    2016-03-01

    A novel method for thermal diffusivity evaluation of thin-film materials using a pulsed Gaussian beam and infrared video is reported. Compared with common pulse methods performed in specialized labs, the proposed method implements a rapid on-line measurement without producing the off-centre detection error. Through mathematical deduction from the original heat conduction model, it is found that the area s enclosed by the maximum-temperature curve r_TMAX(θ) increases linearly with elapsed time. The thermal diffusivity is obtained from the growth rate of the area s. In this study, the off-centre detection error is avoided by applying the distance-regularized level set evolution formulation. The area s is extracted from binary images of the temperature variation rate, without introducing errors from determination of the heat source centre. The thermal diffusivities of three materials - 304 stainless steel, titanium, and zirconium - have been measured with the established on-line detection system, and the measurement errors are −2.26%, −1.07%, and 1.61%, respectively.

  4. Fault location method for unexposed gas trunk line insulation at stray current constant effect area

    Science.gov (United States)

    Tsenev, A. N.; Nosov, V. V.; Akimova, E. V.

    2017-10-01

    For the safe operation of gas trunk lines, two types of anticorrosion protection of the pipe-wall metal are generally used - passive protection (the insulation coating) and active (electrochemical) protection. During long-term operation of a pipeline, its insulation is subject to wear and damage. Electrochemical protection at a certain potential value prevents dissolution of the metal in the soil. When wear and tear of the insulation reaches the point where the protection potential becomes insufficient, the insulating coating needs repair, which is a labor-consuming procedure. To reduce the risk of such a situation, it is necessary to make inspection rounds to monitor the condition of the pipe insulation. A method for locating faults in unexposed pipeline insulation coating based on the Pearson method is considered, wherein the 100 Hz signal of a working cathodic protection station is used, which makes installation of a generator unnecessary; a dedicated 1 kHz generator signal is also used, giving the instrument complex high noise immunity and sensitivity. This method enables detection and sizing of unexposed pipeline defects within zones of constant stray-current action. The high noise immunity of the selective indicators allows operation in proximity to 110 kV, 220 kV, and 500 kV power transmission lines in service.

  5. A multi attribute decision making method for selection of optimal assembly line

    Directory of Open Access Journals (Sweden)

    B. Vijaya Ramnath

    2011-01-01

    With globalization, sweeping technological development, and increasing competition, customers are placing greater demands on manufacturers for higher quality, flexibility, on-time delivery of products and lower cost. Therefore, manufacturers must develop and maintain a high degree of coherence among competitive priorities, order-winning criteria and improvement activities. Thus, production managers attempt to transform their organizations by adopting familiar and beneficial management philosophies such as cellular manufacturing (CM), lean manufacturing (LM), green manufacturing (GM), total quality management (TQM), agile manufacturing (AM), and just-in-time manufacturing (JIT). The main objective of this paper is to propose an optimal assembly method for an engine manufacturer's assembly line in India. Currently, the Indian manufacturer follows a traditional assembly method in which the raw materials for assembly are kept along the sides of the conveyor line. It consumes more floor space, more work-in-process inventory, more operator walking time and more operator walking distance per day. In order to reduce the above-mentioned wastes, lean kitting assembly is suggested by some managers. Another group of managers suggests JIT assembly, as it incurs a very low inventory cost compared with other types of assembly processes. Hence, a multi-attribute decision-making model, namely the analytic hierarchy process (AHP), is applied to analyse the alternative assembly methods based on various important factors.
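
    A minimal AHP calculation, for readers unfamiliar with the technique, is sketched below: priorities are taken from the principal eigenvector of a pairwise comparison matrix, together with the consistency index. The 3x3 matrix is a made-up comparison of three assembly options under one criterion, not the judgements used in the paper.

```python
import numpy as np

# Hypothetical pairwise comparisons of {traditional, lean kitting, JIT} on one criterion
A = np.array([[1.0, 1/3, 1/5],
              [3.0, 1.0, 1/2],
              [5.0, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                             # priority vector
ci = (eigvals.real[k] - A.shape[0]) / (A.shape[0] - 1)   # consistency index
print("priorities:", w.round(3), "| consistency index:", round(float(ci), 4))
```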

  6. Investigation of the stochastic subspace identification method for on-line wind turbine tower monitoring

    Science.gov (United States)

    Dai, Kaoshan; Wang, Ying; Lu, Wensheng; Ren, Xiaosong; Huang, Zhenhua

    2017-04-01

    Structural health monitoring (SHM) of wind turbines has been applied in the wind energy industry to obtain their real-time vibration parameters and to ensure their optimum performance. For SHM, the accuracy of its results and the efficiency of its measurement methodology and data processing algorithm are the two major concerns. Selection of proper measurement parameters could improve such accuracy and efficiency. Stochastic Subspace Identification (SSI) is a widely used data processing algorithm for SHM. This research discusses the accuracy and efficiency of SHM using the SSI method to identify vibration parameters of on-line wind turbine towers. Proper measurement parameters, such as the optimum measurement duration, are recommended.

  7. A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events

    Directory of Open Access Journals (Sweden)

    2009-03-01

    In this paper, we address the Wireless Sensor Network (WSN) deployment issue. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, we consider that each point in the deployment area is associated with a differentiated detection probability threshold, which must be satisfied by our deployment method. The resulting WSN deployment problem is formulated as a multi-objective optimization problem, which seeks to reduce the gap between the generated event detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal achieves better performance than several other approaches proposed in the literature. In the last part of this paper, we generalize the deployment problem by including the wireless communication network connectivity constraint. Thus, we extend our proposal to ensure that the resulting WSN topology is connected even if a sensor communication range takes small values.

  8. A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events.

    Science.gov (United States)

    Aitsaadi, Nadjib; Achir, Nadjib; Boussetta, Khaled; Pujolle, Guy

    2009-01-01

    In this paper, we address the Wireless Sensor Network (WSN) deployment issue. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, we consider that each point in the deployment area is associated with a differentiated detection probability threshold, which must be satisfied by our deployment method. The resulting WSN deployment problem is formulated as a multi-objective optimization problem, which seeks to reduce the gap between the generated event detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal achieves better performance than several other approaches proposed in the literature. In the last part of this paper, we generalize the deployment problem by including the wireless communication network connectivity constraint. Thus, we extend our proposal to ensure that the resulting WSN topology is connected even if a sensor communication range takes small values.
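
    The Tabu Search heuristic itself is easy to sketch. The skeleton below uses a binary placement vector, a single-flip neighbourhood, a short-term tabu list and a simple aspiration rule on a toy coverage problem; the coverage sets, penalty weights and tenure are invented, and this is not the authors' multi-objective deployment algorithm.

```python
import random

def tabu_search(n_sites, cost, iters=200, tenure=7, seed=0):
    """Generic single-flip tabu search over binary placement vectors."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_sites)]
    best, best_cost = current[:], cost(current)
    tabu = {}                              # flipped index -> iteration until which it is tabu
    for it in range(iters):
        move, move_cost = None, float("inf")
        for i in range(n_sites):
            cand = current[:]
            cand[i] ^= 1
            c = cost(cand)
            is_tabu = tabu.get(i, -1) > it
            if c < move_cost and (not is_tabu or c < best_cost):   # aspiration criterion
                move, move_cost = i, c
        if move is None:
            continue
        current[move] ^= 1
        tabu[move] = it + tenure
        if move_cost < best_cost:
            best, best_cost = current[:], move_cost
    return best, best_cost

# Toy problem: cover 6 demand points with as few sensors as possible
coverage = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 0]]      # points covered by sensor i
def cost(x):
    covered = {p for i, on in enumerate(x) if on for p in coverage[i]}
    return 10 * (6 - len(covered)) + sum(x)                      # uncovered penalty + sensor count

print(tabu_search(6, cost))
```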

  9. Searching for beyond the Standard Model physics using direct and indirect methods at LHCb

    CERN Document Server

    Hall, Samuel C P; Golutvin, Andrey

    It is known that the Standard Model of particle physics is incomplete in its description of nature at a fundamental level. For example, the Standard Model can neither incorporate dark matter nor explain the matter dominated nature of the Universe. This thesis presents three analyses undertaken using data collected by the LHCb detector. Each analysis searches for indications of physics beyond the Standard Model in different decays of B mesons, using different techniques. Notably, two analyses look for indications of new physics using indirect methods, and one uses a direct approach. The first analysis shows evidence for the rare decay $B^{+} \\rightarrow D^{+}_{s}\\phi$ with greater than 3 $\\sigma$ significance; this also constitutes the first evidence for a fully hadronic annihilation-type decay of a $B^{+}$ meson. A measurement of the branching fraction of the decay $B^{+} \\rightarrow D^{+}_{s}\\phi$ is seen to be higher than, but still compatible with, Standard Model predictions. The CP-asymmetry of the decay is also ...

  10. Minimization of municipal solid waste transportation route in West Jakarta using Tabu Search method

    Science.gov (United States)

    Chaerul, M.; Mulananda, A. M.

    2018-04-01

    Indonesia still adopts the collect-haul-dispose concept for municipal solid waste handling, and it leads to queues of waste trucks at the final disposal site (TPA). The study aims to minimize the total distance of the waste transportation system by applying a transshipment model. In this case, the transshipment point is a compaction facility (SPA). Small-capacity trucks collect the waste from temporary collection points (TPS) and bring it to the compaction facility, which is located near the waste generators. After compaction, the waste is transported in large-capacity trucks to the final disposal site, which is located far from the city. Problems related to waste transportation can be solved using the Vehicle Routing Problem (VRP) formulation. In this study, the shortest routes from the truck pool to the TPS, from the TPS to the SPA, and from the SPA to the TPA were determined using a meta-heuristic method, namely two-phase Tabu Search. The TPS studied are of the container type, with a total of 43 units throughout West Jakarta City, served by 38 Armroll trucks with a capacity of 10 m³ each. The result assigns each truck from the pool to the selected TPS, SPA and TPA with a total minimum distance of 2,675.3 km. The minimized distance also minimizes the total waste transportation cost to be borne by the government.

  11. An ME-PC Enhanced HDMR Method for Efficient Statistical Analysis of Multiconductor Transmission Line Networks

    KAUST Repository

    Yucel, Abdulkadir C.

    2015-05-05

    An efficient method for statistically characterizing multiconductor transmission line (MTL) networks subject to a large number of manufacturing uncertainties is presented. The proposed method achieves its efficiency by leveraging a high-dimensional model representation (HDMR) technique that approximates observables (quantities of interest in MTL networks, such as voltages/currents on mission-critical circuits) in terms of iteratively constructed component functions of only the most significant random variables (parameters that characterize the uncertainties in MTL networks, such as conductor locations and widths, and lumped element values). The efficiency of the proposed scheme is further increased using a multielement probabilistic collocation (ME-PC) method to compute the component functions of the HDMR. The ME-PC method makes use of generalized polynomial chaos (gPC) expansions to approximate the component functions, where the expansion coefficients are expressed in terms of integrals of the observable over the random domain. These integrals are numerically evaluated and the observable values at the quadrature/collocation points are computed using a fast deterministic simulator. The proposed method is capable of producing accurate statistical information pertinent to an observable that is rapidly varying across a high-dimensional random domain at a computational cost that is significantly lower than that of gPC or Monte Carlo methods. The applicability, efficiency, and accuracy of the method are demonstrated via statistical characterization of frequency-domain voltages in parallel wire, interconnect, and antenna corporate feed networks.

  12. Reverse screening methods to search for the protein targets of chemopreventive compounds

    Science.gov (United States)

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-05-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and

  13. Pathway Detection from Protein Interaction Networks and Gene Expression Data Using Color-Coding Methods and A* Search Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2012-01-01

    With the wide availability of protein interaction networks and microarray data, identifying linear paths that have biological significance in the search for a potential pathway is a challenging issue. We proposed a color-coding method based on the characteristics of biological network topology and applied heuristic search to speed up the color-coding method. In the experiments, we tested our methods on two datasets: yeast and human prostate cancer networks together with gene expression data. Comparisons of our method with other existing methods on known yeast MAPK pathways, in terms of precision and recall, show that we can find the maximum number of the proteins and perform comparably well. On the other hand, our method is more efficient than previous ones and detects paths of length 10 within 40 seconds on a 1.73 GHz Intel CPU with 1 GB of main memory running the Windows operating system.
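
    For reference, the core color-coding idea can be written compactly: color the vertices at random with k colors and run a subset dynamic program that only extends colorful (all-colors-distinct) paths, repeating over many random colorings. The sketch below works on a tiny made-up graph and omits the heuristic A* acceleration and the expression-data weighting used in the paper.

```python
import random

def colorful_path_exists(adj, k, rng):
    """One color-coding trial: random k-coloring plus a subset DP over colors."""
    color = {v: rng.randrange(k) for v in adj}
    dp = {v: {1 << color[v]} for v in adj}        # color-subset masks of paths ending at v
    for _ in range(k - 1):                        # grow paths by one edge per round
        new_dp = {v: set(dp[v]) for v in adj}
        for u in adj:
            for v in adj[u]:
                cv = 1 << color[v]
                for mask in dp[u]:
                    if not mask & cv:             # extend only if v's color is unused
                        new_dp[v].add(mask | cv)
        dp = new_dp
    full = (1 << k) - 1
    return any(full in masks for masks in dp.values())

def find_k_path(adj, k, trials=300, seed=1):
    """Each trial succeeds with probability >= k!/k^k when a simple k-path exists."""
    rng = random.Random(seed)
    return any(colorful_path_exists(adj, k, rng) for _ in range(trials))

# Tiny hypothetical interaction network (undirected adjacency lists)
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(find_k_path(adj, 4))                        # True: e.g. a-b-c-d
```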

  14. Application of pattern search method to power system security constrained economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Al-Othman, A.K.; El-Naggar, K.M.

    2008-01-01

    Direct search (DS) methods are evolutionary algorithms used to solve optimization problems. DS methods do not require any information about the gradient of the objective function at hand while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve a security constrained power system economic dispatch (SCED) problem with a non-smooth cost function. Operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time attention must be paid to the economic aspects. A pattern recognition technique is used first to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested on three different test systems. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and proves that pattern search (PS) is very applicable for solving the security constrained power system economic dispatch problem (SCED). In addition, valve-point effect loading and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that the PS has demonstrated ability in handling the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)
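
    The underlying pattern search idea is compact enough to sketch: poll the coordinate directions around the current point, move on the first improvement, and shrink the step when no direction improves. The unconstrained toy below uses a made-up non-smooth cost with a valve-point-like ripple and leaves out the security constraints handled in the paper.

```python
import numpy as np

def compass_pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal derivative-free compass/pattern search for unconstrained minimization."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):   # poll the 2n directions
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= shrink                                       # refine the mesh
    return x, fx

cost = lambda x: np.sum((x - 1.5) ** 2) + 0.3 * np.sum(np.abs(np.sin(3 * x)))
print(compass_pattern_search(cost, x0=[4.0, -2.0]))
```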

  15. Searching for Innovations and Methods of Using the Cultural Heritage on the Example of Upper Silesia

    Science.gov (United States)

    Wagner, Tomasz

    2017-10-01

    The basic subject of this paper is the historical and cultural heritage of parts of Upper Silesia bound by a common history and facing similar problems at present. The paper presents selected historical phenomena that have influenced the contemporary space mentioned above, and contemporary issues of heritage protection in Upper Silesia. Since 1989, the interpretation of Silesian architecture has been strongly coloured by ideological and national ideas. The last 25 years represent the next stage of development, marked by rapid transformation of the space caused by further economic transformations. In this period, we can observe landscape transformations, liquidation of objects and historical structures, loss of regional features, spontaneous adaptation processes of objects, and many methods of implementing forms of protection and of using cultural resources. The upheavals linked to changes of state borders, political system, economy and ethnic composition meant that the former Upper Silesian border area concentrates phenomena that also exist in other similar European areas which are meeting points of cultures and traditions. The latest period in the history of Upper Silesia gives us time to reflect on the character of changes in the architecture and city planning of the area and to appraise the efficiency of those practices connected with cultural heritage preservation. The phenomena of the last decades are: decline of regional features, elimination of objects which were key features of the regional cultural heritage, deformation of forms that were shaped in the course of history, and some attempts at using those elements of cultural heritage which are widely recognized as cultural values. In this situation, it is important to seek creative solutions that will neutralize adverse processes resulting from bad law and practice. The most important phenomenon of contemporary practice is the search for innovative fields and methods for the use of cultural resources. An important part of the article is

  16. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical

  17. Assessment of methods for computing the closest point projection, penetration, and gap functions in contact searching problems

    Czech Academy of Sciences Publication Activity Database

    Kopačka, Ján; Gabriel, Dušan; Plešek, Jiří; Ulbin, M.

    2016-01-01

    Roč. 105, č. 11 (2016), s. 803-833 ISSN 0029-5981 R&D Projects: GA ČR(CZ) GAP101/12/2315; GA MŠk(CZ) ME10114 Institutional support: RVO:61388998 Keywords : closest point projection * local contact search * quadratic elements * Newtons methods * geometric iteration methods * simplex method Subject RIV: JC - Computer Hardware ; Software Impact factor: 2.162, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/nme.4994/abstract

  18. GROUND SUBSIDENCE ALONG SHANGHAI METRO LINE 6 BY PS-InSAR METHOD

    Directory of Open Access Journals (Sweden)

    J. Wu

    2018-04-01

    With the rapid development of the urban economy, convenient, safe, and efficient urban rail transit has become the preferred way for people to travel. In order to ensure the safety and sustainable development of urban rail transit, PS-InSAR technology, with its millimetre-level deformation measurement accuracy, has been widely applied to monitor the deformation of urban rail transit. In this paper, 32 scenes of COSMO-SkyMed descending images and 23 scenes of Envisat ASAR images covering Shanghai Metro Line 6, acquired from 2008 to 2010, are used to estimate the average deformation rate along the line-of-sight (LOS) direction by the PS-InSAR method. The experimental results show that there are two main subsidence areas along Shanghai Metro Line 6, located between Wuzhou Avenue Station and Wulian Road Station and between West Gaoke Road Station and Gaoqing Road Station. Between Wuzhou Avenue Station and Wulian Road Station, the maximum vertical displacement rate from the COSMO-SkyMed images is −9.92 mm/year, and the maximum vertical displacement rate from the Envisat ASAR images is −8.53 mm/year. From West Gaoke Road Station to Gaoqing Road Station, the maximum vertical displacement rate from the COSMO-SkyMed images is −15.53 mm/year, and the maximum vertical displacement rate from the Envisat ASAR images is −17.9 mm/year. The results show that the ground deformation rates obtained by the two SAR platforms, with different wavelengths, different sensors and different incidence angles, are in good agreement with each other and with spirit levelling.

  19. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single-color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce it. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality with a mixed light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N driving-voltage inputs and one image-quality output. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
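
    A bare-bones version of the derivative-based search can be sketched as finite-difference steepest ascent over the N channel voltages. The quality surface below is a synthetic stand-in for a real image-quality metric, and the step size, probe increment and voltage limits are arbitrary assumptions.

```python
import numpy as np

def auto_light_search(quality, v0, lr=0.2, eps=0.05, iters=50, v_min=0.0, v_max=10.0):
    """Finite-difference steepest-ascent sketch over N lighting-channel voltages."""
    v = np.asarray(v0, dtype=float)
    for _ in range(iters):
        q0 = quality(v)
        grad = np.zeros_like(v)
        for i in range(v.size):                  # one probe acquisition per channel
            probe = v.copy()
            probe[i] += eps
            grad[i] = (quality(probe) - q0) / eps
        v = np.clip(v + lr * grad, v_min, v_max)
    return v, quality(v)

target = np.array([6.0, 3.5, 8.0])               # hypothetical optimum voltage mix
quality = lambda v: -np.sum((v - target) ** 2)   # synthetic image-quality score
print(auto_light_search(quality, v0=[5.0, 5.0, 5.0]))
```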

  20. Research on a Lamb Wave and Particle Filter-Based On-Line Crack Propagation Prognosis Method

    OpenAIRE

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Cai, Jian; Yang, Weibo

    2016-01-01

    Prognostics and health management techniques have drawn widespread attention due to their ability to facilitate maintenance activities based on need. On-line prognosis of fatigue crack propagation can offer information for optimizing operation and maintenance strategies in real time. This paper proposes a Lamb wave-particle filter (LW-PF)-based method for on-line prognosis of fatigue crack propagation which takes advantage of the possibility of on-line monitoring to evaluate the actual crack...

  1. An automated and efficient conformation search of L-cysteine and L,L-cystine using the scaled hypersphere search method

    Science.gov (United States)

    Kishimoto, Naoki; Waizumi, Hiroki

    2017-10-01

    Stable conformers of L-cysteine and L,L-cystine were explored using an automated and efficient conformational searching method. The Gibbs energies of the stable conformers of L-cysteine and L,L-cystine were calculated with G4 and MP2 methods, respectively, at 450, 298.15, and 150 K. By assuming thermodynamic equilibrium and the barrier energies for the conformational isomerization pathways, the estimated ratios of the stable conformers of L-cysteine were compared with those determined by microwave spectroscopy in a previous study. Equilibrium structures of 1:1 and 2:1 cystine-Fe complexes were also calculated, and the energy of insertion of Fe into the disulfide bond was obtained.

  2. Method and apparatus for magnetic resonance imaging and spectroscopy using microstrip transmission line coils

    Science.gov (United States)

    Zhang, Xiaoliang; Ugurbil, Kamil; Chen, Wei

    2006-04-04

    Apparatus and method for MRI imaging using a coil constructed of microstrip transmission line (MTL coil) are disclosed. In one method, a target to be imaged is positioned within the main magnetic field of a magnetic resonance imaging (MRI) system, an MTL coil is positioned proximate the target, and an MRI image is obtained using the main magnet and the MTL coil. In another embodiment, the MRI coil is used for spectroscopy. MRI imaging and spectroscopy coils are formed using microstrip transmission line. These MTL coils have the advantageous property of good performance while occupying a relatively small space, thus allowing MTL coils to be used inside restricted areas more easily than some other prior art coils. In addition, the MTL coils are relatively simple to construct from inexpensive components and thus relatively inexpensive compared to other designs. Further, the MTL coils of the present invention can be readily formed in a wide variety of coil configurations, and used in a wide variety of ways. Further, while the MTL coils of the present invention work well at high field strengths and frequencies, they also work at low frequencies and in low field strengths as well.

  3. Decentralized cooperative unmanned aerial vehicles conflict resolution by neural network-based tree search method

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2016-09-01

    In this article, a tree search algorithm is proposed to find near-optimal conflict avoidance solutions for unmanned aerial vehicles. In a dynamic environment, unmodeled elements, such as wind, make UAVs deviate from their nominal traces, which brings difficulties for conflict detection and resolution. Back-propagation neural networks are utilized to approximate the unmodeled dynamics of the environment. To satisfy the online planning requirement, the search length of the tree search algorithm is limited. Therefore, the algorithm may not be able to reach the goal states within the search process. A midterm reward function for assessing each node is devised, with consideration given to two factors, namely the safe separation requirement and the mission of each unmanned aerial vehicle. Simulation examples and comparisons with previous approaches are provided to illustrate the smooth and convincing behaviour of the proposed algorithm.

  4. A new extraction method of loess shoulder-line based on Marr-Hildreth operator and terrain mask.

    Directory of Open Access Journals (Sweden)

    Sheng Jiang

    Loess shoulder-lines are significant structural lines which divide the complicated loess landform into loess interfluves and gully-slope lands. Existing extraction algorithms for shoulder-lines are mainly based on local maxima of terrain features. These algorithms are sensitive to noise on the complicated loess surface, and their extraction parameters are difficult to determine, making the extraction results often inaccurate. This paper presents a new extraction approach for loess shoulder-lines in which the Marr-Hildreth edge operator is employed to construct initial shoulder-lines. A terrain mask for confining the boundary of the shoulder-lines is then proposed, based on slope-degree classification and morphology methods, avoiding interference from non-valley areas and modifying the initial loess shoulder-lines. A case study is conducted in Yijun, located in the northern Shanxi Loess Plateau of China. Digital Elevation Models with a grid size of 5 m are used as the original data. To obtain optimal scale parameters, the Euclidean Distance Offset Percentages between the shoulder-lines extracted by the Marr-Hildreth operator and the manual delineations are calculated. The experimental results show that the new method achieves the highest extraction accuracy when σ = 5 in the Gaussian smoothing. According to the accuracy assessment, the average extraction accuracy is about 88.5%, which indicates that the proposed method is applicable to the extraction of loess shoulder-lines in loess hilly and gully areas.
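
    The Marr-Hildreth step amounts to a Laplacian-of-Gaussian filter followed by zero-crossing detection. The sketch below runs that step on a synthetic smooth elevation step rather than a real 5 m DEM, and it omits the terrain-mask refinement that the paper adds.

```python
import numpy as np
from scipy import ndimage

def marr_hildreth_edges(dem, sigma=5.0):
    """Laplacian-of-Gaussian response followed by zero-crossing detection."""
    log = ndimage.gaussian_laplace(dem.astype(float), sigma=sigma)
    s = np.sign(log)
    zc = np.zeros(dem.shape, dtype=bool)
    zc[:-1, :] |= s[:-1, :] * s[1:, :] < 0        # vertical sign changes
    zc[:, :-1] |= s[:, :-1] * s[:, 1:] < 0        # horizontal sign changes
    return zc

# Synthetic "terrain": a flat surface dropping smoothly into a gully-like slope
y, x = np.mgrid[0:128, 0:128]
dem = 100.0 - 20.0 / (1.0 + np.exp(-(x - 64) / 3.0))
edges = marr_hildreth_edges(dem, sigma=5.0)
print("candidate shoulder-line pixels:", int(edges.sum()))
```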

  5. Climate change on the Colorado River: a method to search for robust management strategies

    Science.gov (United States)

    Keefe, R.; Fischbach, J. R.

    2010-12-01

    The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 maf per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.

  6. Best, Useful and Objective Precisions for Information Retrieval of Three Search Methods in PubMed and iPubMed

    Directory of Open Access Journals (Sweden)

    Somayyeh Nadi Ravandi

    2016-10-01

    MEDLINE is one of the most valuable sources of medical information on the Internet. Among the different open-access sites for MEDLINE, PubMed is the best-known site. In 2010, iPubMed was established with an interaction-fuzzy search method for MEDLINE access. In the present work, we aimed to compare the precision of the retrieved sources (Best, Useful and Objective precision) in PubMed and iPubMed using two search methods in PubMed (simple and MeSH search) and the interaction-fuzzy method in iPubMed. During our semi-empirical study period, we held training workshops for 61 students of higher education to teach them the Simple Search, MeSH Search, and Fuzzy-Interaction Search methods. Then, the precision of 305 searches for each method prepared by the students was calculated on the basis of the Best precision, Useful precision, and Objective precision formulas. Analyses were done in SPSS version 11.5 using the Friedman and Wilcoxon tests, and the three precisions obtained with the three precision formulas were studied for the three search methods. The mean precision of the interaction-fuzzy search method was higher than that of the simple search and MeSH search for all three types of precision, i.e., Best precision, Useful precision, and Objective precision; the simple search method was in the next rank, and their mean precisions were significantly different (P < 0.001). The precision of the interaction-fuzzy search method in iPubMed was investigated for the first time. Also for the first time, three types of precision were evaluated in PubMed and iPubMed. The results showed that the interaction-fuzzy search method is more precise than the natural language search (simple search and MeSH search), and users of this method found papers that were more related to their queries; even though searching in PubMed is useful, it is important that users apply new search methods to obtain the best results.

  7. Drifter technique: a new method to obtain metaphases in Hep-2 cell line cultures

    Directory of Open Access Journals (Sweden)

    Eleonidas Moura Lima

    2005-07-01

    The Hep-2 cell line is derived from laryngeal carcinoma cells and is often utilized as a model in carcinogenesis and mutagenesis tests. To evaluate the proliferative potential of this line, we developed a cytogenetic methodology (the drifter technique) to obtain metaphases from cells that lose cellular adhesion when they undergo mitosis in culture. By this procedure, 2000 cells were counted, resulting in a mitotic index (MI) of 22.2%. Although this MI was not statistically different from the one obtained using either a classical cytogenetic method or a cell synchronization technique, the drifter technique has the advantage of not requiring the use of some reagents for the obtention of metaphases and also of diminishing the consumption of maintenance reagents for this cell line.

  8. A methodology for on line fatigue life monitoring : rainflow cycle counting method

    International Nuclear Information System (INIS)

    Mukhopadhyay, N.K.; Dutta, B.K.; Kushwaha, H.S.

    1992-01-01

    The Green's function technique is used in on-line fatigue life monitoring to convert plant data to stress-versus-time data most efficiently. To compute the fatigue usage factor, the actual number of cycles experienced by the component must be determined from the stress-versus-time data. The fatigue usage factor is then computed from the number of cycles using the material fatigue properties. Generally, the stress response is very irregular in nature. To convert an irregular stress history into a stress frequency spectrum, the rainflow cycle counting method is used. This method has proved superior to other counting methods and yields the best fatigue estimates. A code has been developed which computes the number of cycles experienced by the component from the stress time history using the rainflow cycle counting method. This postprocessor also computes the accumulated fatigue usage factor from the material fatigue properties. The present report describes the development of a code to compute the fatigue usage factor using the rainflow cycle counting technique and presents a real-life case study. (author). 10 refs., 10 figs
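
    For illustration, a simplified three-point rainflow counter in the spirit of ASTM E1049 is sketched below; it extracts turning points and counts full and half cycles from a stress history, and it is not the postprocessor described in the report. The example history is hypothetical.

```python
def turning_points(series):
    """Keep only the local peaks and valleys of a stress history."""
    pts = [series[0]]
    for x in series[1:]:
        if x == pts[-1]:
            continue
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x                       # same direction: extend the excursion
        else:
            pts.append(x)
    return pts

def rainflow(series):
    """Simplified three-point rainflow count; returns (stress range, cycles) pairs."""
    counts, stack = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            if len(stack) == 3:
                counts.append((y, 0.5))       # range includes the history start: half cycle
                stack.pop(0)
            else:
                counts.append((y, 1.0))       # full cycle: drop its two turning points
                stack[-3:-1] = []
    for a, b in zip(stack, stack[1:]):        # leftover ranges count as half cycles
        counts.append((abs(b - a), 0.5))
    return counts

# Hypothetical stress history (MPa) from an on-line monitoring channel
history = [0, 80, -20, 60, -50, 90, -10, 40, 0]
print(rainflow(history))
```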

  9. Simplified method to solve sound transmission through structures lined with elastic porous material.

    Science.gov (United States)

    Lee, J H; Kim, J

    2001-11-01

    An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both the solid phase and fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: modeling the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extents, which has the same cross sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step sound transmission through the actual structure is solved modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double walled cylindrical shells with a porous core is calculated utilizing the simplified method.

  10. WINDING METHOD SELECTION FOR TECHNICAL IMPLEMENTATION OF FIBER OPTIC COMMUNICATION LINE FOR HIGH-SPEED OBJECT

    Directory of Open Access Journals (Sweden)

    Vyacheslav A. Loparev

    2017-07-01

    Full Text Available The paper deals with fiber-optic cable winding methods for realizing a fiber-optic communication line with a high-speed object. We consider possible coil designs for optical cable winding that allow one of the cable ends to remain mobile on the object. It is shown that the choice of winding process is driven by the need to ensure minimum deformation of the fiber-optic microcable as it separates from the winding body. It is found that the minimum tension value and the least tension unevenness are observed when unreeling from coils with a rocket form. Design ratios for determining the winding parameters are given. It is shown that reducing the tension unevenness reduces jumps in internal tension and the probability of breaks and of local attenuation of the optical signal. The decrease in internal stresses occurs because the coils of the underlying layers are not overlapped by the overlying ones. To confirm the operability and the feasibility of a constructive implementation of the selected winding scheme, experiments were carried out performing rocket and other types of winding on a specially designed machine model. It is shown that line rocket winding achieves stable unreeling of the cable during movement and excludes breaks. Attenuation of the optical signal decreases due to the increase in the minimum bend radius. This is explained by the reduction of the internal stresses that cause optical signal attenuation at the place where the cable separates from the coil.

  11. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed with this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
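
    To make the directional term concrete, the Python sketch below computes the gradient structure tensor of an image, its eigen-decomposition in closed form, and a per-pixel orientation and anisotropy from which an anisotropic metric could be assembled; it is a generic illustration, not the authors' implementation, and the smoothing scale sigma is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, sigma=2.0):
    """Per-pixel dominant orientation and anisotropy (coherence) from the
    gradient structure tensor, locally averaged with a Gaussian of width sigma."""
    img = image.astype(float)
    gx = sobel(img, axis=1)                  # horizontal gradient
    gy = sobel(img, axis=0)                  # vertical gradient
    Jxx = gaussian_filter(gx * gx, sigma)    # smoothed tensor components
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    trace, diff = Jxx + Jyy, Jxx - Jyy
    disc = np.sqrt(diff**2 + 4.0 * Jxy**2)
    lam1 = 0.5 * (trace + disc)              # larger eigenvalue (across the line)
    lam2 = 0.5 * (trace - disc)              # smaller eigenvalue (along the line)
    # angle of the major eigenvector, i.e. the local gradient direction;
    # the line direction is perpendicular to it
    orientation = 0.5 * np.arctan2(2.0 * Jxy, diff)
    anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
    return orientation, anisotropy
```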

  12. The discussion of method for survey the radiosensitivity of human glioma cell line SHG-44

    International Nuclear Information System (INIS)

    Li Li; Xu Changshao; Zhou Juying; Xu Xiaoting; Luo Jialin

    2005-01-01

    Objective: To investigate whether the thiazolyl blue colorimetric assay (MTT) and Cell Counting Kit-8 (CCK-8) can replace the clone forming assay for surveying the radiosensitivity of SHG-44. Methods: Three assays were applied to examine the growth inhibition of the human glioma cell line SHG-44 in eight dose groups of 0, 1, 2, 3, 4, 6, 8 and 10 Gy, and statistical analysis was applied to evaluate the correlation between survival fraction and dose. Results: Dose was associated with survival fraction in all three assays over part of the irradiation dose range (dose ≤ 3 Gy); outside this range the correlation was poor. CCK-8 showed no clear superiority over MTT. Conclusion: At present the clone forming assay is still the 'gold standard'. In some cases MTT and other assays can provide useful reference data, but they cannot yet replace the clone forming assay. (authors)

  13. A high-sensitive and quantitative in-line monitoring method for transplutonium elements separation processes

    International Nuclear Information System (INIS)

    Zhu Rongbao; Wang Shiju; Xu Yingpu; Zhang Zengrui

    1986-04-01

    A high-sensitivity monitoring device and a quantitative analysis technique for transplutonium element separation processes are described. X-rays and low-energy γ-rays are measured by means of a scintillation monitor with two thin NaI(Tl) crystals. The α spectra of the effluents of the ion-exchange column are measured by means of a Si(Au) surface-barrier in-line monitor. The construction of the monitors, the auxiliary electronics, the investigation of the α-spectrum characteristics of thick sources, and the calibration method are described. Determination results for the extraction of 243Am and 244Cm by ion-exchange chromatography are given. The sensitivity for the total added amount of 243Am using the 4π scintillation monitor is better than 0.1 μCi. The precision of the 243Am and 244Cm concentration determination using the Si(Au) monitor is ±5%. The precision of the contents of the two elements in the containers is about ±10%

  14. Pollutant Dispersion Modeling in Natural Streams Using the Transmission Line Matrix Method

    Directory of Open Access Journals (Sweden)

    Safia Meddah

    2015-09-01

    Full Text Available Numerical modeling has become an indispensable tool for solving various physical problems. In this context, we present a model of pollutant dispersion in natural streams for the far-field case, where dispersion is considered longitudinal and one-dimensional in the flow direction. The Transmission Line Matrix (TLM) method, which has earned a reputation as a powerful and efficient numerical method, is used. The presented one-dimensional TLM model requires minimal input data and provides a significant gain in computing time. To validate our model, the results are compared with observations and experimental data from the river Severn (UK). The results show good agreement with the experimental data. The model can be used to predict the spatiotemporal evolution of a pollutant in natural streams for effective and rapid decision-making in a case of emergency, such as an accidental discharge into a stream with dynamics similar to those of the river Severn (UK).
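
    The paper's solver is a TLM scheme; as a rough, runnable stand-in the sketch below integrates the same one-dimensional longitudinal advection-dispersion equation with an explicit upwind/central finite-difference scheme instead of TLM. The velocity, dispersion coefficient and grid values are illustrative placeholders, not the river Severn calibration.

```python
import numpy as np

def advect_disperse(c0, u=0.5, D=5.0, dx=10.0, dt=1.0, steps=3600):
    """Explicit solver for dC/dt + u dC/dx = D d2C/dx2 (periodic ends).
    Stability requires u*dt/dx <= 1 and D*dt/dx**2 <= 0.5."""
    c = c0.astype(float)
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx                       # upwind advection
        dif = D * (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (adv + dif)
    return c

# a pollutant pulse released near the upstream end of a 5 km reach
x = np.arange(0.0, 5000.0, 10.0)
c0 = np.where(np.abs(x - 200.0) < 20.0, 1.0, 0.0)
print(advect_disperse(c0).max())
```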

  15. New Method to Identify Field Joint Coating Failures Based on MFL In-Line Inspection Signals

    Directory of Open Access Journals (Sweden)

    Lianshuang Dai

    2018-02-01

    Full Text Available The above-ground indirect detections and random excavations that have been applied in past years to buried long-distance oil and gas pipelines can only identify some damaged coating locations. Hence, a large number of field joint coating (FJC) failures happen unnoticed until they lead to failures of the pipelines. Based on the analysis of magnetic flux leakage (MFL) in-line inspection (ILI) signals, combined with the statistical results of 414 excavations from two different pipeline sections, a new method to identify failed FJCs is established. Although it can only identify FJC failures when there are signs of corrosion on the pipe body, it is much more efficient and cost-saving. The resulting identification rule still needs further validation and improvement to become more applicable and accurate.

  16. Simplified microstrip discontinuity modeling using the transmission line matrix method interfaced to microwave CAD

    Science.gov (United States)

    Thompson, James H.; Apel, Thomas R.

    1990-07-01

    A technique for modeling microstrip discontinuities is presented which is derived from the transmission line matrix method of solving three-dimensional electromagnetic problems. In this technique the microstrip patch under investigation is divided into an integer number of square and half-square (triangular) subsections. An equivalent lumped-element model is calculated for each subsection. These individual models are then interconnected as dictated by the geometry of the patch. The matrix of lumped elements is then solved using either of two microwave CAD software interfaces with each port properly defined. Closed-form expressions for the lumped-element representation of the individual subsections are presented and experimentally verified through the X-band frequency range. A model demonstrating the use of symmetry and block construction of a circuit element is discussed, along with computer program development and the CAD software interface.

  17. Development of a model for on-line control of crystal growth by the AHP method

    Science.gov (United States)

    Gonik, M. A.; Lomokhova, A. V.; Gonik, M. M.; Kuliev, A. T.; Smirnov, A. D.

    2007-05-01

    The possibility of applying a simplified 2D model for heat transfer calculations in crystal growth by the axial heat close to the phase interface (AHP) method is discussed in this paper. A comparison with global heat transfer calculations performed with the CGSim software was carried out to confirm the accuracy of this model. The simplified model was shown to provide adequate results for the shape of the melt-crystal interface and the temperature field in an opaque crystal (Ge) and a transparent crystal (CsI:Tl). The proposed model is used for identification of the growth setup as a control object, for synthesis of a digital controller (a PID controller at the present stage) and, finally, in on-line simulations of crystal growth control.

  18. Without derivatives or limits: from visual and geometrical points of view to algebraic methods for identifying tangent lines

    Science.gov (United States)

    Vivier, L.

    2013-07-01

    Usually, the tangent line is considered to be a calculus notion. However, it is also a graphical and an algebraic notion. The graphical frame, where our primary conceptions are conceived, could give rise to algebraic methods to obtain the tangent line to a curve. In this pre-calculus perspective, two methods are described and discussed according to their potential for secondary students and teacher training.
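
    To make the algebraic route concrete, here is one standard pre-calculus argument of the kind the paper has in mind; the parabola and the double-root criterion are our illustrative choice and are not necessarily one of the two methods discussed in the article.

```latex
% A line y = mx + c is tangent to the parabola y = x^2 at x = a exactly
% when the intersection equation has a double root at x = a:
\[
x^{2} - (mx + c) = (x - a)^{2}
\quad\Longrightarrow\quad
m = 2a, \qquad c = -a^{2}.
\]
% Hence the tangent at (a, a^2) is y = 2ax - a^2, obtained by matching
% coefficients, with no derivatives or limits.
```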

  19. A method for on-line reactivity monitoring in nuclear reactors

    International Nuclear Information System (INIS)

    Dulla, S.; Nervo, M.; Ravetto, P.

    2014-01-01

    Highlights: • The problem of on-line monitoring of reactivity in a source-free nuclear reactor is considered. • A relationship between the system's stable period and the power, its derivative and its integral is derived. • The reactivity can be reconstructed at each time instant from the measured power-related quantities. • A study of the sensitivity of the reactivity to the uncertainty in the values of the integral parameters is performed. • Spatial effects are investigated by applying the method to the interpretation of flux signals. - Abstract: In the present work the problem of on-line monitoring of the reactivity in a source-free nuclear reactor is considered. The method is based on the classic point kinetic model of reactor physics. A relationship is derived between the instantaneous value of the system's stable period and the values of the neutron flux amplitude (or the power), of its derivative and of the integral convolution term determining the instantaneous value of the effective delayed neutron concentration. The reactivity can then be evaluated through the application of the inhour equation, assuming the effective delayed neutron fraction and prompt generation time are known from independent measurements. Since the power-related quantities can be assumed to be experimental observables at each instant, the reactivity can be easily reconstructed. The method is first tested through the interpretation of power histories simulated by the solution of the point kinetic equations; the effect of the time interval between power detections on the accuracy is studied, proving the excellent performance of the procedure. The work also includes a study of the sensitivity of the reactivity forecast to the uncertainty in the values of the effective delayed neutron fraction and prompt generation time. The spatial effects are investigated by applying the method to the interpretation of flux evolution histories generated by a numerical code solving
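
    As a runnable illustration of reconstructing reactivity from a measured power history, the sketch below applies the textbook inverse point-kinetics relation rather than the paper's period/inhour formulation; the six-group delayed-neutron constants and the prompt generation time are illustrative values, not plant data.

```python
import numpy as np

# illustrative six-group delayed-neutron data (thermal fission, Keepin-like)
BETA_I = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
LAMBDA_I = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
BETA = BETA_I.sum()
LAMBDA_GEN = 2.0e-5                       # prompt generation time (s), assumed

def reactivity_from_power(t, n):
    """Inverse point kinetics: rho = beta + Lambda*(dn/dt)/n
    - (Lambda/n) * sum_i lambda_i * C_i, with the precursor concentrations
    C_i integrated from the power history, assuming equilibrium at t[0]."""
    rho = np.zeros_like(n)
    C = BETA_I * n[0] / (LAMBDA_I * LAMBDA_GEN)      # equilibrium precursors
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        decay = np.exp(-LAMBDA_I * dt)
        # exact exponential update, power held at n[k] over the step
        C = C * decay + BETA_I * n[k] * (1.0 - decay) / (LAMBDA_I * LAMBDA_GEN)
        dndt = (n[k] - n[k - 1]) / dt
        rho[k] = BETA + LAMBDA_GEN * dndt / n[k] \
                 - (LAMBDA_GEN / n[k]) * np.sum(LAMBDA_I * C)
    return rho
```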

  20. Theodolite Polar measurements system and definition of the grid-lines method

    Directory of Open Access Journals (Sweden)

    Andréa de Seixas

    2004-12-01

    Full Text Available The requirements of construction quality, mainly in the car and airplane industries, have accelerated the development of new 3D measurement systems and measurement processes that make automatic object recording and its post-processing possible, for example on the basis of deformations. The geometrical reconstruction of an object or surface requires a minimal number of points, from which its exact form is abstracted and completed through interpolation with the quality required in each case. The use of a laser for the active signalization of an object point, in combination with directional measurement, makes it possible to determine objects or surfaces, including places where the use of artificial targets is dangerous or impossible. This work describes the development of such a measurement system based on two measurement robots or a reflector-free measuring tachymeter. The system is capable of reaching the intersection points of a grid-line that is defined in an appropriate coordinate system. The aim of this paper is to present the development of measurement methods that can reconstruct unknown, non-signalized three-dimensional objects. The existing deformation measurement based on a pointer theodolite and a video theodolite measurement system, and another reflector-free tachymeter measurement system, are presented in the context of the deformation analysis problem. The grid-lines method appears as a solution and stands as a new alternative for the geometrical reconstruction of object surfaces. Its definition and preparation in a suitable coordinate system are discussed in detail.

  1. An optimization-based method for line planning to minimize travel time

    DEFF Research Database (Denmark)

    Bull, Simon Henry; Lusby, Richard Martin; Larsen, Jesper

    2015-01-01

    The line planning problem is to select a number of lines from a potential pool so that they provide sufficient passenger capacity and meet operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including...

  2. On-line Monitoring Device for High-voltage Switch Cabinet Partial Discharge Based on Pulse Current Method

    Science.gov (United States)

    Y Tao, S.; Zhang, X. Z.; Cai, H. W.; Li, P.; Feng, Y.; Zhang, T. C.; Li, J.; Wang, W. S.; Zhang, X. K.

    2017-12-01

    The pulse current method for partial discharge detection is generally applied in type testing and other off-line tests of electrical equipment at delivery. After an intensive analysis of the present situation and the existing problems of partial discharge detection in switch cabinets, this paper designs the circuit principle and signal extraction method for partial discharge on-line detection based on a high-voltage presence indicating system (VPIS), establishes a high-voltage switch cabinet partial discharge on-line detection circuit based on the pulse current method, develops background software integrating real-time monitoring, judging and analyzing functions, carries out a discharge simulation test on a full-scale partial discharge defect simulation platform of a 10 kV switch cabinet, and verifies the sensitivity and validity of the high-voltage switch cabinet partial discharge on-line monitoring device based on the pulse current method. The study presented in this paper is of significance for switch cabinet maintenance and for theoretical study of pulse-current-method on-line detection, and provides a good implementation approach for partial discharge on-line monitoring devices for 10 kV distribution network equipment.

  3. New Keypoint Matching Method Using Local Convolutional Features for Power Transmission Line Icing Monitoring

    Directory of Open Access Journals (Sweden)

    Qiangliang Guo

    2018-02-01

    Full Text Available Power transmission line icing (PTLI) problems, which cause tremendous damage to power grids, have drawn much attention. Existing three-dimensional measurement methods based on binocular stereo vision were recently introduced to measure the ice thickness in PTLI, but fail to meet the requirements of practical applications due to inefficient keypoint matching in the complex PTLI scene. In this paper, a new keypoint matching method is proposed based on local multi-layer convolutional neural network (CNN) features, termed Local Convolutional Features (LCFs). LCFs are deployed to extract more discriminative features than conventional CNNs. In particular, a multi-layer feature fusion scheme is exploited in LCFs to boost the matching performance. Together with a location constraint method, the correspondence of neighboring keypoints is further refined. Our approach achieves 1.5%, 5.3%, 13.1% and 27.3% improvements in average matching precision compared with SIFT, SURF, ORB and MatchNet on the public Middlebury dataset, and the measurement accuracy of ice thickness reaches 90.9% compared with manual measurement on the collected PTLI dataset.

  4. The unified method: I. Nonlinearizable problems on the half-line

    International Nuclear Information System (INIS)

    Fokas, A S; Lenells, J

    2012-01-01

    Boundary value problems for integrable nonlinear evolution PDEs formulated on the half-line can be analyzed by the unified method introduced by one of the authors and used extensively in the literature. The implementation of this general method to this particular class of problems yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem formulated in the complex k-plane (the Fourier plane), which has a jump matrix with explicit (x, t)-dependence involving four scalar functions of k, called the spectral functions. Two of these functions depend on the initial data, whereas the other two depend on all boundary values. The most difficult step of the new method is the characterization of the latter two spectral functions in terms of the given initial and boundary data, i.e. the elimination of the unknown boundary values. For certain boundary conditions, called linearizable, this can be achieved simply using algebraic manipulations. Here, we present an effective characterization of the spectral functions in terms of the given initial and boundary data for the general case of non-linearizable boundary conditions. This characterization is based on the analysis of the so-called global relation, on the analysis of the equations obtained from the global relation via certain transformations leaving the dispersion relation of the associated linearized PDE invariant and on the computation of the large k asymptotics of the eigenfunctions defining the relevant spectral functions. (paper)

  5. Application of laser ultrasonic method for on-line monitoring of friction stir spot welding process.

    Science.gov (United States)

    Zhang, Kuanshuang; Zhou, Zhenggan; Zhou, Jianghua

    2015-09-01

    Application of a laser ultrasonic method is developed for on-line monitoring of the friction stir spot welding (FSSW) process. Based on the technology of FSSW, laser-generated ultrasonic waves in a good-weld and a nonweld area are simulated by a finite element method. The reflected and transmitted waves are analyzed to disclose the properties of the welded interface. A noncontact laser ultrasonic inspection system was established to verify the numerical results. The reflected waves in the good-weld and nonweld areas can be distinguished by time-of-flight. The transmitted waves attenuate markedly in the nonweld area, in contrast to the signal amplitude in the good-weld area, because of the difference in interfacial impedance. Laser ultrasonic C-scan images can adequately evaluate the intrinsic character of the weld area in comparison with traditional water-immersion ultrasonic testing results. The research results confirm that laser ultrasonics is an effective method for characterizing FSSW defects.

  6. Multi-Agent Based Beam Search for Real-Time Production Scheduling and Control Method, Software and Industrial Application

    CERN Document Server

    Kang, Shu Gang

    2013-01-01

    The Multi-Agent Based Beam Search (MABBS) method systematically integrates four major requirements of manufacturing production - representation capability, solution quality, computation efficiency, and implementation difficulty - within a unified framework to deal with the many challenges of complex real-world production planning and scheduling problems. Multi-agent Based Beam Search for Real-time Production Scheduling and Control introduces this method, together with its software implementation and industrial applications.  This book connects academic research with industrial practice, and develops a practical solution to production planning and scheduling problems. To simplify implementation, a reusable software platform is developed to build the MABBS method into a generic computation engine.  This engine is integrated with a script language, called the Embedded Extensible Application Script Language (EXASL), to provide a flexible and straightforward approach to representing complex real-world problems. ...

  7. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  8. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    International Nuclear Information System (INIS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-01-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependence on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieves better prediction accuracy and modeling robustness than methods based on partial least squares regression, artificial neural networks and a standard support vector machine. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with a confidence interval of a probabilistic distribution
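
    Scikit-learn does not ship an RVM, so the sketch below uses ARDRegression, a closely related sparse Bayesian linear model, as a stand-in to show how calibration line intensities map to a concentration prediction reported with a probabilistic interval; the arrays are random placeholders, not LIBS spectra.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.random((23, 8))            # 23 standards x 8 selected analytical lines (placeholder)
y = 2.0 * rng.random(23)           # certified Cr concentrations (placeholder)

model = ARDRegression()            # sparse Bayesian linear regression
model.fit(X, y)

X_new = rng.random((5, 8))
mean, std = model.predict(X_new, return_std=True)
for m, s in zip(mean, std):        # report mean with an approximate 95% interval
    print(f"Cr = {m:.3f} +/- {1.96 * s:.3f}")
```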

  9. Matrix Methods for Solving Hartree-Fock Equations in Atomic Structure Calculations and Line Broadening

    Directory of Open Access Journals (Sweden)

    Thomas Gomez

    2018-04-01

    Full Text Available The atomic structure of N-electron atoms is often determined by solving the Hartree-Fock equations, which are a set of integro-differential equations. The integral part of the Hartree-Fock equations treats electron exchange, but the Hartree-Fock equations are not often treated as integro-differential equations. The exchange term is often approximated as an inhomogeneous or an effective potential so that the Hartree-Fock equations become a set of ordinary differential equations (which can be solved using the usual shooting methods). Because the Hartree-Fock equations are solved by iterative refinement, the inhomogeneous term relies on the previous guess of the wavefunction. In addition, there are numerical complications associated with solving inhomogeneous differential equations. This work uses matrix methods to solve the Hartree-Fock equations as an integro-differential equation. It is well known that a derivative operator can be expressed as a matrix made of finite-difference coefficients; energy eigenvalues and eigenvectors can then be obtained by using linear-algebra packages. The integral (exchange) part of the Hartree-Fock equation can be approximated as a sum and written as a matrix. The Hartree-Fock equations can then be solved as a matrix equation that is the sum of the differential and integral matrices. We compare calculations using this method against experiment and standard atomic structure calculations. This matrix method can also be used to solve for free-electron wavefunctions, thus improving how the atoms and free electrons interact. This technique is important for spectral line broadening in two ways: it improves the atomic structure calculations, and it improves the description of the motion of the plasma electrons that collide with the atom.
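
    The core idea, writing the derivative operator as a finite-difference matrix and handing the eigenproblem to a linear-algebra package, can be shown on a one-electron case. The sketch below diagonalizes the radial hydrogen Hamiltonian in atomic units (no exchange term), so it illustrates only the matrix machinery, not the full Hartree-Fock solver; the grid size and box radius are arbitrary choices.

```python
import numpy as np

def radial_hydrogen_levels(n_grid=1500, r_max=60.0, ell=0, n_levels=3):
    """Finite-difference matrix for  -1/2 u'' + [l(l+1)/(2 r^2) - 1/r] u = E u
    with u = 0 at both ends; returns the lowest bound-state energies."""
    r = np.linspace(r_max / n_grid, r_max, n_grid)
    h = r[1] - r[0]
    # second-derivative operator as a tridiagonal finite-difference matrix
    d2 = (np.diag(-2.0 * np.ones(n_grid))
          + np.diag(np.ones(n_grid - 1), 1)
          + np.diag(np.ones(n_grid - 1), -1)) / h**2
    V = ell * (ell + 1) / (2.0 * r**2) - 1.0 / r
    H = -0.5 * d2 + np.diag(V)
    return np.linalg.eigvalsh(H)[:n_levels]

# should approach -0.5, -0.125, -0.0556 hartree as the grid is refined
print(radial_hydrogen_levels())
```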

  10. Elliptical broken line method for calculating capillary density in nailfold capillaroscopy: Proposal and evaluation.

    Science.gov (United States)

    Karbalaie, Abdolamir; Abtahi, Farhad; Fatemi, Alimohammad; Etehadtavakol, Mahnaz; Emrani, Zahra; Erlandsson, Björn-Erik

    2017-09-01

    Nailfold capillaroscopy is a practical method for identifying and recording morphological changes in capillaries, which might reveal relevant information about disease and health. Capillaroscopy is harmless and seems simple and repeatable. However, there is a lack of established guidelines and instructions for acquisition as well as for the interpretation of the obtained images, which can lead to various ambiguities. In addition, assessment and interpretation of the acquired images are very subjective. In an attempt to overcome some of these problems, this study introduces a new modified technique for the assessment of nailfold capillary density. The new method is named the elliptic broken line (EBL) method; it is an extension of two previously known methods and defines clear criteria for finding the apex of capillaries in different scenarios by using a fitted ellipse. A graphical user interface (GUI) was developed for pre-processing, manual assessment of capillary apexes and automatic correction of selected apexes based on the 90° rule. The intra- and inter-observer reliability of EBL and corrected EBL is evaluated in this study. Four independent observers familiar with capillaroscopy performed the assessment of 200 nailfold videocapillaroscopy images, from healthy subjects and systemic lupus erythematosus patients, in two different sessions. The results show an increase from moderate (ICC=0.691) and good (ICC=0.753) agreement to good (ICC=0.750) and good (ICC=0.801) agreement for intra- and inter-observer reliability after automatic correction of EBL. This clearly shows the potential of this method to improve the reliability and repeatability of the assessment, which motivates further development of an automatic tool for the EBL method. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Evaluating unsupervised methods to size and classify suspended particles using digital in-line holography

    Science.gov (United States)

    Davies, Emlyn J.; Buscombe, Daniel D.; Graham, George W.; Nimmo-Smith, W. Alex M.

    2015-01-01

    Substantial information can be gained from digital in-line holography of marine particles, eliminating depth-of-field and focusing errors associated with standard lens-based imaging methods. However, for the technique to reach its full potential in oceanographic research, fully unsupervised (automated) methods are required for focusing, segmentation, sizing and classification of particles. These computational challenges are the subject of this paper, in which we draw upon data collected using a variety of holographic systems developed at Plymouth University, UK, from a significant range of particle types, sizes and shapes. A new method for noise reduction in reconstructed planes is found to be successful in aiding particle segmentation and sizing. The performance of an automated routine for deriving particle characteristics (and subsequent size distributions) is evaluated against equivalent size metrics obtained by a trained operative measuring grain axes on screen. The unsupervised method is found to be reliable, despite some errors resulting from over-segmentation of particles. A simple unsupervised particle classification system is developed, and is capable of successfully differentiating sand grains, bubbles and diatoms from within the surf-zone. Avoiding miscounting bubbles and biological particles as sand grains enables more accurate estimates of sand concentrations, and is especially important in deployments of particle monitoring instrumentation in aerated water. Perhaps the greatest potential for further development in the computational aspects of particle holography is in the area of unsupervised particle classification. The simple method proposed here provides a foundation upon which further development could lead to reliable identification of more complex particle populations, such as those containing phytoplankton, zooplankton, flocculated cohesive sediments and oil droplets.

  12. Evaluation of different continuous cell lines in the isolation of mumps virus by the shell vial method from clinical samples

    Science.gov (United States)

    Reina, J; Ballesteros, F; Mari, M; Munar, M

    2001-01-01

    Aims—To prospectively compare the efficacy of the Vero, LLC-MK2, MDCK, Hep-2, and MRC-5 cell lines in the isolation of the mumps virus from clinical samples by means of the shell vial method. Methods—During an epidemic outbreak of parotiditis, 48 clinical samples (saliva swabs and CSF) were studied. Two vials of the Vero, LLC-MK2, MDCK, MRC-5, and Hep-2 cell lines were inoculated with 0.2 ml of the samples by the shell vial assay. The vials were incubated at 36°C for two and five days. The vials were then fixed with acetone at -20°C for 10 minutes and stained with a monoclonal antibody against mumps virus by means of an indirect immunofluorescence assay. Results—The mumps virus was isolated from 36 samples. The Vero and LLC-MK2 cell lines showed a 100% isolation capacity, MDCK 77.7%, MRC-5 44.4%, and Hep-2 22.2%. The Vero and LLC-MK2 lines differed significantly from the other cell lines; the percentages of vials with more than 5 infectious foci were 94.4% for Vero, 97.2% for LLC-MK2, 5.5% for MDCK, 5.5% for Hep-2, and 0% for MRC-5. Conclusions—The Vero and LLC-MK2 cell lines are equally efficient at two and five days of incubation for the isolation of the mumps virus from clinical samples, and the use of the shell vial method considerably shortens the time to aetiological diagnosis with higher specificity. Key Words: mumps virus • Vero cell line • LLC-MK2 cell line • MDCK cell line • Hep-2 cell line • MRC-5 cell line • isolation • shell vial PMID:11729211

  13. A study on the advanced methods for on-line signal processing by using artificial intelligence in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Wan Joo

    1993-02-01

    In order to assist the operators during transient states of a nuclear power plant, automation of signal processing is needed. The objective of this study is to process the signals from a nuclear power plant for the purpose of advising the operator. To meet this objective, two kinds of on-line signal processing systems based on AI techniques are developed for nuclear power plant application with on-line signals. First, an artificial neural network for signal prediction is developed to support adequate countermeasures during transient states. The steam generator water level is adopted as the example, and the outputs of a simulation program for the dynamics of the steam generator, combined with noise, are used as the training patterns. For the training of the artificial neural network, a modified backpropagation algorithm is proposed for escaping quickly from local minima. The modified algorithm differs from the ordinary backpropagation algorithm in that the training rate coefficient is repeatedly adjusted randomly and retained when the training improves. This has the effect of searching for an adequate magnitude of the training rate coefficient. The comparison shows that the modified algorithm enables the neural network to be trained more quickly. The simulation results show that the outputs of the artificial neural network are not sensitive to noise. Using the artificial neural networks proposed in this thesis, the operators can predict the next state of the plant and take actions to maintain its stability. Second, a multi-sensor integration system has been developed for the identification of transient states. The developed system is divided into two parts: pre-processors and a fusion part. An artificial neural network is adopted in the fusion part to include the knowledge about the identification and to make a decision on the transient state. The developed pre-processors play a role of classifying the trend types of
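
    A minimal sketch of the "adjust the training rate randomly and keep it when training improves" idea is given below for a one-hidden-layer network; the architecture, acceptance rule and toy data are our assumptions, not the thesis' implementation.

```python
import numpy as np

def train_with_random_rate(X, y, hidden=8, epochs=2000, seed=0):
    """Backpropagation for a one-hidden-layer network in which the learning
    rate is randomly perturbed each epoch and the step (and rate) is kept
    only when the loss improves."""
    rng = np.random.default_rng(seed)
    W1 = 0.5 * rng.standard_normal((X.shape[1], hidden))
    W2 = 0.5 * rng.standard_normal((hidden, 1))
    lr, best_loss = 0.05, np.inf
    for _ in range(epochs):
        trial_lr = lr * rng.uniform(0.5, 2.0)          # random adjustment
        h = np.tanh(X @ W1)
        err = h @ W2 - y                               # y has shape (N, 1)
        gW2 = h.T @ err / len(X)                       # gradients of 0.5*MSE
        gW1 = X.T @ ((err @ W2.T) * (1.0 - h**2)) / len(X)
        W1_try, W2_try = W1 - trial_lr * gW1, W2 - trial_lr * gW2
        new_loss = float(np.mean((np.tanh(X @ W1_try) @ W2_try - y) ** 2))
        if new_loss < best_loss:                       # improved: accept
            W1, W2, best_loss, lr = W1_try, W2_try, new_loss, trial_lr
    return W1, W2

# toy usage: learn a smooth function of three inputs
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))
W1, W2 = train_with_random_rate(X, y)
```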

  14. Exploiting a Simple Method for the Determination of Manganese in Polyethylene Lined Tubing for Petroleum and Natural Gas Industries

    Directory of Open Access Journals (Sweden)

    Shao Xiaodong

    2018-01-01

    Full Text Available Polyethylene lined tubing is key to enabling the industry to meet some of the energy security challenges that nations face today. It is well known that manganese is an important element in polyethylene lined tubing. In this paper, a simple spectrophotometric method is described for the determination of manganese in polyethylene lined tubing. The method is based on the oxidation-reduction reaction between ammonium persulfate and manganese(II), producing manganese(VII), in the presence of silver nitrate as a catalyst. The characteristic wavelength of maximum absorption of manganese(VII) was located at 530 nm. Under the optimum reaction conditions the absorption value was proportional to the concentration of manganese in the range of 0.2%-1.9% (R2 = 0.9997), and the relative standard deviation was less than 3.0% (n=5). The proposed method was applied successfully to determine manganese in real samples of polyethylene lined tubing.

  15. Reliability of a new method for measuring coronal trunk imbalance, the axis-line-angle technique.

    Science.gov (United States)

    Zhang, Rui-Fang; Liu, Kun; Wang, Xue; Liu, Qian; He, Jia-Wei; Wang, Xiang-Yang; Yan, Zhi-Han

    2015-12-01

    Accurate determination of the extent of trunk imbalance in the coronal plane plays a key role in the evaluation of patients with trunk imbalance, such as patients with adolescent idiopathic scoliosis. An established, widely used practice for evaluating trunk imbalance is to drop a plumb line from the C7 vertebra to a key reference axis, the central sacral vertical line (CSVL), in full-spine standing anteroposterior radiographs, and to measure the distance between them, the C7-CSVL. However, measuring the CSVL is subject to intraobserver differences, is error-prone, and is of poor reliability. Therefore, a different way of measuring trunk imbalance is needed. This study aimed to describe a new method for measuring coronal trunk imbalance, the axis-line-angle technique (ALAT), which measures the angle at the intersection between the C7 plumb line and an axis line drawn from the vertebral centroid of C7 to the middle of the superior border of the symphysis pubis, and to compare the reliability of the ALAT with that of the C7-CSVL. A prospective study at a university hospital was used. The patient sample consisted of 69 consecutively enrolled male and female patients, aged 10-18 years, who had trunk imbalance defined as a C7-CSVL longer than 20 mm on computed full-spine standing anteroposterior radiographs. Using a picture archiving and communication system, three radiologists independently evaluated trunk imbalance on the 69 computed radiographs by measuring the C7-CSVL and by measuring the angle determined by the ALAT. Data were analyzed to determine the correlations between the two measures of trunk imbalance, and to determine the intraobserver and interobserver reliabilities of each of them. Overall, the measurements by the C7-CSVL and the ALAT were significantly moderately correlated

  16. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process, in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the ability of the matching function to distinguish between good and bad models is recommended before launching exhaustive computations. Different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets; in the latter case the model is insufficiently constrained to accurately characterize the real system. Robustness of the parameter set is expressed by the extent to which small perturbations of the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate quantities such as the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems in the determination of the global optimum. Techniques such as Newton's method, the Simplex method and the least-squares linear Taylor differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one, because they follow the local gradients towards the minimum, even if some methods reset the search direction when the search is likely to be stuck in what is presumably a local minimum. Deterministic methods based on
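
    One common, simple mitigation of the local-minimum problem described above is to restart a local optimizer from many random starting points and keep the best result; the sketch below does this with SciPy's Nelder-Mead simplex on an illustrative multimodal matching function.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_simplex(error_fn, bounds, n_starts=20, seed=0):
    """Run Nelder-Mead from random points inside `bounds` and keep the
    best result, reducing the risk of accepting a local minimum."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(error_fn, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

# toy matching function with four equivalent global minima (Himmelblau)
himmelblau = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
print(multistart_simplex(himmelblau, bounds=[(-5, 5), (-5, 5)]).x)
```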

  17. Development and application of an on-line tritium production rate measuring method

    International Nuclear Information System (INIS)

    Yamaguchi, Seiya

    1989-06-01

    A highly sensitive on-line method for measuring the tritium production rate (TPR) of 6Li was developed using the response difference of 6Li- and 7Li-glass scintillators in a mixed neutron-gamma radiation field. A fitting method for subtracting the pulse height spectrum of the 7Li-glass from that of the 6Li-glass was introduced. The contribution of competing reactions such as 6Li(n,n'd)4He was estimated by kinematical analyses. The absolute value of the 6Li content was determined by chemical analysis. The thermal flux perturbation due to 6Li-glass of various thicknesses and 6Li contents was evaluated by measurement in a thermal neutron field and by calculation with the modified Skyrme theory. A Monte Carlo calculation of the self-shielding effect was also made, and the dependence of the self-shielding on neutron energy was examined with this Monte Carlo code. The edge effect, i.e., distortion of the pulse height spectrum due to partial energy deposition of the alpha and/or the triton, was investigated by measurement in a thermal neutron field and by a Monte Carlo simulation that was based on the scintillation mechanism and considered Bragg absorption and the ratio of the contributions to luminescence by the alpha and the triton. The dependence of the edge effect on neutron energy was examined with this Monte Carlo code. The method was applied to the measurement of TPR distributions in simulated fusion blanket assemblies bombarded by D-T neutrons. Absolute values of the TPR were obtained with an experimental error of 3-6%. The measured results were compared with those of conventional β-counting methods and good agreement was obtained. An optical fiber system using miniature lithium-glass scintillators was fabricated for the purpose of miniaturizing the detector and adapting it to strong electromagnetic fields. The applicability of this system to a D-T neutron field was demonstrated. (author)

  18. Aircraft engine sensor fault diagnostics using an on-line OBEM update method.

    Directory of Open Access Journals (Sweden)

    Xiaofeng Liu

    Full Text Available This paper proposes a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) in order to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system in which a Hybrid Kalman Filter (HKF) is incorporated. A large health condition mismatch between the engine and the OBEM, generated by rapid in-flight engine degradation, can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM on-line when rapid degradation occurs, but the FDI system loses estimation accuracy if the estimation and the update run simultaneously. To solve this problem, the health reference baseline of a nonlinear OBEM is updated using the proposed channel controller method. Simulations based on a turbojet engine Linear Parameter-Varying (LPV) model demonstrate the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller ensures that the update process finishes without interference from a single sensor fault.

  19. Superconducting coil manufacturing method for low current dc beam line magnets

    International Nuclear Information System (INIS)

    Satti, J.A.

    1977-01-01

    A method of manufacturing superconducting multipole coils for 40 to 50 kG dc beam line magnets with low current is described. Small coils were built and tested successfully to short sample characteristics. The coils did not train after the first cooldown. The coils are porous and well cooled to cope with mechanical instability and energy deposited in the coil from the beam particles. The coils are wound with insulated strand cable. The cable is shaped rectangularly for winding simplicity and good tolerances. After the coil is wound, the insulated strands are electrically connected in series. This reduces the operating current and, most important, improves the coil quench propagation due to heat conduction of one strand adjacent to the other. A well distributed quench allows the magnet energy to distribute more uniformly to the copper in the superconductor wire, giving self-protected coils. A one-meter long, 43 kG, 6-inch bore tube superconducting dipole is now being fabricated. The porous coil design and coil winding methods are discussed

  20. Fluctuating dynamics of nematic liquid crystals using the stochastic method of lines

    Science.gov (United States)

    Bhattacharjee, A. K.; Menon, Gautam I.; Adhikari, R.

    2010-07-01

    We construct Langevin equations describing the fluctuations of the tensor order parameter Qαβ in nematic liquid crystals by adding noise terms to time-dependent variational equations that follow from the Ginzburg-Landau-de Gennes free energy. The noise is required to preserve the symmetry and tracelessness of the tensor order parameter and must satisfy a fluctuation-dissipation relation at thermal equilibrium. We construct a noise with these properties in a basis of symmetric traceless matrices and show that the Langevin equations can be solved numerically in this basis using a stochastic version of the method of lines. The numerical method is validated by comparing equilibrium probability distributions, structure factors, and dynamic correlations obtained from these numerical solutions with analytic predictions. We demonstrate excellent agreement between numerics and theory. This methodology can be applied to the study of phenomena where fluctuations in both the magnitude and direction of nematic order are important, as for instance, in the nematic swarms which produce enhanced opalescence near the isotropic-nematic transition or the problem of nucleation of the nematic from the isotropic phase.
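
    The essence of the stochastic method of lines, discretizing space and integrating the resulting system of Langevin equations in time, can be shown on a much simpler scalar model. The sketch below evolves a one-dimensional model-A (relaxational Ginzburg-Landau) field with Euler-Maruyama and a noise amplitude fixed by a fluctuation-dissipation relation; it is not the tensorial Q-tensor code of the paper, and all parameter values are illustrative.

```python
import numpy as np

def stochastic_method_of_lines(n=128, L=64.0, dt=0.01, steps=5000,
                               a=-1.0, b=1.0, kappa=1.0, kT=0.1, seed=1):
    """Scalar model-A dynamics  d(phi) = -(a phi + b phi^3 - kappa lap phi) dt + noise,
    discretized on a periodic grid (method of lines) and integrated with
    Euler-Maruyama; the noise variance 2 kT dt / dx enforces the
    fluctuation-dissipation relation on the lattice."""
    rng = np.random.default_rng(seed)
    dx = L / n
    phi = 0.01 * rng.standard_normal(n)
    for _ in range(steps):
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        drift = -(a * phi + b * phi**3 - kappa * lap)
        noise = np.sqrt(2.0 * kT * dt / dx) * rng.standard_normal(n)
        phi = phi + drift * dt + noise
    return phi

print(stochastic_method_of_lines().std())
```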

  1. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, in which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments carried out on these systems report low precision and recall, and in some systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that addresses these problems.

  2. Hybridization of Sensing Methods of the Search Domain and Adaptive Weighted Sum in the Pareto Approximation Problem

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    Full Text Available We consider a relatively new and rapidly developing class of methods for solving multi-objective optimization problems, based on a preliminarily built finite-dimensional approximation of the solution set and, thereby, of the Pareto front of the problem. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed by Ryu, Kim and Wan (J.-H. Ryu, S. Kim, H. Wan), is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses a quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradient and Hessian matrix of the objective functions. To build the (quadratic) meta objective functions, this work uses methods of the theory of experimental design, which involve calculating the values of these functions at the nodes of a grid covering the trust region (a sensing method of the search domain). Two groups of sensing methods are considered: hypercube-based and hypersphere-based methods. For each of these groups, a number of test multi-objective optimization problems have been used to study the efficiency of the following grids: the "Latin hypercube"; a grid that is uniformly random in each dimension; and a grid based on LPτ sequences.

  3. Coherent search of continuous gravitational wave signals: extension of the 5-vectors method to a network of detectors

    International Nuclear Information System (INIS)

    Astone, P; Colla, A; Frasca, S; Palomba, C; D'Antonio, S

    2012-01-01

    We describe the extension to multiple datasets of a coherent method for the search of continuous gravitational wave signals, based on the computation of 5-vectors. In particular, we show how to coherently combine different datasets belonging to the same detector or to different detectors. In the latter case the coherent combination is the way to have the maximum increase in signal-to-noise ratio. If the datasets belong to the same detector the advantage comes mainly from the properties of a quantity called coherence which is helpful (in both cases, in fact) in rejecting false candidates. The method has been tested searching for simulated signals injected in Gaussian noise and the results of the simulations are discussed.

  4. A Simple Time Domain Collocation Method to Precisely Search for the Periodic Orbits of Satellite Relative Motion

    Directory of Open Access Journals (Sweden)

    Xiaokui Yue

    2014-01-01

    Full Text Available A numerical approach for obtaining periodic orbits of satellite relative motion is proposed, based on using the time domain collocation (TDC) method to search for the periodic solutions of an exact J2 nonlinear relative model. The initial conditions for periodic relative orbits of the Clohessy-Wiltshire (C-W) equations or Tschauner-Hempel (T-H) equations can be refined with this approach to generate nearly bounded orbits. With these orbits, a method based on the least-squares principle is then proposed to generate the projected closed orbit (PCO), which serves as a reference for relative motion control. Numerical simulations reveal that the presented TDC searching scheme is effective and simple, and that the projected closed orbit saves a substantial amount of fuel.

  5. Methods and pitfalls in searching drug safety databases utilising the Medical Dictionary for Regulatory Activities (MedDRA).

    Science.gov (United States)

    Brown, Elliot G

    2003-01-01

    The Medical Dictionary for Regulatory Activities (MedDRA) is a unified standard terminology for recording and reporting adverse drug event data. Its introduction is widely seen as a significant improvement on the previous situation, where a multitude of terminologies of widely varying scope and quality were in use. However, there are some complexities that may cause difficulties, and these will form the focus for this paper. Two methods of searching MedDRA-coded databases are described: searching based on term selection from all of MedDRA and searching based on terms in the safety database. There are several potential traps for the unwary in safety searches. There may be multiple locations of relevant terms within a system organ class (SOC) and lack of recognition of appropriate group terms; the user may think that group terms are more inclusive than is the case. MedDRA may distribute terms relevant to one medical condition across several primary SOCs. If the database supports the MedDRA model, it is possible to perform multiaxial searching: while this may help find terms that might have been missed, it is still necessary to consider the entire contents of the SOCs to find all relevant terms and there are many instances of incomplete secondary linkages. It is important to adjust for multiaxiality if data are presented using primary and secondary locations. Other sources for errors in searching are non-intuitive placement and the selection of terms as preferred terms (PTs) that may not be widely recognised. Some MedDRA rules could also result in errors in data retrieval if the individual is unaware of these: in particular, the lack of multiaxial linkages for the Investigations SOC, Social circumstances SOC and Surgical and medical procedures SOC and the requirement that a PT may only be present under one High Level Term (HLT) and one High Level Group Term (HLGT) within any single SOC. Special Search Categories (collections of PTs assembled from various SOCs by

  6. Parallel metaheuristics in computational biology: an asynchronous cooperative enhanced scatter search method

    OpenAIRE

    Penas, David R.; González, Patricia; Egea, José A.; Banga, Julio R.; Doallo, Ramón

    2015-01-01

    Metaheuristics are gaining increased attention as efficient solvers for hard global optimization problems arising in bioinformatics and computational systems biology. Scatter Search (SS) is one of the recent outstanding algorithms in that class. However, its application to very hard problems, like those considering parameter estimation in dynamic models of systems biology, still results in excessive computation times. In order to reduce the computational cost of the SS and improve its success...

  7. NEW METHOD FOR REACHING CONSUMERS OVER THE INTERNET: "SEARCH ENGINE MARKETING”

    OpenAIRE

    Ergezer, Çağrı

    2018-01-01

    With its increasing use, the Internet has become a platform that reaches millions of users in an instant; it has also become a place where people spend much of their day, so that ordinary Internet users have taken on the additional roles of consumers and potential customers. Search engines, notable for their usage rates and for allowing sought-after content to be reached easily, have earned the distinction of being the preferred reference point for users in the Internet sea, where millions of content...

  8. A Comparison of Local Search Methods for the Multicriteria Police Districting Problem on Graph

    Directory of Open Access Journals (Sweden)

    F. Liberatore

    2016-01-01

    Full Text Available In the current economic climate, law enforcement agencies are facing resource shortages. The effective and efficient use of scarce resources is therefore of the utmost importance for providing a high-standard public safety service. Optimization models specifically tailored to the needs of police agencies can help improve their use. The Multicriteria Police Districting Problem (MC-PDP) on a graph concerns the definition of sound patrolling sectors in a police district. The objective of this problem is to partition a graph into convex and continuous subsets, while ensuring efficiency and workload balance among the subsets. The model was originally formulated in collaboration with the Spanish National Police Corps. For its solution we propose three local search algorithms: a Simple Hill Climbing, a Steepest Descent Hill Climbing, and a Tabu Search. To improve their diversification capabilities, all the algorithms implement a multistart procedure initialized by randomized greedy solutions. The algorithms are empirically tested in a case study on the Central District of Madrid. Our experiments show that the solutions identified by the novel Tabu Search outperform those of the other algorithms. Finally, research guidelines for future developments of the MC-PDP are given.
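
    For orientation, a skeletal tabu search of the kind named above looks as follows; the neighbourhood, cost function and tabu tenure are generic placeholders, and none of the MC-PDP constraints (convexity, continuity of the subsets, workload balance) are modelled here.

```python
def tabu_search(initial, neighbors, cost, iters=500, tenure=15):
    """Move to the best non-tabu neighbour each iteration, keep a short-term
    memory of recently visited solutions, and remember the best solution seen.
    `neighbors(s)` must return solutions comparable with `==`."""
    current = best = initial
    best_cost = cost(initial)
    tabu = [initial]
    for _ in range(iters):
        candidates = [(cost(s), s) for s in neighbors(current) if s not in tabu]
        if not candidates:
            break                              # neighbourhood exhausted
        current_cost, current = min(candidates, key=lambda t: t[0])
        tabu.append(current)                   # recency-based short-term memory
        if len(tabu) > tenure:
            tabu.pop(0)
        if current_cost < best_cost:           # keep track of the incumbent
            best, best_cost = current, current_cost
    return best, best_cost

# toy usage: walk along the integers towards the minimum of (s - 7)^2
print(tabu_search(0, lambda s: [s - 1, s + 1], lambda s: (s - 7) ** 2))
```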

  9. MARVELS-1: A FACE-ON DOUBLE-LINED BINARY STAR MASQUERADING AS A RESONANT PLANETARY SYSTEM AND CONSIDERATION OF RARE FALSE POSITIVES IN RADIAL VELOCITY PLANET SEARCHES

    International Nuclear Information System (INIS)

    Wright, Jason T.; Roy, Arpita; Mahadevan, Suvrath; Wang, Sharon X.; Fleming, Scott W.; Ford, Eric B.; Payne, Matt; Lee, Brian L.; Ge, Jian; Wang, Ji; Crepp, Justin R.; Gaudi, B. Scott; Eastman, Jason; Pepper, Joshua; Cargile, Phillip; Stassun, Keivan G.; Ghezzi, Luan; González-Hernández, Jonay I.; Wisniewski, John; Dutra-Ferreira, Leticia

    2013-01-01

    We have analyzed new and previously published radial velocity (RV) observations of MARVELS-1, known to have an ostensibly substellar companion in a ∼6 day orbit. We find significant (∼100 m s⁻¹) residuals to the best-fit model for the companion, and these residuals are naïvely consistent with an interior giant planet with a period P = 1.965 days in a nearly perfect 3:1 period commensurability (|P_b/P_c − 3| < 10⁻⁴). We have performed several tests for the reality of such a companion, including a dynamical analysis, a search for photometric variability, and a hunt for contaminating stellar spectra. We find many reasons to be critical of a planetary interpretation, including the fact that most of the three-body dynamical solutions are unstable. We find no evidence for transits, and no evidence of stellar photometric variability. We have discovered two apparent companions to MARVELS-1 with adaptive optics imaging at Keck; both are M dwarfs, one is likely bound, and the other is likely a foreground object. We explore false-alarm scenarios inspired by various curiosities in the data. Ultimately, a line profile and bisector analysis lead us to conclude that the ∼100 m s⁻¹ residuals are an artifact of spectral contamination from a stellar companion contributing ∼15%-30% of the optical light in the system. We conclude that the origin of this contamination is the previously detected RV companion to MARVELS-1, which is not, as previously reported, a brown dwarf, but in fact a G dwarf in a face-on orbit.

  10. Improved non-invasive method for aerosol particle charge measurement employing in-line digital holography

    Science.gov (United States)

    Tripathi, Anjan Kumar

    Electrically charged particles are found in a wide range of applications, from electrostatic powder coating, mineral processing, and powder handling to rain-producing cloud formation in atmospheric turbulent flows. In turbulent flows, particle dynamics is influenced by the electric force arising from particle charge generation. Quantifying particle charges in such systems will help in better predicting and controlling particle clustering, relative motion, collision, and growth. However, there is a lack of non-invasive techniques to measure particle charges. Recently, a non-invasive method for particle charge measurement using the in-line Digital Holographic Particle Tracking Velocimetry (DHPTV) technique was developed in our lab, where the charged particles to be measured were introduced into a uniform electric field, and their movement towards the oppositely charged electrode was deemed proportional to the amount of charge on the particles (Fan Yang, 2014 [1]). However, the inherent speckle noise associated with reconstructed images was not adequately removed, and therefore the particle tracking data were contaminated. Furthermore, the particle charge calculation based on particle deflection velocity neglected the particle drag force and the rebound of highly charged particles from the electrodes. We improved upon the existing particle charge measurement method by: 1) hologram post-processing, 2) taking the drag force into account in the charge calculation, and 3) considering the rebound effect. The improved method was first fine-tuned through a calibration experiment. The complete method was then applied to two different experiments, namely conduction charging and an enclosed fan-driven turbulence chamber, to measure particle charges. In all three experiments conducted, the particle charge was found to follow a non-central t location-scale family of distributions. It was also noted that the charge distribution was insensitive to the change in voltage applied between the electrodes. The range of voltage
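
    A minimal sketch of the charge estimate underlying such measurements is given below, assuming the electric force qE on a particle is balanced by Stokes drag 3πμdv at the observed drift velocity; the numbers and the neglect of rebound effects are illustrative assumptions, not the thesis' calibrated procedure.

        import math

        def charge_from_drift(v_drift, diameter, E_field, mu_air=1.81e-5):
            """Estimate particle charge q (Coulombs) from the steady drift velocity
            toward an electrode, assuming the electric force q*E balances Stokes drag
            F_d = 3*pi*mu*d*v (valid at small particle Reynolds numbers)."""
            return 3.0 * math.pi * mu_air * diameter * v_drift / E_field

        # Illustrative numbers: a 50 um particle drifting at 2 cm/s in a 100 kV/m field.
        q = charge_from_drift(v_drift=0.02, diameter=50e-6, E_field=1e5)
        print(f"estimated charge: {q:.2e} C  ({q / 1.602e-19:.0f} elementary charges)")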

  11. Strapdown Airborne Gravimetry Quality Assessment Method Based on Single Survey Line Data: A Study by SGA-WZ02 Gravimeter

    Science.gov (United States)

    Wu, Meiping; Cao, Juliang; Zhang, Kaidong; Cai, Shaokun; Yu, Ruihang

    2018-01-01

    Quality assessment is an important part of strapdown airborne gravimetry. The root mean square error (RMSE) evaluation method is a classical way to evaluate gravimetry quality, but classical evaluation methods require extra flights or reference data as a precondition. This paper therefore develops a method that largely removes the preconditions of classical quality assessment methods and can be applied to a single survey line. Based on theoretical analysis, the method chooses the stability of the two horizontal attitude angles, the horizontal specific force, and the vertical specific force as the determinants of the quality assessment. Actual data collected by the SGA-WZ02 from 13 flights (21 lines) in a survey were used to build the model and elaborate the method. To substantiate the performance of the quality assessment model, the model is applied to extra repeat-line flights from two surveys. Compared with the internal RMSE, the standard deviations of the assessment residuals are 0.23 mGal and 0.16 mGal in the two surveys, which shows that the quality assessment method is reliable and stricter. No extra flights need to be specially arranged for the assessment. The method, developed with the SGA-WZ02, is a feasible approach to assess gravimetry quality using single-line data and is also suitable for other strapdown gravimeters. PMID:29373535

  12. A simple method for controlling the line width of SASE X-ray FELs

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2010-04-01

    This paper describes a novel single-bunch self-seeding scheme for generating highly monochromatic X-rays from a baseline XFEL undulator. A self-seeded XFEL consists of two undulators with an X-ray monochromator located between them. Previous self-seeding schemes made use of a four-crystal fixed-exit monochromator in Bragg geometry. In such monochromator the X-ray pulse acquires a cm-long path delay, which must be compensated. For a single-bunch self-seeding scheme this requires a long electron beam bypass, implying modifications of the baseline undulator configuration. To avoid this problem, a double bunch self-seeding scheme based on a special photoinjector setup was recently proposed. At variance, here we propose a new time-domain method of monochromatization exploiting a single crystal in the transmission direction, thus avoiding the problem of extra-path delay for the X-ray pulse. The method can be realized using a temporal windowing technique, requiring a magnetic delay for the electron bunch only. When the incident X-ray beam satisfies the Bragg diffraction condition, multiple scattering takes place and the transmittance spectrum in the crystal exhibits an absorption resonance with a narrow linewidth. Then, the temporal waveform of the transmitted radiation pulse is characterized by a long monochromatic wake. The radiation power within this wake is much larger than the shot noise power. At the entrance of the second undulator, the monochromatic wake of the radiation pulse is combined with the delayed electron bunch, and amplified up to saturation level. The proposed setup is extremely simple and composed of as few as two simple elements. These are the crystal and the short magnetic chicane, which accomplishes three tasks by itself. It creates an offset for crystal installation, it removes the electron micro-bunching produced in the first undulator, and it acts as a delay line for temporal windowing. Using a single crystal installed within a short magnetic

  13. A Modified Generalized Laguerre-Gauss Collocation Method for Fractional Neutral Functional-Differential Equations on the Half-Line

    Directory of Open Access Journals (Sweden)

    Ali H. Bhrawy

    2014-01-01

    Full Text Available The modified generalized Laguerre-Gauss collocation (MGLC) method is applied to obtain an approximate solution of fractional neutral functional-differential equations with proportional delays on the half-line. The proposed technique is based on modified generalized Laguerre polynomials and Gauss quadrature integration of such polynomials. The main advantage of the present method is to reduce the solution of fractional neutral functional-differential equations into a system of algebraic equations. Reasonable numerical results are achieved by choosing few modified generalized Laguerre-Gauss collocation points. Numerical results demonstrate the accuracy, efficiency, and versatility of the proposed method on the half-line.
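
    The building block of such collocation schemes is Gauss quadrature associated with Laguerre polynomials on the half-line. The sketch below, using NumPy's standard Gauss-Laguerre rule (not the authors' modified generalized Laguerre basis), shows how a handful of nodes and weights integrate a smooth function on [0, ∞).

        import numpy as np
        from numpy.polynomial.laguerre import laggauss

        # Gauss-Laguerre rule: integral_0^inf e^{-x} f(x) dx  ~=  sum_i w_i f(x_i).
        # To integrate a plain g(x) on [0, inf), write g(x) = e^{-x} * (e^{x} g(x)).
        nodes, weights = laggauss(20)                  # 20 quadrature/collocation points

        g = lambda x: np.exp(-2.0 * x)                 # integral_0^inf e^{-2x} dx = 1/2
        approx = np.sum(weights * np.exp(nodes) * g(nodes))
        print(approx)                                  # ~0.5 with only a handful of points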

  14. Generalized transmission line method to study the far-zone radiation of antennas under a multilayer structure

    CERN Document Server

    Wu, Xuan Hui

    2008-01-01

    This book gives a step-by-step presentation of a generalized transmission line method to study the far-zone radiation of antennas under a multilayer structure. Normally, a radiation problem requires a full wave analysis which may be time consuming. The beauty of the generalized transmission line method is that it transforms the radiation problem for a specific type of structure, say the multilayer structure excited by an antenna, into a circuit problem that can be efficiently analyzed. Using the Reciprocity Theorem and far-field approximation, the method computes the far-zone radiation due to

  15. Assessing the search for information on three Rs methods, and their subsequent implementation: a national survey among scientists in the Netherlands

    NARCIS (Netherlands)

    Luijk, J. van; Cuijpers, Y.M.; Vaart, L. van der; Leenaars, M; Ritskes-Hoitinga, M.

    2011-01-01

    A local survey conducted among scientists into the current practice of searching for information on Three Rs (i.e. Replacement, Reduction and Refinement) methods has highlighted the gap between the statutory requirement to apply Three Rs methods and the lack of criteria to search for them. To verify

  16. Assessing the Search for Information on Three Rs Methods, and their Subsequent Implementation: A National Survey among Scientists in The Netherlands.

    NARCIS (Netherlands)

    Luijk, J. van; Cuijpers, Y.M.; Vaart, L. van der; Leenaars, M.; Ritskes-Hoitinga, M.

    2011-01-01

    A local survey conducted among scientists into the current practice of searching for information on Three Rs (i.e. Replacement, Reduction and Refinement) methods has highlighted the gap between the statutory requirement to apply Three Rs methods and the lack of criteria to search for them. To verify

  17. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
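
    A minimal sketch of the standard Line Sampling estimator in standard normal space is given below for an analytic two-dimensional limit state. Here the important direction is known in closed form rather than estimated from T-H code runs, and the one-dimensional problem along each line is solved with a root finder; the limit state, direction and sample size are all illustrative.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        rng = np.random.default_rng(0)

        # Toy limit state in 2-D standard normal space: failure when g(u) <= 0.
        beta_true = 3.0
        g = lambda u: beta_true - (u[0] + u[1]) / np.sqrt(2.0)   # exact Pf = Phi(-3)

        alpha = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit "important direction"

        n_lines = 50
        pf_lines = []
        for _ in range(n_lines):
            u = rng.standard_normal(2)
            u_perp = u - np.dot(u, alpha) * alpha     # project off the important direction
            # Solve the 1-D problem along the line u_perp + c*alpha for the root g = 0.
            c_star = brentq(lambda c: g(u_perp + c * alpha), -10.0, 10.0)
            pf_lines.append(norm.cdf(-c_star))        # conditional failure probability of this line

        pf = np.mean(pf_lines)
        print(f"LS estimate: {pf:.3e}   exact: {norm.cdf(-beta_true):.3e}")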

  18. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries

    Directory of Open Access Journals (Sweden)

    Han Bucong

    2012-11-01

    Full Text Available Abstract Background Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. Results We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%~95.01% inhibitors and 99.81%~99.90% non-inhibitors in 5-fold cross validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168 K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. Conclusions SVM showed comparable yield and reduced false hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.

  19. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    Science.gov (United States)

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%~ 95.01% inhibitors and 99.81%~ 99.90% non-inhibitors in 5-fold cross validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168 K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitor, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.
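
    The two records above describe the same study; the sketch below illustrates the general SVM virtual screening workflow with scikit-learn on synthetic feature vectors that stand in for molecular descriptors. The class sizes, kernel settings and "library" are invented for illustration and do not reproduce the authors' descriptors or training data.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for molecular descriptors: 300 "inhibitors", 3000 "non-inhibitors".
        X_inh = rng.normal(loc=0.8, scale=1.0, size=(300, 50))
        X_non = rng.normal(loc=0.0, scale=1.0, size=(3000, 50))
        X = np.vstack([X_inh, X_non])
        y = np.array([1] * 300 + [0] * 3000)

        # RBF-kernel SVM with class weighting to cope with the inhibitor/non-inhibitor imbalance.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, class_weight="balanced"))
        scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
        print("5-fold balanced accuracy:", scores.round(3))

        # "Virtual screening": predict on a new synthetic library and keep the predicted hits.
        library = rng.normal(size=(10000, 50))
        clf.fit(X, y)
        hits = np.flatnonzero(clf.predict(library) == 1)
        print(f"{hits.size} predicted hits out of {library.shape[0]} library compounds")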

  20. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. The image reconstruction for ECT is an inverse problem to find the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized at the high and low permittivity for image reconstruction. The parallel normalization model is commonly used during the normalization process, which assumes the distribution of materials in parallel. Thus, the normalized capacitance is a linear function of measured capacitance. A recently used model is a series normalization model which results in the normalized capacitance as a nonlinear function of measured capacitance. The newest presented model is based on electrical field centre lines (EFCL), and is a mixture of two normalization models. The multi-threshold method of this model is presented in this paper. The sensitivity matrices based on different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model
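
    A sketch of the two normalization models contrasted in the abstract is given below, assuming C_low and C_high are the capacitances calibrated with the pipe filled by the low- and high-permittivity materials; the EFCL-based mixture model and its multi-threshold variant are not reproduced here.

        def normalize_parallel(C_m, C_low, C_high):
            """Parallel model: normalized capacitance is a linear function of C_m."""
            return (C_m - C_low) / (C_high - C_low)

        def normalize_series(C_m, C_low, C_high):
            """Series model (one common form): normalization in terms of reciprocal
            capacitances, giving a nonlinear relation between C_m and the result."""
            return (1.0 / C_m - 1.0 / C_low) / (1.0 / C_high - 1.0 / C_low)

        # Example: one electrode-pair measurement between the low- and high-permittivity calibrations.
        C_low, C_high, C_m = 1.0, 3.0, 1.8          # arbitrary units
        print(normalize_parallel(C_m, C_low, C_high))   # 0.40
        print(normalize_series(C_m, C_low, C_high))     # ~0.67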

  1. Electrocution of Raptors on Power Lines: A Review of Necropsy Methods and Findings.

    Science.gov (United States)

    Kagan, R A

    2016-09-01

    Decades after the problem was first identified, power line electrocution continues to be a cause of avian mortality. Currently, several federal laws protect eagles and other migratory birds, meaning that utility companies may be liable for electrocution-related deaths. Veterinarians and veterinary pathologists called upon to diagnose and treat electrocuted birds should keep this in mind when conducting clinical and postmortem examinations. This review details necropsy findings and methods used to diagnose electrocution. A combination of gross, subgross, and radiographic examinations can aid in identification of subtle injury. Diagnosis is made based on the presence of skin and/or feather burns. Other necropsy findings may include skin lacerations, subcutaneous burns, bruising, limb avulsion, hemopericardium, and vascular rupture. At the US Fish and Wildlife Service's National Forensics Laboratory, from 2000 to 2015, 417 raptor deaths were determined to have been caused by electrocution. Bald eagles and golden eagles were the most commonly submitted species. In a retrospective review of 377 cases, for which whole bodies were submitted, 18% of the electrocuted birds had only a single, small (less than 3 cm in diameter) external burn. Small, isolated burns tended to occur on the undersides of the wings at and distal to the elbow and on the lower legs and feet. These areas should be most carefully examined in cases where electrocution injury is not immediately apparent. © The Author(s) 2016.

  2. First application of the spectral difference method for lifetime measurements of doppler attenuated line shapes

    Energy Technology Data Exchange (ETDEWEB)

    Duckwitz, Hannah [Institut fuer Kernphysik, Koeln Univ. (Germany); Petkov, Pavel [Bulgarian Academy of Sciences, Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria)

    2016-07-01

    In this new approach to lifetime measurements via Doppler attenuated line shapes, the spectra of a feeding transition f and a deexciting transition d of the level of interest are used to determine the lifetime without any lineshape analysis of the feeding transition (direct or indirect). Similarly to the DDC method, the decay function λ_d·n_d(t) of the deexciting transition is determined. The feeding of the level is included via the spectral difference of the two successive decays. Consequently, the determined lifetime is the real lifetime. After transforming both transitions into the same energy region, their spectral difference D(v_θ) = S_d(v_θ) − S_f(v_θ) = ∫₀^∞ (∂P_θ(t, v_θ)/∂t)·n_d(t) dt is solved for n_d(t). Dividing n_d(t) by the decay function λ_d·n_d(t) should yield a constant τ value for the level lifetime as a function of the time t. After the development and test of the procedure in 2015, it is now applied for the first time. Two level lifetimes are determined in ⁸⁶Sr, for the 2⁺₂ and the 2⁺₃ levels.

  3. An applied optimization based method for line planning to minimize travel time

    DEFF Research Database (Denmark)

    Bull, Simon Henry; Rezanova, Natalia Jurjevna; Lusby, Richard Martin

    The line planning problem in rail is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including...

  4. PROPOSAL OF METHOD FOR AN AUTOMATIC COMPLEMENTARITIES SEARCH BETWEEN COMPANIES' R&D

    OpenAIRE

    PAULO VINÍCIUS MARCONDES CORDEIRO; DARIO EDUARDO AMARAL DERGINT; KAZUO HATAKEYAMA

    2014-01-01

    The open innovation model is the best choice for firms that cannot afford R&D costs but intend to keep playing the innovation game. This model offers any firm the possibility of having companies spread worldwide, across all research fields, as R&D partners. However, the possible partnerships can be restricted to the manager's know-who. Patent documents can be a source of rich information about technical development and innovation from a huge number of firms. Search through all these da...

  5. Transient analysis of printed lines using finite-difference time-domain method

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Shahid [Thomas Jefferson National Accelerator Facility, 12050 Jefferson Avenue, Suite 704, Newport News, VA, 23606, USA

    2012-03-29

    Comprehensive studies of ultra-wideband pulses and electromagnetic coupling on printed coupled lines have been performed using full-wave 3D finite-difference time-domain analysis. Effects of unequal phase velocities of coupled modes, coupling between line traces, and the frequency dispersion on the waveform fidelity and crosstalk have been investigated in detail. To discriminate the contributions of different mechanisms into pulse evolution, single and coupled microstrip lines without (ϵr = 1) and with (ϵr > 1) dielectric substrates have been examined. To consistently compare the performance of the coupled lines with substrates of different permittivities and transients of different characteristic times, a generic metric similar to the electrical wavelength has been introduced. The features of pulse propagation on coupled lines with layered and pedestal substrates and on the irregular traces have been explored. Finally, physical interpretations of the simulation results are discussed in the paper.
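
    As a much-reduced illustration of time-domain modeling of pulses on lines, the sketch below solves the one-dimensional telegrapher's equations for a single lossless line with a leapfrog FDTD update; it is not the paper's full-wave 3-D analysis, and the line parameters and pulse shape are arbitrary assumptions.

        import numpy as np

        # Lossless line: dV/dt = -(1/C) dI/dx,  dI/dt = -(1/L) dV/dx.
        L, C = 2.5e-7, 1.0e-10            # per-unit-length inductance (H/m) and capacitance (F/m)
        v_p = 1.0 / np.sqrt(L * C)        # phase velocity (2e8 m/s here)
        nx, dx = 400, 1e-3                # 40 cm line, 1 mm cells
        dt = dx / v_p                     # Courant number = 1 in 1-D ("magic" time step)

        V = np.zeros(nx)
        I = np.zeros(nx - 1)              # currents live on the staggered half grid

        for n in range(350):
            # Gaussian pulse hard source at the left end.
            V[0] = np.exp(-((n * dt - 1.5e-10) / 4e-11) ** 2)
            I -= dt / (L * dx) * (V[1:] - V[:-1])
            V[1:-1] -= dt / (C * dx) * (I[1:] - I[:-1])

        # Expect a peak close to 1.0: the launched pulse travels undistorted at Courant = 1.
        print("peak voltage along the line:", round(float(V.max()), 3))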

  6. CALCULATION METHOD OF ELECTRIC POWER LINES MAGNETIC FIELD STRENGTH BASED ON CYLINDRICAL SPATIAL HARMONICS

    Directory of Open Access Journals (Sweden)

    A.V. Erisov

    2016-05-01

    Full Text Available Purpose. To simplify the calculation relations used to determine the magnetic field strength of electric power lines and to assess their environmental safety. Methodology. The magnetic field of transmission lines is described using spatial harmonic analysis in a cylindrical coordinate system. Results. For engineering calculations, the magnetic field of electric power lines is described with sufficient accuracy by its first spatial harmonic. Originality. The influence of transmission line tower design on the magnitude of the magnetic field and on the size of the land-alienation strips is determined in a substantially simplified way. Practical value. Environmentally sound design of electric power lines with respect to their magnetic field level.
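
    For orientation, the sketch below gives the elementary engineering estimate that such harmonic descriptions refine: superposition of the fields of long straight phase conductors, each contributing H = I/(2πr). The conductor layout and currents are illustrative, and the paper's cylindrical spatial-harmonic expansion itself is not reproduced.

        import numpy as np

        def field_strength(point, conductors):
            """Magnetic field strength H (A/m) at `point` (x, y) from long straight
            conductors, each given as (x, y, complex phasor current in A).
            Each conductor contributes H = I / (2*pi*r), perpendicular to the radius vector."""
            Hx, Hy = 0.0 + 0.0j, 0.0 + 0.0j
            px, py = point
            for cx, cy, I in conductors:
                dx, dy = px - cx, py - cy
                r2 = dx * dx + dy * dy
                Hx += -I * dy / (2.0 * np.pi * r2)
                Hy += I * dx / (2.0 * np.pi * r2)
            # Resultant magnitude from the two orthogonal phasor components.
            return np.sqrt(abs(Hx) ** 2 + abs(Hy) ** 2)

        # Illustrative 3-phase line: 300 A per phase, conductors 10 m above ground, 3 m apart.
        a = np.exp(2j * np.pi / 3.0)
        phases = [(-3.0, 10.0, 300.0), (0.0, 10.0, 300.0 * a), (3.0, 10.0, 300.0 * a**2)]
        print(f"H at 1.8 m above ground under the line: {field_strength((0.0, 1.8), phases):.2f} A/m")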

  7. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    Science.gov (United States)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  8. Multivariate methods and the search for single top-quark production in association with a $W$ boson in ATLAS

    CERN Document Server

    Kovesarki, Peter; Dingfelder, Jochen

    This thesis describes three machine learning algorithms that can be used for physics analyses. The first is a density estimator derived from the Green's function identity of the Laplace operator, capable of tagging data samples according to the signal purity. The latter task can also be performed with regression methods, and such an algorithm was implemented based on fast multi-dimensional polynomial regression. The accuracy was improved with a decision tree using smooth boundaries. Both methods apply rigorous checks against overtraining to make sure the results are drawn from statistically significant features. These two methods were applied in the search for single top-quark production with a $W$ boson. Their separation powers differ greatly in favour of the regression method, mainly because it can exploit the extra information available during training. The third method is an unsupervised learning algorithm that offers finding an optimal coordinate system for a sample in the sense of m...

  9. Search for a transport method for the calculation of the PWR control and safety clusters

    International Nuclear Information System (INIS)

    Bruna, G.B.; Van Frank, C.; Vergain, M.L.; Chauvin, J.P.; Palmiotti, G.; Nobile, M.

    1990-01-01

    Project studies of power reactors rely mainly on diffusion calculations, but transport calculations are often needed for assessing fine effects intimately linked to geometry and spectrum heterogeneities. Accurate transport computations are necessary, in particular, for shielded cross section generation and when homogenization and dishomogenization processes are involved. Transport codes generally offer the user a variety of computational options, related to different approximation levels. In every case, it is obviously desirable to be able to choose the degree of approximation that can reliably be accepted in any particular computational circumstance of the project. The search for such adapted procedures is to be made on the basis of critical experiments. In our studies, this task was made possible by the availability of suitable results from the CAMELEON critical experiment, carried out in the EOLE facility at CEA's Center of Cadarache. In this paper, we summarize some of the work in progress at FRAMATOME on the definition of an assembly-based transport calculation scheme to be used for PWR control and safety cluster computations. Two main items, devoted to the search for the optimum computational procedures, are presented here: (i) a parametrical study on computational options, made in an infinite-medium assembly geometry, and (ii) a series of comparisons between calculated and experimental values of pin power distribution

  10. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic global optimization algorithms which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve global optimization problems shows that applying a single such algorithm is not always effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite different algorithms, or identical algorithms with different values of their free parameters, so that the strength of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, describe the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
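
    The sketch below shows one plausible way to hybridize the two algorithms in the spirit described: a standard PSO swarm whose personal-best memory is periodically refreshed by a harmony-search-style improvisation. It is an illustrative hybrid on a benchmark sphere function, not the authors' PSO-HS; all parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def sphere(x):                       # benchmark objective to minimize
            return float(np.sum(x * x))

        dim, n_particles, iters = 10, 30, 200
        lo, hi = -5.0, 5.0
        w, c1, c2 = 0.7, 1.5, 1.5            # PSO inertia and acceleration coefficients
        hmcr, par, bw = 0.9, 0.3, 0.1        # HS memory-consideration rate, pitch-adjust rate, bandwidth

        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for t in range(iters):
            # --- PSO step ---
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([sphere(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()

            # --- HS step: improvise a new harmony from the personal-best memory ---
            harmony = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:
                    harmony[d] = pbest[rng.integers(n_particles), d]
                    if rng.random() < par:
                        harmony[d] += bw * rng.uniform(-1.0, 1.0)
                else:
                    harmony[d] = rng.uniform(lo, hi)
            worst = int(np.argmax(pbest_f))
            if sphere(harmony) < pbest_f[worst]:      # replace the worst personal best
                pbest[worst], pbest_f[worst] = harmony, sphere(harmony)
                gbest = pbest[np.argmin(pbest_f)].copy()

        print("best objective found:", min(pbest_f))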

  11. Performance for the hybrid method using stochastic and deterministic searching for shape optimization of electromagnetic devices

    International Nuclear Information System (INIS)

    Yokose, Yoshio; Noguchi, So; Yamashita, Hideo

    2002-01-01

    Stochastic and deterministic methods are used for the shape optimization of electromagnetic devices. Genetic algorithms (GAs) are used as a stochastic method in multivariable designs, while the deterministic approach uses the gradient method, which exploits the sensitivity of the objective function. These two techniques have benefits and faults. In this paper, the characteristics of these techniques are described, and a technique in which the two methods are used together is evaluated. The results of the comparison, obtained by applying each method to electromagnetic devices, are then described. (Author)

  12. Comparisons of peak-search and photopeak-integration methods in the computer analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Baedecker, P.A.

    1980-01-01

    Myriad methods have been devised for extracting quantitative information from gamma-ray spectra by means of a computer, and a critical evaluation of the relative merits of the various programs that have been written would represent a Herculean, if not an impossible, task. The results from the International Atomic Energy Agency (IAEA) intercomparison, which may represent the most straightforward approach to making such an evaluation, showed a wide range in the quality of the results - even among laboratories where similar methods were used. The most clear-cut way of differentiating between programs is by the method used to evaluate peak areas: by the iterative fitting of the spectral features to an often complex model, or by a simple summation procedure. Previous comparisons have shown that relatively simple algorithms can compete favorably with fitting procedures, although fitting holds the greatest promise for the detection and measurement of complex peaks. However, fitting algorithms, which are generally complex and time consuming, are often ruled out by practical limitations based on the type of computing equipment available, cost limitations, the number of spectra to be processed in a given time period, and the ultimate goal of the analysis. Comparisons of methods can be useful, however, in helping to illustrate the limitations of the various algorithms that have been devised. This paper presents a limited review of some of the more common peak-search and peak-integration methods, along with Peak-search procedures
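
    As an example of the simple summation procedures discussed, the sketch below computes a net photopeak area as gross counts minus a linear background interpolated from a few channels on either side of the peak, together with the Poisson counting uncertainty. It is a generic illustration, not one of the specific programs from the IAEA intercomparison.

        import numpy as np

        def peak_area_summation(spectrum, lo, hi, n_bg=3):
            """Net photopeak area by simple summation: gross counts in channels
            [lo, hi] minus a linear background interpolated from `n_bg` channels
            on each side of the peak region. Returns (net_area, uncertainty)."""
            spectrum = np.asarray(spectrum, dtype=float)
            gross = spectrum[lo:hi + 1].sum()
            left = spectrum[lo - n_bg:lo].mean()
            right = spectrum[hi + 1:hi + 1 + n_bg].mean()
            n_ch = hi - lo + 1
            background = 0.5 * (left + right) * n_ch
            net = gross - background
            # Poisson counting variance: gross counts plus the background-estimate variance.
            var = gross + (n_ch ** 2) * (left + right) / (4.0 * n_bg)
            return net, np.sqrt(var)

        # Synthetic spectrum: flat background of 100 counts/channel plus a Gaussian peak.
        ch = np.arange(200)
        spec = np.random.default_rng(0).poisson(100 + 5000 * np.exp(-0.5 * ((ch - 100) / 3.0) ** 2))
        print(peak_area_summation(spec, lo=90, hi=110))   # true net area ~ 5000*3*sqrt(2*pi) ~ 37600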

  13. Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem

    Directory of Open Access Journals (Sweden)

    Vatutin Eduard

    2017-12-01

    Full Text Available The article analyzes the effectiveness of heuristic methods with limited depth-first search techniques for obtaining decisions in the test problem of finding the shortest path in a graph. It briefly describes the group of methods based on limiting the number of branches of the combinatorial search tree and the depth of the analyzed subtree. A methodology for comparing experimental data to estimate the quality of solutions is considered, based on computational experiments with samples of graphs of pseudo-random structure and selected numbers of vertices and arcs, using the BOINC platform. The experimental results identify the areas of preferable usage of the selected subset of heuristic methods depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective on the selected problem and yields solutions significantly inferior in quality to those provided by the ant colony optimization method and its modification with combinatorial returns.

  14. Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem

    Science.gov (United States)

    Vatutin, Eduard

    2017-12-01

    The article analyzes the effectiveness of heuristic methods with limited depth-first search techniques for obtaining decisions in the test problem of finding the shortest path in a graph. It briefly describes the group of methods based on limiting the number of branches of the combinatorial search tree and the depth of the analyzed subtree. A methodology for comparing experimental data to estimate the quality of solutions is considered, based on computational experiments with samples of graphs of pseudo-random structure and selected numbers of vertices and arcs, using the BOINC platform. The experimental results identify the areas of preferable usage of the selected subset of heuristic methods depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective on the selected problem and yields solutions significantly inferior in quality to those provided by the ant colony optimization method and its modification with combinatorial returns.
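
    The two records above describe the same article; a minimal sketch of a depth-limited depth-first search for the shortest path problem is given below. It keeps only the depth limit (plus a simple cost-based pruning), omits the branch-count limiting discussed in the article, and uses a toy adjacency-list graph.

        def dls_shortest_path(graph, start, goal, max_depth):
            """Depth-first search with a depth (edge count) limit, returning the best
            path cost found and the path itself, or (None, None) if none is found
            within the limit. `graph` maps node -> list of (neighbour, edge_weight)."""
            best = {"cost": float("inf"), "path": None}

            def dfs(node, cost, path, depth):
                if cost >= best["cost"]:
                    return                              # prune: already no better than best
                if node == goal:
                    best["cost"], best["path"] = cost, list(path)
                    return
                if depth == max_depth:
                    return                              # depth limit reached
                for nxt, w in graph[node]:
                    if nxt not in path:                 # avoid cycles
                        path.append(nxt)
                        dfs(nxt, cost + w, path, depth + 1)
                        path.pop()

            dfs(start, 0.0, [start], 0)
            return (best["cost"], best["path"]) if best["path"] else (None, None)

        g = {
            "A": [("B", 2.0), ("C", 5.0)],
            "B": [("C", 1.0), ("D", 4.0)],
            "C": [("D", 1.0)],
            "D": [],
        }
        print(dls_shortest_path(g, "A", "D", max_depth=3))   # (4.0, ['A', 'B', 'C', 'D'])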

  15. DISCOVERY OF NINE GAMMA-RAY PULSARS IN FERMI LARGE AREA TELESCOPE DATA USING A NEW BLIND SEARCH METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H. [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik, D-30167 Hannover (Germany); Guillemot, L.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; Freire, P. C. C. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, D-53121 Bonn (Germany); Ray, P. S. [Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352 (United States); Belfiore, A.; Dormody, M. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Camilo, F. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Caraveo, P. A. [INAF-Istituto di Astrofisica Spaziale e Fisica Cosmica, I-20133 Milano (Italy); Celik, Oe.; Ferrara, E. C. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Hessels, J. W. T. [Astronomical Institute ' Anton Pannekoek' , University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands); Keith, M. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, Epping NSW 1710 (Australia); Kerr, M., E-mail: holger.pletsch@aei.mpg.de, E-mail: guillemo@mpifr-bonn.mpg.de [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); and others

    2012-01-10

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative, and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10³⁵ erg s⁻¹ and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| > 10°). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10¹¹ G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10³³ erg s⁻¹) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.

  16. Discovery of Nine Gamma-Ray Pulsars in Fermi-Lat Data Using a New Blind Search Method

    Science.gov (United States)

    Celik-Tinmaz, Ozlem; Ferrara, E. C.; Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; and others

    2011-01-01

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient, and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10³⁵ erg s⁻¹ and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| greater than 10 degrees). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10¹¹ G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10³³ erg s⁻¹) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.

  17. Radiating nonuniform transmission-line systems and the partial element equivalent circuit method

    CERN Document Server

    Nitsch, Juergen; Wollenberg, Gunter

    2009-01-01

    High frequencies of densely packed modern electronic equipment turn even the smallest piece of wire into a transmission line with signal retardation, dispersion, attenuation, and distortion. In electromagnetic environments with high-power microwave or ultra-wideband sources, transmission lines pick up noise currents generated by external electromagnetic fields. These are superimposed on essential signals, the lines acting not only as receiving antennas but radiating parts of the signal energy into the environment. This book is outstanding in its originality. While many textbooks rephrase

  18. Use of 15N dilution method for screening soybean lines with high yield and high nitrogen fixation ability

    International Nuclear Information System (INIS)

    Li Haixian; Li Xinmin; Danso, S.K.A.

    1998-01-01

    The ¹⁵N dilution method was used for screening soybean lines with high nitrogen fixation ability. The screened lines 1005, 8502, 2096, 943, 1454 and Dongnong-42 have high nitrogen fixation ability, with %Ndfa values of about 70%. Lines 1454 and 1555 combine high yield with high nitrogen fixation. Nitrogen fixation ability was not related to yield, but was related to maturing time: cultivars with different maturing times have different levels of nitrogen fixation ability, and the longer the maturing period, the greater the nitrogen fixation ability. Ten cultivars or lines were used in the tests of 1992 and 1994. Although the weather conditions were very different between the two years, the results for seven cultivars or lines were the same, indicating that the nitrogen fixation ability of soybean is stable across years. Using the ¹⁵N dilution method to estimate the nitrogen fixation ability of soybean is reliable; however, the %Ndfa of lines 8502 and 2096 increased by 19% in 1994, a rainy year, indicating that changes in %Ndfa for a few varieties may be caused by weather
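
    The standard ¹⁵N isotope-dilution estimate of %Ndfa behind such screening is shown below; the atom % ¹⁵N excess values are invented for illustration and are not taken from the study.

        def pct_ndfa(excess_fixing, excess_reference):
            """%Ndfa by the 15N isotope dilution method:
            %Ndfa = (1 - atom% 15N excess of fixing crop / atom% 15N excess of non-fixing reference) * 100."""
            return (1.0 - excess_fixing / excess_reference) * 100.0

        # Illustrative atom% 15N excess values from a labelled-fertilizer experiment.
        print(f"%Ndfa = {pct_ndfa(excess_fixing=0.15, excess_reference=0.50):.1f}%")   # 70.0%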

  19. Statistical data evaluation in mobile gamma spectrometry. An optimisation of on-line search strategies in the scenario of lost point sources

    International Nuclear Information System (INIS)

    Hjerpe, T.; Samuelsson, C.

    1999-01-01

    There is a potential risk that hazardous radioactive sources could enter the environment, e.g. via satellite debris, smuggled radioactive goods or lost metal scrap. From a radiation protection point of view there is a need for rapid and reliable methods for locating and identifying sources. Car-borne and air-borne detector systems are suitable for the task. The premise of this work is a situation where the missing radionuclide is known, which is not an unlikely scenario. The probability that the source is located near a road can be high, thus motivating a car-borne spectrometer system. The main objective is to optimise on-line statistical methods in order to achieve a high probability of locating point sources, or hot spots, while still having reasonably few false alarms from variations in the natural background radiation. Data were obtained with a car-borne 3 litre NaI(Tl) detector and two point sources located at various distances from the road. The nuclides used were ¹³⁷Cs and ¹³¹I. Spectra were measured stationary on the road. From these measured spectra we have reconstructed spectra applicable to different speeds and sampling times; a sampling time of 3 seconds and a speed of 50 km/h are used in this work. The maximum distance at which a source can be located from the road and still be detected is estimated with four different statistical analysis methods. This distance is called the detection distance, DD. The methods are applied to gross counts in the full-energy peak window. For each method, alarm thresholds have been calculated from background data obtained in Scania (Skaane), in the south of Sweden. The results show a 30-50% difference in DDs. With this semi-theoretical approach, the two sources could be detected from 250 m (¹³⁷Cs, 6 GBq) and 200 m (¹³¹I, 4 GBq). (au)
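
    As an illustration of the kind of alarm criterion involved, the sketch below sets a simple k-sigma threshold on gross counts in the full-energy peak window, assuming Poisson-distributed background; it is a generic example, not one of the four statistical methods compared in the report.

        import numpy as np

        def alarm_threshold(background_counts, k=4.0):
            """Alarm threshold for gross counts in a fixed-time window: mean background
            plus k standard deviations, assuming Poisson-distributed background."""
            mu = np.mean(background_counts)
            return mu + k * np.sqrt(mu)

        rng = np.random.default_rng(0)
        bg = rng.poisson(120, size=1000)          # simulated 3 s gross counts from background mapping
        thr = alarm_threshold(bg, k=4.0)
        measurement = 185                          # a 3 s measurement taken while driving
        print(f"threshold = {thr:.1f} counts -> alarm: {measurement > thr}")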

  20. Line facilities outline

    International Nuclear Information System (INIS)

    1998-08-01

    This book deals with line facilities. Its contents cover: an outline of wireline telecommunication (development of lines, classification of line sections, and transmission line theory); cable lines (line structure, cable lines in towns, lines outside towns, domestic cables, and other lines); optical communication (optical cable lines, transmission methods, optical communication measurement, and submarine cables); telecommunication line equipment (telecommunication line facilities and telecommunication public works, cable line construction and maintenance); and regulation of line equipment (regulations on technique, construction, and maintenance).

  1. Development of a method of partial discharge detection in extra-high voltage cross-linked polyethylene insulated cable lines

    International Nuclear Information System (INIS)

    Katsuta, G.; Toya, A.; Muraoka, K.; Endoh, T.; Sekii, Y.; Ikeda, C.

    1992-01-01

    This paper reports that deterioration in the insulation performance of extra-high voltage XLPE cables is believed to be attributable to degradation caused by partial discharges. In the authors' study, after using an XLPE cable to investigate the behavior of partial discharges under various adverse conditions, a highly sensitive new method of measuring partial discharge in XLPE cable lines was developed. Partial discharges in a live 275 kV XLPE cable line have been measured using this newly developed method. As a result, a detection sensitivity of 1 pC has been achieved.

  2. A Finite Element Method for Free-Surface Flows of Incompressible Fluids in Three Dimensions, Part II: Dynamic Wetting Lines

    Energy Technology Data Exchange (ETDEWEB)

    Baer, T.A.; Cairncross, R.A.; Rao, R.R.; Sackinger, P.A.; Schunk, P.R.

    1999-01-29

    To date, few researchers have solved three-dimensional free-surface problems with dynamic wetting lines. This paper extends the free-surface finite element method described in a companion paper [Cairncross, R.A., P.R. Schunk, T.A. Baer, P.A. Sackinger, R.R. Rao, "A finite element method for free surface flows of incompressible fluid in three dimensions, Part I: Boundary-fitted mesh motion," to be published (1998)] to handle dynamic wetting. A generalization of the technique used in two-dimensional modeling to circumvent double-valued velocities at the wetting line, the so-called kinematic paradox, is presented for a wetting line in three dimensions. This approach requires the fluid velocity normal to the contact line to be zero, the fluid velocity tangent to the contact line to be equal to the tangential component of web velocity, and the fluid velocity into the web to be zero. In addition, slip is allowed in a narrow strip along the substrate surface near the dynamic contact line. For realistic wetting-line motion, a contact angle which varies with wetting speed is required because contact lines in three dimensions typically advance or recede at different rates depending upon location and/or have both advancing and receding portions. The theory is applied to capillary rise of static fluid in a corner, the initial motion of a Newtonian droplet down an inclined plane, and extrusion of a Newtonian fluid from a nozzle onto a moving substrate. The extrusion results are compared to experimental visualization.

  3. Temperature and vital effect controls on Bamboo coral (Isididae) isotopegeochemistry: A test of the "lines method"

    Energy Technology Data Exchange (ETDEWEB)

    Hill, T M; Spero, H J; Guilderson, T P; LaVigne, M; Clague, D; Macalello, S; Jang, N

    2011-03-01

    Deep-sea bamboo corals hold promise as long-term climatic archives, yet little information exists linking bamboo coral geochemistry to measured environmental parameters. This study focuses on a suite of 10 bamboo corals collected from the Pacific and Atlantic basins (250-2136 m water depth) to investigate coral longevity, growth rates, and isotopic signatures. Calcite samples for stable isotopes and radiocarbon were collected from the base of the corals, where the entire history of growth is recorded. In three of the coral specimens, samples were also taken from an upper branch for comparison. Radiocarbon and growth band width analyses indicate that the skeletal calcite precipitates from ambient dissolved inorganic carbon and that the corals live for 150-300 years, with extension rates of 9-128 µm/yr. A linear relationship between coral calcite δ¹⁸O and δ¹³C indicates that the isotopic composition is influenced by vital effects (δ¹⁸O:δ¹³C slope of 0.17-0.47). As with scleractinian deep-sea corals, the intercept from a linear regression of δ¹⁸O versus δ¹³C is a function of temperature, such that a reliable paleotemperature proxy can be obtained using the 'lines method.' Although the coral calcite δ¹⁸O:δ¹³C slope is maintained throughout the coral base ontogeny, the branches and central cores of the bases exhibit δ¹⁸O:δ¹³C values that are shifted far from equilibrium. We find that a reliable intercept value can be derived from the δ¹⁸O:δ¹³C regression of multiple samples distributed throughout one specimen or from multiple samples within individual growth bands.

  4. Modeling and Analysis Methods for an On-line Enrichment Monitor

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Leon E.; Jarman, Kenneth D.; Wittman, Richard S.; Zalavadia, Mital A.; March-Leuba, Jose A.

    2016-05-30

    The International Atomic Energy Agency (IAEA) has developed an On-Line Enrichment Monitor (OLEM) as one possible component in a new generation of safeguards measures for uranium enrichment plants. The OLEM measures 235U emissions from the UF6 gas flowing through a unit header pipe using NaI(Tl) spectrometers, and corrects for gas density changes using pressure and temperature sensors in order to determine the enrichment of the gas as a function of time. In parallel with the OLEM instrument development, a Virtual OLEM (VOLEM) software tool has been developed that is capable of producing synthetic gamma-ray, pressure, and temperature data representative of a wide range of enrichment plant operating conditions. VOLEM complements instrument development activities and allows the study of OLEM for scenarios that will be difficult or impossible to evaluate empirically. Uses of VOLEM include: investigation of hardware design options; inter-comparison of candidate gamma-ray spectral analysis and enrichment estimation algorithms; uncertainty budget analysis and performance prediction for typical and atypical operational scenarios; and testing of the OLEM data acquisition, analysis and reporting software. This paper describes the technical foundations of VOLEM and illustrates how it can be used. An overview of the nominal instrument design and deployment scenario for OLEM is provided, with emphasis on the key online-assay measurement challenge: accurately determining the portion of the total 235U signal that comes from a background that includes solid uranium deposits on the piping walls. Monte Carlo modeling tools, data analysis algorithms and uncertainty quantification methods are described. VOLEM is then used to quantitatively explore the uncertainty budgets and predicted instrument performance for a plausible range of typical plant operating parameters, and one set of candidate analysis algorithms. Additionally, a series of VOLEM case studies illustrates how an online

  5. Search for transient ultralight dark matter signatures with networks of precision measurement devices using a Bayesian statistics method

    Science.gov (United States)

    Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.

    2018-04-01

    We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data are available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach using our method, and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.

  6. A Novel Method of Autonomous Inspection for Transmission Line based on Cable Inspection Robot LiDAR Data

    Directory of Open Access Journals (Sweden)

    Xinyan Qin

    2018-02-01

    Full Text Available With the growth of the national economy, there is increasing demand for electricity, which forces transmission line corridors to become structurally complicated and to extend into complex environments (e.g., mountains, forests). It is a great challenge to inspect transmission lines in these regions. To address these difficulties, a novel method of autonomous inspection for transmission lines is proposed based on cable inspection robot (CIR) LiDAR data, which mainly includes two steps: preliminary inspection and autonomous inspection. In the preliminary inspection, the position and orientation system (POS) data is used for dividing the original point cloud, filtering ground points, and structured partitioning. A hierarchical classification strategy is established to identify the classes and positions of the abnormal points. In the autonomous inspection, the CIR can autonomously reach the specified points through inspection planning. These inspection targets are imaged with PTZ (pan, tilt, zoom) cameras by coordinate transformation. The feasibility and effectiveness of the proposed method are verified by test site experiments and actual line experiments, respectively. The proposed method greatly reduces manpower and improves inspection accuracy, providing a theoretical basis for intelligent inspection of transmission lines in the future.

  7. Earthquake effect on volcano and the geological structure in central java using tomography travel time method and relocation hypocenter by grid search method

    International Nuclear Information System (INIS)

    Suharsono; Nurdian, S. W; Palupi, I. R.

    2016-01-01

    Relocating hypocenters is a way to improve the subsurface velocity model; one such method is grid search. To obtain the subsurface velocity distribution by the tomography method, the relocated hypocenters are used as a reference for analysing the volcanoes and the major structural patterns of the subsurface, such as in Central Java. The main data of this study are earthquakes recorded from 1952 to 2012 by 30 stations located in the vicinity of Central Java, comprising 2426 events and 9162 P-wave arrivals. The grid search method has several advantages: it can relocate hypocenters more accurately because it divides the model space into lattice blocks, and each grid block can be occupied by only one hypocenter. The tomography is carried out on the travel time data of the relocated hypocenters using the pseudo-bending inversion method. The grid search relocation shows that the hypocenter depths are shallower than before and shifted towards the south; the hypocenter distribution models the subduction zone between the Eurasian and Indo-Australian plates with an average angle of 14°. The tomography results show low velocity values of -8% to -10% beneath the volcanoes, while the pattern of the main fault structures in Central Java is described by high velocity values of 8% to 10% trending northwest and northeast-southwest. (paper)
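
    A minimal sketch of grid-search hypocenter relocation is given below: for every node of a coarse spatial grid, predicted P travel times in an assumed homogeneous velocity model are compared with synthetic arrival picks, and the node with the smallest RMS residual is taken as the relocated hypocenter. The velocity, station geometry and picks are all invented for illustration; the paper's pseudo-bending tomography step is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        vp = 6.0                                               # assumed homogeneous P velocity (km/s)

        stations = rng.uniform(-50.0, 50.0, size=(8, 2))       # (x, y) of 8 stations at the surface
        true_hypo = np.array([5.0, -10.0, 15.0])               # true x, y, depth (km)
        true_t0 = 2.0                                          # true origin time (s)

        dists = np.sqrt(np.sum((stations - true_hypo[:2]) ** 2, axis=1) + true_hypo[2] ** 2)
        t_obs = true_t0 + dists / vp + rng.normal(0.0, 0.05, size=len(stations))  # noisy P picks

        # Candidate grid nodes (one hypocenter per block, as in the grid search idea).
        xs = ys = np.arange(-30.0, 31.0, 2.0)
        zs = np.arange(1.0, 41.0, 2.0)

        best = (np.inf, None)
        for x in xs:
            for y in ys:
                for z in zs:
                    d = np.sqrt((stations[:, 0] - x) ** 2 + (stations[:, 1] - y) ** 2 + z ** 2)
                    tt = d / vp
                    t0 = np.mean(t_obs - tt)            # best-fitting origin time for this node
                    rms = np.sqrt(np.mean((t_obs - t0 - tt) ** 2))
                    if rms < best[0]:
                        best = (rms, (x, y, z, t0))

        print("relocated hypocenter (x, y, z, t0):", best[1], " rms =", round(best[0], 3))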

  8. Determination of the delivered hemodialysis dose using standard methods and on-line clearance monitoring

    Directory of Open Access Journals (Sweden)

    Vlatković Vlastimir

    2006-01-01

    Full Text Available Background/aim: The delivered dialysis dose has a cumulative effect and a significant influence on the adequacy of dialysis, the quality of life, and the development of co-morbidity in patients on dialysis. Thus, great attention is given to the optimization of dialysis treatment. On-line clearance monitoring (OCM) allows a precise and continuous measurement of the delivered dialysis dose. The Kt/V index (K = dialyzer clearance of urea; t = dialysis time; V = patient's total body water), measured in real time, is used as a unit for expressing the dialysis dose. The aim of this research was to perform a comparative assessment of the delivered dialysis dose by the application of the standard measurement methods and a module for continuous clearance monitoring. Methods. The study encompassed 105 patients who had been on the chronic hemodialysis program for more than three months, three times a week. One treatment per controlled patient was chosen at random. All treatments were bicarbonate dialyses. The delivered dialysis dose was determined by calculation of the mathematical models urea reduction ratio (URR) and single-pool Kt/V index (spKt/V), and by the application of OCM. Results. The urea reduction ratio was the most sensitive parameter for the assessment and, at the same time, was most strongly correlated with the other two, the spKt/V index and OCM. The values pointed to an adequate dialysis dose. The URR values were significantly higher in women than in men, p < 0.05. The other applied model for measuring the delivered dialysis dose was the Kt/V index. The obtained values showed that the dialysis dose was adequate and that, according to this parameter, the women had significantly better dialysis than the men, p < 0.05. According to the OCM, the average value was slightly lower than the adequate one. The women had a satisfactory dialysis according to this index as well, while the delivered dialysis dose was insufficient in men. The difference
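
    The abstract names URR and spKt/V but not their formulas. As an illustration only, the snippet below uses the standard definitions, URR = (pre - post)/pre and the second-generation Daugirdas formula for spKt/V, applied to hypothetical treatment values rather than data from this study.

      import math

      def urr(pre_urea, post_urea):
          """Urea reduction ratio."""
          return (pre_urea - post_urea) / pre_urea

      def sp_kt_v(pre_urea, post_urea, session_hours, uf_litres, post_weight_kg):
          """Single-pool Kt/V via the second-generation Daugirdas formula."""
          r = post_urea / pre_urea
          return (-math.log(r - 0.008 * session_hours)
                  + (4.0 - 3.5 * r) * uf_litres / post_weight_kg)

      # hypothetical treatment: urea 25 -> 8 mmol/L over 4 h, 2.5 L ultrafiltration, 70 kg post-dialysis weight
      print(f"URR    = {urr(25.0, 8.0):.0%}")
      print(f"spKt/V = {sp_kt_v(25.0, 8.0, 4.0, 2.5, 70.0):.2f}")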

  9. The activity-integrated method for quality assessment of reduning injection by on-line DPPH-CE-DAD.

    Directory of Open Access Journals (Sweden)

    Yan-xu Chang

    Full Text Available A sensitive on-line DPPH-CE-DAD method was developed and validated for both screening and determining the concentrations of seven antioxidants of Reduning injection. The pH and the concentrations of the buffer solution, SDS, β-CD and organic modifier were studied for the detection of DPPH and the seven antioxidants. By on-line mixing of DPPH and sample solution, a DPPH-CE method for testing the antioxidant activity of a complex matrix was successfully established and used to screen the antioxidant components of Reduning injection. Then, antioxidant components including caffeic acid, isochlorogenic acid A, isochlorogenic acid B, isochlorogenic acid C, chlorogenic acid, neochlorogenic acid and cryptochlorogenic acid were quantified by the newly established CE-DAD method. Finally, the total antioxidant activity and the multiple active components were selected as markers to evaluate the quality of Reduning injection. The results demonstrated that the on-line DPPH-CE-DAD method is reagent-saving, rapid and feasible for the on-line simultaneous determination of the total pharmacological activity and the contents of multi-component samples. It is also a powerful method for the quality control and for studying the mechanism of action of TCM injections.

  10. A modified harmony search method for environmental/economic load dispatch of real-world power systems

    International Nuclear Information System (INIS)

    Jeddi, Babak; Vahidinasab, Vahid

    2014-01-01

    Highlights: • A combined economic and emission load dispatch (CEELD) model is proposed. • The proposed model considers practical constraints of real-world power systems. • A new modified harmony search algorithm is proposed to solve the non-convex CEELD. • The proposed algorithm is tested by applying it to solve seven test systems. - Abstract: The economic load dispatch (ELD) problem is one of the basic and important optimization problems in a power system. However, considering practical constraints of real-world power systems such as ramp rate limits, prohibited operating zones, valve loading effects, multi-fuel options, spinning reserve and transmission system losses makes the ELD problem a non-convex optimization problem, which is a challenging one and cannot be solved by traditional methods. Moreover, considering environmental issues results in the combined economic and emission load dispatch (CEELD) problem, a multiobjective optimization model with two non-commensurable and contradictory objectives. In this paper, a modified harmony search algorithm (MHSA) is proposed and applied to solve the ELD and CEELD problems considering the abovementioned constraints. In the proposed MHSA, a new improvising method based on wavelet mutation together with a new memory consideration scheme based on the roulette wheel mechanism are proposed, which improve the accuracy, convergence speed, and robustness of the classical HSA. The performance of the proposed algorithm is investigated by applying it to solve various test systems having non-convex solution spaces. To show the effectiveness of the proposed method, the obtained results are compared with the classical harmony search algorithm (HSA) and some of the most recently published papers in the area.
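
    The paper's MHSA adds wavelet mutation and a roulette-wheel memory consideration scheme that are not reproduced here. As background, the following is a minimal sketch of the classical harmony search it modifies, applied to a toy two-generator dispatch cost with a demand penalty; all parameter values are illustrative.

      import numpy as np

      def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=1):
          """Classical harmony search: improvise new harmonies from a memory of good ones."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = len(bounds)
          memory = rng.uniform(lo, hi, size=(hms, dim))          # harmony memory
          costs = np.array([cost(h) for h in memory])
          for _ in range(iters):
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:                        # memory consideration
                      new[j] = memory[rng.integers(hms), j]
                      if rng.random() < par:                     # pitch adjustment
                          new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                  else:                                          # random selection
                      new[j] = rng.uniform(lo[j], hi[j])
              new = np.clip(new, lo, hi)
              c = cost(new)
              worst = np.argmax(costs)
              if c < costs[worst]:                               # replace the worst harmony
                  memory[worst], costs[worst] = new, c
          best = np.argmin(costs)
          return memory[best], costs[best]

      # toy quadratic cost for two generators meeting a 300 MW demand via a penalty term
      def toy_cost(p):
          fuel = 0.004 * p[0] ** 2 + 2.0 * p[0] + 0.006 * p[1] ** 2 + 1.5 * p[1]
          return fuel + 1e3 * abs(p[0] + p[1] - 300.0)

      print(harmony_search(toy_cost, bounds=[(50, 250), (50, 250)]))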

  11. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    Science.gov (United States)

    Oberg, K.; ,

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCPs) in the field was presented. The advantages and disadvantages of various methods used for calibrating ADCPs were discussed. The proposed method requires the use of a differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed, while collecting simultaneous DGPS and ADCP data.

  12. THE METHOD OF APPLICATION OF A COLLECTIVE SEARCH ACTIVITY AS A TOOL DEVELOPING METHODOLOGICAL THINKING OF A TEACHER

    Directory of Open Access Journals (Sweden)

    Ibragimova Luiza Vahaevna

    2013-02-01

    Full Text Available To realize any pedagogical theory in practice it is necessary to transform the theoretical concepts into teaching methods. The development of all abilities, including thinking, occurs only in activity, which is specially organized by creating the required pedagogical conditions; in this case these are (a) the application of enhanced mental activity in teacher training courses and vocational training, (b) the establishment of a "virtual university" for teachers in an institute of professional training, and (c) the organization of interdisciplinary interaction of teachers, based on the conditions of nonlinear didactics (training teachers of different subjects). The presented method is implemented over two years and consists of three phases: motivational and educational, intellectual and developmental, and innovative and reflective. At the motivational and educational stage, the possibilities of collective search activity are actualized during the training course, group goals are set, and methods of achieving them are chosen, using the first pedagogical condition. At the intellectual and developmental stage, the skills for a collective search for effective teaching decisions are developed during inter-course training, using the first and second pedagogical conditions. The innovative step is the promotion of teachers to self-determination of the techniques and tools that improve the quality of the educational process, providing assistance to each other in the development of teaching manuals, which is achieved with the help of all three pedagogical conditions.

  14. Methods for a systematic, comprehensive search for fast, heavy scintillator materials

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Moses, W.W.; Weber, M.J.; West, A.C.

    1994-01-01

    Over the years a number of scintillator materials have been developed for a wide variety of nuclear detection applications in industry, high energy physics, and medical instrumentation. To expand the list of useful scintillators, the authors are pursuing the following systematic, comprehensive search: (1) select materials with good gamma-ray interaction properties from the 200,000-entry NIST crystal diffraction file, (2) synthesize samples (doped and undoped) in powdered or single crystal form, (3) test the samples using sub-nanosecond pulsed x-rays to measure important scintillation properties such as rise times, decay times, emission wavelengths, and light output, (4) prepare large, high quality crystals of the most promising candidates, and (5) test the crystals as gamma-ray detectors in representative configurations. An important parallel effort is the computation of the electronic energy levels of activators and the band structure of intrinsic and host crystals to aid in the materials selection process. In this paper the authors are interested mainly in scintillator materials for detecting 511 keV gamma rays in positron emission tomography.

  15. Theoretical Investigation of Combined Use of PSO, Tabu Search and Lagrangian Relaxation methods to solve the Unit Commitment Problem

    Directory of Open Access Journals (Sweden)

    Sahbi Marrouchi

    2018-02-01

    Full Text Available Solving the unit commitment problem (UCP) optimizes the combination of production unit operations and determines the appropriate operational schedule of each production unit to satisfy the expected consumption, which varies from one day to one month. Moreover, each production unit is subject to constraints that render this problem complex, combinatorial and nonlinear. In this paper, we propose a new strategy based on the combination of three optimization methods: Tabu search, particle swarm optimization and Lagrangian relaxation, in order to develop a proper unit commitment schedule of the production units while reducing the production cost during a definite period. The proposed strategy has been implemented on the IEEE 9-bus test system containing 3 production units, and the results were promising compared to strategies based on metaheuristic and deterministic methods.
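
    Of the three combined methods, the Lagrangian relaxation ingredient is the easiest to sketch. The toy below is not the authors' formulation: it is a single-period commitment with linear costs and only a demand constraint (no ramp limits, reserves or minimum up/down times) on made-up unit data, relaxing the demand balance with a multiplier that is updated by a subgradient step.

      # hypothetical single-period data: (no-load cost $, marginal cost $/MWh, Pmax MW)
      units = [(500.0, 20.0, 200.0), (400.0, 25.0, 150.0), (300.0, 32.0, 100.0)]
      demand = 320.0                                     # MW to be covered in this period

      lam = 0.0                                          # multiplier on the relaxed demand constraint
      for k in range(1, 201):
          # each unit solves its own Lagrangian subproblem: run at Pmax if the dual profit is positive
          commit = [(lam - b) * pmax - a > 0.0 for a, b, pmax in units]
          supplied = sum(pmax for (a, b, pmax), on in zip(units, commit) if on)
          lam = max(0.0, lam + 0.1 * (demand - supplied) / k)   # diminishing subgradient step

      print(f"dual price ~ {lam:.2f} $/MWh, committed units: {commit}, supplied {supplied} MW for {demand} MW demand")
      # Lagrangian relaxation alone may end on an infeasible commitment; in the paper the tabu search
      # and particle swarm stages play the role of repairing and refining such a dual solution.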

  16. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is introduced for the neutron source multiplication method (NSM). By using kdet, a search strategy for an appropriate detector position for the NSM is also proposed. The NSM is one of the practical subcritical measurement techniques, i.e., the NSM does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so this technique is very suitable for quasi real-time measurement. It is noted that the correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting the neutron detector at an appropriate position.
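
    The abstract does not spell out the underlying relation, so the sketch below uses the textbook point-kinetics form of the source multiplication relation: with an unchanged source and detector, the count rate scales as C ∝ 1/(1 - k), so a reference state (k0, C0) gives k = 1 - (1 - k0) * C0/C before the correction factors discussed in the paper are applied. The numbers are hypothetical.

      def k_from_count_rates(k_ref, c_ref, c_meas, correction=1.0):
          """Point-kinetics NSM estimate of the effective multiplication factor.

          k_ref, c_ref : known subcritical reference state and its count rate
          c_meas       : count rate in the unknown state (same source, same detector)
          correction   : lumped correction factor (detector position, spectrum, ...)
          """
          return 1.0 - correction * (1.0 - k_ref) * c_ref / c_meas

      # hypothetical example: reference k = 0.95 with 1000 cps; the new state counts 2000 cps
      k = k_from_count_rates(0.95, 1000.0, 2000.0)
      print(f"estimated k = {k:.3f}, reactivity = {(k - 1) / k:+.4f}")   # ~0.975, about -2.6 % dk/k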

  17. Shielding Effectiveness Analysis and Modification of the Coupling Effect Transmission Line Method on Cavities with Multi-Sided Apertures

    Directory of Open Access Journals (Sweden)

    Tao Hu

    2018-04-01

    Full Text Available Because the traditional transmission line method treats electromagnetic waves as excitation sources and the cavity as a rectangular waveguide whose terminal is shorted, it can only calculate the shielding effectiveness on the center line of a cavity with apertures on one side. In this paper, the aperture coupling effect of different sides is analyzed based on vector analysis. According to the field intensity distribution of the different transport modes in the rectangular waveguide, a calculation model of the cavity shielding effectiveness at any position is proposed, which solves the problem that the traditional equivalent transmission line method cannot model the shielding effectiveness at arbitrary positions. A further extension of the equivalent transmission line model is adopted to study the shielding effectiveness of cavities with different apertures, and the coupling effect rule of the incident angle, the number of apertures, and the size of the cavity is obtained, which can provide technical support for the design of electromagnetic shielding cavities for electronic equipment.

  18. THE METHODS OF CALCULATIONS OF THE TEMPERATURE BREAKDOWN FIELD IN THE LINE OF THE MODERN HIGH-SPEED WIRE MILL

    Directory of Open Access Journals (Sweden)

    S. M. Zhuchkov

    2007-01-01

    Full Text Available A calculation method for the temperature field of the breakdown being rolled in the lines of a modern high-speed wire mill is developed on the basis of solving the problem of contact heat exchange between the hot metal and the cold rollers.

  19. From the Bronx to Bengifunda (and Other Lines of Flight): Deterritorializing Purposes and Methods in Science Education Research

    Science.gov (United States)

    Gough, Noel

    2011-01-01

    In this essay I explore a number of questions about purposes and methods in science education research prompted by my reading of Wesley Pitts' ethnographic study of interactions among four students and their teacher in a chemistry classroom in the Bronx, New York City. I commence three "lines of flight" (small acts of Deleuzo-Guattarian…

  20. 26 CFR 1.9001-1 - Change from retirement to straight-line method of computing depreciation.

    Science.gov (United States)

    2010-04-01

    ... of computing depreciation. Section 1.9001-1 (Internal Revenue Service). Change from retirement to straight-line method of computing depreciation. (a) In general. The ... of deductions for the depreciation of those roadway assets which are defined in this section as ...

  1. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    DEFF Research Database (Denmark)

    Nathan, J.; Meyer Forsting, Alexander Raul; Troldborg, Niels

    2017-01-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes were verified by experimental data such as the

  2. Searching for Rigour in the Reporting of Mixed Methods Population Health Research: A Methodological Review

    Science.gov (United States)

    Brown, K. M.; Elliott, S. J.; Leatherdale, S. T.; Robertson-Wilson, J.

    2015-01-01

    The environments in which population health interventions occur shape both their implementation and outcomes. Hence, when evaluating these interventions, we must explore both intervention content and context. Mixed methods (integrating quantitative and qualitative methods) provide this opportunity. However, although criteria exist for establishing…

  3. Heuristic methods using grasp, path relinking and variable neighborhood search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2013-08-01

    Full Text Available The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters of between 4 and 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.
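
    The defining CTSP constraint, that each cluster must be visited contiguously, is easy to respect with a simple constructive heuristic. The sketch below is not the authors' GRASP/Path Relinking/VND implementation; it just chains nearest-neighbour tours cluster by cluster on a random Euclidean instance, producing the kind of starting solution such metaheuristics would then improve.

      import numpy as np

      def ctsp_greedy(points, clusters, seed=0):
          """Build a tour that visits each cluster contiguously (nearest neighbour inside and between clusters)."""
          rng = np.random.default_rng(seed)
          tour, current = [], None
          remaining_clusters = list(clusters)
          while remaining_clusters:
              if current is None:
                  cl = remaining_clusters.pop(rng.integers(len(remaining_clusters)))
              else:
                  # pick the cluster whose closest vertex is nearest to the current tour end
                  dists = [min(np.linalg.norm(points[v] - points[current]) for v in c)
                           for c in remaining_clusters]
                  cl = remaining_clusters.pop(int(np.argmin(dists)))
              unvisited = set(cl)
              while unvisited:
                  if current is None:
                      v = unvisited.pop()
                  else:
                      v = min(unvisited, key=lambda u: np.linalg.norm(points[u] - points[current]))
                      unvisited.remove(v)
                  tour.append(v)
                  current = v
          return tour

      def tour_length(points, tour):
          return sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % len(tour)]])
                     for i in range(len(tour)))

      # toy instance: 30 random points split into 5 clusters of 6 vertices each
      pts = np.random.default_rng(1).uniform(0, 100, size=(30, 2))
      cls = [list(range(i, i + 6)) for i in range(0, 30, 6)]
      t = ctsp_greedy(pts, cls)
      print(len(t), round(tour_length(pts, t), 1))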

  4. A new method of organizing spectral line intensity ratio fluctuations of nightglow emissions

    International Nuclear Information System (INIS)

    Thelin, B.

    1986-02-01

    In this paper a new kind of linearization effect between the atmospheric night airglow emissions is presented. The same kind of linearization effect has previously been studied with spectrochemical light sources together with a spectrometer. A linear graph was obtained for atomic spectral lines and vibrational band spectra when the spectral line intensity ratio fluctuations were plotted versus the photon energies of these emissions. To study this effect, data from a number of different photometer investigations of night airglow emissions at different times and places have been used. (author)

  5. fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages

    Science.gov (United States)

    Hoffmann, Thomas J.; Laird, Nan M.

    2009-01-01

    The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further facilitates the developer by loading in the help files from the command line functions to provide context sensitive help to the user with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291

  6. Comparison of the multi-drug resistant human hepatocellular carcinoma cell line Bel-7402/ADM model established by three methods

    Directory of Open Access Journals (Sweden)

    Zhong Xingguo

    2010-08-01

    Full Text Available Abstract. Background: To compare the biological characteristics of three types of human hepatocellular carcinoma multi-drug resistant cell sub-line Bel-7402/ADM models established by three methods. Methods: Human hepatocellular carcinoma adriamycin (ADM) multi-drug resistant cell sub-line models Bel-7402/ADMV, Bel-7402/ADML and Bel-7402/ADMS were established by three methods: in vitro concentration-gradient induction, nude mouse liver-implantation induction and subcutaneous-implantation induction, respectively. Phase contrast microscopy was used to observe the cells and the MTT (methyl thiazolyl tetrazolium) method was used to detect the drug resistance of the three different cell sub-lines. Results: The three groups of drug resistant cells, Bel-7402/ADMV, Bel-7402/ADML and Bel-7402/ADMS, generated cross-resistance to ADM and CDDP (cis-diamminedichloroplatinum), but showed a significant difference in the IC50 value relative to Bel-7402 (P < 0.05); … (Bel-7402/ADMV), 46 h (Bel-7402/ADML) and 45 h (Bel-7402/ADMS). The excretion rates of ADM were significantly increased compared with the parent cell line (34.14%) and were 81.06% (Bel-7402/ADMV), 66.56% (Bel-7402/ADML) and 61.56% (Bel-7402/ADMS). Expression of P-gp and MRP in the three groups of resistant cells was significantly enhanced (P < 0.05), … (P > 0.05). Conclusions: Stable resistance was achieved in the resistant cell line models established by the above three methods. Liver implantation was a good simulation of human hepatocellular carcinoma and proved to be an ideal model with characteristics similar to human hepatocellular biology and the pharmacokinetics of anticancer drugs.

  7. REST: a computer system for estimating logging residue by using the line-intersect method

    Science.gov (United States)

    A. Jeff Martin

    1975-01-01

    A computer program was designed to accept logging-residue measurements obtained by line-intersect sampling and transform them into summaries useful for the land manager. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.

  8. Optical modeling of fiber organic photovoltaic structures using a transmission line method.

    Science.gov (United States)

    Moshonas, N; Stathopoulos, N A; O'Connor, B T; Bedeloglu, A Celik; Savaidis, S P; Vasiliadis, S

    2017-12-01

    An optical model has been developed and evaluated for the calculation of the external quantum efficiency of cylindrical fiber photovoltaic structures. The model is based on transmission line theory and has been applied to single and bulk heterojunction fiber photovoltaic cells. Using this model, optimum design characteristics have been proposed for both configurations, and a comparison with experimental results has been made.
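
    The paper's model treats cylindrical fiber geometries, which are not reproduced here. As an indication of the kind of calculation involved, the sketch below applies the standard characteristic-matrix (transmission-line) method to a planar thin-film stack at normal incidence, with made-up layer indices and thicknesses.

      import numpy as np

      def stack_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.5):
          """Reflectance of a planar thin-film stack at normal incidence (characteristic-matrix method)."""
          M = np.eye(2, dtype=complex)
          for n, d in zip(n_layers, d_layers):
              delta = 2 * np.pi * n * d / wavelength              # phase thickness of the layer
              M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                [1j * n * np.sin(delta), np.cos(delta)]])
          B, C = M @ np.array([1.0, n_sub])                       # stack "voltage" and "current"
          r = (n_in * B - C) / (n_in * B + C)
          return abs(r) ** 2

      # hypothetical non-absorbing bilayer: a 40 nm active layer on a 100 nm transparent electrode
      for wl in (450e-9, 550e-9, 650e-9):
          R = stack_reflectance(n_layers=[1.9, 1.7], d_layers=[40e-9, 100e-9], wavelength=wl)
          print(f"{wl * 1e9:.0f} nm: R = {R:.3f}, T = {1 - R:.3f}")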

  9. Method and cell lines for the production of monoclonal antibodies to human glycophorin A

    Science.gov (United States)

    Bigbee, W.L.; Fong, S.S.N.; Jensen, R.H.; Vanderlaan, M.

    Cloned mouse hybridoma cell lines have been established which continuously produce antibodies that differentiate between the M and N forms of human glycophorin A. These antibodies have potential application as human blood group reagents, as markers for terminally differentiated erythroid cells and as immunofluorescent labels of somatically variant human erythrocytes.

  10. Selective Method for the Determination of Manganese in End-fitting of Spoolable Reinforced Plastic Line Pipe for Petroleum Industries

    Science.gov (United States)

    Shao, Xiaodong; Zhang, Dongna; Li, Houbu; Cai, Xuehua

    2017-10-01

    The fact that spoolable reinforced plastic line pipe is more flexible and spoolable than steel, and is also much lighter, means that it can be carried and deployed from smaller vessels and managed more easily. It is well known that manganese is an important element in the end-fittings of spoolable reinforced plastic line pipe. In this paper, a simple spectrophotometric method is described for the determination of manganese in end-fittings of spoolable reinforced plastic line pipe. The method is based on the oxidation-reduction reaction between ammonium persulfate and manganese(II), producing manganese(VII), in the presence of silver nitrate as a catalyst. The characteristic wavelength of maximum absorption of manganese(VII) was found to be located at 530 nm. Under the optimum reaction conditions the absorption value was proportional to the concentration of manganese in the range of 0.50% to 1.80% (R2 = 0.9997), and the relative standard deviation was less than 3.0% (n = 5). The proposed method was applied successfully to determine manganese in end-fitting samples of spoolable reinforced plastic line pipe.
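
    The reported linear absorbance-concentration relation at 530 nm corresponds to an ordinary least-squares calibration. The snippet below shows such a calibration and the read-back of an unknown sample; the calibration points are invented for illustration and are not the paper's data.

      import numpy as np

      # hypothetical calibration standards: Mn content (%) vs absorbance at 530 nm
      conc = np.array([0.50, 0.80, 1.10, 1.40, 1.80])
      absorbance = np.array([0.112, 0.178, 0.242, 0.309, 0.398])

      slope, intercept = np.polyfit(conc, absorbance, 1)          # Beer-Lambert-type linear fit
      r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2
      print(f"A = {slope:.4f} * c + {intercept:.4f},  R^2 = {r2:.4f}")

      # read back an unknown sample from its measured absorbance
      a_sample = 0.265
      print(f"estimated Mn content: {(a_sample - intercept) / slope:.2f} %")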

  11. On-Line Multi-Damage Scanning Spatial-Wavenumber Filter Based Imaging Method for Aircraft Composite Structure

    Directory of Open Access Journals (Sweden)

    Yuanqiang Ren

    2017-05-01

    Full Text Available Structural health monitoring (SHM of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.

  12. Self-guided method to search maximal Bell violations for unknown quantum states

    Science.gov (United States)

    Yang, Li-Kai; Chen, Geng; Zhang, Wen-Hao; Peng, Xing-Xiang; Yu, Shang; Ye, Xiang-Jun; Li, Chuan-Feng; Guo, Guang-Can

    2017-11-01

    In recent decades, a great variety of research and applications concerning Bell nonlocality have been developed with the advent of quantum information science. Provided that Bell nonlocality can be revealed by the violation of a family of Bell inequalities, finding the maximal Bell violation (MBV) for unknown quantum states becomes an important and inevitable task during Bell experiments. In this paper we introduce a self-guided method to find MBVs for unknown states using a stochastic gradient ascent algorithm (SGA), by parametrizing the corresponding Bell operators. For the three investigated systems (two qubit, three qubit, and two qutrit), this method can ascertain the MBV of general two-setting inequalities within 100 iterations. Furthermore, we prove SGA is also feasible when facing more complex Bell scenarios, e.g., d-setting d-outcome Bell inequalities. Moreover, compared to other possible methods, SGA exhibits significant superiority in efficiency, robustness, and versatility.
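
    The authors' SGA parametrizes general Bell operators for several systems; the toy below restricts itself to the two-qubit CHSH case. The state is treated as a black box that only returns correlators, and the Bell value is climbed with a finite-difference gradient estimate; for the maximally entangled state used here the result should approach the Tsirelson bound 2*sqrt(2) ≈ 2.83.

      import numpy as np

      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Z = np.array([[1, 0], [0, -1]], dtype=complex)
      phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # the "unknown" state under test

      def correlator(a, b):
          """<A(a) x B(b)> with observables cos(t) Z + sin(t) X on each side."""
          A = np.cos(a) * Z + np.sin(a) * X
          B = np.cos(b) * Z + np.sin(b) * X
          return np.real(phi_plus.conj() @ np.kron(A, B) @ phi_plus)

      def chsh(theta):
          a1, a2, b1, b2 = theta
          return (correlator(a1, b1) + correlator(a1, b2)
                  + correlator(a2, b1) - correlator(a2, b2))

      rng = np.random.default_rng(7)
      theta = rng.uniform(0.1, 1.5, size=4)       # a random start avoids the flat saddle at zero settings
      for _ in range(400):
          grad = np.zeros(4)
          for i in range(4):                      # finite-difference estimate of the Bell-value gradient
              e = np.zeros(4)
              e[i] = 1e-4
              grad[i] = (chsh(theta + e) - chsh(theta - e)) / 2e-4
          theta += 0.1 * grad                     # gradient ascent towards the maximal violation
      print(f"maximal CHSH value found: {chsh(theta):.4f} (Tsirelson bound {2 * np.sqrt(2):.4f})")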

  13. Foreshock search over a long duration using a method of setting appropriate criteria

    Science.gov (United States)

    Toyomoto, Y.; Kawakata, H.; Hirano, S.; Doi, I.

    2016-12-01

    Recently, small foreshocks have been detected using cross-correlation techniques (e.g., Bouchon et al., 2011), in which foreshocks are identified when the cross-correlation coefficient (CC) exceeds a certain threshold. For some shallow intraplate earthquakes, foreshocks whose hypocenters were estimated to be adjacent to the main shock hypocenter were detected from several tens of minutes before the main shock occurrence (Doi and Kawakata, 2012; 2013). At least two problems remain in the cross-correlation techniques employed. First, previous studies on foreshocks used data whose durations are at most a month (Kato et al., 2013); this is insufficient to check whether such events occurred only before the main shock. Second, CC is used as the detection criterion without considering the validity of the threshold. In this study, we search for foreshocks of an M 5.4 earthquake in central Nagano prefecture, Japan, on June 30, 2011, using as a template the vertical-component waveform recorded at the N.MWDH (Hi-net) station for one of the cataloged foreshocks (M 1). We calculate CC between the template and continuous waveforms of the same component at the same station for the two years before the main shock, and we try to overcome the problems mentioned above. We find that the histogram of CC is well modeled by the normal distribution, which is similar to previous studies on tremors (e.g., Ohta and Ide, 2008). According to the model, the expected number of misdetections is less than 1 when CC > 0.63. Therefore, we regard a waveform as due to a foreshock when CC > 0.63. As a result, foreshocks are detected only within the thirteen hours immediately before the main shock for the entire two years. By setting an appropriate threshold, we conclude that foreshocks just before the main shock occurrence are not stationary events. Acknowledgments: We use continuous waveform records of the NIED high sensitivity seismograph network in Japan (Hi-net) and the JMA
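
    The key step, choosing the CC threshold so that the expected number of chance exceedances over the whole scan stays below one under a fitted normal background model, can be sketched as follows; the CC series here is synthetic and stands in for the real two-year scan, and SciPy is assumed to be available.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      cc = rng.normal(loc=0.0, scale=0.12, size=2_000_000)     # synthetic background CC values
      cc[1_000_000] = 0.8                                      # one planted "foreshock" detection

      mu, sigma = np.mean(cc), np.std(cc)                      # fit the normal background model
      n = cc.size
      # smallest threshold for which the expected number of chance exceedances drops below one
      threshold = mu + sigma * norm.isf(1.0 / n)
      detections = np.flatnonzero(cc > threshold)
      print(f"threshold = {threshold:.2f}, detections at indices {detections}")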

  14. Untargeted metabolomic profiling plasma samples of patients with lung cancer for searching significant metabolites by HPLC-MS method

    Science.gov (United States)

    Dementeva, N.; Ivanova, K.; Kokova, D.; Kurzina, I.; Ponomaryova, A.; Kzhyshkowska, J.

    2017-09-01

    Lung cancer is one of the most common types of cancer leading to death. Consequently, the search for and identification of the metabolites associated with the risk of developing cancer are very valuable. For this purpose, untargeted metabolic profiling of plasma samples collected from patients with lung cancer (n = 100) and a control group (n = 100) was conducted. After sample preparation, the plasma samples were analyzed using an LC-MS method. Biostatistical methods were applied to pre-process the data and to extract the dominating metabolites that account for the difference between the case and control groups. At least seven significant metabolites were evaluated and annotated. Most of the identified metabolites are connected with lipid metabolism, and their combination could be useful for follow-up studies of lung cancer pathogenesis.

  15. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  16. Multivariate methods and the search for single top-quark production in association with a W boson in ATLAS

    International Nuclear Information System (INIS)

    Koevesarki, Peter

    2012-11-01

    This thesis describes three machine learning algorithms that can be used for physics analyses. The first is a density estimator that was derived from the Green's function identity of the Laplace operator and is capable of tagging data samples according to the signal purity. This latter task can also be performed with regression methods, and such an algorithm was implemented based on fast multi-dimensional polynomial regression. The accuracy was improved with a decision tree using smooth boundaries. Both methods apply rigorous checks against overtraining to make sure the results are drawn from statistically significant features. These two methods were applied in the search for single top-quark production with a W boson. Their separation powers differ greatly in favour of the regression method, mainly because it can exploit the extra information available during training. The third method is an unsupervised learning algorithm that seeks an optimal coordinate system for a sample in the sense of maximal information entropy, which may aid future methods to model data.

  18. An efficient method for the creation of tunable optical line traps via control of gradient and scattering forces.

    Science.gov (United States)

    Tietjen, Gregory T; Kong, Yupeng; Parthasarathy, Raghuveer

    2008-07-07

    Interparticle interaction energies and other useful physical characteristics can be extracted from the statistical properties of the motion of particles confined by an optical line trap. In practice, however, the potential energy landscape, U(x), imposed by the line provides an extra, and in general unknown, influence on particle dynamics. We describe a new class of line traps in which both the optical gradient and scattering forces acting on a trapped particle are designed to be linear functions of the line coordinate and in which their magnitude can be counterbalanced to yield a flat U(x). These traps are formed using approximate solutions to general relations concerning non-conservative optical forces that have been the subject of recent investigations [Y. Roichman, B. Sun, Y. Roichman, J. Amato-Grill, and D. G. Grier, Phys. Rev. Lett. 100, 013602-4 (2008).]. We implement the lines using holographic optical trapping and measure the forces acting on silica microspheres, demonstrating the tunability of the confining potential energy landscape. Furthermore, we show that our approach efficiently directs available laser power to the trap, in contrast to other methods.
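
    A standard way to recover the confining landscape U(x) from the statistics of a trapped particle, whether or not it is the exact procedure the authors use, is Boltzmann inversion of the sampled position histogram, U(x) = -kT ln P(x) up to an additive constant. The sketch below applies it to synthetic positions drawn from a known harmonic landscape.

      import numpy as np

      rng = np.random.default_rng(2)
      # synthetic "measured" positions along the line trap, in micrometres, for a harmonic landscape
      sigma_um = 0.2                                    # hypothetical spread set by stiffness and kT
      x = rng.normal(0.0, sigma_um, size=200_000)

      counts, edges = np.histogram(x, bins=60)
      centers = 0.5 * (edges[:-1] + edges[1:])
      good = counts > 0
      U_kT = -np.log(counts[good] / counts.max())       # Boltzmann inversion, U(x)/kT, zeroed at the peak

      # the recovered curvature should match the 1/(2 sigma^2) expected for a harmonic well
      a = np.polyfit(centers[good], U_kT, 2)[0]
      print(f"recovered curvature {a:.2f} kT/um^2, expected {1 / (2 * sigma_um ** 2):.2f}")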

  19. I-line stepper based overlay evaluation method for wafer bonding applications

    Science.gov (United States)

    Kulse, P.; Sasai, K.; Schulz, K.; Wietstruck, M.

    2018-03-01

    In the last decades the semiconductor technology has been driven by Moore's law leading to high performance CMOS technologies with feature sizes of less than 10 nm [1]. It has been pointed out that not only scaling but also the integration of novel components and technology modules into CMOS/BiCMOS technologies is becoming more attractive to realize smart and miniaturized systems [2]. Driven by new applications in the area of communication, health and automation, new components and technology modules such as BiCMOS embedded RF-MEMS, high-Q passives, Sibased microfluidics and InP-SiGe BiCMOS heterointegration have been demonstrated [3-6]. In contrast to standard VLSI processes fabricated on front side of the silicon wafer, these new technology modules additionally require to process the backside of the wafer; thus require an accurate alignment between the front and backside of the wafer. In previous work an advanced back to front side alignment technique and implementation into IHP's 0.25/0.13 µm high performance SiGe:C BiCMOS backside process module has been presented [7]. The developed technique enables a high resolution and accurate lithography on the backside of BiCMOS wafer for additional backside processing. In addition to the aforementioned back side process technologies, new applications like Through-Silicon Vias (TSV) for interposers and advanced substrate technologies for 3D heterogeneous integration demand not only single wafer fabrication but also processing of wafer stacks provided by temporary and permanent wafer bonding [8-9]. In this work, the non-contact infrared alignment system of the Nikon® i-line Stepper NSR-SF150 for both alignment and the overlay determination of bonded wafer stacks with embedded alignment marks are used to achieve an accurate alignment between the different wafer sides. The embedded field image alignment (FIA) marks of the interface and the device wafer top layer are measured in a single measurement job. By taking the

  20. A new method of organizing spectral line intensity ratio fluctuations of auroral emissions

    International Nuclear Information System (INIS)

    Thelin, B.

    1986-02-01

    In this paper a new kind of linearization effect between the atmospheric auroral emissions is presented. The same kind of linearization effect has previously been found in nightglow emissions from photometer measurements, and in the spectrochemical field from studies of optical light sources. Linear graphs have been obtained for atomic spectral lines and vibrational band spectra when the spectral line intensity ratio fluctuations were plotted versus the photon energies of these emissions. This new effect has been studied with a spectrophotometer in auroral emissions, where linear graphs have been obtained on different auroral occasions. By carrying out such studies of auroral light it is possible to see the importance of the inelastic scattering cross sections between electrons and atoms and between electrons and molecules. In this way it has been shown to be possible to determine the mean energy of the interacting thermal electrons that are active in the different auroral phases. (author)

  1. Line scanning analysis of Dilingtou Yue porcelain in Southern Song Dynasty by SRXRF method

    International Nuclear Information System (INIS)

    Zhu Shoumei; Mao Zhenwei; Feng Min; Zhu Jian; Ling Xue; Sheng Yueming; Huang Yuying; He Wei

    2004-01-01

    The SRXRF line scanning analysis technique was used to measure the elemental contents in a Dilingtou Yue porcelain sherd of the Southern Song Dynasty from glaze to body. It is found that there exists a middle layer between the porcelain glaze and body. A line scanning comparison was also made with Ru porcelain of the Northern Song Dynasty. There are some differences between them: the middle layer of the Dilingtou Yue sherd is a little thinner than that of the Ru sherd, the elemental content distributions in the glaze differ from those of the Ru porcelain, and the changes of the contents from the surface to the inner side of the glaze are undulatory. The results indicate that the differences may result from the different body materials and the different kilns. (authors)

  2. Edge and line detection of complicated and blurred objects

    OpenAIRE

    Haugsdal, Kari

    2010-01-01

    This report deals with edge and line detection in pictures with complicated and/or blurred objects. It explores the alternatives available in edge detection, edge linking and object recognition. The chosen methods are Canny edge detection and local edge search processing combined with regional edge search processing in the form of polygon approximation.
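
    The thesis pairs Canny edge detection with polygon approximation; with OpenCV those two steps look roughly as follows. The image path is a placeholder, and the blur and hysteresis thresholds are illustrative values that would need tuning for blurred objects.

      import cv2

      img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # placeholder input image
      assert img is not None, "replace scene.png with a real image path"
      blurred = cv2.GaussianBlur(img, (5, 5), 1.4)               # suppress noise before edge detection

      edges = cv2.Canny(blurred, 50, 150)                        # hysteresis thresholds: low=50, high=150

      # link edges into contours and approximate each one by a polygon
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      polygons = [cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True) for c in contours]

      print(f"{len(contours)} contours, first polygon has {len(polygons[0])} vertices"
            if polygons else "no contours found")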

  3. Short segment search method for phylogenetic analysis using nested sliding windows

    Science.gov (United States)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    To analyze phylogenetics in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is necessary. After the sliding window is applied, a better short segment than the envelope protein segment and NS3 is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows in order to find a short segment which is representative of the whole genome. The result shows that our method can find a short segment which is about 6.57% more representative of the CDS segment, in the topological view, than the envelope segment or the NS3 segment.

  4. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    International Nuclear Information System (INIS)

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-01-01

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  5. Searching for degenerate Higgs bosons using a profile likelihood ratio method

    CERN Document Server

    Heikkilä, Jaana

    ATLAS and CMS collaborations at the Large Hadron Collider have observed a new resonance consistent with the standard model Higgs boson. However, it has been suggested that the observed signal could also be produced by multiple nearly mass-degenerate states that couple differently to the standard model particles. In this work, a method to discriminate between the hypothesis of a single Higgs boson and that of multiple mass-degenerate Higgs bosons was developed. Using the matrix of measured signal strengths in different production and decay modes, parametrizations for the two hypotheses were constructed as a general rank 1 matrix and the most general $5 \times 4$ matrix, respectively. The test statistic was defined as a ratio of profile likelihoods for the two hypotheses. The method was applied to the CMS measurements. The expected test statistic distribution was estimated twice by generating pseudo-experiments according to both the standard model hypothesis and the single Higgs boson hypothesis best fitting...

  6. Intelligent Search Method Based ACO Techniques for a Multistage Decision Problem EDP/LFP

    Directory of Open Access Journals (Sweden)

    Mostefa RAHLI

    2006-07-01

    Full Text Available The implementation of a numerical library of optimization-based calculation for electrical supply networks is at the centre of current research orientations; thus, our project in its given form is centred on the development of the platform NMSS. It is a software environment which will save much effort as regards load calculations, curve smoothing, loss calculations and economic planning of the generated powers [23]. Operational research [17] on the one hand and industrial practice on the other prove that the means and processes of simulation have reached a very appreciable level of reliability and mathematical confidence [4, 5, 14]. It is from this expert observation that many processes place their confidence in the results of simulation. The handicap of this approach or methodology is that it bases its judgments and manipulations on simplified assumptions and constraints whose influence was deliberately neglected relative to the cost to be spent [14]. By juxtaposing the methods of simulation with artificial intelligence techniques, the assembled set of numerical methods acquires an optimal reliability whose assurance cannot be doubted. The software environment NMSS [23] can be a rallying point for simulation and electrical network calculation techniques via a graphic interface; the same software integrates an AI capability via an expert system module. Our problem is a multistage case where the stages are completely dependent and cannot be performed separately. For a multistage problem [21, 22], the results obtained from a credible (large size) problem calculation raise the following question: could the choice of a set of numerical methods make the calculation of a complete problem, using more than two treatment levels, yield the smallest possible total error? It is well known from algorithmic theory that each treatment can be characterized by a function called its mathematical complexity. This complexity is in fact a cost (a weight overloading

  7. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined in order to exploit the best of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in a supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  8. Detection method of proximal caries using line profile in digital intra-oral radiography

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Suk; Kim, Gyu Tae; Hwang, Eui Hwan; Lee, Min Ja; Choi, Sam Jin; Park, Hun Kuk [Department of Oral and Maxillofacial Radiology, School of Dentistry and Institute of Oral Biology, Kyung Hee University, Seoul (Korea, Republic of); Park, Jeong Hoon [Department of Biomedical Engineering, College of Medicine, Kyung Hee University, Seoul (Korea, Republic of)

    2009-12-15

    The purpose of this study was to investigate how to detect proximal caries using the line profile and to validate linear measurements of proximal caries lesions by basic digital manipulation of radiographic images. X-ray images of a control group (15) and of carious teeth (15) from patients were used. For each image, the line profile at the proximal caries-susceptible zone was calculated. To evaluate the contrast as a function of the line profile for detecting proximal caries, a difference coefficient (D) that indicates the relative difference between caries and sound dentin or intact enamel was measured. Mean values of D were 0.0354 ± 0.0155 in the non-caries group and 0.2632 ± 0.0982 in the caries group (p < 0.001). The mean value of the caries group was higher than that of the non-caries group, and there was a correlation between proximal dental caries and D. From the result, D shows great potential as a new detection parameter for proximal dental caries.

  10. Assessing the search for information on Three Rs methods, and their subsequent implementation: a national survey among scientists in the Netherlands.

    Science.gov (United States)

    van Luijk, Judith; Cuijpers, Yvonne; van der Vaart, Lilian; Leenaars, Marlies; Ritskes-Hoitinga, Merel

    2011-10-01

    A local survey conducted among scientists into the current practice of searching for information on Three Rs (i.e. Replacement, Reduction and Refinement) methods has highlighted the gap between the statutory requirement to apply Three Rs methods and the lack of criteria to search for them. To verify these findings on a national level, we conducted a survey among scientists throughout The Netherlands. Due to the low response rate, the results give an impression of opinions, rather than being representative of The Netherlands as a whole. The findings of both surveys complement each other, and indicate that there is room for improvement. Scientists perceive searching the literature for information on Three Rs methods to be a difficult task, and specific Three Rs search skills and knowledge of Three Rs databases are limited. Rather than using a literature search, many researchers obtain information on these methods through personal communication, which means that published information on possible Three Rs methods often remains unfound and unused. A solution might be to move beyond the direct search for information on Three Rs methods and choose another approach. One approach that seems rather appropriate is that of systematic review. This provides insight into the necessity for any new animal studies, as well as optimal implementation of available data and the prevention of unnecessary animal use in the future. 2011 FRAME.

  11. Line-feature-based calibration method of structured light plane parameters for robot hand-eye system

    Science.gov (United States)

    Qi, Yuhan; Jing, Fengshui; Tan, Min

    2013-03-01

    For monocular structured-light vision measurement, it is essential to calibrate the structured-light plane parameters in addition to the camera intrinsic parameters. A line-feature-based calibration method of the structured-light plane parameters for a robot hand-eye system is proposed. Structured-light stripes are selected as the calibrating primitive elements, and the robot moves from one calibrating position to another under a constraint such that two misaligned stripe lines are generated. The images of the stripe lines can then be captured by the camera fixed at the robot's end link. During calibration, the equations of the two stripe lines in the camera coordinate system are calculated, and the structured-light plane can then be determined. As the robot's motion may affect the effectiveness of the calibration, the robot's motion constraints are analyzed. A calibration experiment and two vision measurement experiments were implemented, and the results reveal that the calibration accuracy can meet the precision requirement of robot thick-plate welding. Finally, analysis and discussion are provided to illustrate that the method has high efficiency, fit for industrial in-situ calibration.
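
    The calibration ultimately amounts to estimating the light-plane equation from 3-D points reconstructed on the two misaligned stripe lines. A least-squares plane fit of such points via an SVD is sketched below on synthetic camera-frame data; it is a generic fit, not the paper's full procedure.

      import numpy as np

      def fit_plane(points):
          """Least-squares plane through 3-D points: returns a unit normal n and offset d with n.p + d = 0."""
          centroid = points.mean(axis=0)
          _, _, vh = np.linalg.svd(points - centroid)
          normal = vh[-1]                                 # direction of smallest variance
          return normal, -normal @ centroid

      # synthetic camera-frame points on two misaligned stripe lines lying in the plane z = 0.5 x + 400 (mm)
      rng = np.random.default_rng(3)
      t = rng.uniform(-50, 50, size=(100, 1))
      line1 = np.hstack([t, 0.2 * t + 10, 0.5 * t + 400])
      line2 = np.hstack([t, -0.3 * t - 5, 0.5 * t + 400])
      pts = np.vstack([line1, line2]) + rng.normal(0, 0.2, size=(200, 3))   # small measurement noise

      n, d = fit_plane(pts)
      print("plane normal:", np.round(n, 3), " offset:", round(float(d), 1))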

  12. Large Neighborhood Search

    DEFF Research Database (Denmark)

    Pisinger, David; Røpke, Stefan

    2010-01-01

    Heuristics based on large neighborhood search have recently shown outstanding results in solving various transportation and scheduling problems. Large neighborhood search methods explore a complex neighborhood by use of heuristics. Using large neighborhoods makes it possible to find better candidate solutions in each iteration and hence traverse a more promising search path. Starting from the large neighborhood search method, we give an overview of very large scale neighborhood search methods and discuss recent variants and extensions like variable depth search and adaptive large neighborhood search.
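
    As a minimal illustration of the destroy-and-repair idea, the sketch below applies a large neighborhood search loop to a small random travelling-salesman instance: each iteration removes a sizeable fraction of the nodes (the large neighborhood) and greedily reinserts them, keeping only improvements. The instance and parameters are made up.

      import numpy as np

      rng = np.random.default_rng(4)
      pts = rng.uniform(0, 100, size=(40, 2))

      def length(tour):
          return sum(np.linalg.norm(pts[tour[i]] - pts[tour[(i + 1) % len(tour)]])
                     for i in range(len(tour)))

      def cheapest_insertion(tour, node):
          """Insert node at the position that increases the tour length the least."""
          best_pos, best_inc = 0, float("inf")
          for i in range(len(tour)):
              a, b = tour[i], tour[(i + 1) % len(tour)]
              inc = (np.linalg.norm(pts[a] - pts[node]) + np.linalg.norm(pts[node] - pts[b])
                     - np.linalg.norm(pts[a] - pts[b]))
              if inc < best_inc:
                  best_pos, best_inc = i + 1, inc
          return tour[:best_pos] + [node] + tour[best_pos:]

      best = list(range(len(pts)))                         # arbitrary initial solution
      for _ in range(500):
          removed = list(rng.choice(len(pts), size=10, replace=False))   # destroy: drop 25% of the nodes
          candidate = [v for v in best if v not in removed]
          for v in removed:                                # repair: greedy cheapest insertion
              candidate = cheapest_insertion(candidate, v)
          if length(candidate) < length(best):             # accept only improvements
              best = candidate
      print(f"tour length after LNS: {length(best):.1f}")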

  13. Thermal Unit Commitment Scheduling Problem in Utility System by Tabu Search Embedded Genetic Algorithm Method

    Directory of Open Access Journals (Sweden)

    C. Christober Asir Rajan

    2008-06-01

    Full Text Available The objective of this paper is to find a generation schedule such that the total operating cost is minimized when subjected to a variety of constraints. This also means that it is desirable to find the optimal unit commitment in the power system for the next H hours. A 66-bus utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 24, 57 and 175 buses. Numerical results are shown comparing the cost solutions and computation times obtained by different intelligent and conventional methods.

  14. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

    Full Text Available Web mining has been widely used to discover knowledge from various sources on the web. One of the important tools in web mining is the mining of web users' behavior, which is considered a way to discover the potential knowledge in web users' interactions. Nowadays, website personalization is regarded as a popular phenomenon among web users; it plays an important role in facilitating user access and provides information based on the users' own requirements and interests. Extracting important features of web user behavior plays a significant role in web usage mining. Such features are the page visit frequency in each session, the visit duration, and the dates of visiting certain pages. This paper presents a method to predict users' interests and to propose a list of pages based on those interests by identifying user behavior with a fuzzy technique called the fuzzy clustering method. Because users have different interests and may pursue one or more interests at a time, a user's interest may belong to several clusters, and fuzzy clustering provides the possible overlap. The resulting clusters are used to extract fuzzy rules. This helps detect the user's movement pattern, and, using a neural network, a list of suggested pages is provided to the users.
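
    The paper's full pipeline (feature extraction, fuzzy rule extraction, neural-network recommendation) is not reproduced here. The sketch below shows only the clustering step, a plain fuzzy c-means on made-up per-session features, illustrating how one session can belong to several interest clusters with graded memberships.

      import numpy as np

      def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=5):
          """Basic fuzzy c-means: returns cluster centers and the membership matrix U (n x c)."""
          rng = np.random.default_rng(seed)
          n = len(X)
          U = rng.random((n, c))
          U /= U.sum(axis=1, keepdims=True)                 # memberships of each point sum to 1
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              # standard membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
              U = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
          return centers, U

      # hypothetical per-session features: [page-visit frequency, visit duration (min)]
      rng = np.random.default_rng(6)
      sessions = np.vstack([rng.normal([5, 2], 1.0, (40, 2)),
                            rng.normal([20, 15], 2.0, (40, 2)),
                            rng.normal([8, 30], 2.0, (40, 2))])
      centers, U = fuzzy_c_means(sessions)
      print("cluster centers:\n", np.round(centers, 1))
      print("memberships of the first session:", np.round(U[0], 2))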

  15. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    Full Text Available The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining play important roles in identifying the relevant web page. Relevancy of a web page denotes how well a retrieved web page or set of web pages meets the information need of the user. Page Rank, Weighted Page Rank and Hypertext Induced Topic Selection (HITS) are existing algorithms which consider only web structure mining. The Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi) and the Three-Level Scoring method (TLS) are existing relevancy scoring methods which consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), which is a blend of web content mining and web structure mining and demonstrates the relevancy of a page with respect to a given query for two different case scenarios. It is shown that WPRR's performance is better than that of the existing algorithms.
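
    The general idea of blending a structure-based score with a content-based score can be illustrated as below. This is not the WPRR algorithm itself (its weighting scheme is defined in the paper); the linear combination of PageRank with a cosine term-frequency similarity, and the toy documents and links, are assumptions made purely for illustration.

```python
import math
from collections import Counter

def pagerank(links, d=0.85, iters=50):
    """Basic PageRank over an adjacency dict {page: [outgoing links]}."""
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            share = pr[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] = new.get(q, 0.0) + d * share
        pr = new
    return pr

def cosine(query, doc):
    """Cosine similarity between raw term-frequency vectors."""
    q, t = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * t[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in t.values()))
    return dot / norm if norm else 0.0

def hybrid_score(query, docs, links, alpha=0.5):
    """Blend of a structure score (PageRank) and a content score (cosine)."""
    pr = pagerank(links)
    return {p: alpha * pr[p] + (1 - alpha) * cosine(query, docs[p]) for p in docs}

docs = {"a": "line search optimization method", "b": "cooking recipes", "c": "gradient line search"}
links = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
print(hybrid_score("line search", docs, links))
```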

  16. Gamma-line intensity difference method for ¹¹⁷ᵐSn at high resolution

    CERN Document Server

    Remeikis, V; Mazeika, K

    1998-01-01

    A method for detecting small differences in the gamma-spectrum line intensity of a radionuclide in different environments has been developed for measurements at high resolution. The experiments were carried out with a pure germanium planar detector. Solving the methodical problems made it possible to measure, from the difference in the gamma spectra, the relative difference ΔIγ/Iγ = (3.4 ± 1.5) × 10⁻⁴ of the ¹¹⁷ᵐSn 156.02 keV gamma-line intensity for the radionuclide in SnO₂ with respect to SnS. The error of the result is caused mainly by the statistical accuracy, which is limited by the highest attainable counting rate at sufficiently high energy resolution and by the relatively short half-life of ¹¹⁷ᵐSn. (author)
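
    The statistical limitation mentioned above can be made concrete with a short counting-statistics sketch: given net peak areas of the 156.02 keV line measured for the two compounds (the numbers below are invented), the relative intensity difference and its Poisson uncertainty follow from standard error propagation. Normalization to equal source activity and geometry is ignored here.

```python
import math

def relative_difference(n1, n2):
    """Relative gamma-line intensity difference (I1 - I2)/I2 from two normalized
    net peak areas n1, n2 (counts), with the Poisson counting uncertainty."""
    ratio = n1 / n2
    delta = ratio - 1.0
    sigma = ratio * math.sqrt(1.0 / n1 + 1.0 / n2)   # propagation of sqrt(N) errors
    return delta, sigma

# Invented peak areas of the order needed to resolve a few parts in 1e4
delta, sigma = relative_difference(2.000_7e8, 2.000_0e8)
print(f"dI/I = {delta:.1e} +/- {sigma:.1e}")   # ~10^8 counts per peak are required
```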

  17. Insulation Coordination of Arcing Horns on HVDC Electrode Lines: Protection Performance Evaluation, Influence Factors and Improvement Method

    Directory of Open Access Journals (Sweden)

    Xiandong Li

    2018-02-01

    Full Text Available Arcing horns are widely used on high voltage overhead lines to protect insulator strings from being destroyed by the free-burning arcs caused by lightning faults. In this paper, we focus on the insulation coordination of arcing horns on the electrode lines of a 5000 MW, ±800 kV high voltage direct current (HVDC) system. The protection performance of arcing horns is determined by the characteristics of not only the external system but also the fault arc. Therefore, the behavior and characteristics of long free-burning arcs are first investigated experimentally. To evaluate the protection performance of arcing horns, a static stability criterion based on the U-I characteristic is introduced. The factors influencing the protection performance of arcing horns are analyzed theoretically. Finally, methods for improving the protection performance of arcing horns are proposed, and a diversified configuration strategy for arcing horns is recommended for cost saving.
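
    The static stability idea behind the U-I characteristic method can be illustrated with a toy circuit analysis. The sketch below uses an Ayrton-type empirical arc characteristic with invented constants (not the characteristics measured in the paper) and keeps only intersections with the source line that satisfy Kaufmann's criterion R + dU_arc/dI > 0.

```python
import numpy as np

# Ayrton-type arc characteristic U_arc = A0 + B0*L + (C0 + D0*L)/I
# (a common empirical form; the constants below are purely illustrative)
A0, B0, C0, D0 = 30.0, 10.0, 50.0, 30.0

def arc_voltage(i, L):
    return A0 + B0 * L + (C0 + D0 * L) / i

def stable_operating_points(E, R, L):
    """Intersections of the source line U = E - R*I with the arc characteristic,
    kept only if the static stability criterion R + dU_arc/dI > 0 holds."""
    # E - R*I = A0 + B0*L + (C0 + D0*L)/I  ->  R*I^2 - (E - A0 - B0*L)*I + (C0 + D0*L) = 0
    b, c = -(E - A0 - B0 * L), C0 + D0 * L
    disc = b * b - 4 * R * c
    if disc < 0:
        return []                              # no intersection: the arc cannot be sustained
    points = []
    for s in (+1.0, -1.0):
        i = (-b + s * np.sqrt(disc)) / (2 * R)
        if i <= 0:
            continue
        slope_arc = -(C0 + D0 * L) / i ** 2    # slope of the falling arc characteristic
        if R + slope_arc > 0:                  # statically stable operating point
            points.append((round(i, 2), round(arc_voltage(i, L), 2)))
    return points

print(stable_operating_points(E=1000.0, R=2.0, L=1.0))   # longer gaps -> fewer stable points
```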

  18. The Terminal Responses of the Two-Wire Line in Multiaperture Cavities Based on Electromagnetic Topology and Method of Moments

    Directory of Open Access Journals (Sweden)

    Ying Li

    2011-01-01

    Full Text Available A simulation technique based on electromagnetic topology (EMT) theory is proposed for analyzing electromagnetic interference (EMI) coupling through apertures onto a two-wire transmission line enclosed within metallic structures. The electromagnetic interactions between apertures and the external-internal interactions are treated through topological decomposition and a multistep iterative method. The load responses of the two-wire transmission line are then resolved by the Baum-Liu-Tesche (BLT) equation. Simulation results both without and with the electromagnetic interaction are presented for the frequency range from 100 MHz to 3 GHz. The numerical results obtained by the two methods imply that the electromagnetic interaction cannot simply be ignored, especially for frequencies up to 1 GHz.
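
    For reference, the single-tube form of the BLT equation for a uniform line with resistive terminations can be sketched as below. The source terms S1 and S2, which in the paper come from the aperture-coupling fields, are left as plain inputs here, and all numerical values are arbitrary placeholders; the sketch assumes the commonly quoted single-line form of the equation rather than the paper's full network formulation.

```python
import numpy as np

def blt_terminal_voltages(Z1, Z2, Zc, gamma, length, S):
    """Single-tube BLT equation: terminal voltages of a uniform line with loads
    Z1, Z2, characteristic impedance Zc and propagation constant gamma.
    S = [S1, S2] are the combined source waves from the field-to-line coupling model."""
    rho1 = (Z1 - Zc) / (Z1 + Zc)          # reflection coefficients at the two ends
    rho2 = (Z2 - Zc) / (Z2 + Zc)
    e = np.exp(gamma * length)
    interaction = np.array([[-rho1, e],
                            [e, -rho2]], dtype=complex)
    scattering = np.array([[1 + rho1, 0],
                           [0, 1 + rho2]], dtype=complex)
    return scattering @ np.linalg.solve(interaction, np.asarray(S, dtype=complex))

# Arbitrary illustrative numbers: 1 m lossless line, 200-ohm characteristic impedance,
# 50-ohm loads, evaluated at 300 MHz (gamma = j*beta)
beta = 2 * np.pi * 300e6 / 3e8
V = blt_terminal_voltages(Z1=50, Z2=50, Zc=200, gamma=1j * beta, length=1.0, S=[1.0, 0.5])
print(np.abs(V))        # |V(0)|, |V(L)|
```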

  19. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Full Text Available Photovoltaic panels have a non-linear current-voltage characteristic and produce maximum power at only one point, called the maximum power point. Under uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. For an irregularly illuminated photovoltaic panel, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions; an appropriate strategy for tracking the maximum power point is then chosen by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
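
    One way to realize the two-stage idea (not necessarily the authors' exact algorithm) is sketched below: the expected maximum power estimated from temperature and irradiance is compared with the tracked power, and a coarse global voltage sweep replaces perturb-and-observe when the deviation suggests partial shading. The function names, threshold and toy two-peak P-V curve are illustrative assumptions.

```python
def mppt_step(measure_pv, set_voltage, v_range, v_now, p_expected,
              dv=0.5, shading_tol=0.15):
    """One decision step of a hybrid MPPT: perturb-and-observe while the panel looks
    uniformly insolated, otherwise sweep the whole voltage range for the global maximum.
    measure_pv(v) -> power; p_expected is the temperature/irradiance-based estimate."""
    p_now = measure_pv(v_now)
    if p_now < (1 - shading_tol) * p_expected:
        # Likely partial shading: coarse global scan of the P-V curve
        candidates = (v_range[0] + i * dv
                      for i in range(int((v_range[1] - v_range[0]) / dv)))
        v_best = max(candidates, key=measure_pv)
        set_voltage(v_best)
        return v_best
    # Uniform insolation: ordinary perturb-and-observe around the current point
    v_next = v_now + dv if measure_pv(v_now + dv) > p_now else v_now - dv
    set_voltage(v_next)
    return v_next

# Toy P-V curve with two maxima to mimic partial shading (purely illustrative)
def toy_power(v):
    return max(0.0, 60 - (v - 18) ** 2) + max(0.0, 100 - 2 * (v - 30) ** 2)

v = mppt_step(toy_power, lambda _v: None, (0, 40), v_now=18.0, p_expected=100.0)
print("next operating voltage:", v)   # jumps to the global peak near 30 V
```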

  20. Electron temperature measurement by a helium line intensity ratio method in helicon plasmas

    International Nuclear Information System (INIS)

    Boivin, R.F.; Kline, J.L.; Scime, E.E.

    2001-01-01

    Electron temperature measurements in helicon plasmas are difficult. The presence of intense rf fields in the plasma complicates the interpretation of Langmuir probe measurements. Furthermore, the non-negligible ion temperature in the plasma considerably shortens the lifetime of conventional Langmuir probes. A spectroscopic technique based on the relative intensities of neutral helium lines is used to measure the electron temperature in the HELIX (Hot hELicon eXperiment) plasma [P. A. Keiter et al., Phys. Plasmas 4, 2741 (1997)]. This nonintrusive diagnostic is based on the fact that the electron impact excitation rate coefficients for helium singlet and triplet states differ as a function of the electron temperature. The different aspects related to the validity of this technique for measuring the electron temperature in rf-generated plasmas are discussed in this paper. At low plasma density (n_e ≤ 10¹¹ cm⁻³), this diagnostic is believed to be very reliable, since the population of the emitting level can be easily estimated with reasonable accuracy by assuming that all excitation originates from the ground state (steady-state corona model). At higher density, secondary processes (excitation transfer, excitation from metastable states, cascading) become more important and a more complex collisional-radiative model must be used to predict the electron temperature. In this work, different helium transitions are examined and a suitable transition pair is identified. For an electron temperature of 10 eV, the line ratio is measured as a function of plasma density and compared to values predicted by models. The measured line-ratio function is in good agreement with theory, and the data suggest that excitation transfer is the dominant secondary process in high-density plasmas.
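
    The final step of such a diagnostic — inverting a measured singlet-to-triplet line ratio against a model curve — can be sketched as a simple interpolation. The ratio-versus-temperature table below is a made-up placeholder, not the corona or collisional-radiative model used for HELIX.

```python
import numpy as np

# Placeholder model curve: singlet/triplet line ratio vs electron temperature (eV).
# In practice this comes from a corona or collisional-radiative model, not from here.
TE_EV = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 20.0])
RATIO = np.array([0.08, 0.15, 0.22, 0.28, 0.33, 0.42, 0.48])

def electron_temperature(measured_ratio):
    """Invert the monotonic ratio(Te) model curve by linear interpolation."""
    if not RATIO[0] <= measured_ratio <= RATIO[-1]:
        raise ValueError("measured ratio outside the modelled range")
    return float(np.interp(measured_ratio, RATIO, TE_EV))

print(electron_temperature(0.30))   # -> roughly 8-9 eV on this placeholder curve
```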