WorldWideScience

Sample records for based mini-prep method

  1. A rapid mini-prep DNA extraction method in rice (Oryza sativa)

    African Journals Online (AJOL)

    DNA extraction is an important step in molecular assays and plays a vital role in obtaining high-resolution results in gel-based systems, particularly in the case of cereals with high content of interfering components in the early steps of DNA extraction. Here a rapid mini-prep DNA extraction method, optimized for rice, which ...

  2. A rapid mini-prep DNA extraction method in rice (Oryza sativa)

    African Journals Online (AJOL)

    2009-01-19

    ... extracted using the first method was sufficient for amplification of relatively large DNA fragments, and it was used as the DNA template to amplify specific DNA from rice plants. To examine the presence of the Rf1A gene (Akagi et al., 2004) in the rice genomic DNA, two different lines (one sterile line, Neda-A, ...

  3. Evaluating the Impact of DNA Extraction Method on the Representation of Human Oral Bacterial and Fungal Communities.

    Science.gov (United States)

    Vesty, Anna; Biswas, Kristi; Taylor, Michael W; Gear, Kim; Douglas, Richard G

    2017-01-01

    The application of high-throughput, next-generation sequencing technologies has greatly improved our understanding of the human oral microbiome. While deciphering this diverse microbial community using such approaches is more accurate than traditional culture-based methods, experimental bias introduced during critical steps such as DNA extraction may compromise the results obtained. Here, we systematically evaluate four commonly used microbial DNA extraction methods (MoBio PowerSoil® DNA Isolation Kit, QIAamp® DNA Mini Kit, Zymo Bacterial/Fungal DNA Mini Prep™, phenol:chloroform-based DNA isolation) based on the following criteria: DNA quality and yield, and microbial community structure based on Illumina amplicon sequencing of the V3-V4 region of the 16S rRNA gene of bacteria and the internal transcribed spacer (ITS) 1 region of fungi. Our results indicate that DNA quality and yield varied significantly with DNA extraction method. Representation of bacterial genera in plaque and saliva samples did not significantly differ across DNA extraction methods and DNA extraction method showed no effect on the recovery of fungal genera from plaque. By contrast, fungal diversity from saliva was affected by DNA extraction method, suggesting that not all protocols are suitable to study the salivary mycobiome.

  4. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems: Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft-PLC implementation. PLC...

  5. Activity based costing (ABC Method)

    Directory of Open Access Journals (Sweden)

    Prof. Ph.D. Saveta Tudorache

    2008-05-01

    In the present paper, the need for and the advantages of using the Activity Based Costing method are presented, a need arising from the problem of information pertinence. This issue has occurred due to the limitations of classic methods in this field, limitations also reflected by the disadvantages of such classic methods in establishing complete costs.

  6. Activity-based costing method

    Directory of Open Access Journals (Sweden)

    Èuchranová Katarína

    2001-06-01

    Activity based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible, based on the fixed and variable costs of the inputs. You use ABC to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead and rework) and to determine which of them are candidates for reduction. A process is any activity that accepts inputs, adds value to these inputs for customers and produces outputs for these customers. The customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes. Inputs, controls and resources are all supplied to the process. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs through their connection to partial activities and processes is a new theme today. The introduction of this method is connected with very important changes in a firm's processes. The ABC method is an instrument that brings a competitive advantage to the firm.
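
    A minimal Python sketch of the cost-driver arithmetic this abstract describes; the activities, rates and driver counts below are illustrative assumptions, not figures from the paper.

      # Activity-based costing: assign overhead to a product by the units of
      # each cost driver it consumes. All figures below are made up.
      activity_rates = {              # cost per unit of cost driver
          "machine setup": 120.0,     # per setup
          "quality inspection": 35.0, # per inspection
          "order processing": 15.0,   # per purchase order
      }
      driver_usage = {"machine setup": 4, "quality inspection": 10, "order processing": 6}

      # product overhead = sum of (activity rate * driver units consumed)
      overhead = sum(rate * driver_usage[a] for a, rate in activity_rates.items())
      print(f"overhead assigned to the product: {overhead:.2f}")  # 920.00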

  7. Methodical bases of geodemographic forecasting

    Directory of Open Access Journals (Sweden)

    Катерина Сегіда

    2016-10-01

    The article deals with the methodological features of forecasting population size and composition. The essence and features of probabilistic demographic forecasting are considered, covering the component method and the method of dynamic series, and the requirements on initial indicators for each type of forecast are provided. It is noted that the geo-demographic forecast is an important component of a regional geo-demographic characteristic. The features of developing a demographic forecast by the component method (age shifting) are given, and the basic calculation formulae are presented, including the demographic balance equation, cohort formulae taking gender and age indicators into account, and survival coefficients. The basic methodical principles of demographic forecasting by the extrapolation method (dynamic series) are given, and calculation features based on generalized indicators, such as extrapolation on the basis of average net gain, average growth rate and average rate of gain, are presented. To develop a population forecast, the method of retrospective extrapolation (for the short-term forecast) and the component method (for the mid-term forecast) are mostly used. An example of such a forecast by the component method, for the gender and age structure of the population of Kharkiv region, is provided with a step-by-step explanation of the calculation. An example of forecasting the population of Kharkiv region by the method of dynamic series is also provided. Having calculated the main forecast indicators by administrative unit, it is possible to determine the features of further regional demographic development and to reveal internal territorial distinctions in demographic development. Application of separate forecasting methods allows one to develop the forecast for certain indicators; however, the essential variety, nonlinearity and non-stationarity of the processes constituting demographic development force one to look for new approaches and ...
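
    As a concrete anchor for the demographic balance equation mentioned in the abstract, here is a minimal Python sketch of one projection step; the population and component figures are illustrative, not Kharkiv region data.

      # Demographic balance equation, the core of the component method:
      # P(t+1) = P(t) + births - deaths + in-migration - out-migration
      def project(population, births, deaths, in_migration, out_migration):
          return population + births - deaths + in_migration - out_migration

      pop = 2_700_000                      # assumed starting population
      for _ in range(3):                   # three one-year steps, constant components
          pop = project(pop, births=24_000, deaths=38_000,
                        in_migration=10_000, out_migration=9_000)
      print(pop)                           # 2661000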

  8. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    The methods used for company valuation can be divided into 3 main groups: methods based on patrimony, methods based on financial performance, and methods based both on patrimony and on performance. The company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity, and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle that the value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to all the rights and obligations of a company. The valuation of companies based on their financial performance can be done in 3 ways: the return value, the yield value, and the present value of the cash flows. The mixed methods depend both on patrimony and on financial performance or can make use of other methods.

  9. An interactive segmentation method based on superpixel

    DEFF Research Database (Denmark)

    Yang, Shu; Zhu, Yaping; Wu, Xiaoyu

    2015-01-01

    This paper proposes an interactive image-segmentation method based on superpixels. To achieve fast segmentation, the method is used to establish a Graphcut model using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has excellent performance in terms of segmentation accuracy and computation efficiency compared with other segmentation algorithms based on pixels.

  10. DOM Based XSS Detecting Method Based on Phantomjs

    Science.gov (United States)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because malicious code does not appear in the HTML source code, DOM-based XSS cannot be detected by traditional methods. By analyzing the causes of DOM-based XSS, this paper proposes a detection method for DOM-based XSS based on PhantomJS. The paper uses function hijacking to detect dangerous operations and implements a prototype system. Comparison with existing tools shows that the system improves the detection rate and that the method is effective at detecting DOM-based XSS.

  11. Discrete mechanics Based on Finite Element Methods

    OpenAIRE

    Chen, Jing-bo; Guo, Han-Ying; Wu, Ke

    2002-01-01

    Discrete Mechanics based on finite element methods is presented in this paper. We also explore the relationship between this discrete mechanics and Veselov discrete mechanics. High order discretizations are constructed in terms of high order interpolations.

  12. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and labeled images are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each category representing a blur level. 300 out of 400 high-dimensional feature samples are used for training the VGG16 and BP neural networks, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically, and it achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
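
    The following Python sketch shows the two-stage pipeline the abstract outlines: a pretrained VGG16 truncated to its 4,096-dimensional fc2 output as a fixed feature extractor, followed by a small MLP standing in for the BP classifier. The torchvision weight name, preprocessing and classifier size are assumptions for illustration, not details from the paper.

      import torch, torchvision
      from torchvision import transforms
      from sklearn.neural_network import MLPClassifier

      vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").eval()
      vgg.classifier = vgg.classifier[:-1]   # drop the last layer -> 4096-d features

      prep = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
      ])

      def features(pil_image):
          with torch.no_grad():              # frozen extractor, no gradients
              return vgg(prep(pil_image).unsqueeze(0)).squeeze(0).numpy()

      # X: stacked 4096-d feature rows, y: blur-level labels in {0, 1, 2}
      # clf = MLPClassifier(hidden_layer_sizes=(256,)).fit(X_train, y_train)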

  13. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, among which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  14. Interchange Recognition Method Based on CNN

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2018-03-01

    The identification and classification of interchange structures in OSM data can provide important information for the construction of multi-scale models, navigation and location services, congestion analysis, etc. The traditional method of interchange identification relies on low-level, hand-designed characteristics and cannot effectively distinguish complex interchange structures with interfering sections. In this paper, a new method based on a convolutional neural network for identifying interchanges is proposed. The method combines vector data with raster images, uses the neural network to learn the fuzzy characteristics of interchanges, and classifies the complex interchange structures in OSM. Experiments show that this method has strong anti-interference properties and achieves good results in the classification of complex interchange shapes, with room for further improvement as the case base expands and the neural network model is optimized.

  15. Recommendation advertising method based on behavior retargeting

    Science.gov (United States)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce, and ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid leaked ad clicks due to objective reasons and can observe changes in the user's interest in time. Experiments show that the new method has a significant effect and can further be applied to online systems.

  16. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and on comparison with the valuation that the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
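
    A minimal Python sketch of the distance-to-ideal ranking described above; the competence values and the normalized Hamming distance are illustrative assumptions (the paper's Matching Level Index is not reproduced here).

      # Rank candidates by fuzzy Hamming distance to the ideal profile.
      ideal = [0.9, 0.8, 1.0, 0.7]            # assumed ideal level per competence

      candidates = {
          "A": [0.8, 0.9, 0.7, 0.6],
          "B": [0.9, 0.6, 0.9, 0.8],
      }

      def hamming(profile):
          # normalized Hamming distance between fuzzy sets: mean |difference|
          return sum(abs(p - i) for p, i in zip(profile, ideal)) / len(ideal)

      ranking = sorted(candidates, key=lambda name: hamming(candidates[name]))
      print(ranking)                          # candidate closest to the ideal first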

  17. Arts-Based Methods in Education

    DEFF Research Database (Denmark)

    Chemi, Tatiana; Du, Xiangyun

    2017-01-01

    This chapter introduces the field of arts-based methods in education from a general theoretical perspective, reviewing the journey of learning in connection to the arts, and the contribution of the arts to societies from an educational perspective. Also presented are the rationale and structure...

  18. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main goal of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.

  19. Deghosting based on the transmission matrix method

    Science.gov (United States)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong

    2017-12-01

    As seismic exploration and subsequent seismic exploitation advance, marine acquisition systems with towed streamers have become an important seismic data acquisition method. But the existing air-water reflective interface can generate surface-related multiples, including ghosts, which can affect the accuracy and performance of subsequent seismic data processing algorithms. Thus, we derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and is absolutely convergent. Initially, the effectiveness of the proposed method is demonstrated using synthetic data obtained from a designed layered model, and its noise-resistant property is also illustrated using noisy synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low frequency components are recovered reasonably and the fake high frequency components are attenuated; the recovered low frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed-streamer cases with accurate constant depth information, and its extension to variable-depth streamers in three-dimensional cases will be studied in the future.

  20. Recognizing Cuneiform Signs Using Graph Based Methods

    OpenAIRE

    Kriege, Nils M.; Fey, Matthias; Fisseler, Denis; Mutzel, Petra; Weichert, Frank

    2018-01-01

    The cuneiform script constitutes one of the earliest systems of writing and is realized by wedge-shaped marks on clay tablets. A tremendous number of cuneiform tablets have already been discovered and are incrementally digitized and made available for automated processing. As reading cuneiform script is still a manual task, we address the real-world application of recognizing cuneiform signs by two graph based methods with complementary runtime characteristics. We present a graph model for c...

  1. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a nonregular data partition algorithm using the K-means clustering algorithm to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
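
    A small Python sketch of the landmark-partitioning step, using k-means to form one spatially compact group per core; the coordinates, landmark count and core count are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      landmarks = rng.uniform(0, 512, size=(200, 2))  # 200 assumed 2-D landmarks
      n_cores = 8                                     # assumed number of cores

      groups = KMeans(n_clusters=n_cores, n_init=10).fit_predict(landmarks)
      # groups[i] is the core assigned to landmark i; each core then searches
      # correspondences only within its own cluster, limiting memory traffic.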

  2. Chapter 11. Community analysis-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  3. Bus Based Synchronization Method for CHIPPER Based NoC

    Directory of Open Access Journals (Sweden)

    D. Muralidharan

    2016-01-01

    Network on Chip (NoC) reduces the communication delay of System on Chip (SoC). The main limitations of NoC are power consumption and area overhead. Bufferless NoC reduces the area complexity and power consumption by eliminating the buffers in traditional routers. A bufferless NoC design should include livelock freedom, since such designs use hot potato routing; this increases the complexity of bufferless NoC design. Among the available propositions to reduce this complexity, the CHIPPER based bufferless NoC is considered one of the best options. Livelock freedom is provided in CHIPPER through the golden epoch and golden packet. All routers follow some synchronization method to identify a golden packet, and a clock based method is intuitively followed for synchronization in CHIPPER based NoCs. It is shown in this work that the worst-case latency of packets is unbearably high when the above synchronization is followed. To alleviate this problem, a broadcast bus NoC (BBus NoC) approach is proposed in this work. The proposed method decreases the worst-case latency of packets by increasing the golden epoch rate of CHIPPER.

  4. A flocking based method for brain tractography.

    Science.gov (United States)

    Aranda, Ramon; Rivera, Mariano; Ramirez-Manzanares, Alonso

    2014-04-01

    We propose a new method to estimate axonal fiber pathways from Multiple Intra-Voxel Diffusion Orientations. Our method uses the multiple local orientation information to lead stochastic walks of particles. These stochastic particles are modeled with mass and thus are subject to gravitational and inertial forces. As a result, we obtain smooth, filtered and compact trajectory bundles. This gravitational interaction can be seen as a flocking behavior among particles that promotes better and more robust axon fiber estimations, because the particles use collective information to move. However, the stochastic walks may generate paths with low support (outliers), generally associated with incorrect brain connections. In order to eliminate the outlier pathways, we propose a filtering procedure based on principal component analysis and spectral clustering. The performance of the proposal is evaluated on Multiple Intra-Voxel Diffusion Orientations from two realistic numeric diffusion phantoms and a physical diffusion phantom. Additionally, we qualitatively demonstrate the performance on in vivo human brain data.

  5. Filter-based reconstruction methods for tomography

    NARCIS (Netherlands)

    Pelt, D.M.

    2016-01-01

    In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally

  6. DNA-based methods of geochemical prospecting

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Matthew [Mill Valley, CA

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  7. Triptycene-based ladder monomers and polymers, methods of making each, and methods of use

    KAUST Repository

    Pinnau, Ingo

    2015-02-05

    Embodiments of the present disclosure provide for a triptycene-based A-B monomer, a method of making a triptycene-based A-B monomer, a triptycene-based ladder polymer, a method of making a triptycene-based ladder polymer, a method of using triptycene-based ladder polymers, a structure incorporating triptycene-based ladder polymers, a method of gas separation, and the like.

  8. Math-Based Simulation Tools and Methods

    National Research Council Canada - National Science Library

    Arepally, Sudhakar

    2007-01-01

    ...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...

  9. Method for sequencing DNA base pairs

    Science.gov (United States)

    Sessler, Andrew M.; Dawson, John

    1993-01-01

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.

  10. SDMS-based Disk Encryption Method

    OpenAIRE

    An, Dokjun; Ri, Myongchol; Choe, Changil; Han, Sunam; Kim, Yongmin

    2012-01-01

    We propose a disk encryption method, called the secure disk mixed system (SDMS), in this paper, for data protection of disk storages such as USB flash memory, USB hard disks and CD/DVD. It is aimed at solving the temporal and spatial limitation problems of existing disk encryption methods and at controlling security performance flexibly according to the security requirements of the system. SDMS stores data by encrypting it with a different encryption key per sector and updates the sector encryption keys each time data is ...

  11. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    One of the major issues investors face in capital markets is decision making about selecting appropriate stocks for investment and choosing an optimal portfolio. This process is done through assessment of risk and expected return. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as a risk measure. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected in the winter of 1392 from the top 50 companies in the Tehran Stock Exchange, covering the period from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method; the nonparametric method is also much faster than the linear programming method.
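
    A minimal Python sketch of the empirical (nonparametric) CVaR the abstract refers to: the average loss in the worst tail fraction of outcomes. The return series and tail level are illustrative assumptions.

      import numpy as np

      returns = np.array([0.021, -0.034, 0.015, 0.008, -0.052,
                          0.030, -0.011, 0.004, -0.027, 0.019])  # assumed monthly returns
      losses = -returns
      var = np.quantile(losses, 0.8)         # 80% Value-at-Risk
      cvar = losses[losses >= var].mean()    # expected loss in the worst 20% tail
      print(var, cvar)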

  12. Topology-Based Methods in Visualization 2015

    CERN Document Server

    Garth, Christoph; Weinkauf, Tino

    2017-01-01

    This book presents contributions on topics ranging from novel applications of topological analysis for particular problems, through studies of the effectiveness of modern topological methods, algorithmic improvements on existing methods, and parallel computation of topological structures, all the way to mathematical topologies not previously applied to data analysis. Topological methods are broadly recognized as valuable tools for analyzing the ever-increasing flood of data generated by simulation or acquisition. This is particularly the case in scientific visualization, where the data sets have long since surpassed the ability of the human mind to absorb every single byte of data. The biannual TopoInVis workshop has supported researchers in this area for a decade, and continues to serve as a vital forum for the presentation and discussion of novel results in applications in the area, creating a platform to disseminate knowledge about such implementations throughout and beyond the community. The present volum...

  13. A Tomographic method based on genetic algorithms

    International Nuclear Information System (INIS)

    Turcanu, C.; Alecu, L.; Craciunescu, T.; Niculae, C.

    1997-01-01

    Computerized tomography, being a non-destructive and non-invasive technique, is frequently used in medical applications to generate three-dimensional images of objects. Genetic algorithms are efficient and domain-independent for a large variety of problems. The proposed method produces good quality reconstructions even in the case of a very small number of projection angles. It requires no a priori knowledge about the solution and takes into account the statistical uncertainties. The main drawback of the method is the amount of computer memory and time needed. (author)

  14. HMM-Based Gene Annotation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  15. Arts-based Methods and Organizational Learning

    DEFF Research Database (Denmark)

    This thematic volume explores the relationship between the arts and learning in various educational contexts and across cultures, but with a focus on higher education and organizational learning. Arts-based interventions are at the heart of this volume, which addresses how they are conceived, des...

  16. Refractive index measurement based on confocal method

    Science.gov (United States)

    An, Zhe; Xu, XiPing; Yang, JinHua; Qiao, Yang; Liu, Yang

    2017-10-01

    The development of transparent materials is closely tied to optoelectronic technology, and they play an increasingly important role in various fields: they are widely used not only in optical lenses, optical elements, fiber gratings and optoelectronics, but also in building materials, pharmaceutical vessels, aircraft windshields and everyday glasses. To address the problem of refractive index measurement of optical transparent materials, we propose using the polychromatic confocal method to measure the refractive index of transparent materials. In this article, we describe the principle of the polychromatic confocal method for measuring the refractive index of glass and sketch the optical system and its optimization. We then establish the measurement model of the refractive index and set up the experimental system; in this way, the refractive index of the glass is calibrated in refractive index experiments. Due to errors in the experimental process, we use the experimental data to compensate the refractive index measurement formula. The experiment takes quartz glass as an example; the measurement accuracy of the refractive index of the glass is ±1.8×10^-5. This method is practical and accurate, and it is especially suitable for non-contact measurement settings. Its environmental requirements are not high: it can fully adapt to the ambient temperature of an ordinary glass production line, and there is no requirement on the color of the measured object, so both white and a variety of colored glasses can be measured.

  17. Knowledge-based methods for control systems

    International Nuclear Information System (INIS)

    Larsson, J.E.

    1992-01-01

    This thesis consists of three projects which combine artificial intelligence and control. The first part describes an expert system interface for system identification, using the interactive identification program Idpac. The interface works as an intelligent help system, using the command spy strategy. It contains a multitude of help system ideas. The concept of scripts is introduced as a data structure used to describe the procedural part of the knowledge in the interface. Production rules are used to represent diagnostic knowledge. A small knowledge database of scripts and rules has been developed and an example run is shown. The second part describes an expert system for frequency response analysis. This is one of the oldest and most widely used methods to determine the dynamics of a stable linear system. Though quite simple, it requires knowledge and experience of the user in order to produce reliable results. The expert system is designed to help the user in performing the analysis. It checks whether the system is linear, finds the frequency and amplitude ranges, verifies the results, and, if errors should occur, tries to give explanations and remedies for them. The third part describes three diagnostic methods for use with industrial processes. They are measurement validation, i.e., consistency checking of sensor and measurement values using any redundancy of instrumentation; alarm analysis, i.e., analysis of multiple alarm situations to find which alarms are directly connected to primary faults and which alarms are consequential effects of the primary ones; and fault diagnosis, i.e., a search for the causes of and remedies for faults. The three methods use multilevel flow models (MFM) to describe the target process. They have been implemented in the programming tool G2, and successfully tested on two small processes. (164 refs.) (au)

  18. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

    The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analysis of caches which are hard...

  19. Limitations of correlation-based redatuming methods

    Science.gov (United States)

    Barrera P, D. F.; Schleicher, J.; van der Neut, J.

    2017-12-01

    Redatuming aims to correct seismic data for the consequences of an acquisition far from the target. That includes the effects of an irregular acquisition surface and of complex geological structures in the overburden such as strong lateral heterogeneities or layers with low or very high velocity. Interferometric techniques can be used to relocate sources to positions where only receivers are available and have been used to move acquisition geometries to the ocean bottom or transform data between surface–seismic and vertical seismic profiles. Even if no receivers are available at the new datum, the acquisition system can be relocated to any datum in the subsurface to which the propagation of waves can be modeled with sufficient accuracy. By correlating the modeled wavefield with seismic surface data, one can carry the seismic acquisition geometry from the surface closer to geologic horizons of interest. Specifically, we show the derivation and approximation of the one-sided seismic interferometry equation for surface-data redatuming, conveniently using Green’s theorem for the Helmholtz equation with density variation. Our numerical examples demonstrate that correlation-based single-boundary redatuming works perfectly in a homogeneous overburden. If the overburden is inhomogeneous, primary reflections from deeper interfaces are still repositioned with satisfactory accuracy. However, in this case artifacts are generated as a consequence of incorrectly redatumed overburden multiples. These artifacts get even worse if the complete wavefield is used instead of the direct wavefield. Therefore, we conclude that correlation-based interferometric redatuming of surface–seismic data should always be applied using direct waves only, which can be approximated with sufficient quality if a smooth velocity model for the overburden is available.

  20. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene- based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  1. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    This work presents an efficient solution using a computer algebra system to perform linear temporal properties verification for synchronous digital systems. The method is essentially based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial set based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations using Groebner bases. The computational experience in this work shows that the algebraic approach is a quite competitive checking method and will be a useful supplement to existing verification methods based on simulation.

  2. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  3. Comparing Sound-based Sentence Method and Analysis Method in Literacy Education

    OpenAIRE

    H. İsmail ARSLANTAŞ; Mustafa CİNOĞLU

    2010-01-01

    In this study, the sound-based sentence method and the analysis method used in literacy education are compared based on teachers' views. The study was conducted with 30 classroom teachers selected from 10 primary schools in Kilis province center in the 2008-2009 education year. The survey method and informal interviews were used for data collection, and the descriptive analysis method was used for data analysis. The obtained data were grouped and interpreted. According to the research findings, teachers approve that sound-ba...

  4. A Fusion Link Prediction Method Based on Limit Theorem

    Directory of Open Access Journals (Sweden)

    Yiteng Wu

    2017-12-01

    The theoretical limit of link prediction is a fundamental problem in this field, and taking the network structure as the object of research is the mainstream approach. This paper proposes a new viewpoint, that link prediction methods can be divided into single or combination methods based on the way they derive the similarity matrix, and investigates whether a theoretical limit exists for combination methods. We propose and prove necessary and sufficient conditions for a combination method to reach the theoretical limit. The limit theorem reveals that the essence of a combination method is to estimate the probability density functions of existing links and nonexistent links. Based on the limit theorem, a new combination method, the theoretical limit fusion (TLF) method, is proposed. Simulations and experiments on real networks demonstrate that the TLF method can achieve higher prediction accuracy.
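
    A toy Python sketch of what a combination method means here: two single similarity matrices fused by a weighted sum. This is generic fusion for illustration, not the TLF estimator itself; the adjacency matrix and fusion weight are assumed.

      import numpy as np

      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 0],
                    [0, 1, 0, 0]], dtype=float)   # assumed undirected network

      cn = A @ A                                  # common-neighbour similarity
      ra = A @ np.diag(1.0 / A.sum(axis=1)) @ A   # resource-allocation similarity

      w = 0.6                                     # assumed fusion weight
      fused = w * cn / cn.max() + (1 - w) * ra / ra.max()
      np.fill_diagonal(fused, 0)                  # self-links are not predicted
      # fused[i, j] ranks the unobserved pair (i, j) for link prediction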

  5. Qualitative Comparison of Contraction-Based Curve Skeletonization Methods

    NARCIS (Netherlands)

    Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.

    2013-01-01

    In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint

  6. Comparison of gas dehydration methods based on energy consumption

    African Journals Online (AJOL)

    Comparison of gas dehydration methods based on energy consumption. ... This study compares three conventional methods of natural gas (Associated Natural Gas) dehydration to carry out the dehydration process and suitability of use on the basis of energy requirement. These methods are Triethylene Glycol (TEG) ...

  7. A power function method for estimating base flow.

    Science.gov (United States)

    Lott, Darline A; Stewart, Mark T

    2013-01-01

    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ, an analytical method calibrated against an integrated basin variable, specific conductance, relating base flow to total discharge, and consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. The advantages of the method are that it is uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It better replicates base flow determined by mass balance methods than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. Also, it can be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of the geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods.
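
    A minimal Python sketch of the separation formula above; the coefficients are illustrative placeholders standing in for values calibrated against specific conductance.

      import numpy as np

      a, b, c = 0.8, 0.6, 0.05                  # assumed calibrated coefficients

      def base_flow(Q):
          return a * Q**b + c * Q               # the paper's form: a*Q^b + c*Q

      Q = np.array([2.0, 5.0, 20.0, 80.0])      # total discharge observations
      print(base_flow(Q))                       # base flow per observation
      # quick flow for each record is then Q - base_flow(Q)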

  8. Fusion Segmentation Method Based on Fuzzy Theory for Color Images

    Science.gov (United States)

    Zhao, J.; Huang, G.; Zhang, J.

    2017-09-01

    The image segmentation method based on the two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image, and we then segment the color image according to fuzzy reasoning. The experimental results show that our proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
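
    A small Python sketch of the soft-decision idea: a per-channel membership function fused across channels by a minimum t-norm. The sigmoid membership form, softness and thresholds are assumptions for illustration, not the paper's exact functions.

      import numpy as np

      def membership(channel, threshold, softness=10.0):
          # soft "target" membership instead of a hard threshold cut
          return 1.0 / (1.0 + np.exp(-(channel - threshold) / softness))

      def segment(rgb, thresholds):             # rgb: (H, W, 3) float array
          mu = np.stack([membership(rgb[..., c], thresholds[c]) for c in range(3)])
          return mu.min(axis=0) > 0.5           # fuzzy AND across channels, defuzzify

      # mask = segment(image.astype(float), thresholds=(120.0, 110.0, 90.0))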

  9. The Base 32 Method: An Improved Method for Coding Sibling Constellations.

    Science.gov (United States)

    Perfetti, Lawrence J. Carpenter

    1990-01-01

    Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)

  10. Conceptual bases of the brand valuation by cost method

    Directory of Open Access Journals (Sweden)

    G.Y. Studinska

    2015-03-01

    The necessity of valuing intangible assets in accordance with international trends is substantiated. The brand is seen as the most important component of intangible assets and as an effective management tool of the company. The benefits and uses of brand evaluation results are investigated. The system of monocriterion cost-based brand evaluation methods is analyzed, in particular methods classified by the time factor (current and forecast methods) and by the comparison base (relative and absolute). The cost method of brand valuation through market transactions, in accordance with J. Commons' classification, is considered in detail. The difference between the method of summation of all costs and the method of brand valuation through market transactions is explained. The advantages and disadvantages of the considered cost methods of brand valuation are investigated. The cost method as a relative-predictive brand valuation, "the method of determining the proportion of the brand from the discounted total costs", is grounded.

  11. New robust face recognition methods based on linear regression.

    Directory of Open Access Journals (Sweden)

    Jian-Xun Mi

    Nearest subspace (NS) classification based on the linear regression technique is a very straightforward and efficient method for face recognition. A recently developed NS method, namely linear regression-based classification (LRC), uses downsampled face images as features to perform face recognition. The basic assumption behind this kind of method is that samples from a certain class lie on their own class-specific subspace. Since there are only a few training samples for each individual class, the small sample size (SSS) problem arises and gives rise to misclassification in previous NS methods. In this paper, we propose two novel LRC methods using the idea that every class-specific subspace has its unique basis vectors. Thus, we consider that each class-specific subspace is spanned by two kinds of basis vectors: the common basis vectors shared by many classes and the class-specific basis vectors owned by one class only. Based on this concept, two classification methods, namely robust LRC 1 and 2 (RLRC 1 and 2), are given to achieve more robust face recognition. Unlike some previous methods which need to extract class-specific basis vectors, the proposed methods are developed merely based on the existence of the class-specific basis vectors, without actually calculating them. Experiments on three well known face databases demonstrate very good performance of the new methods compared with other state-of-the-art methods.
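
    A compact Python sketch of the basic LRC decision rule (the common-versus-class-specific basis refinement of RLRC is not reproduced): a probe is assigned to the class whose training subspace reconstructs it with the smallest residual. The data shapes are assumptions.

      import numpy as np

      def lrc_predict(y, class_matrices):
          # class_matrices[c]: (pixels, n_c) downsampled training faces of class c
          residuals = []
          for X in class_matrices:
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
              residuals.append(np.linalg.norm(y - X @ beta)) # reconstruction error
          return int(np.argmin(residuals))

      # label = lrc_predict(probe_vector, [X_0, X_1, ...])   # assumed data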

  12. Islanding detection scheme based on adaptive identifier signal estimation method.

    Science.gov (United States)

    Bakhshi, M; Noroozian, R; Gharehpetian, G B

    2017-11-01

    This paper proposes a novel, passive anti-islanding method for both inverter and synchronous machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are near zero, the majority of passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on an exponentially damped signal estimation method. The proposed method uses the adaptive identifier method to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, which allows it to detect the islanding condition with near-zero active power imbalance. The main advantage of the adaptive identifier method over other signal estimation methods is its small sampling window. In this paper, the adaptive-identifier-based islanding detection method introduces a new detection index, entitled the decision signal, by estimating the oscillation frequency of the PCC frequency, and it can detect islanding conditions properly. In islanding conditions, the oscillation frequency of the PCC frequency reaches zero, so threshold setting for the decision signal is not a tedious job. Non-islanding transient events which can cause a significant deviation in the PCC frequency are considered in the simulations; these events include different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified by near-to-zero active power mismatches.

  13. Data Mining and Knowledge Discovery via Logic-Based Methods

    CERN Document Server

    Triantaphyllou, Evangelos

    2010-01-01

    There are many approaches to data mining and knowledge discovery (DM&KD), including neural networks, closest neighbor methods, and various statistical methods. This monograph, however, focuses on the development and use of a novel approach, based on mathematical logic, that the author and his research associates have worked on over the last 20 years. The methods presented in the book deal with key DM&KD issues in an intuitive manner and in a natural sequence. Compared to other DM&KD methods, those based on mathematical logic offer a direct and often intuitive approach for extracting easily int

  14. Enhancements to Graph based methods for Multi Document Summarization

    Directory of Open Access Journals (Sweden)

    Rengaramanujam Srinivasan

    2009-01-01

    This paper focuses its attention on extractive summarization using popular graph-based approaches. Graph-based methods can be broadly classified into two categories: non-PageRank type and PageRank type methods. Of the methods already proposed, the Centrality Degree method belongs to the former category, while the LexRank and Continuous LexRank methods belong to the latter. The paper goes on to suggest two enhancements to both PageRank type and non-PageRank type methods. The first modification is that of recursively discounting the selected sentences: if a sentence is selected, it is removed from further consideration and the next sentence is selected based upon the contributions of the remaining sentences only. Next, the paper suggests a method of incorporating position weight into these schemes. In all, 14 methods, six of non-PageRank type and eight of PageRank type, have been investigated. To clearly distinguish between the various schemes, we call the methods incorporating the discounting and position weight enhancements over Lexical Rank schemes Sentence Rank (SR) methods. Intrinsic evaluation of all 14 graph-based methods was done using the conventional Precision metric and metrics earlier proposed by us, Effectiveness1 (E1) and Effectiveness2 (E2). The experimental study brings out that the proposed SR methods are superior to all the other methods.
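
    A short Python sketch of a degree-centrality summarizer with the two enhancements described above, recursive discounting and a position weight; the similarity matrix and the weight parameter are assumed inputs, not the paper's exact formulation.

      import numpy as np

      def summarize(sim, k, pos_weight=0.1):
          # sim[i, j]: similarity between sentences i and j (assumed given)
          remaining = list(range(sim.shape[0]))
          picked = []
          for _ in range(k):
              # degree score over *remaining* sentences only (discounting),
              # plus a boost for sentences early in the document
              scores = [sim[i, remaining].sum() + pos_weight / (i + 1)
                        for i in remaining]
              best = remaining[int(np.argmax(scores))]
              picked.append(best)
              remaining.remove(best)
          return sorted(picked)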

  15. Model-Based Methods in the Biopharmaceutical Process Lifecycle.

    Science.gov (United States)

    Kroll, Paul; Hofer, Alexandra; Ulonska, Sophia; Kager, Julian; Herwig, Christoph

    2017-12-01

    Model-based methods are increasingly used in all areas of biopharmaceutical process technology. They can be applied in the field of experimental design, process characterization, process design, monitoring and control. Benefits of these methods are lower experimental effort, process transparency, clear rationality behind decisions and increased process robustness. The possibility of applying methods adopted from different scientific domains accelerates this trend further. In addition, model-based methods can help to implement regulatory requirements as suggested by recent Quality by Design and validation initiatives. The aim of this review is to give an overview of the state of the art of model-based methods, their applications, further challenges and possible solutions in the biopharmaceutical process life cycle. Today, despite these advantages, the potential of model-based methods is still not fully exhausted in bioprocess technology. This is due to a lack of (i) acceptance of the users, (ii) user-friendly tools provided by existing methods, (iii) implementation in existing process control systems and (iv) clear workflows to set up specific process models. We propose that model-based methods be applied throughout the lifecycle of a biopharmaceutical process, starting with the set-up of a process model, which is used for monitoring and control of process parameters, and ending with continuous and iterative process improvement via data mining techniques.

  16. Multivariate Methods Based Soft Measurement for Wine Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Shen Yin

    2014-01-01

    ... a decision. However, since the physicochemical indexes of wine can to some extent reflect its quality, a soft measure based on multivariate statistical methods can help the oenologist in wine evaluation.

  17. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on decision trees (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on a decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on decision trees to BCI Competition II dataset Ia, and the experiment showed encouraging results.
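
    The pipeline described (PCA extraction, decision-tree feature ranking, SVM classification) can be sketched with scikit-learn. The synthetic data, component counts and kernel below are stand-in assumptions, not the competition setup:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# synthetic stand-in for epoched EEG features: 200 trials x 64 features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # labels driven by a few features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) PCA feature extraction
pca = PCA(n_components=20).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# 2) a decision tree ranks the PCA components; keep the most important ones
tree = DecisionTreeClassifier(random_state=0).fit(Z_tr, y_tr)
keep = np.argsort(tree.feature_importances_)[::-1][:5]

# 3) SVM on the selected components
svm = SVC(kernel="rbf").fit(Z_tr[:, keep], y_tr)
print("test accuracy:", svm.score(Z_te[:, keep], y_te))
```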

  18. Agile Service Development: A Rule-Based Method Engineering Approach

    NARCIS (Netherlands)

    dr. Martijn Zoet; Stijn Hoppenbrouwers; Inge van de Weerd; Johan Versendaal

    2011-01-01

    Agile software development has evolved into an increasingly mature software development approach and has been applied successfully in many software vendors’ development departments. In this position paper, we address the broader agile service development. Based on method engineering principles we ...

  19. Base oils and methods for making the same

    Science.gov (United States)

    Ohler, Nicholas; Fisher, Karl; Tirmizi, Shakeel

    2018-01-09

    Provided herein are isoparaffins derived from hydrocarbon terpenes such as myrcene, ocimene and farnesene, and methods for making the same. In certain variations, the isoparaffins have utility as lubricant base stocks.

  20. A method for estimating fetal weight based on body composition.

    Science.gov (United States)

    Bo, Chen; Jie, Yu; Xiu-E, Gao; Gui-Chuan, Fan; Wen-Long, Zhang

    2018-04-02

    Fetal weight is an important factor in determining the delivery mode of pregnant women, and regular health monitoring shows that it changes significantly over pregnancy. Conventional methods of fetal weight estimation, namely those based on B-ultrasound, are complicated and costly. In this paper, we propose a new method based on body composition. An abdominal four-segment impedance model of the pregnant woman is first established, together with its calculation method. A body-composition-based method is then given to estimate the fetal weight, with the solution given explicitly. Analyses of clinical data show that the error between the estimated and actual values is small; the difference between B-ultrasound and the present method is less than 15%.

  1. Peer Tutoring with QUICK Method vs. Task Based Method on Reading Comprehension Achievement

    Directory of Open Access Journals (Sweden)

    Sri Indrawati

    2017-07-01

    Full Text Available This study is a quasi-experimental study analyzing the reading comprehension achievement of eleventh graders of a senior high school in Surabaya. It compares the effects of peer tutoring with the QUICK method and the task-based method on the students' reading achievement. Beyond raising the students' reading achievement, the study's main purpose is to add variety to teachers' reading-teaching techniques. The study uses independent-samples and paired-samples t-tests to test for significant differences in reading comprehension between peer tutoring with the QUICK method and the task-based method. Keywords: Peer tutoring with QUICK method, Task-based method, T-test, Reading achievement
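
    Both tests named in the abstract are one-liners in SciPy; here is a sketch on hypothetical score data (group sizes, means and the data itself are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical reading scores for the two treatments
quick = rng.normal(75, 8, size=30)   # peer tutoring with QUICK method
task = rng.normal(70, 8, size=30)    # task-based method

# independent-samples t-test: difference between the two methods
t_ind, p_ind = stats.ttest_ind(quick, task, equal_var=False)

# paired-samples t-test: pre- vs post-test gain within one group
pre = rng.normal(65, 8, size=30)
post = pre + rng.normal(5, 4, size=30)
t_rel, p_rel = stats.ttest_rel(post, pre)

print(f"independent: t={t_ind:.2f} p={p_ind:.3f}; paired: t={t_rel:.2f} p={p_rel:.3f}")
```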

  2. [A new calibration transfer method based on target factor analysis].

    Science.gov (United States)

    Wang, Yan-bin; Yuan, Hong-fu; Lu, Wan-zhen

    2005-03-01

    A new calibration transfer method based on target factor analysis is proposed, and its performance is compared with the piecewise direct standardization (PDS) method. The method was applied to two data sets, one a simulated data set and the other an NIR data set composed of benzene, toluene, xylene and isooctane. The results obtained with the new method are at least as good as those obtained by PDS, with the biggest improvement occurring when the spectra have some non-linear responses.

  3. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

    We present a residual based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized by only residual based artificial viscosity, without any least-squares, SUPG, or streamline diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.

  4. Structural Topology Optimization Based on the Smoothed Finite Element Method

    Directory of Open Access Journals (Sweden)

    Vahid Shobeiri

    Full Text Available Abstract In this paper, the smoothed finite element method, incorporated with the level set method, is employed to carry out the topology optimization of continuum structures. The structural compliance is minimized subject to a constraint on the weight of material used. The cell-based smoothed finite element method is employed to improve the accuracy and stability of the standard finite element method. Several numerical examples are presented to prove the validity and utility of the proposed method. The obtained results are compared with those of several standard finite element-based examples in order to assess the applicability and effectiveness of the proposed method. The common numerical instabilities of structural topology optimization problems, such as checkerboard patterns and mesh dependency, are studied in the examples.

  5. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance of the proposed method and verify its effectiveness.
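
    A minimal sketch of the Channelization-ISM idea: split the wideband output into frequency sub-channels, estimate the DOA in each, and take the arithmetic mean. For brevity it uses raw FFT bins as sub-channels, a conventional beamformer as the narrowband estimator and an acoustic toy signal; the paper's radar setup, filter bank and subspace estimators are more elaborate.

```python
import numpy as np

def ism_doa(x, fs, d, n_sub=64, c=343.0):
    """Arithmetic-mean ISM sketch for one source on a uniform linear array.
    x: (n_sensors, n_samples) array output; d: element spacing in metres."""
    n_sensors, n_samples = x.shape
    X = np.fft.rfft(x, axis=1)                    # each FFT bin = one crude sub-channel
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    angles = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
    m = np.arange(n_sensors)
    power = np.sum(np.abs(X) ** 2, axis=0)
    # keep mid-band bins with usable angular resolution, strongest first
    valid = np.where((freqs > fs / 16) & (freqs < fs / 4))[0]
    bins = valid[np.argsort(power[valid])[::-1][:n_sub]]
    estimates = []
    for k in bins:
        # steering matrix at this sub-channel's centre frequency
        A = np.exp(-2j * np.pi * freqs[k] * d * np.outer(m, np.sin(angles)) / c)
        spectrum = np.abs(A.conj().T @ X[:, k])   # conventional beamformer scan
        estimates.append(np.rad2deg(angles[np.argmax(spectrum)]))
    return float(np.mean(estimates))              # Channelization-ISM: average the DOAs

# toy check: broadband noise from 20 degrees on an 8-element array, fs = 8 kHz
rng = np.random.default_rng(2)
fs, d, n = 8000.0, 0.04, 4096
s = rng.normal(size=2 * n)
t = np.arange(n) / fs
taus = d * np.arange(8) * np.sin(np.deg2rad(20.0)) / 343.0
x = np.stack([np.interp(t - tau, np.arange(2 * n) / fs, s) for tau in taus])
print(ism_doa(x + 0.01 * rng.normal(size=x.shape), fs, d))   # ~ 20 degrees
```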

  6. Tomographs based on non-conventional radiation sources and methods

    International Nuclear Information System (INIS)

    Barbuzza, R.; Fresno, M. del; Venere, Marcelo J.; Clausse, Alejandro; Moreno, C.

    2000-01-01

    Computer techniques for tomographic reconstruction of objects X-rayed with a compact plasma focus (PF) are presented. The implemented reconstruction algorithms are based on stochastic searching for solutions of the Radon equation, using genetic algorithms and Monte Carlo methods. Numerical experiments using actual projections were performed, confirming the feasibility of applying both methods to the tomographic reconstruction problem. (author)

  7. The afforestation problem: a heuristic method based on simulated annealing

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1992-01-01

    This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented.

  8. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    The harmonics detection method based on neural network applied to harmonics compensation. R Dehini, A Bassou, B Ferdi. Abstract. Several different methods have been used to sense load currents and extract their harmonic components in order to produce a reference current in shunt active power filters (SAPF), and to ...

  9. Implementation of an office-based semen preparation method (SEP ...

    African Journals Online (AJOL)

    Implementation of an office-based semen preparation method (SEP-D Kit) for intra-uterine insemination (IUI): A controlled randomised study to compare the IUI pregnancy outcome between a routine (swim-up) and the SEP-D Kit method.

  10. Qualitative Assessment of Inquiry-Based Teaching Methods

    Science.gov (United States)

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student-focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  11. DNA based methods used for characterization and detection of food ...

    African Journals Online (AJOL)

    Detection of food borne pathogens is of utmost importance to the food industries and related agencies. For the last few decades, conventional methods based on phenotypic characters were used to detect food borne pathogens. With the advent of complementary base pairing and the amplification of DNA, the diagnosis of food ...

  12. Differential evolution based method for total transfer capability ...

    African Journals Online (AJOL)


    This algorithm is based on a full AC optimal power flow solution to account for the effects of active and reactive power ... and airlines. This also stimulates the restructuring of the electric power sector. ... The CPF method traces the power flow solution curve, starting at a base load, leading to the steady state voltage stability limit or ...

  13. [Synchrotron-based characterization methods applied to ancient materials (I)].

    Science.gov (United States)

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article aims at presenting the first results of a transdisciplinary research programme in heritage sciences. Based on the growing use and the potential of synchrotron-based micro- and nano-characterization methods to study ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution will identify and test conceptual and methodological elements of convergence between the physicochemical and historical sciences.

  14. Human Detection System by Fusing Depth Map-Based Method and Convolutional Neural Network-Based Method

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2017-01-01

    Full Text Available In this paper, the depth images and the colour images provided by Kinect sensors are used to enhance the accuracy of human detection. The depth-based human detection method is fast but less accurate. On the other hand, the faster region convolutional neural network-based human detection method is accurate but requires a rather complex hardware configuration. To simultaneously leverage the advantages and relieve the drawbacks of each method, a system with one master and one client is proposed. The final goal is to make a novel Robot Operating System (ROS) based Perception Sensor Network (PSN) system, which is more accurate and ready for real-time application. The experimental results demonstrate that the proposed method outperforms other conventional methods in challenging scenarios.

  15. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    OpenAIRE

    Aminifar, Sadegh; bin Marzuki, Arjuna

    2013-01-01

    The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to handle the real behaviors of the system. The rules used in the second stage are called the vertical rule base. Horizontal ...

  16. An Entropy-Based Network Anomaly Detection Method

    Directory of Open Access Journals (Sweden)

    Przemysław Bereziński

    2015-04-01

    Full Text Available Data mining is an interdisciplinary subfield of computer science involving methods at the intersection of artificial intelligence, machine learning and statistics. One of the data mining tasks is anomaly detection, which is the analysis of large quantities of data to identify items, events or observations which do not conform to an expected pattern. Anomaly detection is applicable in a variety of domains, e.g., fraud detection, fault detection, and system health monitoring, but this article focuses on the application of anomaly detection in the field of network intrusion detection. The main goal of the article is to prove that an entropy-based approach is suitable to detect modern botnet-like malware based on anomalous patterns in the network. This aim is achieved by realization of the following points: (i) preparation of a concept for an original entropy-based network anomaly detection method, (ii) implementation of the method, (iii) preparation of an original dataset, (iv) evaluation of the method.
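
    The core of an entropy-based detector is easy to show: compute the Shannon entropy of a traffic-feature distribution per time window and flag windows that deviate from the baseline. The z-score rule and toy port data below are illustrative assumptions, not the paper's detector:

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of `items`."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_windows(windows, z_thresh=2.5):
    """Flag time windows whose destination-port entropy deviates strongly
    from the overall baseline (a simple z-score rule for illustration)."""
    ents = [shannon_entropy(w) for w in windows]
    mu = sum(ents) / len(ents)
    sd = (sum((e - mu) ** 2 for e in ents) / len(ents)) ** 0.5 or 1.0
    return [i for i, e in enumerate(ents) if abs(e - mu) / sd > z_thresh]

# toy traffic: normal windows spread over many ports; window 3 is a botnet-like
# burst concentrated on one port, so its entropy collapses and it gets flagged
normal = [[80, 443, 53, 22, 80, 443, 8080, 53]] * 5
attack = [[6667] * 8]
print(flag_windows(normal[:3] + attack + normal[3:]))   # -> [3]
```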

  17. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  18. Key Updating Methods for Combinatorial Design Based Key Management Schemes

    Directory of Open Access Journals (Sweden)

    Chonghuan Xu

    2014-01-01

    Full Text Available Wireless sensor networks (WSN) have become one of the most promising network technologies for many useful applications. However, for the lack of resources, it is difficult but important to ensure the security of WSNs. Key management is a cornerstone on which to build secure WSNs, for it has a fundamental role in confidentiality, authentication, and so on. Combinatorial design theory has been used to generate well-designed key rings for each sensor node in WSNs. A large number of combinatorial design based key management schemes have been proposed, but none of them have taken key updating into consideration. In this paper, we point out the essence of key updating for the unital design based key management scheme and propose two key updating methods; then, we analyze the performance of the two methods from three aspects; finally, we generalize the two methods to other combinatorial design based key management schemes and enhance the second method.

  19. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
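
    A minimal sketch of the sliding-window idea: each position is relabelled with the class that maximises summed proximity to the labels observed in its window. The proximity matrix, window size and toy label sequence below are illustrative; the paper's nonlinear operations may differ:

```python
import numpy as np

def correct_labels(labels, proximity, half_window=2):
    """Sliding-window relabelling for nominal classes.

    labels: 1-D int array of (possibly misclassified) class indices.
    proximity: (n_classes, n_classes) matrix; proximity[a, b] = closeness of a to b.
    """
    labels = np.asarray(labels)
    out = labels.copy()
    n = len(labels)
    for t in range(n):
        lo, hi = max(0, t - half_window), min(n, t + half_window + 1)
        window = labels[lo:hi]
        # score every candidate class against the window contents
        scores = proximity[:, window].sum(axis=1)
        out[t] = int(np.argmax(scores))
    return out

# toy example: 3 classes; classes 0 and 1 are close, class 2 is far from both,
# so the isolated 2s are treated as misclassifications and corrected
P = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
noisy = [0, 0, 2, 0, 0, 1, 1, 2, 1, 1]
print(correct_labels(noisy, P))   # -> [0 0 0 0 0 1 1 1 1 1]
```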

  20. An overview of modal-based damage identification methods

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, C.R.; Doebling, S.W. [Los Alamos National Lab., NM (United States). Engineering Analysis Group

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., that are not dependent on a particular assumed model form for the system, such as beam-bending behavior) and methods that are not based on updating finite element models. Next, the methods are described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  1. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.

  2. [Model transfer method based on support vector machine].

    Science.gov (United States)

    Xiong, Yu-hong; Wen, Zhi-yu; Liang, Yu-qian; Chen, Qin; Zhang, Bo; Liu, Yu; Xiang, Xian-yi

    2007-01-01

    Model transfer is a basic method to establish universal and comparable performance of spectrometer data by seeking a mathematical transformation relation among different spectrometers. Because nonlinear effects and small calibration sample sets are common in practice, it is important to solve the model transfer problem under conditions of evident nonlinearity and small sample sets. This paper summarizes support vector machine theory, puts forward a model transfer method based on support vector machines and piecewise direct standardization, and uses computer simulation to give an example that explains the method and, in the end, compares it with an artificial neural network.

  3. An Input Shaping Method Based on System Output

    Directory of Open Access Journals (Sweden)

    Zhiqiang ZHU

    2014-06-01

    Full Text Available In this paper, an input shaping method is proposed. This method only requires the system output and does not need system model information. In applying this method, the basic form of the input shaping filter is first decided; then, according to the form of the filter, the system output is decomposed into several weighted signals. Based on this decomposition, the least squares method is applied to minimize the difference between the actual system output and the reference system output. In this way, the vibration in the system output is eliminated and the desired bandwidth of the whole system can be achieved.
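
    The decomposition-plus-least-squares step can be sketched as follows, assuming the filter form is a small set of impulses at fixed delays. The second-order test response and the choice of delays are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fit_shaper(y, y_ref, delays):
    """Fit impulse amplitudes of an input shaper from measured output only.

    y:      measured system output (1-D array) to an unshaped command.
    y_ref:  desired (vibration-free) reference output, same length.
    delays: impulse delays in samples; the shaped output is modelled as a
            weighted sum of delayed copies of y, fitted by least squares.
    """
    n = len(y)
    # decomposition: column k is y delayed by delays[k] samples
    D = np.zeros((n, len(delays)))
    for k, dly in enumerate(delays):
        D[dly:, k] = y[:n - dly]
    amps, *_ = np.linalg.lstsq(D, y_ref, rcond=None)
    return amps / amps.sum()          # normalise so the shaper has unit gain

# toy example: lightly damped 2nd-order step response vs a ringing-free reference
t = np.linspace(0, 10, 1000)
wn, zeta = 2.0, 0.05
wd = wn * np.sqrt(1 - zeta ** 2)
y = 1 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t))
y_ref = np.ones_like(t)               # ideal: reach the set point with no vibration
half_period = int(round((np.pi / wd) / (t[1] - t[0])))
print(fit_shaper(y, y_ref, delays=[0, half_period]))   # close to ZV-shaper amplitudes
```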

  4. Energy-Based Acoustic Source Localization Methods: A Survey

    Directory of Open Access Journals (Sweden)

    Wei Meng

    2017-02-01

    Full Text Available Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, of their merits and demerits, and to point out possible future research directions.

  5. A hybrid method for pancreas extraction from CT image based on level set methods.

    Science.gov (United States)

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into tissues neighbouring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of the level set method to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and making fewer false segmentations in pancreas extraction.

  6. Constructing financial network based on PMFG and threshold method

    Science.gov (United States)

    Nie, Chun-Xiao; Song, Fu-Tie

    2018-04-01

    Based on the planar maximally filtered graph (PMFG) and the threshold method, we introduced a correlation-based network named the PMFG-based threshold network (PTN). We studied the community structure of the PTN and applied the ISOMAP algorithm to represent the PTN in low-dimensional Euclidean space. The results show that the communities correspond well to the clusters in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on real market data, we found that market volatility can lead to dramatic changes in the community structure, and that the structure is more stable during the financial crisis.

  7. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    OpenAIRE

    Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko

    2011-01-01

    Abstract Background: Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. Methods: We calc...

  8. A novel method of S-box design based on chaotic map and composition method

    International Nuclear Information System (INIS)

    Lambić, Dragan

    2014-01-01

    Highlights: • A novel chaotic S-box generation method is presented. • The presented S-box has better cryptographic properties than other examples of chaotic S-boxes. • The advantages of the proposed method are its low complexity and large key space. -- Abstract: An efficient algorithm for obtaining random bijective S-boxes based on chaotic maps and the composition method is presented. The proposed method is based on compositions of S-boxes from a fixed starting set. The sequence of indices of the starting S-boxes used is obtained by using chaotic maps. The results of the performance test show that the S-box presented in this paper has good cryptographic properties. The advantages of the proposed method are its low complexity and the possibility to achieve a large key space.
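
    The composition idea is easy to demonstrate: since the composition of bijections is bijective, chaotically chosen compositions from a fixed starting set always yield a valid S-box. The 4-bit starting boxes, logistic-map parameters and key below are illustrative stand-ins for the paper's construction:

```python
def logistic_indices(x0, r, count, n_choices):
    """Chaotic index sequence from the logistic map x -> r*x*(1-x)."""
    x, idx = x0, []
    for _ in range(count):
        x = r * x * (1 - x)
        idx.append(int(x * n_choices) % n_choices)
    return idx

def compose(p, q):
    """Composition of two permutations given as lookup tables: (p o q)(i)."""
    return [p[q[i]] for i in range(len(q))]

# fixed starting set: a few small bijective 4-bit S-boxes (illustrative values)
START = [
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
]

def make_sbox(x0=0.71, r=3.99, steps=16):
    """Build an S-box by chaotically chosen compositions from the starting set.
    The key is (x0, r); composing bijections keeps the result bijective."""
    sbox = list(range(16))
    for i in logistic_indices(x0, r, steps, len(START)):
        sbox = compose(START[i], sbox)
    return sbox

s = make_sbox()
assert sorted(s) == list(range(16))   # still a bijection
print(s)
```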

  9. Three Methods for Occupation Coding Based on Statistical Learning

    Directory of Open Access Journals (Sweden)

    Gweon Hyukjun

    2017-03-01

    Full Text Available Occupation coding, an important task in official statistics, refers to coding a respondent's text answer into one of many hundreds of occupation codes. To date, occupation coding is still at least partially conducted manually, at great expense. We propose three methods for automatic coding: combining separate models for the detailed occupation codes and for aggregate occupation codes, a hybrid method that combines a duplicate-based approach with a statistical learning algorithm, and a modified nearest neighbor approach. Using data from the German General Social Survey (ALLBUS), we show that the proposed methods improve on both the coding accuracy of the underlying statistical learning algorithm and the coding accuracy of duplicates where duplicates exist. Further, we find that defining duplicates based on n-gram variables (a concept from text mining) is preferable to definitions based on exact string matches.
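
    A sketch of the hybrid step: an exact-match duplicate table answers first, and a character n-gram classifier handles everything else. The toy answers, codes and the choice of logistic regression are assumptions for illustration; the paper's learners and the ALLBUS data differ:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical training data: free-text job answers -> occupation codes
texts = ["software developer", "nurse", "truck driver", "school teacher",
         "web developer", "registered nurse", "lorry driver", "math teacher"]
codes = ["2512", "3221", "8332", "2330", "2512", "3221", "8332", "2330"]

# duplicate table: exact matches of previously coded answers win outright
duplicates = dict(zip(texts, codes))

# statistical fallback: character n-grams make the match robust to variants
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
).fit(texts, codes)

def code_answer(answer):
    answer = answer.strip().lower()
    if answer in duplicates:            # hybrid step 1: duplicate-based coding
        return duplicates[answer]
    return clf.predict([answer])[0]     # hybrid step 2: statistical learner

print(code_answer("nurse"), code_answer("truck drivers"))
```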

  10. Therapy Decision Support Based on Recommender System Methods

    Directory of Open Access Journals (Sweden)

    Felix Gräßer

    2017-01-01

    Full Text Available We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely, Collaborative Recommender and Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and better recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system.

  11. Therapy Decision Support Based on Recommender System Methods.

    Science.gov (United States)

    Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen; Zaunseder, Sebastian

    2017-01-01

    We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely, Collaborative Recommender and Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and better recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system.
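
    A minimal sketch of the collaborative idea, assuming numeric patient-feature vectors and per-therapy outcome scores (all toy data): predict a new patient's outcomes as a similarity-weighted average over the most similar previously treated patients, then recommend the best-scoring therapy.

```python
import numpy as np

def predict_outcomes(target, neighbours, outcomes, k=3):
    """Collaborative-style outcome prediction via cosine similarity.

    target:     (f,) feature vector of the current patient.
    neighbours: (n, f) feature matrix of previously treated patients.
    outcomes:   (n, t) observed outcome score per patient and therapy option.
    """
    a = target / np.linalg.norm(target)
    B = neighbours / np.linalg.norm(neighbours, axis=1, keepdims=True)
    sims = B @ a                              # cosine similarity to each patient
    top = np.argsort(sims)[::-1][:k]          # k most similar patients
    w = sims[top] / sims[top].sum()
    return w @ outcomes[top]                  # predicted score per therapy

rng = np.random.default_rng(3)
patients = rng.random((20, 5))                # 20 past patients, 5 features each
scores = rng.random((20, 3))                  # 3 hypothetical therapy options
pred = predict_outcomes(rng.random(5), patients, scores)
print("recommend therapy", int(np.argmax(pred)), "with scores", np.round(pred, 2))
```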

  12. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  13. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted....

  14. Logic-based aggregation methods for ranking student applicants

    Directory of Open Access Journals (Sweden)

    Milošević Pavle

    2017-01-01

    Full Text Available In this paper, we present logic-based aggregation models used for ranking student applicants, and we compare them with a number of existing aggregation methods, each more complex than the previous one. The proposed models aim to include dependencies in the data using Logical aggregation (LA). LA is an aggregation method based on interpolative Boolean algebra (IBA), a consistent multi-valued realization of Boolean algebra. This technique is used for the Boolean consistent aggregation of attributes that are logically dependent. The comparison is performed on the case of student applicants for master programs at the University of Belgrade. We have shown that LA has some advantages over the other presented aggregation methods. The software realization of all applied aggregation methods is also provided. This paper may be of interest not only for student ranking, but also for similar problems of ranking people, e.g. employees, team members, etc.

  15. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
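
    The AHP core is compact: derive criterion weights from the principal eigenvector of a pairwise-comparison matrix and check consistency. The criteria and judgments below are invented for illustration, and the random-index table is an excerpt of Saaty's values:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the
    principal eigenvector, plus the consistency ratio (Saaty's method)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)                      # principal eigenvalue
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    ci = (vals[i].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # random index (excerpt)
    return w, ci / ri

# hypothetical criteria: price vs quality vs delivery reliability
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
# suppliers are then scored per criterion and ranked by the weighted sum
```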

  16. Fuzzy-Based XML Knowledge Retrieval Methods in Edaphology

    OpenAIRE

    K. Naresh kumar; Ch. Satyanand Reddy; N.V.E.S. Murthy

    2016-01-01

    In this paper, we propose a proficient method for knowledge management in Edaphology to assist edaphologists and those involved in agriculture in a big way. The proposed method consists of two sections: the first builds the knowledge base using XML, and the latter deals with information retrieval by fuzzy searching. Initially, the relational database is converted to an XML database. The paper discusses two algorithms, one is ...

  17. Managerial Methods Based on Analysis, Recommended to a Boarding House

    Directory of Open Access Journals (Sweden)

    Solomia Andreş

    2015-06-01

    Full Text Available The paper presents a few theoretical and practical contributions regarding the implementation of analysis-based methods, namely a SWOT analysis and an economic analysis, from the perspective and demands of the management of a firm that operates profitably through the activity of a boarding house. The two types of managerial methods recommended to the firm offer the real and complex information necessary for knowing the firm's status and for elaborating predictions to maintain business viability.

  18. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including the 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  19. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    Full Text Available The purpose of this study is to provide an IDEF method-based integrated framework for a business process simulation model to reduce the model development time by increasing communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases of a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve the simulation project process by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could easily be applied to other analytical model generation by separating the logic from the data.

  20. Control method for biped locomotion robots based on ZMP information

    Energy Technology Data Exchange (ETDEWEB)

    Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1994-01-01

    The Human Acts Simulation Program (HASP) started as a ten year program of the Computing and Information Systems Center (CISC) at the Japan Atomic Energy Research Institute (JAERI) in 1987. A mechanical design study of biped locomotion robots for patrol and inspection in nuclear facilities is being performed as an item of the research scope. One of the goals of our research is to design a biped locomotion robot for practical use in nuclear facilities. So far, we have been studying several dynamic walking patterns. In conventional control methods for biped locomotion robots, program control is used based on preset walking patterns, so it does not have robustness to a dynamic change of walking pattern. Therefore, a real-time control method based on dynamic information of the robot states is necessary for high walking performance. In this study a new control method based on Zero Moment Point (ZMP) information is proposed as one such real-time control method. The proposed method is discussed and validated by numerical simulation. (author).

  1. Method of infrared image enhancement based on histogram

    Science.gov (United States)

    Wang, Liang; Yan, Jie

    2011-05-01

    Aiming at the problem of infrared image enhancement, a new histogram-based method is given. Using the gray characteristics of the target, the upper-bound threshold is selected adaptively and the histogram is processed using this threshold. After choosing the gray transform function based on the gray level distribution of the image, the gray transformation is done during histogram equalization. Finally, the enhanced image is obtained. Compared with histogram equalization (HE), histogram double equalization (HDE) and plateau histogram equalization (PE), the simulation results demonstrate that the enhancement effect of this method is clearly superior. At the same time, its operation speed is fast and its real-time ability is excellent.
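
    The clipping idea underlying plateau-style equalization, one of the baselines above, is easy to sketch. The fixed plateau below is an assumption for brevity; the paper selects its upper-bound threshold adaptively from the target's gray characteristics:

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Histogram equalisation with an upper-bound (plateau) threshold:
    bins are clipped at `plateau` before building the mapping, so a large
    uniform background cannot swamp the grey-level transform.

    img: 2-D uint8 array (e.g., an infrared frame); plateau: max bin count.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist = np.minimum(hist, plateau)          # clip dominant background bins
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                           # apply the grey-level transform

# toy IR-like frame: dim background with a slightly brighter small target
rng = np.random.default_rng(4)
frame = rng.integers(40, 60, size=(64, 64), dtype=np.uint8)
frame[30:34, 30:34] += 20
out = plateau_equalize(frame, plateau=200)
print(frame.max() - frame.min(), "->", int(out.max()) - int(out.min()))
```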

  2. Future Perspectives for Arts-Based Methods in Higher Education

    DEFF Research Database (Denmark)

    Chemi, Tatiana; Du, Xiangyun

    2018-01-01

    This chapter presents the concluding remarks for the collected contributions. Having traced multiple theoretical, empirical and practical implications of arts-based methods in higher education and organisations, the chapters have, on the one hand, shed an original light on specific... conversations between scholars and educators are needed, and artists have a central role in the future developments of this field. Whether the artists are professional or amateur does not matter, but the craft and creativity of art practices in the flesh must lead any future direction of arts-based methods.

  3. Quartet-based methods to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng

    2014-02-20

    Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performance and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to the recently emerging H7N9 low pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and the reassortment of influenza viruses.

  4. Ontology-Based Method for Fault Diagnosis of Loaders.

    Science.gov (United States)

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) an ontology-based fault diagnosis model is proposed to achieve the integration, sharing and reuse of fault diagnosis knowledge for loaders; (2) combined with the ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case retrieval, case matching and case updating); and (3) in order to cover the shortcomings of the CBR method when relevant cases are lacking, an ontology-based RBR (rule-based reasoning) is put forward by building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through the analysis of a case study.

  5. An online credit evaluation method based on AHP and SPA

    Science.gov (United States)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then achieved from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.

  6. Measurement of unattached radon progeny based on the electrostatic deposition method

    International Nuclear Information System (INIS)

    Canoba, A.C.; Lopez, F.O.

    1999-01-01

    A method for the measurement of unattached radon progeny based on its electrostatic deposition onto wire screens, using only one pump, has been implemented and calibrated. The importance of this method lies in the special radiological significance of the unattached fraction of the short-lived radon progeny. Because of this, the assessment of exposure can be related to dose with far greater accuracy than before. The advantages of this method are its simplicity, in both the tools needed for sample collection and the measurement instruments used. Its suitability is further enhanced by the fact that it can effectively be used with a simple measuring procedure such as the Kusnetz method. (author)

  7. Fast Radioactive Nuclide Recognition Method Study Based on Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Yonggang Huo

    2014-01-01

    Full Text Available Based on a pattern recognition method and the nuclear radiation digital measurement and analysis system platform, radiation characteristic information of radioactive nuclides was selected by synthetically making use of the nuclides' ray information, a database of nuclide characteristic arrays was established, and a recognition method was designed and applied to the identification of radionuclides measured with middle- or low-resolution detectors. Experiments verified that, even when the count value in a traditional low-resolution spectrometer system does not reach the statistical lower limit for a single full-energy peak, the true discrimination rate for three kinds of mixed radioactive nuclides reached more than 90% in the digital measurement and analysis system using the fast nuclide recognition method. The results show that this method is clearly superior to traditional methods and effectively improves the ability to rapidly identify radioactive nuclides.

  8. Study on UPF Harmonic Current Detection Method Based on DSP

    International Nuclear Information System (INIS)

    Zhao, H J; Pang, Y F; Qiu, Z M; Chen, M

    2006-01-01

    A unity power factor (UPF) harmonic current detection method applied to active power filters (APF) is presented in this paper. The intention of this method is to make the nonlinear load and the active power filter connected in parallel behave as an equivalent resistance, so that after compensation the source current is sinusoidal and has the same shape as the source voltage. Meanwhile, there is no harmonic in the source current and the power factor becomes one. The mathematical model of the proposed method and the optimal design of the equivalent low-pass filter used in the measurement are presented. Finally, the proposed detection method is applied to a shunt active power filter experimental prototype based on the DSP TMS320F2812. Simulation and experiment results indicate that the method is simple, easy to implement, and can obtain an exact real-time calculation of the harmonic current.

  9. [Segmentation Method for Liver Organ Based on Image Sequence Context].

    Science.gov (United States)

    Zhang, Meiyun; Fang, Bin; Wang, Yi; Zhong, Nanchang

    2015-10-01

    In view of the heavy manual intervention and segmentation defects of existing two-dimensional segmentation methods, and the segmentation errors of abnormal livers in three-dimensional segmentation methods, this paper presents a semi-automatic liver segmentation method based on image sequence context. The method takes advantage of the similarity between image sequence contexts as prior knowledge of the liver organ, and combines region growing with the level set method to carry out semi-automatic segmentation, with a small amount of manual intervention to deal with cases of liver variation. The experimental results showed that the presented liver segmentation algorithm has high precision and a good segmentation effect on livers with greater variability, and can meet clinical application demands well.

  10. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo method based group constants, and the super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results showed that generating homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications due to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)

  11. DECISION SUPPORT SYSTEMS BASED ON QUALITATIVE METHODS FOR PROJECT MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Олег Николаевич ГУЦА

    2017-03-01

    Full Text Available State-of-the-art decision support systems (DSS) used in project management are mainly based on quantitative methods. However, the formal methods of modern mathematics alone are not capable of being a universal means of solving all practical problems in this area. Due to their limited capabilities and the lack of statistical and other relevant information, economic-mathematical methods find limited application in management and marketing. In addition, few reliable validation and verification methods are available. On the other hand, expert assessment methods are free of these disadvantages and are almost the only way to solve this type of problem. Advantages of this approach include simplicity of prediction in nearly every case and excellent performance in incomplete-information scenarios. This work presents a new information technology which generates a DSS based on qualitative methods of verbal decision analysis. The authors propose certain modifications to the method of ordinary classification. The proposed technology is implemented as a web application, which is used to design a system that evaluates the probability of a successful project.

  12. SPEAKERS' IDENTIFICATION METHOD BASED ON COMPARISON OF PHONEME LENGTHS STATISTICS

    Directory of Open Access Journals (Sweden)

    E. V. Bulgakova

    2015-01-01

    Full Text Available Subject of research. The paper presents a semi-automatic method of speaker identification based on the comparison of prosodic features: statistics of phone lengths. With the development of speech technologies in recent years, there is increased interest in expert methods for speaker voice identification which supplement existing methods to increase identification reliability while having low labour intensity. An efficient solution to this problem is necessary for making a reliable decision on whether the voices of the speakers in two audio recordings are identical or different. Method description. We present a novel algorithm for calculating the difference between speakers' voices based on comparing statistics of phone and allophone lengths. A characteristic feature of the proposed method is that it can be applied along with other semi-automatic methods (acoustic, auditive and linguistic) due to the lack of a strong correlation between the analyzed features. The advantage of the method is the possibility of rapid analysis of long recordings, because preprocessing of the data being analyzed is automated. We describe the operation principles of an automatic speech segmentation module used to calculate statistics of sound lengths by acoustic-phonetic labeling. The software has been developed as an instrument of speech data preprocessing for expert analysis. Method approbation. The method was evaluated on a speech database of 130 recordings, including Russian speech of male and female speakers, and showed reliability equal to 71.7% on the database of female speech records and 78.4% on the database of male speech records. It was also experimentally established that the most informative features are the statistics of phone lengths of vowels and sonorant sounds. Practical relevance. Experimental results have shown the applicability of the proposed method for the
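
    A toy sketch of the comparison step: given per-phone duration statistics from the segmentation module, score two recordings by a normalised difference of per-phone mean lengths over the shared phone set. The distance and the toy durations are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def phone_length_distance(stats_a, stats_b):
    """Distance between two speakers' phone-duration statistics.

    stats_*: dict mapping phone -> list of durations (ms) from automatic
    segmentation. Compares per-phone mean durations over the shared phone
    set with a normalised absolute difference.
    """
    shared = set(stats_a) & set(stats_b)
    if not shared:
        raise ValueError("no common phones to compare")
    diffs = []
    for ph in shared:
        ma, mb = np.mean(stats_a[ph]), np.mean(stats_b[ph])
        diffs.append(abs(ma - mb) / ((ma + mb) / 2))
    return float(np.mean(diffs))

# toy durations for vowels/sonorants, the most informative phones per the paper
spk1 = {"a": [92, 88, 95], "o": [85, 90], "m": [70, 68]}
spk2 = {"a": [90, 94, 91], "o": [88, 86], "m": [71, 69]}   # likely same speaker
spk3 = {"a": [120, 118],   "o": [110],    "m": [90, 95]}   # likely different
print(phone_length_distance(spk1, spk2), "<", phone_length_distance(spk1, spk3))
```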

  13. Method of coating an iron-based article

    Science.gov (United States)

    Magdefrau, Neal; Beals, James T.; Sun, Ellen Y.; Yamanis, Jean

    2016-11-29

    A method of coating an iron-based article includes a first heating step of heating a substrate that includes an iron-based material in the presence of an aluminum source material and halide diffusion activator. The heating is conducted in a substantially non-oxidizing environment, to cause the formation of an aluminum-rich layer in the iron-based material. In a second heating step, the substrate that has the aluminum-rich layer is heated in an oxidizing environment to oxidize the aluminum in the aluminum-rich layer.

  14. Lesson learned - CGID based on the Method 1 and Method 2 for digital equipment

    International Nuclear Information System (INIS)

    Hwang, Wonil; Sohn, Kwang Young; Cho, Chang Hwan; Kim, Sung Jong

    2015-01-01

    The acceptance methods associated with commercial-grade dedication are the following: 1) special tests and inspections (Method 1); 2) commercial-grade surveys (Method 2); 3) source verification (Method 3); and 4) an acceptable item and supplier performance record (Method 4). Special tests and inspections, often referred to as Method 1, are performed by the dedicating entity after the item is received, to verify selected critical characteristics. Conducting a commercial-grade survey of a supplier is often referred to as Method 2; supplier audits verifying compliance with a nuclear QA program do not meet the intent of a commercial-grade survey. Source verification, often referred to as Method 3, entails verification of critical characteristics during manufacture and testing of the item being procured. The performance history (good or bad) of the item and supplier is a consideration when determining the use of the other acceptance methods and the rigor with which they are applied on a case-by-case basis. Some digital equipment has delivery references and an operating history in nuclear power plants, as far as surveyed. However, it was found that the supporting data sheets for this history are difficult to collect, so suppliers usually decide to conduct the CGID based on Method 1 and Method 2, as in an initial qualification. Method 4 might be a better approach for CGID (Commercial Grade Item Dedication), even though there are some difficulties in assembling the data package justifying CGID from the vendor and the operating organization. This paper presents the lessons learned from consulting on Method 1 and Method 2 for digital equipment dedication. Considering all the information above, there are a couple of issues to keep in mind when performing CGID with Method 2. In conducting a commercial-grade survey based on Method 2, quality personnel as well as technical engineers shall be involved for integral dedication. Other than this, the review of critical

  15. Investigation of forming method based on flanging process

    Science.gov (United States)

    Demyanenko, E. G.; Popov, I. P.

    2017-10-01

    In this paper a new method of forming based on flanging is investigated using computer simulation. The obtained results supported the theoretical conclusions and allowed us to create a mathematical model and to define the geometric dimensions of a blank for forming thin axisymmetric parts with minimal thickness variation. This analysis serves as a foundation for further design of the technological process.

  16. Graph-Based Methods for Discovery Browsing with Semantic Predications

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Fiszman, Marcelo; Miller, Christopher M

    2011-01-01

    We present an extension to literature-based discovery that goes beyond making discoveries to a principled way of navigating through selected aspects of some biomedical domain. The method is a type of "discovery browsing" that guides the user through the research literature on a specified phenomenon...

  17. A novel stepwise support vector machine (SVM) method based on ...

    African Journals Online (AJOL)

    ajl yemi

    2011-11-23

    Nov 23, 2011 ... began to use computational approaches, particularly machine learning methods, to identify pre-miRNAs (Xue et al., 2005; Huang et al., 2007; Jiang et al., 2007). Xue et al. (2005) presented a support vector machine (SVM)-based classifier called triplet-SVM, which classifies human pre-miRNAs from pseudo ...

  18. Heart rate-based lactate minimum test: a reproducible method.

    NARCIS (Netherlands)

    Strupler, M.; Muller, G.; Perret, C.

    2009-01-01

    OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The

  19. Influence of crossover methods used by genetic algorithm-based ...

    Indian Academy of Sciences (India)

    Influence of crossover methods used by genetic algorithm-based heuristic to solve the selective harmonic equations (SHE) in multi-level voltage source inverter. SANGEETHA S1,∗ and S JEEVANANTHAN2. 1Department of Electrical and Electronics Engineering, Jawaharlal Nehru. Technological University, Hyderabad 500 ...

  20. Preparing Students for Flipped or Team-Based Learning Methods

    Science.gov (United States)

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse of "traditional" teaching, in which course materials are presented during a lecture, and students are…

  1. Generating objects: a method based on documents and scenarios ...

    African Journals Online (AJOL)

    This paper proposes an original method based on forms and scenarios analysis in the information systems (IS) engineering domain, with the advantage of producing a conceptual object-oriented schema of the future IS. The advantages of the suggested approach consist in using simple elements (forms, information flowchart ...

  2. Ambulatory and Hospital-based Quality Improvement Methods in Israel

    Directory of Open Access Journals (Sweden)

    Nava Blum

    2014-01-01

    Full Text Available This review article compares ambulatory and hospital-based quality improvement methods in Israel. Data were collected from reports of the National Program for Quality Indicators in the Community, the National Program for Quality Indicators in Hospitals, and the Organization for Economic Cooperation and Development (OECD) Reviews of Health Care Quality.

  3. Influence of crossover methods used by genetic algorithm-based ...

    Indian Academy of Sciences (India)

    Home; Journals; Sadhana; Volume 40; Issue 8. Influence of crossover methods used by genetic algorithm-based heuristic to solve the selective harmonic ... Genetic Algorithms (GA) has always done justice to the art of optimization. One such endeavor has been made in employing the roots of GA in a most proficient way to ...

  4. Planning of operation & maintenance using risk and reliability based methods

    DEFF Research Database (Denmark)

    Florian, Mihai; Sørensen, John Dalsgaard

    2015-01-01

    Operation and maintenance (OM) of offshore wind turbines contributes with a substantial part of the total levelized cost of energy (LCOE). The objective of this paper is to present an application of risk- and reliability-based methods for planning of OM. The theoretical basis is presented...

  5. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    An adaptive image denoising method based on local parameters optimization. ... the computations, and the directional decomposition is done using directional filter banks (DFB). Then, Donoho and Johnstone's threshold is used to modify the coefficients, which in turn provide the noise-free image on applying the ...

  6. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    Directory of Open Access Journals (Sweden)

    Sadegh Aminifar

    2013-01-01

    Full Text Available The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules that describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to handle the real behaviors of the system. The rules used in the second stage are called the vertical rule base. The horizontal and vertical rule bases method plays a great role in easing the extraction of the optimum control surface, using far fewer rules than traditional fuzzy systems. This research involves the control of a system with high nonlinearity that is difficult to model with classical methods. As a case study to test the proposed method under real conditions, the designed controller is applied to a steaming room with uncertain data and variable parameters. A comparison among PID, a traditional fuzzy counterpart and the proposed system shows that the proposed system outperforms the PID and traditional fuzzy systems in terms of the number of valve switchings and surface following. The evaluations were done both with model simulation and with a DSP implementation.

  7. Homotopy-based methods for fractional differential equations

    NARCIS (Netherlands)

    Ates, I.

    2017-01-01

    The intention of this thesis is two-fold. The first aim is to describe and apply, series-based, numerical methods to fractional differential equation models. For this, it is needed to distinguish between space-fractional and time-fractional derivatives. The second goal of this thesis is to give a

  8. A Natural Teaching Method Based on Learning Theory.

    Science.gov (United States)

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  9. Research Methods for Assessing and Evaluating School-Based Clinics.

    Science.gov (United States)

    Kirby, Douglas

    This monograph describes three types of evaluation that are potentially useful to school-based clinics: needs assessments, process evaluations, and impact evaluations. Two important methodological principles are involved: (1) collecting multiple kinds of data with multiple methods; and (2) collecting comparison data. Student needs can be…

  10. Bead Collage: An Arts-Based Research Method

    Science.gov (United States)

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  11. Dynamic Frames Based Verification Method for Concurrent Java Programs

    NARCIS (Netherlands)

    Mostowski, Wojciech

    2016-01-01

    In this paper we discuss a verification method for concurrent Java programs based on the concept of dynamic frames. We build on our earlier work that proposes a new, symbolic permission system for concurrent reasoning and we provide the following new contributions. First, we describe our approach

  12. Algorithm for Concrete Mix Design Based on British Method | Ezeh ...

    African Journals Online (AJOL)

    The results obtained from the algorithm were compared with those obtained with the British method, and the differences between them were found to be less than 10% in each example. Hence, the algorithm developed in this paper works with minimal error. It is recommended for use in obtaining good results for ...

  13. A Quantum-Based Similarity Method in Virtual Screening

    Directory of Open Access Journals (Sweden)

    Mohammed Mumtaz Al-Dabbagh

    2015-10-01

    Full Text Available One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopted concepts of quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers; hence, this study proposed three techniques to embed and re-represent molecular compounds in complex-number format. The quantum-based similarity method developed in this study, which depends on a complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method increased significantly, owing to the representational power of molecular compounds in complex-number form, compared to the Tanimoto benchmark similarity measure.
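    As a point of reference for the comparison above, the sketch below (not the authors' SQB implementation) shows the Tanimoto benchmark measure on binary 2D fingerprints, together with a toy complex-valued similarity to illustrate the idea of re-representing fingerprints with complex numbers; the embedding used is an assumption for illustration only.

```python
import numpy as np

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints."""
    a, b = np.asarray(fp_a, bool), np.asarray(fp_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def complex_similarity(fp_a, fp_b):
    """Toy 'quantum' overlap: map bits to unit complex amplitudes and take the
    magnitude of the normalized inner product (a hypothetical embedding,
    not the paper's)."""
    za = np.exp(1j * np.pi * np.asarray(fp_a, float) / 2)
    zb = np.exp(1j * np.pi * np.asarray(fp_b, float) / 2)
    return abs(np.vdot(za, zb)) / len(za)

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 0, 1, 0, 1, 1, 0]
print(tanimoto(a, b), complex_similarity(a, b))
```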

  14. Peer Tutoring with QUICK Method vs. Task Based Method on Reading Comprehension Achievement

    OpenAIRE

    Sri Indrawati

    2017-01-01

    This study is a quasi-experimental research analyzing the reading comprehension achievement of eleventh graders of a senior high school in Surabaya. The experiment compares the effects of peer tutoring with the QUICK method and with the task-based method in helping students increase their reading achievement. Besides increasing the students' reading achievement, the study also aims to provide variation in the teacher's reading-teaching techniques. This study uses i...

  15. Fast Reduction Method in Dominance-Based Information Systems

    Science.gov (United States)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves the efficiency of the traditional method, especially for large-scale data.
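    For readers unfamiliar with the notion, the sketch below shows the baseline (quadratic-time) computation of dominance classes that such methods try to speed up; the toy decision table and the gain-type attributes are assumptions for illustration.

```python
def dominates(y, x):
    """True if y is at least as good as x on every (gain-type) attribute."""
    return all(yv >= xv for yv, xv in zip(y, x))

def dominance_classes(objects):
    """D+(x): the set of objects dominating x, for every object x."""
    return {i: {j for j, y in enumerate(objects) if dominates(y, x)}
            for i, x in enumerate(objects)}

# Toy decision table: rows are objects, columns are preference-ordered attributes.
table = [(3, 2, 1), (2, 2, 1), (3, 3, 2), (1, 1, 1)]
for i, cls in sorted(dominance_classes(table).items()):
    print(f"D+({i}) = {sorted(cls)}")
```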

  16. Biogas slurry pricing method based on nutrient content

    Science.gov (United States)

    Zhang, Chang-ai; Guo, Honghai; Yang, Zhengtao; Xin, Shurong

    2017-11-01

    In order to promote biogas-slurry commercialization, a method was put forward to value biogas slurry based on its nutrient content. Firstly, the element contents of the biogas slurry were measured; secondly, each element was valued based on its market price; then transport cost, usage cost and market effect were taken into account, and the pricing method for biogas slurry was finally obtained. This method could be useful in practical production. Taking biogas slurry from cattle manure and from corn stalk raw material as examples, their prices were 38.50 yuan RMB per ton and 28.80 yuan RMB per ton, respectively. This paper will be useful for recognizing the value of biogas projects, keeping biogas projects running, and guiding the cyclic utilization of biomass resources in China.
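    A minimal sketch of this kind of nutrient-content pricing follows; all nutrient contents, fertilizer prices and cost figures below are invented placeholders, not the paper's data.

```python
# Value each nutrient at the market price of an equivalent fertilizer,
# then adjust for transport, usage cost and market effect (all numbers assumed).
nutrient_kg_per_ton = {"N": 1.5, "P2O5": 0.6, "K2O": 1.1}
fertilizer_yuan_per_kg = {"N": 4.3, "P2O5": 5.0, "K2O": 4.0}

gross_value = sum(nutrient_kg_per_ton[k] * fertilizer_yuan_per_kg[k]
                  for k in nutrient_kg_per_ton)
transport_cost, usage_cost = 5.0, 2.0   # yuan per ton, assumed
market_factor = 0.9                     # assumed demand adjustment
price = (gross_value - transport_cost - usage_cost) * market_factor
print(f"indicative slurry price: {price:.2f} yuan/ton")
```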

  17. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
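    A common maximum-likelihood estimator of this kind is the MLEM iteration, which uses the system matrix directly and needs no inversion; a toy-sized sketch (not the Berkeley implementation) follows, with a random matrix standing in for a real instrument model.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM: A is the (n_detectors, n_pixels) system matrix, y the measured counts."""
    x = np.ones(A.shape[1])             # flat initial image
    sens = A.sum(axis=0)                # per-pixel sensitivity, sum_i A_ij
    for _ in range(n_iter):
        proj = A @ x                    # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 16))                # toy system matrix
x_true = rng.random(16)
y = rng.poisson(100 * (A @ x_true)) / 100.0   # noisy count measurements
print(np.round(mlem(A, y), 2))
```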

  18. A joint tracking method for NSCC based on WLS algorithm

    Science.gov (United States)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

    Navigation signals based on a compound carrier (NSCC) have a flexible multi-carrier scheme and various configurable scheme parameters, which gives them significant navigation augmentation potential in terms of spectral efficiency, tracking accuracy, multipath mitigation capability and anti-jamming performance compared with legacy navigation signals. Meanwhile, the typical scheme characteristics can provide auxiliary information for the design of signal synchronization algorithms. Based on the characteristics of NSCC, this paper proposes a joint tracking method using the Weighted Least Squares (WLS) algorithm. In this method, the LS algorithm is employed to jointly estimate the frequency shift of each sub-carrier through the linear frequency-Doppler relationship, using the known sub-carrier frequencies. Besides, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform the single-carrier algorithm at lower SNR.
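    The core estimator is an ordinary weighted least squares fit; the sketch below shows it on a simulated linear frequency-Doppler relationship, with the sub-carrier frequencies, noise level and power-based weights all chosen as illustrative assumptions.

```python
import numpy as np

def wls(X, y, w):
    """Solve argmin_b (y - X b)^T W (y - X b) with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Measured sub-carrier frequency shifts ~ doppler * (f_k / f0) + bias.
f0 = 1.0
f = np.array([0.8, 0.9, 1.0, 1.1, 1.2])          # known sub-carrier frequencies
rng = np.random.default_rng(1)
shifts = 2.5e-3 * f / f0 + 1e-4 + rng.normal(0, 1e-5, f.size)

X = np.column_stack([f / f0, np.ones_like(f)])    # regressors: slope and bias
weights = f**2                                    # assumed: weight by sub-carrier power
print(wls(X, shifts, weights))                    # approx [2.5e-3, 1e-4]
```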

  19. FPGA Implementation of Uniform Random Number based on Residue Method

    Directory of Open Access Journals (Sweden)

    Zulfikar .

    2014-04-01

    Full Text Available This paper presents the implementation and comparison of uniform random number generation on Field Programmable Gate Arrays (FPGAs). Uniform random numbers are generated based on the residue method. The circuit for generating uniform random numbers is presented in a general view; it is constructed from a multiplexer, a multiplier, buffers and some basic gates. FPGA implementation of the designed circuit has been done on various Xilinx chips, and simulation results are presented in the paper. Random numbers are generated based on different parameters. Comparisons of occupied area and maximum frequency across the different Xilinx chips are examined: Virtex 7 is the fastest chip and Virtex 4 is the best choice in terms of occupied area. Finally, uniform random numbers have been generated successfully on FPGA using the residue method. Keywords: FPGA implementation, random number, uniform random number, residue method, Xilinx chips
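    In software, the residue method corresponds to a linear congruential generator (multiply, then take the remainder), mirroring the multiplier-plus-modulo structure of the hardware circuit; the sketch below uses the classic Park-Miller parameters, which are an assumption rather than the paper's choice.

```python
def lcg(seed, a=16807, m=2**31 - 1):
    """Residue-method generator: x_{n+1} = (a * x_n) mod m, scaled to (0, 1)."""
    x = seed
    while True:
        x = (a * x) % m        # the residue step
        yield x / m            # uniform value in (0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
```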

  20. Liver 4DMRI: A retrospective image-based sorting method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133 (Italy); Summers, Paul [Division of Radiology, Istituto Europeo di Oncologia, Milano 20133 (Italy); Bellomi, Massimo [Division of Radiology, Istituto Europeo di Oncologia, Milano 20133, Italy and Department of Health Sciences, Università di Milano, Milano 20133 (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia 27100 (Italy)

    2015-08-15

    Purpose: Four-dimensional magnetic resonance imaging (4DMRI) is an emerging technique in radiotherapy treatment planning for organ motion quantification. In this paper, the authors present a novel 4DMRI retrospective image-based sorting method, which produces fewer motion artifacts than a standard one-dimensional external respiratory surrogate. Methods: Serial interleaved 2D multislice MRI data were acquired from 24 liver cases (6 volunteers + 18 patients) to test the proposed 4DMRI sorting. Image similarity based on mutual information was applied to automatically identify a stable reference phase and sort the image sequence retrospectively, without the use of additional image or surrogate data to describe breathing motion. Results: The image-based 4DMRI provided a smoother liver profile than that obtained from standard resorting based on an external surrogate. Reduced motion artifacts were observed in image-based 4DMRI datasets, with a fitting error of the liver profile measuring 1.2 ± 0.9 mm (median ± interquartile range) vs 2.1 ± 1.7 mm for the standard method. Conclusions: The authors present a novel methodology to derive a patient-specific 4DMRI model to describe organ motion due to breathing, with improved image quality in 4D reconstruction.

  1. Data assimilation method based on the constraints of confidence region

    Science.gov (United States)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to limited sample size or an imprecise dynamics model, the forecast error variance is easily underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, based on the constraints of a confidence region constructed from the observations, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
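    The sketch below illustrates the quantity being tuned: multiplicative inflation scales the ensemble anomalies about the mean, and the simple stopping rule shown (grow the inflation until the observation falls inside a k-sigma band of the projected ensemble) is a crude stand-in for the paper's confidence-region criterion, not the EnCR algorithm itself.

```python
import numpy as np

def inflate(ensemble, lam):
    """Scale ensemble anomalies about the mean by sqrt(lam), lam >= 1."""
    mean = ensemble.mean(axis=0)
    return mean + np.sqrt(lam) * (ensemble - mean)

def choose_inflation(ensemble, H, y, obs_std, k=2.0, lam_max=10.0):
    lam = 1.0
    while lam < lam_max:
        proj = inflate(ensemble, lam) @ H.T           # ensemble in observation space
        spread = np.sqrt(proj.var(axis=0) + obs_std**2)
        if np.all(np.abs(y - proj.mean(axis=0)) <= k * spread):
            return lam
        lam *= 1.1
    return lam_max

rng = np.random.default_rng(2)
ens = rng.normal(0.0, 0.3, size=(20, 3))   # underdispersed forecast ensemble
H = np.eye(3)[:2]                          # observe the first two state components
y = np.array([1.0, -0.8])                  # observations far from the ensemble mean
print(round(choose_inflation(ens, H, y, obs_std=0.2), 2))
```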

  2. Fuzzy Critical Path Method Based on Lexicographic Ordering

    Directory of Open Access Journals (Sweden)

    Phani Bushan Rao P

    2012-01-01

    Full Text Available The Critical Path Method (CPM) is useful for planning and control of complex projects. The CPM identifies the critical activities in the critical path of an activity network. Successful implementation of CPM requires the availability of a clearly determined time duration for each activity. However, in practical situations this requirement is usually hard to fulfil, since many activities will be executed for the first time. Hence, there is always uncertainty about the time durations of activities in network planning, which has led to the development of fuzzy CPM. In this paper, we apply a lexicographic ordering method for ranking fuzzy numbers to the critical path method in a fuzzy project network, where the duration time of each activity is represented by a trapezoidal fuzzy number. The proposed method is compared with fuzzy CPM based on different ranking methods of fuzzy numbers. The comparison reveals that the method proposed in this paper is more effective in determining activity criticalities and finding the critical path, and is simpler in calculating the fuzzy critical path than many methods proposed so far in the literature.
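    For orientation, the sketch below is the crisp CPM forward/backward pass; in the fuzzy variant described above, the crisp durations become trapezoidal fuzzy numbers compared via lexicographic ranking. The activity network is illustrative.

```python
def cpm(durations, preds):
    """Classic CPM: returns project duration and the critical activities."""
    es, ef = {}, {}
    for a in durations:                                    # assumes topological order
        es[a] = max((ef[p] for p in preds[a]), default=0.0)
        ef[a] = es[a] + durations[a]
    horizon = max(ef.values())
    ls, lf = {}, {}
    for a in reversed(list(durations)):
        succs = [s for s in durations if a in preds[s]]
        lf[a] = min((ls[s] for s in succs), default=horizon)
        ls[a] = lf[a] - durations[a]
    critical = [a for a in durations if abs(ls[a] - es[a]) < 1e-9]
    return horizon, critical

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(cpm(durations, preds))    # (9, ['A', 'C', 'D'])
```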

  3. [Galaxy/quasar classification based on nearest neighbor method].

    Science.gov (United States)

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from earth, and their spectra are usually contaminated by various kinds of noise. Therefore, recognizing these two types of spectra is a typical problem in automatic spectra classification. Furthermore, the method used, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that this method does not need to be trained, which is useful in incremental learning and parallel computation for mass spectral data processing. In conclusion, the results in this work are helpful for studying the classification of galaxy and quasar spectra.
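    A minimal sketch of the benchmark classifier follows; real inputs would be calibrated flux vectors, whereas the arrays below are random stand-ins and the class separation is artificial.

```python
import numpy as np

def nn_classify(train_X, train_y, query):
    """1-NN: return the label of the closest training spectrum (no training step)."""
    d = np.linalg.norm(train_X - query, axis=1)   # Euclidean distances
    return train_y[int(np.argmin(d))]

rng = np.random.default_rng(3)
galaxies = rng.normal(0.0, 1.0, size=(50, 200))   # fake "galaxy" spectra
quasars = rng.normal(0.5, 1.0, size=(50, 200))    # fake "quasar" spectra
X = np.vstack([galaxies, quasars])
y = np.array(["galaxy"] * 50 + ["quasar"] * 50)

query = rng.normal(0.5, 1.0, 200)                 # drawn from the quasar model
print(nn_classify(X, y, query))
```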

  4. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between fault phenomena and fault causes is always nonlinear, which affects the accuracy of fault location, and neural networks are effective in dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of a BP neural network is built and its learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods of the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.

  5. Photonic arbitrary waveform generator based on Taylor synthesis method

    DEFF Research Database (Denmark)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji

    2016-01-01

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and of the successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as square, triangular, flat-top, sawtooth and Gaussian waveforms. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method, and it does not require any spectral disperser or large
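    The numerical analogue of this Taylor synthesis is sketched below: a target waveform is approximated by a weighted sum of a Gaussian pulse and its first three derivatives, with the weights fitted here by least squares (on chip they would be set by amplitude and phase control); the target waveform and grid are illustrative.

```python
import numpy as np

t = np.linspace(-5, 5, 1000)
g = np.exp(-t**2)                           # input Gaussian pulse
basis = [g]
for _ in range(3):                          # first-, second-, third-order derivatives
    basis.append(np.gradient(basis[-1], t))
B = np.column_stack(basis)

target = np.clip(1.5 - np.abs(t), 0.0, None)       # triangular target waveform
coeffs, *_ = np.linalg.lstsq(B, target, rcond=None)
approx = B @ coeffs
print("weights:", np.round(coeffs, 3),
      "max error:", round(float(np.max(np.abs(approx - target))), 3))
```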

  6. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is a crucial question in the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data of the assembly line balancing problem are identified, and the precedence relations diagram is described. A double-objective optimization model based on takt time and smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Through simulation experiments on examples, the feasibility and validity of the assembly line balancing method based on the PSO algorithm are demonstrated.
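    A bare-bones particle swarm optimizer is sketched below on a stand-in objective (spread the workload evenly over stations, a crude surrogate for the takt-time/smoothness model); the swarm parameters and the objective are assumptions, not the paper's.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=4):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

def load_balance(x):
    """Surrogate objective: normalized station loads should be flat."""
    total = x.sum()
    w = x / total if total > 0 else np.full_like(x, 1.0 / len(x))
    return w.max() + w.std()          # minimized by an even workload split

best, value = pso(load_balance, dim=6)
print(np.round(best / best.sum(), 3), round(float(value), 4))
```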

  7. Arts-Based Methods in Education Around the World

    DEFF Research Database (Denmark)

    Arts-Based Methods in Education Around the World aims to investigate arts-based encounters in educational settings in response to a global need for studies that connect the cultural, inter-cultural, cross-cultural, and global elements of arts-based methods in education. In this extraordinary...... collection, contributions are collected from experts all over the world and involve a multiplicity of arts genres and traditions. These contributions bring together diverse cultural and educational perspectives and include a large variety of artistic genres and research methodologies. The topics covered...... in the book range from policies to pedagogies, from social impact to philosophical conceptualisations. They are informative on specific topics, but also offer a clear monitoring of the ways in which the general attention to the arts in education evolves through time....

  8. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic in this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output-constraint problems. These kinds of constraints are natural in production optimization; limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraints gradient (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method, but they may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is surrogate optimization for water flooding in the presence of output constraints. Two kinds of model order reduction techniques are applied to build surrogate models: proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM)

  9. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang

    2012-03-16

    We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, for the purpose of smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be handled easily and naturally. In addition, a signed distance field is also employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be well simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.

  10. Distributed Research Project Scheduling Based on Multi-Agent Methods

    Directory of Open Access Journals (Sweden)

    Constanta Nicoleta Bodea

    2011-01-01

    Full Text Available Different project planning and scheduling approaches have been developed. Operational Research (OR) provides two major planning techniques: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). Due to the complexity of projects and the difficulty of using classical methods, new approaches were developed. Artificial Intelligence (AI) initially promoted the automatic planner concept, but model-based planning and scheduling methods emerged later on. The paper addresses the project scheduling optimization problem when projects are seen as Complex Adaptive Systems (CAS). Taking into consideration two different approaches to project scheduling optimization, TCPSP (Time-Constrained Project Scheduling) and RCPSP (Resource-Constrained Project Scheduling), the paper focuses on a multi-agent implementation in MATLAB for TCPSP. Using a research project as a case study, the paper includes a comparison between two multi-agent methods: the Genetic Algorithm (GA) and the Ant Colony Algorithm (ACO).

  11. New method for diagnosing cast compactness based on laser ultrasonography

    Directory of Open Access Journals (Sweden)

    P. Swornowski

    2010-01-01

    Full Text Available Technologically advanced materials, such as alloys of aluminum, nickel or titanium, are currently used increasingly often in heavily loaded components in the aviation industry, among others in the construction of jet turbine engine blades. The article presents a method for diagnosing the condition of the inside of cast blades with the use of laser ultrasonography. The inspection is based on finding hidden flaws with sizes between 10 and 30 μm. Laser ultrasonography offers a number of improvements over the non-destructive methods used so far, e.g. the possibility of diagnosing the cast at a selected depth, a high signal-to-noise ratio and good sensitivity. The article includes a brief overview of non-destructive inspection methods used in foundry engineering and sample results of inspecting the inner structure of a turbojet engine blade using the method described in the article.

  12. Assessment of Soil Liquefaction Potential Based on Numerical Method

    DEFF Research Database (Denmark)

    Choobasti, A. Janalizadeh; Vahdatirad, Mohammad Javad; Torabi, M.

    2012-01-01

    Simplified methods have been developed over the years. Although simplified methods are available for calculating the liquefaction potential of a soil deposit and the shear stresses induced at any point in the ground by earthquake loading, these methods cannot be applied to all earthquakes with the same accuracy, and they lack the ability to predict the pore pressure developed in the soil. Therefore, it is necessary to carry out a ground response analysis to obtain pore pressures and shear stresses in the soil due to earthquake loading. Using soil historical, geological and compositional criteria, a zone of the corridor of Tabriz urban railway line 2 susceptible to liquefaction was recognized. Then, using numerical analysis and the cyclic stress method with the QUAKE/W finite element code, the soil liquefaction potential in the susceptible zone was evaluated based on the design earthquake.

  13. Object Recognition using Feature- and Color-Based Methods

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods: one based on adaptive detection of shape features and one based on adaptive color segmentation. Combining them enables recognition in situations in which either method by itself may be inadequate. The chosen feature-based method is known as adaptive principal-component analysis (APCA); the chosen color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  14. Training Methods to Improve Evidence-Based Medicine Skills

    Directory of Open Access Journals (Sweden)

    Filiz Ozyigit

    2010-06-01

    Full Text Available Evidence-based medicine (EBM) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. It is estimated that only 15% of medical interventions are evidence-based. Increasing demand, new technological developments, malpractice legislation and a very rapid increase in knowledge and knowledge sources push physicians toward EBM, but at the same time increase their load by giving them the responsibility to improve their skills. More clinical maneuvers are needed as the numbers of clinical trials and observational studies increase. However, many of the physicians who are in the front line of patient care do not use this growing body of evidence. There are several examples of different training methods intended to improve physicians' skills in evidence-based practice. Many training methods exist to improve EBM skills, and such training might be given during medical school, during residency, or as continuing education to practitioners in the field. It is important to discuss these different training methods in our country as well and to encourage the dissemination of feasible and effective methods. [TAF Prev Med Bull 2010; 9(3): 245-254

  15. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  16. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang

    2011-03-01

    We present a new particle-based method for high-viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high-viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods, with their complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.

  17. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification of Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on the 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed to build the knowledge base for classification. The classification result was checked using the 1:4,000,000 Chinese Geomorphological Map as reference, and the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
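    The factor-importance step maps directly onto a standard random-forest attribute ranking; a sketch follows, where the factor names, data and toy label rule are placeholders for values that would actually be derived from the DEM.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

factors = ["elevation", "relief", "slope", "roughness", "curvature"]
rng = np.random.default_rng(5)
X = rng.random((500, len(factors)))                  # stand-in terrain factors
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)      # toy landform label

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(factors, rf.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name:10s} {imp:.3f}")                   # ranked factor importance
```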

  18. Inverse Method of Centrifugal Pump Impeller Based on Proper Orthogonal Decomposition (POD) Method

    Science.gov (United States)

    Zhang, Ren-Hui; Guo, Rong; Yang, Jun-Hu; Luo, Jia-Qi

    2017-07-01

    To improve the accuracy and reduce the calculation cost of the inverse problem for centrifugal pump impellers, a new inverse method based on proper orthogonal decomposition (POD) is proposed. The pump blade shape is parameterized by a quartic Bezier curve, and the initial snapshots are generated by introducing perturbations of the blade shape control parameters. The internal flow field and the hydraulic performance are predicted by a CFD method. The snapshot vector includes the blade shape parameters and the distribution of blade load. The POD basis for the snapshot set is deduced by proper orthogonal decomposition, and the sample vector set is expressed as a linear combination of the orthogonal basis. The objective blade shape corresponding to the objective distribution of blade load is then obtained by a least-squares fit. An iterative correction algorithm for the centrifugal pump blade inverse method based on POD is proposed, in which the objective blade load distributions are corrected according to the difference between the CFD result and the POD result. Two-dimensional and three-dimensional blade calculation cases show that the proposed inverse method based on POD has good convergence and high accuracy, and the calculation cost is greatly reduced. After two iterations, the deviations of the blade load and the pump hydraulic performance are limited within 4.0% and 6.0%, respectively, for most of the flow rate range. This paper provides a promising inverse method for centrifugal pump impellers, which will benefit the hydraulic optimization of centrifugal pumps.
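    The POD-plus-least-squares core of such a method can be sketched in a few lines: snapshot vectors (shape parameters stacked with load samples) are decomposed by SVD, a target load is fitted in the reduced basis, and the shape part is read back from the reconstruction. All data below are random placeholders for CFD-generated snapshots.

```python
import numpy as np

rng = np.random.default_rng(6)
n_shape, n_load, n_snap = 8, 40, 25
snapshots = rng.random((n_shape + n_load, n_snap))   # columns: shape + load vectors

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 6                                 # retained modes (pick from the decay of s)
basis = U[:, :r]

target_load = rng.random(n_load)      # objective blade-load distribution
# Fit POD coefficients using only the load rows of the basis ...
coeffs, *_ = np.linalg.lstsq(basis[n_shape:], target_load - mean[n_shape:, 0],
                             rcond=None)
# ... then reconstruct the corresponding blade-shape parameters.
blade_shape = mean[:n_shape, 0] + basis[:n_shape] @ coeffs
print(np.round(blade_shape, 3))
```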

  19. Development of redesign method of production system based on QFD

    Science.gov (United States)

    Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi

    In order to catch up with a rapidly changing market environment, rapid and flexible redesign of production systems is quite important, and a redesign support system is eagerly needed. To this end, this paper proposes a redesign method for production systems based on Quality Function Deployment (QFD). The method represents a designer's intention in the form of QFD, collects experts' knowledge as "Production Method (PM) modules," and formulates redesign guidelines as seven redesign operations, so as to support a designer in finding improvement ideas in a systematic manner. This paper also illustrates a redesign support tool for production systems that we have developed based on this method, and demonstrates its feasibility with a practical example of a production system for a contact probe. The results of this example show that a novice designer can achieve cost reductions comparable to those of veteran designers. From this result, we conclude that our redesign method is effective and feasible for supporting the redesign of a production system.

  20. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach; additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  1. [A retrieval method of drug molecules based on graph collapsing].

    Science.gov (United States)

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules, in order to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target, as a unique and precise identifier for each compound at the molecular level, in the research field of medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of the graph-based CSF retrieval system was introduced. This system accepts photos taken with smartphones and sketches drawn on tablet personal computers as CSF inputs, and formalizes the CSFs with corresponding graphs. This paper then proposes a compact and efficient hypergraph representation for molecules, based on an analysis of the factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining is adopted. There was yet a fundamental challenge, subgraph overlapping during the collapsing procedure, which hindered the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm was proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs is evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the proposed system was first compared with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset, on retrieval accuracy. The system achieved higher values of mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system achieved 10%, 1.41, 6.42%, and 1

  2. a SAR Image Registration Method Based on Sift Algorithm

    Science.gov (United States)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, to avoid amplifying noise in the subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, this not only maintains the richness of the features but also reduces the noise of the image. The simulation results show that the proposed algorithm achieves a better matching effect.
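    A minimal SIFT matching and registration sketch using OpenCV is shown below (this assumes opencv-python 4.4 or later, where SIFT lives in the main module, and hypothetical input files; the paper's pipeline additionally applies Wallis-based adaptive smoothing to the SAR images first).

```python
import cv2
import numpy as np

img1 = cv2.imread("sar_reference.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("sar_sensed.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]            # Lowe's ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good])
dst = np.float32([kp2[m.trainIdx].pt for m in good])
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # registration transform
print(len(good), "good matches")
print(H)
```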

  3. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    HAZOP is the most commonly used process hazard analysis tool in industry; it is systematic, yet tedious and time-consuming. The aim of this study is to explore the feasibility of using dynamic process simulations to facilitate HAZOP studies. We propose a simulation-based methodology to complement the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of the failure scenarios is then evaluated using dynamic simulations; in this study the K-Spice® software was used. The consequences of each failure

  4. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.

  5. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation, a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form the centers of the final segments by merging similar primary regions. In the t...

  6. Register-based statistics statistical methods for administrative data

    CERN Document Server

    Wallgren, Anders

    2014-01-01

    This book provides a comprehensive and up-to-date treatment of theory and practical implementation in register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi

  7. A Case-Based Reasoning Method with Rank Aggregation

    Science.gov (United States)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper addresses a new CBR framework built on the basic principle of rank aggregation. First, ranking methods are put forward in each attribute subspace of a case, so that an ordering relation between cases is obtained on each attribute, yielding a ranking matrix. Second, the retrieval of similar cases from the ranking matrix is transformed into a rank aggregation optimization problem, which uses the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean-distance CBR and Mahalanobis-distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
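    Exact Kemeny aggregation is NP-hard in general; the sketch below uses Borda counting, a common polynomial-time approximation, purely to illustrate how per-attribute similarity rankings can be fused into one retrieval order (it is not the RA-CBR algorithm itself).

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """rankings: one list of case IDs per attribute, most similar first."""
    score = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, case in enumerate(ranking):
            score[case] += n - pos          # higher score = better aggregate rank
    return sorted(score, key=score.get, reverse=True)

per_attribute = [["c2", "c1", "c3"],        # similarity order on attribute 1
                 ["c2", "c3", "c1"],        # attribute 2
                 ["c1", "c2", "c3"]]        # attribute 3
print(borda_aggregate(per_attribute))       # aggregated order: ['c2', 'c1', 'c3']
```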

  8. Mutton Traceability Method Based on Internet of Things

    Directory of Open Access Journals (Sweden)

    Wu Min-Ning

    2014-01-01

    Full Text Available In order to improve the efficiency of mutton traceability in the Internet of Things and solve the problem of data transmission, this work analyzed existing tracking algorithms and proposed a food traceability application model, a Petri net model of food traceability, and an improved K-means algorithm for food traceability time-series data based on the Internet of Things. The food traceability application model converts, integrates and mines the heterogeneous information to implement food safety traceability information management. The Petri net model of the state transitions in the food traceability process was analyzed and simulated, providing a theoretical basis for describing the behavior and designing the structure of the food traceability system. Experiments on simulation data show that the proposed traceability method based on the Internet of Things is more effective for mutton traceability data than the traditional K-means methods.

  9. Parameter Tuning for Local-Search-Based Matheuristic Methods

    Directory of Open Access Journals (Sweden)

    Guillermo Cabrera-Guerrero

    2017-01-01

    Full Text Available Algorithms that aim to solve optimisation problems by combining heuristics and mathematical programming have attracted researchers' attention. These methods, also known as matheuristics, have been shown to perform especially well for large, complex optimisation problems that include both integer and continuous decision variables. One common strategy used by matheuristic methods to solve such optimisation problems is to divide the main optimisation problem into several subproblems. While heuristics are used to seek promising subproblems, exact methods are used to solve them to optimality. In general, we say that both mixed integer (nonlinear) programming problems and combinatorial optimisation problems can be addressed using this strategy. Besides the parameters researchers need to adjust when using heuristic methods, additional parameters arise when using matheuristic methods. In this paper we focus on one particular parameter, which determines the size of the subproblem. We show how matheuristic performance varies as this parameter is modified. We considered a well-known NP-hard combinatorial optimisation problem, namely, the capacitated facility location problem, for our experiments. Based on the obtained results, we discuss the effects of adjusting the size of the subproblems that are generated when using matheuristic methods such as the one considered in this paper.

  10. Edge detection methods based on generalized type-2 fuzzy logic

    CERN Document Server

    Gonzalez, Claudia I; Castro, Juan R; Castillo, Oscar

    2017-01-01

    In this book four new methods are proposed. In the first method the generalized type-2 fuzzy logic is combined with the morphological gradient technique. The second method combines general type-2 fuzzy systems (GT2 FSs) and the Sobel operator; in the third approach the methodology based on the Sobel operator and GT2 FSs is improved to be applied to color images. In the fourth approach, we propose a novel edge detection method in which a digital image is converted to a generalized type-2 fuzzy image. The book also includes a comparative study of type-1, interval type-2 and generalized type-2 fuzzy systems as tools to enhance edge detection in digital images when used in conjunction with the morphological gradient and the Sobel operator. The proposed generalized type-2 fuzzy edge detection methods were tested with benchmark images and synthetic images, in grayscale and color formats. Another contribution of this book is that the generalized type-2 fuzzy edge detector method is applied in the preproc...

  11. Acoustic design method of ship's cabin based on geometrical acoustics

    Directory of Open Access Journals (Sweden)

    FENG Aijing

    2017-08-01

    Full Text Available To address the question of how to select the best noise control positions and measures along the main noise transmission paths of a ship's cabins, this paper proposes a sound ray search method based on the acoustic ray-tracing approach of geometrical acoustics, taking the sound transmission of the bulkheads into account. The method is used to calculate the sound pressure in a ship's cabin, allowing the sound field distribution of multiple compartments to be simulated. In the proposed method, the acoustic sensitivity of different bulkhead positions to the noise of the target cabin is calculated by searching for the sound rays passing through the target cabin. On this basis, a cabin noise reduction plan can be designed to optimize medium- and high-frequency cabin noise. With this method, the noise of a typical cabin was optimized and reduced by 7.3 dB. A comparative analysis with the statistical energy method proves that the method is feasible and can guide the refined design of noise reduction in ships' cabins.

  12. An urban storm-inundation simulation method based on GIS

    Science.gov (United States)

    Zhang, Shanghong; Pan, Baozhu

    2014-09-01

    With increasing urbanization, the conditions of the underlying surface and the climate have been changed by human activities, resulting in more frequent flooding and inundation problems in urban areas. Storm-inundation models based on hydrology and hydrodynamics require a large amount of input data (detailed terrain, sewer system and land use data), and such models are complex and difficult to build and run. To determine inundation conditions quickly with only a few commonly available inputs, an urban storm-inundation simulation method (USISM) based on geographic information systems (GIS) is proposed in this paper. The method is a simplified distributed hydrologic model based on a digital elevation model (DEM), in which depressions in the terrain are regarded as the basic inundated areas and the amount of water that can be stored in a depression determines the final inundation distribution. The runoff catchment area and maximum storage volume of each depression, and the flow directions between depressions, are all considered in the final inundation simulation. GIS technology is used to find the depressions in an area, divide the subcatchment of each depression, and obtain the flow order of the depressions based on the DEM. The SCS (Soil Conservation Service) curve number method is used to calculate storm runoff, and a water balance equation is used to calculate the water stored in each depression. Nangang District in Harbin City, China, was selected as the study area to verify the USISM. The results reveal that the USISM can find inundation locations in an urban area and quickly calculate inundation depth and area. The USISM is valuable for simulating short-duration storms in urban areas using only a few commonly available input data.
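
    The runoff step can be made concrete. The sketch below implements the standard SCS curve-number relation in SI units; the initial-abstraction ratio of 0.2 and the curve number in the example are the usual textbook assumptions, not values reported in this paper.

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """SCS curve-number runoff (mm) for a storm of given depth."""
    s = 25400.0 / curve_number - 254.0   # potential maximum retention S
    ia = 0.2 * s                         # standard initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# 50 mm storm over a highly impervious urban subcatchment (CN = 90):
print(round(scs_runoff_mm(50.0, 90), 1))  # about 27 mm of runoff
```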

  13. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S.; Sedukhin, S. [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I.

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which systematically constructs a family of planar array processors. These array processors are synthesized and analyzed. It is shown that some of the array processors are optimal in the framework of linear allocation of computations, in terms of the number of processing elements and computing time. (author)
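
    The arithmetic that such array processors parallelize can be sketched sequentially. The following is a minimal fraction-free (Bareiss) forward elimination on an integer matrix; pivoting and back-substitution are omitted for brevity, and the example system is hypothetical.

```python
def bareiss(a):
    """Fraction-free forward elimination on an integer matrix (in place).
    All intermediate divisions are exact, so no fractions ever appear."""
    n = len(a)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, len(a[0])):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a  # the last pivot equals the determinant of the coefficient part

# Augmented system [A | b] for 2x + y = 5, 4x + 5y = 17:
print(bareiss([[2, 1, 5], [4, 5, 17]]))  # [[2, 1, 5], [0, 6, 14]]
```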

  14. An assembly sequence planning method based on composite algorithm

    OpenAIRE

    Enfu LIU; Bo LIU; Xiaoyang LIU; Yi LI

    2016-01-01

    To solve the combinatorial explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalized reasoning algorithm as the initial population of a genetic algorithm. Then fuzzy assembly knowledge is integrated into the planning process of the genetic algorithm and ant algorithm...

  15. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.

  16. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  17. SUPPLY CHAIN COLLABORATIVE FORECASTING METHODS BASED ON FACTORS

    OpenAIRE

    TONG SHU; SHOU CHEN; SHOUYANG WANG; XIULI CHAO

    2011-01-01

    This paper proposes that sales and demand information are equally important in the supply chain. It discusses the role of factors in chronological forecasting, puts forth supply chain collaborative forecasting methods based on factors, and presents the relevant empirical studies. In the light of historical actual sales data, factors of Spring Festival transportation, shutting down for examinations, and repairs and minor repairs are extracted and quantified in different hierarchies a...

  18. Quantitative data standardization of X-ray based densitometry methods

    Science.gov (United States)

    Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.

    2018-02-01

    In the present work, the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data is proposed. The dependencies between measured bone mineral density (BMD) values and the given values are also presented for different X-ray-based densitometry techniques. The linear graphs shown make it possible to introduce correction factors to increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for the standardization and comparison of measurements.

  19. Geophysics-based method of locating a stationary earth object

    Science.gov (United States)

    Daily, Michael R [Albuquerque, NM; Rohde, Steven B [Corrales, NM; Novak, James L [Albuquerque, NM

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the earth's gravity vector caused by the orbits of the sun and moon. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  20. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    Existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, which may fail to produce a personnel-job match that satisfies each party. Therefore, this paper puts forward a new personnel selection method based on bilateral matching. Starting from the employment principle of "satisfaction", satisfaction evaluation indicator systems for each party are constructed. The multi-objective optimization model is given according to ...

  1. Automatic seamless image mosaic method based on SIFT features

    Science.gov (United States)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature extraction algorithm SIFT is used for feature extraction and matching, achieving sub-pixel precision in feature extraction. Then, the transformation matrix H is computed with an improved PROSAC algorithm; compared with the RANSAC algorithm, the computational efficiency is higher and more inliers are obtained. The matrix H is then refined with the LM algorithm, and the image mosaic is finally completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under different scale and illumination conditions. Experimental results show that the mosaic effect is very good and the algorithm is very stable, making it highly valuable in practice.
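
    A condensed sketch of this pipeline using OpenCV is shown below. cv2.RANSAC is used as a stand-in for the paper's improved PROSAC estimator, and blending is reduced to a simple overwrite; the ratio-test threshold and output size are illustrative assumptions.

```python
import cv2
import numpy as np

def mosaic(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching of SIFT descriptors.
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img1.shape[:2]
    out = cv2.warpPerspective(img2, H, (w * 2, h))
    out[0:h, 0:w] = img1  # crude stitch; a real system blends the seam
    return out
```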

  2. Dim target detection method based on salient graph fusion

    Science.gov (United States)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, it has become a trend to improve dim target detection performance by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from a digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from different spectral images. A top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
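
    The final detection stage can be sketched as follows, assuming the fused salient graph is already available as a 2-D float array; the structuring-element size and threshold factor are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_dim_targets(fused_saliency, struct_size=7, k=3.0):
    # White top-hat removes slowly varying clutter, keeping small bright
    # structures whose extent is below the structuring element size.
    tophat = ndimage.white_tophat(fused_saliency, size=struct_size)
    # Adaptive threshold: mean + k standard deviations of the residual.
    mask = tophat > tophat.mean() + k * tophat.std()
    labeled, n_targets = ndimage.label(mask)
    # Return the centroid of each detected target candidate.
    return ndimage.center_of_mass(mask, labeled, range(1, n_targets + 1))
```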

  3. Apple Shape Classification Method Based on Wavelet Moment

    Directory of Open Access Journals (Sweden)

    Jiangsheng Gui

    2014-09-01

    Full Text Available Shape is not only an important indicator for assessing the grade of an apple, but also an important factor in increasing its value. In order to improve the accuracy of apple shape classification, an approach based on wavelet moments is proposed. The image is first subjected to a normalization process using its regular moments to obtain scale and translation invariance; rotation-invariant wavelet moment features are then extracted from the normalized image, and cluster analysis is used to perform the shape classification. This method performs better than traditional approaches such as Fourier descriptors and Zernike moments because wavelet moments provide both spatial- and frequency-domain windows, which was verified by experiments. With our method, the classification accuracy for normal fruit shape, mild deformity and severe deformity is 86.21%, 85.82% and 90.81%, respectively.

  4. THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING

    Directory of Open Access Journals (Sweden)

    M.T. Adithia

    2015-01-01

    Full Text Available The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, then the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peaks are significant. We conclude that using the Gaussian curve fitting method, the subjective qualification of peak significance can be objectified, so that better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak size, especially width and height, on the significance of a particular peak.
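
    The peak-qualification idea can be sketched with SciPy: fit a Gaussian to the correlation trace around its largest peak and judge significance from the fitted height and width. The acceptance thresholds below are illustrative, not the paper's criteria.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + c

def peak_is_significant(trace, min_height=0.1, max_width=20.0):
    x = np.arange(len(trace))
    i = np.argmax(np.abs(trace))
    # Initial guess: amplitude at the peak, centered there, modest width.
    p0 = [trace[i], float(i), 5.0, float(np.median(trace))]
    try:
        (a, mu, sigma, c), _ = curve_fit(gaussian, x, trace, p0=p0)
    except RuntimeError:          # fit did not converge: no clear peak
        return False
    return abs(a) >= min_height and abs(sigma) <= max_width
```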

  5. Validation of analytical methods based on accuracy profiles.

    Science.gov (United States)

    Feinberg, Max

    2007-07-27

    Validation is a very active field in analytical chemistry, as illustrated by the numerous publications addressing this topic. However, there is some ambiguity in this concept, and the abundant vocabulary often does not help the analytical chemist. This paper presents a new method based on the fitness-for-purpose approach to validation. It consists in building a graphical decision-making tool, called the accuracy profile. Using measurements collected under reproducibility or intermediate precision conditions, it allows computing an interval where a known proportion of future measurements will be located. By comparing this interval to an acceptability interval defined by the result end-user, it is possible to simply decide whether a method is valid or not. The fundamentals of this method are presented, starting from an accepted definition of validation. An example of application illustrates how validation can be experimentally organized and conclusions drawn.

  6. A Micromechanics-Based Method for Multiscale Fatigue Prediction

    Science.gov (United States)

    Moore, John Allan

    An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.

  7. Hybrid Orientation Based Human Limbs Motion Tracking Method.

    Science.gov (United States)

    Glonek, Grzegorz; Wojciechowski, Adam

    2017-12-09

    One of the key technologies that lies behind human-machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Their robustness can be improved by fusing data from different tracking modes. The paper defines a novel approach, orientation-based data fusion, instead of the position-based approach that dominates the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMU). A detailed analysis of their working characteristics allowed a new method to be elaborated that fuses limb orientation data from both devices more precisely and compensates for their imprecisions. The paper presents a series of experiments that verified the method's accuracy. This novel approach allowed the precision of position-based joint tracking, the approach dominating the literature, to be outperformed by up to 18%.
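
    The orientation-level fusion idea can be illustrated for a single joint angle with a complementary filter, assuming both sensors report the same Euler angle: the gyro-integrated IMU path tracks fast motion while the Kinect path removes drift. The weight alpha and the sample data are illustrative assumptions, not values from the paper.

```python
def fuse_orientation(kinect_deg, imu_prev_deg, gyro_dps, dt, alpha=0.98):
    """Complementary filter: high-pass the IMU path, low-pass the Kinect."""
    imu_deg = imu_prev_deg + gyro_dps * dt   # integrate angular rate
    return alpha * imu_deg + (1.0 - alpha) * kinect_deg

# Hypothetical stream of (Kinect angle, gyro rate) samples at ~30 Hz:
angle = 0.0
for kinect_meas, gyro in [(1.0, 40.0), (2.2, 42.0), (3.1, 38.0)]:
    angle = fuse_orientation(kinect_meas, angle, gyro, dt=0.033)
    print(round(angle, 2))
```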

  8. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
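
    A toy sketch in the spirit of the agent-based methods described above: each agent carries one "gene" (cooperate or defect), plays a donation game against random partners, and reproduces in proportion to payoff with a small mutation rate. All payoff and rate values are illustrative; without mechanisms such as reciprocity or spatial structure, such a population typically evolves toward defection.

```python
import random

def evolve(n=200, generations=100, b=3.0, c=1.0, mu=0.01):
    pop = [random.random() < 0.5 for _ in range(n)]  # True = cooperator
    for _ in range(generations):
        payoff = [0.0] * n
        for i in range(n):
            j = random.randrange(n)
            if pop[i]:                 # i pays cost c, partner gains b
                payoff[i] -= c
                payoff[j] += b
        # Fitness-proportional reproduction (shifted to be non-negative).
        base = min(payoff)
        weights = [p - base + 1e-9 for p in payoff]
        parents = random.choices(range(n), weights=weights, k=n)
        pop = [pop[p] ^ (random.random() < mu) for p in parents]
    return sum(pop) / n  # final cooperator fraction

print(evolve())
```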

  9. Reliability Analysis of Aircraft Equipment Based on FMECA Method

    Science.gov (United States)

    Jun, Li; Huibin, Xu

    It is well known that the reliability of aircraft equipment is very important during flight, because the performance of an aircraft product can directly affect flight safety. In order to make the equipment work normally, FMECA is applied to an item of aircraft equipment to analyze its reliability and improve the operational reliability of the product. Through its reliability mathematical model, the average operational time is predicted based on the calculated failure probabilities of all electrical components. Following the FMECA process of reliability theory, all the failure modes, causes, effects and criticalities of the product can be determined completely. By comparing the resulting criticality data, the contents, focus and operating process of maintenance can finally be established. The FMECA-based method for reliability analysis of the equipment and its maintenance performs well. The results indicate that applying the FMECA method allows reliability to be analyzed in detail and improves the operational reliability of the equipment, thereby supplying theoretical bases and concrete maintenance measures for improving the operational reliability of products. FMECA is feasible and effective for improving the operational reliability of all aircraft equipment.

  10. A new incomplete pattern classification method based on evidential reasoning.

    Science.gov (United States)

    Liu, Zhun-Ga; Pan, Quan; Mercier, Gregoire; Dezert, Jean

    2015-04-01

    The classification of incomplete patterns is a very challenging task because an object (incomplete pattern) with different possible estimations of the missing values may yield distinct classification results. The uncertainty (ambiguity) of classification is mainly caused by the lack of information in the missing data. A new prototype-based credal classification (PCC) method is proposed to deal with incomplete patterns, thanks to the belief function framework classically used in the evidential reasoning approach. The class prototypes obtained from training samples are respectively used to estimate the missing values. Typically, in a c-class problem, one has to deal with c prototypes, which yield c estimations of the missing values. The different edited patterns based on each possible estimation are then classified by a standard classifier, and we can get at most c distinct classification results for an incomplete pattern. Because all these distinct classification results are potentially admissible, we propose to combine them all together to obtain the final classification of the incomplete pattern. A new credal combination method is introduced for solving the classification problem, and it is able to characterize the inherent uncertainty due to the possibly conflicting results delivered by different estimations of the missing values. The incomplete patterns that are very difficult to classify in a specific class will be reasonably and automatically committed to some proper meta-classes by the PCC method in order to reduce errors. The effectiveness of the PCC method has been tested through four experiments with artificial and real data sets.
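
    The core PCC idea for one incomplete pattern can be sketched as follows: impute the missing attributes with each class prototype in turn, classify each edited pattern (here with a simple nearest-prototype rule), and inspect the set of labels obtained. A full PCC implementation would combine these results with belief functions and meta-classes rather than returning the raw label set; the data below are hypothetical.

```python
import numpy as np

def pcc_candidate_classes(x, prototypes):
    """x: 1-D pattern with np.nan for missing values.
    prototypes: (c, d) array, one class prototype per row."""
    missing = np.isnan(x)
    labels = set()
    for proto in prototypes:            # one estimation per class
        edited = np.where(missing, proto, x)
        dists = np.linalg.norm(prototypes - edited, axis=1)
        labels.add(int(np.argmin(dists)))
    return labels  # singleton = confident; several labels = ambiguous

protos = np.array([[0.0, 0.0], [5.0, 5.0]])
print(pcc_candidate_classes(np.array([2.4, np.nan]), protos))
```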

  11. A MUSIC-based method for SSVEP signal processing.

    Science.gov (United States)

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because they offer disabled people a way to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC) based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attains good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proves the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
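
    A compact sketch of MUSIC-style frequency scoring for one channel is shown below, assuming a known sampling rate; the snapshot length m and signal-subspace dimension p are tuning assumptions, and the test signal is synthetic.

```python
import numpy as np

def music_spectrum(x, fs, freqs, m=40, p=4):
    # Build the m x N snapshot (Hankel) matrix and its covariance.
    n = len(x) - m + 1
    snapshots = np.column_stack([x[i:i + m] for i in range(n)])
    r = snapshots @ snapshots.T / n
    # Noise subspace = eigenvectors beyond the p strongest components.
    w, v = np.linalg.eigh(r)            # eigenvalues in ascending order
    noise = v[:, : m - p]
    k = np.arange(m)
    scores = []
    for f in freqs:
        steering = np.exp(2j * np.pi * f * k / fs)
        denom = np.linalg.norm(noise.conj().T @ steering) ** 2
        scores.append(1.0 / denom)      # peaks at frequencies present in x
    return np.array(scores)

fs = 250.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
cand = np.array([8.0, 10.0, 12.0, 15.0])   # candidate stimulus frequencies
print(cand[np.argmax(music_spectrum(x, fs, cand))])  # expect 10.0
```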

  12. Column density estimation: Tree-based method implementation

    Science.gov (United States)

    Valdivia, Valeska

    2013-07-01

    Radiative transfer plays a crucial role in several astrophysical processes. For the star formation problem in particular, it is well established that stars form in the densest and coolest regions of molecular clouds, so understanding the interstellar cycle becomes crucial. The physics of dense gas requires knowledge of the UV radiation that regulates the physics and chemistry within the molecular cloud. Numerical modeling requires the calculation of column densities in any direction for each resolution element. In numerical simulations the cost of solving the radiative transfer problem is of the order of N^(5/3), where N is the number of resolution elements. The exact calculation is in general extremely expensive in terms of CPU time for relatively large simulations and impractical in parallel computing. We present our tree-based method for estimating column densities and the attenuation factor for the UV field. The method is inspired by the fact that any distant cell subtends a small angle, and therefore its contribution to the screening will be diluted. The method is suitable for parallel computing, and no communication is needed between different CPUs. It has been implemented in the RAMSES code, a grid-based solver with adaptive mesh refinement (AMR). We present the results of two tests and a discussion of the accuracy and performance of this method. We show that UV screening affects mainly the dense parts of molecular clouds, locally changing the Jeans mass and therefore affecting fragmentation.

  13. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results; detecting and clustering the duplicate images together facilitates users' viewing. A novel method is presented to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared to methods based on comparing hash values of visual words, this method is more robust to visual-feature-level alterations of the images. Experiments demonstrate the effectiveness of the method.

  14. Combining Neural Methods and Knowledge-Based Methods in Accident Management

    Directory of Open Access Journals (Sweden)

    Miki Sirola

    2012-01-01

    Full Text Available Accident management became a popular research issue in the early 1990s. Computerized decision support was studied from many points of view. Early fault detection and information visualization are important key issues in accident management today as well. In this paper we give a brief review of this research history, mostly from the last two decades, including severe accident management. The author's studies are set against the state of the art. The self-organizing map method is combined with other, more or less traditional, methods. Neural methods used together with knowledge-based methods constitute the methodological base for the presented decision support prototypes. Two application examples with modern decision support visualizations are introduced in more detail. A case example of detecting a pressure drift in a boiling water reactor by multivariate methods, including innovative visualizations, is studied in detail, and promising results in early fault detection are achieved. The operators are provided with added information value, enabling them to detect anomalies at an early stage. We provide the plant staff with a methodological tool set which can be combined in various ways depending on the special needs of each case.

  15. Fully Digital Chaotic Differential Equation-based Systems And Methods

    KAUST Repository

    Radwan, Ahmed Gomaa Ahmed

    2012-09-06

    Various embodiments are provided for fully digital chaotic differential equation-based systems and methods. In one embodiment, among others, a digital circuit includes digital state registers and one or more digital logic modules configured to obtain a first value from two or more of the digital state registers; determine a second value based upon the obtained first values and a chaotic differential equation; and provide the second value to set a state of one of the plurality of digital state registers. In another embodiment, a digital circuit includes digital state registers, digital logic modules configured to obtain outputs from a subset of the digital shift registers and to provide the input based upon a chaotic differential equation for setting a state of at least one of the subset of digital shift registers, and a digital clock configured to provide a clock signal for operating the digital shift registers.

  16. Knowledge and method base for shape memory alloys

    Energy Technology Data Exchange (ETDEWEB)

    Welp, E.G.; Breidert, J. [Ruhr-University Bochum, Institute of Engineering Design, 44780 Bochum (Germany)

    2004-05-01

    It is often impossible for design engineers to decide whether it is possible to use shape memory alloys (SMA) for a particular task. In the case of a decision to use SMA for product development, design engineers normally do not know in detail how to proceed in a correct and beneficial way. In order to support design engineers who have no previous knowledge about SMA, and to assist in the transfer of results from basic research to industrial practice, an essential knowledge and method base has been developed. Through carefully conducted literature studies and patent analyses, material and design information was collected. All information is implemented in a computer-supported knowledge and method base that provides design information with a particular focus on the conceptual and embodiment design phases. The knowledge and method base contains solution principles and data about effects, materials and manufacturing, as well as design guidelines and calculation methods for dimensioning and optimization. A browser-based user interface ensures that design engineers have immediate access to the latest version of the knowledge and method base. In order to ensure a user-friendly application, an evaluation with several test users has been carried out. Reactions of design engineers from the industrial sector underline the need for support related to knowledge on SMA. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  17. A Statistic-Based Calibration Method for TIADC System

    Directory of Open Access Journals (Sweden)

    Kuojun Yang

    2015-01-01

    Full Text Available The time-interleaved technique is widely used to increase the sampling rate of analog-to-digital converters (ADCs). However, channel mismatches degrade the performance of time-interleaved ADCs (TIADCs). Therefore, a statistic-based calibration method for TIADCs is proposed in this paper. The average value of the sampling points is utilized to calculate the offset error, and the summation of the sampling points is used to calculate the gain error. After the offset and gain errors are obtained, they are calibrated by offset and gain adjustment elements in the ADC. Timing skew is calibrated by an iterative method, using the product of sampling points of two adjacent sub-channels as a metric for calibration. The proposed method is employed to calibrate mismatches in a four-channel 5 GS/s TIADC system. Simulation results show that the proposed method can estimate mismatches accurately over a wide frequency range; an accurate estimation can be obtained even if the signal-to-noise ratio (SNR) of the input signal is 20 dB. Furthermore, results obtained from a real four-channel 5 GS/s TIADC system demonstrate the effectiveness of the proposed method: the spectral spurs due to mismatches are effectively eliminated after calibration.
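
    The offset and gain estimation can be sketched as below, assuming a zero-mean stationary input sampled by M interleaved sub-ADCs. Using the mean absolute amplitude as the gain statistic is one common variant of the summation idea, not necessarily the paper's exact formula.

```python
import numpy as np

def estimate_offset_gain(samples, m):
    """samples: 1-D interleaved output; m: number of sub-channels.
    Returns per-channel offset and gain estimates (gain relative to ch 0)."""
    chans = [samples[k::m] for k in range(m)]
    # Offset error: the average of each sub-channel's samples.
    offsets = np.array([c.mean() for c in chans])
    # Gain error: ratio of mean absolute amplitude after offset removal.
    amps = np.array([np.abs(c - o).mean() for c, o in zip(chans, offsets)])
    gains = amps / amps[0]
    return offsets, gains

t = np.arange(40000)
x = np.sin(2 * np.pi * 0.01001 * t)
x[1::4] = 1.05 * x[1::4] + 0.02        # inject mismatch into channel 1
off, g = estimate_offset_gain(x, 4)
print(np.round(off, 3), np.round(g, 3))
```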

  18. Selection of construction methods: a knowledge-based approach.

    Science.gov (United States)

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue, and reducing the reliance on individual knowledge as well as the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects.

  19. A Blade Tip Timing Method Based on a Microwave Sensor

    Directory of Open Access Journals (Sweden)

    Jilong Zhang

    2017-05-01

    Full Text Available Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.

  20. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also makes a distinctive contribution to the literature by applying a fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three other fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance, and the relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via Spearman's correlation coefficient. Styrene Maleic Anhydride and Polypropylene are identified as suitable materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically applied to material selection problems. The main advantage of the methodology is its consideration of the vagueness, uncertainty, and fuzziness of the decision-making environment.
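
    For orientation, a crisp PROMETHEE II sketch is given below (the paper's trapezoidal-fuzzy extension is omitted): a linear preference function per criterion, weighted aggregation into pairwise preference indices, and net outranking flows for the final ranking. The score matrix, weights and thresholds are illustrative.

```python
import numpy as np

def promethee_ii(scores, weights, q=0.0, p=1.0):
    """scores: (alternatives x criteria), higher is better; q and p are the
    indifference and strict-preference thresholds of the linear function."""
    n = scores.shape[0]
    pi = np.zeros((n, n))                       # aggregated preference index
    for a in range(n):
        for b in range(n):
            d = scores[a] - scores[b]
            pref = np.clip((d - q) / (p - q), 0.0, 1.0)
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)         # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # entering flow
    return phi_plus - phi_minus                 # net flow: rank descending

mats = np.array([[0.8, 0.6, 0.9],               # rows: candidate materials
                 [0.7, 0.9, 0.5],               # columns: criteria scores
                 [0.5, 0.8, 0.7]])
w = np.array([0.5, 0.3, 0.2])
print(np.argsort(-promethee_ii(mats, w)))       # best material first
```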

  1. Research on Automotive Dynamic Weighing Method Based on Piezoelectric Sensor

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2017-01-01

    Full Text Available In order to effectively measure the dynamic axle loads of vehicles in motion, a dynamic vehicle weighing method based on a piezoelectric sensor was studied. First, the factors influencing measurement accuracy in the dynamic weighing process were analyzed systematically, and the impacts of road irregularities and of dynamic weighing system vibration on the measurement error were discussed. On the basis of this analysis, an arithmetic mean filter was used in the software algorithm to filter out the periodic interference added to the sensor signal, and the most suitable value of n was selected by simulation comparison to obtain the best filtering result. Then, the dynamic axle load calculation model for high-speed vehicles was studied in depth: based on the theoretical response curve of the sensor, a dynamic axle load calculation method based on frequency reconstruction was established from the actual measured sensor signals and their analysis in the time and frequency domains, and the least squares method was used to identify the temperature correction coefficient. A large amount of data covering the usual vehicle weighing range was collected by experiment. The results show that the identification error of the dynamic weighing system is controlled within 10% at a constant temperature, with 60% of the vehicle data errors within 7%. The temperature correction coefficient and the correction formulas for different temperature ranges are well adapted, ensuring that the error at different temperatures is also controlled within 10%, with 70% of the vehicle data errors within 7%. Furthermore, the weighing results remain stable regardless of vehicle speed, which meets the requirements for high-speed dynamic weighing.
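
    The arithmetic mean filter mentioned above can be sketched in a minimal form: each output sample is the average of the last n input samples, and n trades interference suppression against response time. The signals below are synthetic illustrations, not measured data.

```python
import numpy as np

def mean_filter(signal, n):
    """Arithmetic (moving) mean filter of window length n."""
    kernel = np.ones(n) / n
    return np.convolve(signal, kernel, mode="same")

t = np.linspace(0, 1, 500)
axle_pulse = np.exp(-((t - 0.5) ** 2) / 0.005)          # idealized axle load
noisy = axle_pulse + 0.1 * np.sin(2 * np.pi * 80 * t)   # periodic interference
smoothed = mean_filter(noisy, n=25)                      # n chosen by simulation
```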

  2. OWL-based reasoning methods for validating archetypes.

    Science.gov (United States)

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual-model architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits the two levels of the dual-model architecture to be combined in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. A genetic algorithm based method for neutron spectrum unfolding

    International Nuclear Information System (INIS)

    Suman, Vitisha; Sarkar, P.K.

    2013-03-01

    An approach to neutron spectrum unfolding based on a stochastic evolutionary search mechanism, the Genetic Algorithm (GA), is presented. It is tested by unfolding a set of simulated spectra, with the unfolded spectra compared to the output of the standard code FERDOR. The method was then applied to measured pulse-height spectra of neutrons from an AmBe source, as well as of neutrons emitted from Li(p,n) and Ag(C,n) nuclear reactions carried out in an accelerator environment. The unfolded spectra show good agreement with the output of FERDOR in the case of the AmBe and Li(p,n) spectra; in the case of the Ag(C,n) spectra, the GA method results in some fluctuations. The necessity of smoothing the obtained solution is also studied, which finally leads to an approximation yielding an appropriate solution. Several smoothing techniques, such as second-difference smoothing, Monte Carlo averaging, a combination of both, and Gaussian-based smoothing, are also studied. The unfolded results obtained after including the smoothing criteria are in close agreement with the output of the FERDOR code. The present method is also tested on a set of underdetermined problems; the outputs, compared to the unfolded spectra obtained from FERDOR applied to a completely determined problem, show a good match. The distribution of the unfolded spectra is also studied, and uncertainty propagation in the unfolded spectra due to errors present in the measurement as well as in the response function is carried out. The method appears promising for unfolding both completely determined and underdetermined problems, and it also has provisions to carry out uncertainty analysis. (author)
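
    A bare-bones evolutionary sketch of the unfolding idea is shown below: find a non-negative spectrum x that reproduces the measured data m through a response matrix R by minimizing ||Rx - m||. For brevity the sketch uses truncation selection and Gaussian mutation only (no crossover), and the toy response matrix is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_unfold(R, m, pop=60, gens=300, sigma=0.05):
    n = R.shape[1]
    X = np.abs(rng.normal(1.0, 0.5, (pop, n)))        # random initial spectra
    for _ in range(gens):
        fit = -np.linalg.norm(X @ R.T - m, axis=1)    # higher = better
        order = np.argsort(fit)[::-1]
        parents = X[order[: pop // 2]]                # truncation selection
        children = parents + rng.normal(0, sigma, parents.shape)
        X = np.abs(np.vstack([parents, children]))    # keep spectra >= 0
    fit = -np.linalg.norm(X @ R.T - m, axis=1)
    return X[np.argmax(fit)]

R = np.array([[1.0, 0.3], [0.2, 1.0], [0.1, 0.5]])    # toy 3x2 response matrix
true_x = np.array([2.0, 1.0])
m = R @ true_x                                        # noiseless "measurement"
print(np.round(ga_unfold(R, m), 2))                   # close to [2. 1.]
```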

  4. Current trends in virtual high throughput screening using ligand-based and structure-based methods.

    Science.gov (United States)

    Sukumar, Nagamani; Das, Sourav

    2011-12-01

    High throughput in silico methods have offered the tantalizing potential to drastically accelerate the drug discovery process. Yet despite significant efforts expended by academia, national labs and industry over the years, many of these methods have not lived up to their initial promise of reducing the time and costs associated with the drug discovery enterprise, a process that can typically take over a decade and cost hundreds of millions of dollars from conception to final approval and marketing of a drug. Nevertheless structure-based modeling has become a mainstay of computational biology and medicinal chemistry, helping to leverage our knowledge of the biological target and the chemistry of protein-ligand interactions. While ligand-based methods utilize the chemistry of molecules that are known to bind to the biological target, structure-based drug design methods rely on knowledge of the three-dimensional structure of the target, as obtained through crystallographic, spectroscopic or bioinformatics techniques. Here we review recent developments in the methodology and applications of structure-based and ligand-based methods and target-based chemogenomics in Virtual High Throughput Screening (VHTS), highlighting some case studies of recent applications, as well as current research in further development of these methods. The limitations of these approaches will also be discussed, to give the reader an indication of what might be expected in years to come.

  5. A questionnaire based evaluation of teaching methods amongst MBBS students

    Directory of Open Access Journals (Sweden)

    Muneshwar JN, Mirza Shiraz Baig, Zingade US, Khan ST

    2013-01-01

    Full Text Available Medical education and health care in India are facing serious challenges in content and competencies. A heightened focus on the quality of teaching in medical colleges has led to the increased use of student surveys as a means of evaluating teaching. Objectives: A questionnaire-based evaluation of teaching methods was conducted among 200 students (I MBBS & II MBBS) at a Govt Medical College & Hospital, Aurangabad (MS), with an intake capacity of 150 students and established 50 years ago. Methods: 200 medical students of I MBBS & II MBBS voluntarily participated in the study. An objective questionnaire paper on teaching methods was given to the participants to be completed in 1 hour. Results: As a teaching mode, 59% of the students favored group discussion over didactic lectures (14%). Almost 48% felt that didactic lectures fail to create interest and motivation. Around 66% were aware of the learning objectives. Conclusion: Strategies and future plans need to be implemented so that medical education in India is innovative and creates motivation.

  6. A supervoxel-based segmentation method for prostate MR images

    Science.gov (United States)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a "supervoxel"-based method for prostate segmentation, in which the segmentation problem is considered as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, and the geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate, and a 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated against manual segmentation ground truth. Experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.
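
    The labeling energy has the generic graph-cut form below; the notation is assumed here for illustration and is not taken verbatim from the paper.

```latex
% S: set of supervoxels; N: pairs of neighboring supervoxels;
% l_s \in \{prostate, background\}; \lambda weights the smoothness term.
E(L) = \sum_{s \in S} D_s(l_s) + \lambda \sum_{(s,t) \in N} V_{s,t}(l_s, l_t)
```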

  7. An Improved Information Hiding Method Based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Minghai Yao

    2015-01-01

    Full Text Available A novel biometric authentication information hiding method based on sparse representation is proposed to enhance the security of biometric information transmitted over the network. In order to make good use of the abundant information in the cover image, the sparse representation method is adopted to exploit the correlation between the cover and biometric images. The biometric image is thus divided into two parts: a reconstructed image and a residual image. The biometric authentication image cannot be restored from either part alone. The residual image and the sparse representation coefficients are embedded into the cover image. Then, to attract less attention from attackers, a visual attention mechanism is employed to select the embedding location and embedding sequence of the secret information. Finally, a histogram-based reversible watermarking algorithm is used to embed the secret information. To verify the validity of the algorithm, the PolyU multispectral palmprint and CASIA iris databases are used as biometric information. The experimental results show that the proposed method exhibits good security, invisibility, and high capacity.

  8. 2-D tiles declustering method based on virtual devices

    Science.gov (United States)

    Li, Zhongmin; Gao, Lu

    2009-10-01

    Generally, 2-D spatial data are divided into a series of tiles according to a plane grid. To provide a satisfying visual effect, the tiles in the query window containing the viewpoint should be displayed quickly on the screen. To account for the performance differences of real storage devices, we propose a 2-D tile declustering method based on virtual devices. First, we construct a group of virtual devices with identical storage performance and unlimited capacity, and distribute the tiles over M virtual devices according to the query window of the 2-D tiles. Second, we evenly map the tiles in the M virtual devices to M equidistant intervals in [0, 1) using a pseudo-random number generator. Finally, we divide [0, 1) into M intervals according to the tile distribution percentage of each real storage device, and distribute the tiles in each interval to the corresponding real storage device. We have designed and implemented a prototype, GlobeSIGht, and give some related test results. The results show that the average response time for each tile in the query window containing the viewpoint is shorter when using the 2-D tile declustering method based on virtual devices than when using other methods.

  9. Integrated method for the measurement of trace nitrogenous atmospheric bases

    Directory of Open Access Journals (Sweden)

    D. Key

    2011-12-01

    Full Text Available Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  10. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a fine delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in particular; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  11. Hybrid Orientation Based Human Limbs Motion Tracking Method

    Directory of Open Access Journals (Sweden)

    Grzegorz Glonek

    2017-12-01

    Full Text Available One of the key technologies that lies behind human-machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Their robustness can be improved by fusing data from different tracking modes. The paper defines a novel approach, orientation-based data fusion, instead of the position-based approach that dominates the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMU). A detailed analysis of their working characteristics allowed a new method to be elaborated that fuses limb orientation data from both devices more precisely and compensates for their imprecisions. The paper presents a series of experiments that verified the method's accuracy. This novel approach allowed the precision of position-based joint tracking, the approach dominating the literature, to be outperformed by up to 18%.

  12. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
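
    For readers unfamiliar with HS, a minimal sketch of the core loop follows, minimizing a stand-in objective in place of compliance. HMCR, PAR and BW match the parameter names above, but their values, the memory size and the toy problem are our illustrative assumptions.

```python
import random

HMCR, PAR, BW = 0.9, 0.3, 0.05   # assumed parameter values
HMS, ITERS, DIM = 10, 2000, 4    # harmony memory size, iterations, variables

def objective(x):                # stand-in for compliance in the real problem
    return sum(v * v for v in x)

memory = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(HMS)]
for _ in range(ITERS):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:                 # draw from harmony memory
            v = random.choice(memory)[d]
            if random.random() < PAR:              # pitch adjustment step
                v += random.uniform(-BW, BW)
        else:                                      # random re-initialization
            v = random.uniform(-1, 1)
        new.append(v)
    worst = max(range(HMS), key=lambda i: objective(memory[i]))
    if objective(new) < objective(memory[worst]):  # replace the worst harmony
        memory[worst] = new
print(min(objective(m) for m in memory))
```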

  13. Transit Traffic Analysis Zone Delineating Method Based on Thiessen Polygon

    Directory of Open Access Journals (Sweden)

    Shuwei Wang

    2014-04-01

    Full Text Available A green transportation system composed of transit, buses and bicycles could play a significant role in alleviating traffic congestion. However, the inaccuracy of current transit ridership forecasting methods is having a negative impact on the development of urban transit systems. Traffic Analysis Zone (TAZ) delineation is a fundamental and essential step in ridership forecasting, but the existing delineation methods in four-step models have problems reflecting the travel characteristics of urban transit. This paper proposes a Transit Traffic Analysis Zone delineation method as a supplement to traditional TAZs in transit service analysis. The deficiencies of current TAZ delineation methods are analyzed, and the requirements of Transit Traffic Analysis Zones (TTAZs) are summarized. Considering these requirements, Thiessen polygons are introduced into TTAZ delineation. In order to validate its feasibility, Beijing is then taken as an example to delineate TTAZs, followed by a spatial analysis of office buildings within a TTAZ and transit station departure passengers. The analysis results show that TTAZs based on Thiessen polygons can reflect transit travel characteristics and merit in-depth research.
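
    A Thiessen (Voronoi) polygon assigns every location to its nearest generating point, here a transit station. Below is a minimal sketch of that construction with SciPy; the station coordinates are made-up values in a projected metric system, not Beijing data.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical transit station coordinates (projected, metres)
stations = np.array([[0, 0], [2, 0], [1, 2], [3, 2], [0, 3]], dtype=float)
vor = Voronoi(stations)

# Each station's Thiessen cell contains the locations closer to it than to any
# other station, so a demand point's TTAZ is simply its nearest station.
def ttaz_of(point):
    return int(np.argmin(np.linalg.norm(stations - point, axis=1)))

print(ttaz_of(np.array([0.9, 1.5])))  # index of the TTAZ containing the point
print(vor.regions)                    # vertex index lists of the Voronoi cells
```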

  14. The Software Cost Estimation Method Based on Fuzzy Ontology

    Directory of Open Access Journals (Sweden)

    Plecka Przemysław

    2014-12-01

    Full Text Available In the course of the sales process of Enterprise Resource Planning (ERP) systems, it often turns out that the standard system must be extended or modified according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional works. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as the stage of trade talks. During contract negotiations, they expect not only information about the cost of the works, but also about the risk of exceeding this cost or the margin of safety. One method that gives more accurate results at the stage of trade talks is a method based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only information about the expected cost of the works, but also about the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.

  15. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  16. A Method to Measure the Bracelet Based on Feature Energy

    Science.gov (United States)

    Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang

    2017-12-01

    To measure bracelets automatically, a novel method based on feature energy is proposed. Firstly, a morphological method is utilized to preprocess the image, and the contour consisting of a concentric circle is extracted. Then, a feature energy function, which depends on the distances from one pixel to the edge points, is defined, taking into account the geometric properties of the concentric circle. The input image is subsequently transformed into a feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circle is thus located by detecting the maximum on the FEDM; meanwhile, the radii of the concentric circle are determined from the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of a bracelet accurately, simply, directly and robustly compared with existing methods.

  17. Image denoising method based on FPGA in digital video transmission

    Science.gov (United States)

    Xiahou, Yaotao; Wang, Wanping; Huang, Tao

    2017-11-01

    In the image acquisition and transmission chain, images suffer varying degrees of interference depending on the acquisition equipment and methods, and this interference reduces image quality and affects subsequent processing. Therefore, image filtering and enhancement are particularly important. Traditional image denoising algorithms smooth the image while removing the noise, so that image details are lost. In order to improve image quality while preserving detail, this paper proposes an improved filtering algorithm based on edge detection, a Gaussian filter and a median filter. This method not only reduces the noise effectively but also preserves image details relatively well; an FPGA implementation scheme of the filter algorithm is also given.
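
    A minimal software sketch of one plausible reading of this hybrid filter is shown below: median filtering near detected edges (to keep them crisp) and Gaussian smoothing elsewhere. The thresholds, kernel sizes and synthetic test image are our assumptions; the paper's actual FPGA pipeline may differ.

```python
import cv2
import numpy as np

# Synthetic noisy test image with one vertical edge (made-up data)
rng = np.random.default_rng(0)
img = np.zeros((64, 64), np.uint8)
img[:, 32:] = 200
noisy = cv2.add(img, rng.integers(0, 30, img.shape, dtype=np.uint8))

edges = cv2.Canny(noisy, 50, 150)                     # binary edge map
edge_zone = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # widen edge region

gauss = cv2.GaussianBlur(noisy, (5, 5), 1.0)          # smooth the flat regions
median = cv2.medianBlur(noisy, 3)                     # edge-preserving filter

out = np.where(edge_zone > 0, median, gauss).astype(np.uint8)
print(out.shape, out.dtype)
```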

  18. STAR-BASED METHODS FOR PLEIADES HR COMMISSIONING

    Directory of Open Access Journals (Sweden)

    S. Fourest

    2012-07-01

    Full Text Available PLEIADES is the highest-resolution civilian earth observing system ever developed in Europe. This imagery program is conducted by the French National Space Agency, CNES. A first satellite, PLEIADES-HR, launched on December 17th, 2011, has been operating since 2012; a second one should be launched by the end of the year. Each satellite is designed to provide optical 70 cm resolution colored images to civilian and defense users. Thanks to the extreme agility of the satellite, new calibration methods have been tested, based on the observation of celestial bodies, and stars in particular. It has thus been possible to perform MTF measurement, re-focusing, geometrical bias and focal plane assessment, absolute calibration, ghost image localization, micro-vibration measurement, etc. Starting from an overview of the star acquisition process, this paper discusses the methods and presents the results obtained during the first four months of the commissioning phase.

  19. A guided wave dispersion compensation method based on compressed sensing

    Science.gov (United States)

    Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong

    2018-03-01

    Ultrasonic guided waves have emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to their capability to propagate over long distances with minimal loss and their sensitivity to both surface and subsurface defects. The dispersion effect degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided wave processing method for dispersion compensation of both single-mode and multi-mode guided waves is proposed in this work based on compressed sensing, in which a dispersive signal dictionary is built by utilizing the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves. Dispersion-compensated guided waves are obtained by utilizing a non-dispersive signal dictionary together with the results of the sparse decomposition. Numerical simulations and experiments are implemented to verify the effectiveness of the developed method for both single-mode and multi-mode guided waves.
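
    The dictionary idea can be sketched with a toy model: decompose the recorded signal on "dispersive" atoms, then re-synthesize the sparse coefficients on matching "non-dispersive" atoms. Below, dispersion is imitated by a small chirp term and the decomposition uses orthogonal matching pursuit; the atom shapes, delays and amplitudes are our assumptions, not dispersion-curve-derived atoms.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 512
t = np.arange(n)
delays = np.arange(0, n, 8)          # candidate arrival times (toy grid)

def pulse(t0, chirp=0.0):
    """Gaussian-windowed tone burst; a nonzero chirp imitates dispersion."""
    env = np.exp(-0.5 * ((t - t0) / 12.0) ** 2)
    return env * np.cos(0.3 * (t - t0) + chirp * (t - t0) ** 2)

D_disp = np.stack([pulse(d, chirp=2e-4) for d in delays], axis=1)  # dispersive
D_comp = np.stack([pulse(d, chirp=0.0) for d in delays], axis=1)   # compensated

signal = 0.8 * pulse(120, 2e-4) + 0.5 * pulse(304, 2e-4)  # "recorded" waveform
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2,
                                fit_intercept=False).fit(D_disp, signal)
compensated = D_comp @ omp.coef_          # dispersion-free reconstruction
print(np.nonzero(omp.coef_)[0] * 8)       # recovered arrival times: [120 304]
```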

  20. Image based method for aberration measurement of lithographic tools

    Science.gov (United States)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberration of lithographic tools is important, as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to the advantages of lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not lend itself to a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  1. A stereo vision method based on region segmentation

    International Nuclear Information System (INIS)

    Homma, K.; Fu, K.S.

    1984-01-01

    A stereo vision method based on segmented region information is presented in this paper. Regions that have uniform image properties are segmented on stereo images. The shapes of the regions are represented by chain codes. Weighted metrics between the region chain codes are calculated to explore shape dissimilarities. From the minimum-weight transformation of codes, partial shape matching can be found by adjusting the weights for code deletion, insertion and substitution. The partial shape matching gives stereo correspondences on the region contours even when the images contain occlusion, segmentation noise and distortion. Depth interpolation is executed region by region, taking the occlusion into account. A depth image of a real indoor scene is extracted as an application example of this method
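
    To make the weighted code-metric idea concrete, here is a minimal sketch of a weighted edit distance between two 8-direction chain codes. The unit deletion/insertion weights and the angular substitution cost are our assumptions, not the paper's calibrated weights.

```python
def sub_cost(a, b):
    """Substitution cost from the angular difference of 8-direction codes:
    0 for identical codes, 1 for opposite directions (our convention)."""
    d = abs(a - b) % 8
    return min(d, 8 - d) / 4.0

def chain_edit_distance(p, q, w_del=1.0, w_ins=1.0):
    """Classic dynamic-programming edit distance with weighted operations."""
    m, n = len(p), len(q)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * w_del
    for j in range(1, n + 1):
        D[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j] + w_del,                      # deletion
                          D[i][j - 1] + w_ins,                      # insertion
                          D[i - 1][j - 1] + sub_cost(p[i - 1], q[j - 1]))
    return D[m][n]

print(chain_edit_distance([0, 1, 2, 2, 3], [0, 2, 2, 3, 4]))
```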

  2. Effectiveness of Spray-Based Decontamination Methods for ...

    Science.gov (United States)

    The objective of this project was to assess the effectiveness of common spray-based decontamination methods for inactivating Bacillus (B.) atrophaeus (surrogate for B. anthracis) spores and bacteriophage MS2 (surrogate for foot and mouth disease virus [FMDV]) on selected test surfaces (with or without a model agricultural soil load). Relocation of viable viruses or spores from the contaminated coupon surfaces into aerosol or liquid fractions during the decontamination methods was investigated. This project was conducted to support jointly held missions of the U.S. Department of Homeland Security (DHS) and the U.S. Environmental Protection Agency (EPA). Within the EPA, the project supports the mission of EPA’s Homeland Security Research Program (HSRP) by providing relevant information pertinent to the decontamination of contaminated areas resulting from a biological incident.

  3. Time Correlation Calculation Method Based on Delayed Coordinates

    Science.gov (United States)

    Morino, K.; Kobayashi, M. U.; Miyazaki, S.

    2009-06-01

    An approximate calculation method for time correlations by use of delayed coordinates is proposed. For a solvable piecewise linear hyperbolic chaotic map, this approximation is compared with the exact calculation, and exponential convergence in the maximum time delay M is found. By use of this exponential convergence, the exact result for M → ∞ is extrapolated from the approximation for the first few values of M. This extrapolation is shown to be much better than direct numerical simulations based on the definition of the time correlation function. As an application, the irregular dependence of diffusion coefficients, similar to Takagi or Weierstrass functions, is obtained from this approximation, indistinguishable from the exact result at M = 2. The method is also applied to the dissipative Lozi and Hénon maps and the conservative standard map in order to show its wide applicability.

  4. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2015-01-01

    Full Text Available An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (an intra-element field and an auxiliary frame field) are employed. The formulations for all cases are derived from modified variational functionals and the fundamental solutions to a given problem. The generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.

  5. Improved GIS-based Methods for Traffic Noise Impact Assessment

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Bloch, Karsten Sand

    1996-01-01

    When vector-based GIS packages are used for traffic noise impact assessment, the buffer technique is usually employed: 1. For each road segment, buffer zones representing different noise intervals are generated; 2. The buffers from all road segments are smoothed together; and 3. The number of buildings within the buffers is enumerated. This technique provides an inaccurate assessment of noise diffusion since it does not correct for the barrier and reflection effects of buildings on noise. The paper presents the results of a research project in which the traditional noise buffer technique was compared with a new method that includes these corrections. Both methods follow the Common Nordic Noise Calculation Model, although the traditional buffer technique ignores parts of the model. The basis for the work was a digital map of roads and building polygons, combined with a traffic- and road...

  6. Modified risk graph method using fuzzy rule-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Nait-Said, R., E-mail: r_nait_said@hotmail.com [LARPI Laboratory, Safety Department, Institute of Health and Occupational Safety, University of Batna, Road Med El-Hadi Boukhlouf, Batna (Algeria); Zidani, F., E-mail: fati_zidani@lycos.com [LSPIE Laboratory, Electrical Engineering Department, Faculty of Engineering, University of Batna, Road Med El-Hadi Boukhlouf, Batna 05000 (Algeria); Ouzraoui, N., E-mail: ouzraoui@yahoo.fr [LARPI Laboratory, Safety Department, Institute of Health and Occupational Safety, University of Batna, Road Med El-Hadi Boukhlouf, Batna (Algeria)

    2009-05-30

    The risk graph is one of the most popular methods used to determine the safety integrity level (SIL) for safety instrumented functions. However, the conventional risk graph as described in the IEC 61508 standard is subjective and suffers from an interpretation problem of its risk parameters. Thus, it can lead to inconsistent outcomes that may result in conservative SILs. To overcome this difficulty, a modified risk graph using a fuzzy rule-based system is proposed. This novel version of the risk graph uses fuzzy scales to assess risk parameters, and calibration may be made by varying risk parameter values. Furthermore, the outcomes, which are numerical values of the risk reduction factor (the inverse of the probability of failure on demand), can be compared directly with those given by quantitative and semi-quantitative methods such as fault tree analysis (FTA), quantitative risk assessment (QRA) and layers of protection analysis (LOPA).

  7. Comparison of three sensory profiling methods based on consumer perception

    DEFF Research Database (Denmark)

    Reinbach, Helene Christine; Giacalone, Davide; Ribeiro, Letícia Machado

    2014-01-01

    The present study compares three profiling methods based on consumer perceptions in their ability to discriminate and describe eight beers. Consumers (N=135) evaluated eight different beers using Check-All-That-Apply (CATA) methodology in two variations, with (n=63) and without (n=73) rating the intensity of the checked descriptors. With CATA, consumers rated 38 descriptors grouped in 7 overall categories (berries, floral, hoppy, nutty, roasted, spicy/herbal and woody). Additionally, 40 of the consumers evaluated the same samples by partial Napping® followed by Ultra Flash Profiling (UFP). ANOVA … In comparisons, the RV coefficients varied between 0.90 and 0.97, indicating a very high similarity between all three methods. These results show that the precision and reproducibility of sensory information obtained by consumers with CATA is comparable to that of Napping. The choice of methodology for consumer…

  8. A GPU-based mipmapping method for water surface visualization

    Science.gov (United States)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

    Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide expanse of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts their resolution with respect to the distance from the viewpoint and reduces computing cost. Lighting effects are computed based on shadow mapping, Snell's law and the Fresnel term. The render pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes water surfaces with good image quality at real-time frame rates.

  9. Modified risk graph method using fuzzy rule-based approach.

    Science.gov (United States)

    Nait-Said, R; Zidani, F; Ouzraoui, N

    2009-05-30

    The risk graph is one of the most popular methods used to determine the safety integrity level (SIL) for safety instrumented functions. However, the conventional risk graph as described in the IEC 61508 standard is subjective and suffers from an interpretation problem of its risk parameters. Thus, it can lead to inconsistent outcomes that may result in conservative SILs. To overcome this difficulty, a modified risk graph using a fuzzy rule-based system is proposed. This novel version of the risk graph uses fuzzy scales to assess risk parameters, and calibration may be made by varying risk parameter values. Furthermore, the outcomes, which are numerical values of the risk reduction factor (the inverse of the probability of failure on demand), can be compared directly with those given by quantitative and semi-quantitative methods such as fault tree analysis (FTA), quantitative risk assessment (QRA) and layers of protection analysis (LOPA).

  10. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    Science.gov (United States)

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-10-01

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3

  11. Hybrid method based on embedded coupled simulation of vortex particles in grid based solution

    Science.gov (United States)

    Kornev, Nikolai

    2017-09-01

    The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method is based on a combination of a grid-based method and the grid-free computational vortex method (CVM). The large-scale flow structures are simulated on the grid, whereas the concentrated structures are modeled using the CVM. Due to this combination, the advantages of both methods are strengthened and their disadvantages diminished. The procedure for separating small concentrated vortices from the large-scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations taking the two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed. Advantages of the new approach are illustrated on some simple two-dimensional canonical flows containing concentrated vortices.

  12. A New Method Based on TOPSIS and Response Surface Method for MCDM Problems with Interval Numbers

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2015-01-01

    Full Text Available As the preferences of the decision maker (DM) are often ambiguous, we face many multiple criteria decision-making (MCDM) problems with interval numbers in daily life. Though some methods have been applied to solve this sort of problem, they are often complex to comprehend and sometimes difficult to implement, and their calculation processes must be repeated whenever an alternative is added or removed. In view of these weaknesses, this paper presents a new method based on TOPSIS and the response surface method (RSM) for MCDM problems with interval numbers, RSM-TOPSIS-IN for short. The key point of this approach is the application of a deviation degree matrix, which ensures that the DM obtains a simple response surface (RS) model to rank the alternatives. In order to demonstrate the feasibility and effectiveness of the proposed method, three illustrative MCDM problems with interval numbers are analysed: (a) selection of an investment program, (b) selection of the right partner, and (c) assessment of road transport technologies. The comparison of ranking results shows that the RSM-TOPSIS-IN method is in good agreement with those derived by earlier researchers, indicating it is suitable for solving MCDM problems with interval numbers.
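
    As background, a minimal sketch of classical (crisp) TOPSIS follows; the paper's interval numbers and RS model are simplified away here, and the decision matrix, weights and criterion directions are made-up values.

```python
import numpy as np

X = np.array([[250., 16., 12.],     # alternatives (rows) x criteria (columns)
              [200., 16., 8.],
              [300., 32., 16.]])
w = np.array([0.4, 0.3, 0.3])           # criteria weights (assumed)
benefit = np.array([False, True, True]) # first criterion is a cost

R = X / np.linalg.norm(X, axis=0)   # vector normalization
V = R * w                           # weighted normalized decision matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))   # negative ideal
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)

closeness = d_neg / (d_pos + d_neg)  # relative closeness to the ideal solution
print(np.argsort(-closeness))        # ranking, best alternative first
```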

  13. Agent-Based Decentralized Control Method for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Chen, Feixiong; Chen, Minyou

    2016-01-01

    In this paper, an agent-based decentralized control model for islanded microgrids is proposed, which consists of a two-layer control structure. The bottom layer is the electrical distribution microgrid, while the top layer is the communication network composed of agents. An agent is regarded … a systematic method is presented, which can be used to derive a set of control laws for agents from any given communication network, where only local information is needed. Furthermore, it has been seen that the output power supplied by distributed generators satisfies the load demand in the microgrid, when…

  14. Terahertz spectral unmixing based method for identifying gastric cancer

    Science.gov (United States)

    Cao, Yuqi; Huang, Pingjie; Li, Xian; Ge, Weiting; Hou, Dibo; Zhang, Guangxin

    2018-02-01

    At present, many researchers are exploring biological tissue inspection using terahertz time-domain spectroscopy (THz-TDS) techniques. In this study, based on a modified hard modeling factor analysis method, terahertz spectral unmixing was applied to investigate the relationships between the absorption spectra in THz-TDS and certain biomarkers of gastric cancer in order to systematically identify gastric cancer. A probability distribution and box plot were used to extract the distinctive peaks that indicate carcinogenesis, and the corresponding weight distributions were used to discriminate the tissue types. The results of this work indicate that terahertz techniques have the potential to detect different levels of cancer, including benign tumors and polyps.

  15. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

    Full Text Available To solve the combinatorial explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalized reasoning algorithm as the initial population of a genetic algorithm. Then fuzzy assembly knowledge is integrated into the planning process of the genetic algorithm and an ant algorithm to obtain an accurate solution. Finally, an example is given to verify the feasibility of the composite algorithm.

  16. Study on torpedo fuze signal denoising method based on WPT

    Science.gov (United States)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important step in ensuring reliable fuze operation. Given the good characteristics of the wavelet packet transform (WPT) in signal denoising, the paper uses the WPT to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in Matlab. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal with high precision and little distortion, advancing the reliability of torpedo fuze operation.
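
    The abstract does not specify the WPT parameters, so below is a minimal, analogous sketch using discrete wavelet decomposition with soft universal thresholding (PyWavelets) on a synthetic signal. The db4 wavelet, the decomposition level and the threshold rule are our assumptions; pywt.WaveletPacket offers the corresponding packet decomposition.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 12 * t) * (t > 0.4)      # stand-in "target" signal
noisy = clean + 0.4 * rng.standard_normal(t.size)   # background interference

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise level from finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print(float(np.mean((denoised[: clean.size] - clean) ** 2)))  # residual error
```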

  17. Process identification method based on the Z transformation

    International Nuclear Information System (INIS)

    Zwingelstein, G.

    1968-01-01

    A simple method is described for identifying the transfer function of a linear delay-free system, based on the inversion of the Z transform of the transmittance using a computer. It is assumed in this study that the signals at the input and the output of the circuit considered are deterministic. The study includes the theoretical principle of the inversion of the Z transform, details about programming the simulation, and the identification of filters of first to fifth order. (authors) [fr

  18. A synthetic method of solar spectrum based on LED

    Science.gov (United States)

    Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian

    2017-10-01

    A method for synthesizing the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected in Origin software; then the total number of LEDs for each central band was calculated from the transformation relation between brightness and illuminance together with a least-squares curve fit in Matlab. Finally, a spectral curve matching the AM1.5 standard solar spectrum was obtained. The results met the technical indices of solar spectrum matching within ±20% and a solar constant of >0.5.
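
    The fitting step can be sketched as a non-negative least-squares problem: find LED counts so that the summed LED spectra approximate a target spectrum. In the sketch below, the Gaussian LED model, the wavelength grid and the flat target curve are toy assumptions, not AM1.5 data or the paper's LED set.

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(400, 800, 200)          # wavelength grid, nm (assumed)
centers = np.linspace(420, 780, 14)      # 14 LED central wavelengths (assumed)

# Columns of A: normalized spectrum of each LED, modeled as a Gaussian
A = np.stack([np.exp(-0.5 * ((wl - c) / 15.0) ** 2) for c in centers], axis=1)
target = np.ones_like(wl)                # stand-in for the AM1.5 curve

counts, residual = nnls(A, target)       # non-negative LED counts per band
print(np.round(counts, 2), residual)
```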

  19. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.

  20. Transistor-based particle detection systems and methods

    Science.gov (United States)

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  1. An Improved Spectral Background Subtraction Method Based on Wavelet Energy.

    Science.gov (United States)

    Zhao, Fengkui; Wang, Jian; Wang, Aimin

    2016-12-01

    Most spectral background subtraction methods rely on the difference in frequency response of the background compared with the characteristic peaks. It is difficult to extract the background components from the spectrum accurately when the characteristic peaks and the background overlap in the frequency domain. An improved background estimation algorithm based on the iterative wavelet transform (IWT) is presented. The wavelet entropy principle is used to select the best wavelet basis. A criterion based on wavelet energy theory to determine the optimal number of iterations is proposed. The case of energy-dispersive X-ray spectroscopy is discussed for illustration. A simulated spectrum with a known background and an experimental spectrum are tested. The processing results for the simulated spectrum are compared with a non-IWT method, demonstrating the superiority of the IWT. This is of great significance for improving the accuracy of spectral analysis. © The Author(s) 2016.

  2. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the super-large scale, discrete and un-/semi-structured nature of big data has gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical approach to massive data mining, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.

  3. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature spaces …, which first applies a PLS regression to rank the features and then defines the best number of features to retain in the model by an iterative learning phase. The outliers in the dataset, which could inflate the number of selected features, are eliminated by a pre-processing step. To cope … and considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area under the ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.
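
    The ranking step can be illustrated with a minimal sketch: fit a PLS regression and rank features by the magnitude of their coefficients, then retain the strongest ones. The synthetic data and the simple top-k cut below are our assumptions in place of the paper's iterative retention rule.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))       # 100 samples, 50 "texture" features
y = X[:, 3] - 2.0 * X[:, 17] + 0.1 * rng.standard_normal(100)  # toy target

pls = PLSRegression(n_components=2).fit(X, y)
scores = np.abs(pls.coef_).ravel()       # coefficient magnitude as relevance
top_k = np.argsort(-scores)[:5]          # retain the 5 strongest features
print(top_k)                             # should include features 3 and 17
```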

  4. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias [Knoxville, TN; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN; Wang, Xiaoling [San Jose, CA

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  5. A PBOM configuration and management method based on templates

    Science.gov (United States)

    Guo, Kai; Qiao, Lihong; Qie, Yifan

    2018-03-01

    The design of the Process Bill of Materials (PBOM) holds a pivotal position in the process of product development. The requirements of PBOM configuration design and management for complex products are analysed in this paper, including the reuse of configuration procedures and the pressing need to manage huge quantities of product family PBOM data. Based on the analysis, a functional framework for PBOM configuration and management is established. Configuration templates and modules are defined in the framework to support customization and reuse of the configuration process. The configuration process of a detection sensor PBOM is shown as an illustrative case at the end. Rapid and agile PBOM configuration and management can be achieved using the template-based method, which is of vital significance for improving development efficiency for complex products.

  6. Analysis of Hylocereus spp. diversity based on phenetic method

    Science.gov (United States)

    Hamidah, Tsawab, Husnus; Rosmanida

    2017-06-01

    This study aimed to determine the number of distinguishing characters, the most dominant characters in dragon fruit (Hylocereus) classification, and the classification relationships of dragon fruit based on morphological characters. Sampling was performed in Bhakti Alam Agrotourism, Pasuruan. A total of 63 characters were observed, covering stem/branch segments, areolas, flowers, fruits, and seeds. These characters were analyzed using descriptive and phenetic methods. Based on the descriptive results, there were 59 distinguishing characters that affected the classification of five dragon fruit species: white dragon fruit, pink dragon fruit, red dragon fruit, purplish-red dragon fruit, and yellow dragon fruit. Based on the phenetic analysis, a dendrogram was obtained showing the classification relationships. Purplish-red and red dragon fruit were closely related, with a similarity value of 50.7%; this group is referred to as group VI. Pink dragon fruit and group VI were closely related with a similarity value of 43.3%, forming group IV. White dragon fruit and group IV were closely related with a similarity value of 21.5%, forming group II. Meanwhile, yellow dragon fruit and group II were related with a similarity value of 8.5%. Based on principal component analysis, 34 characters strongly influenced the classification. Two of them, stem curvature and the number of fruit bracteole remnants, were the most dominant characters, with a component value of 0.955.

  7. Energy demand forecasting method based on international statistical data

    International Nuclear Information System (INIS)

    Glanc, Z.; Kerner, A.

    1997-01-01

    Poland is in a transition phase from a centrally planned to a market economy; data collected under the former economic conditions do not reflect a market economy. Final energy demand forecasts are based on the assumption that the economic transformation in Poland will gradually lead the Polish economy, its technologies and modes of energy use, to the same conditions as mature market economy countries. The starting point has a significant influence on the future energy demand and supply structure: final energy consumption per capita in 1992 was almost half the average of OECD countries, while energy intensity, based on Purchasing Power Parities (PPP) and referred to GDP, is more than 3 times higher in Poland. A method of final energy demand forecasting based on regression analysis is described in this paper. The input data are: the output of macroeconomic and population growth forecasts; time series for 1970-1992 of OECD countries concerning both macroeconomic characteristics and energy consumption; and the energy balance of Poland for the base year of the forecast horizon. (author). 1 ref., 19 figs, 4 tabs

  8. An Adaptive UKF Based SLAM Method for Unmanned Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2013-01-01

    Full Text Available This work proposes an improved unscented Kalman filter (UKF)-based simultaneous localization and mapping (SLAM) algorithm built on an adaptive unscented Kalman filter (AUKF) with a noise statistic estimator. The algorithm solves the issue that conventional UKF-SLAM algorithms have declining accuracy, with divergence occurring, when the prior noise statistics are unknown and time-varying. The new SLAM algorithm performs online estimation of the statistical parameters of the unknown system noise by introducing a modified Sage-Husa noise statistic estimator. The algorithm also judges whether the filter is divergent and restrains potential filtering divergence using a covariance matching method. This approach reduces state estimation error, effectively improving the navigation accuracy of the SLAM system. Line feature extraction is implemented through a Hough transform based on the ranging sonar model. Test results based on unmanned underwater vehicle (UUV) sea trial data indicate that the proposed AUKF-SLAM algorithm is valid and feasible and provides better accuracy than the standard UKF-SLAM system.

  9. Impact of merging methods on radar based nowcasting of rainfall

    Science.gov (United States)

    Shehu, Bora; Haberlandt, Uwe

    2017-04-01

    Radar data with high spatial and temporal resolution are commonly used to track and predict rainfall patterns that serve as input for hydrological applications. To mitigate the high errors associated with radar, many merging methods employing ground measurements have been developed. However, these methods have been investigated mainly for simulation purposes, while for nowcasting they are limited to the application of mean field bias correction. Therefore, this study investigates the impact of different merging methods on the nowcasting of rainfall volumes with regard to urban floods. Radar bias corrections based on mean fields and on quantile mapping are analyzed individually and are also implemented in conditional merging. Special attention is given to the impact of spatial and temporal filters on the predictive skill of all methods. The relevance of the radar merging techniques is demonstrated by comparing the performance of the rainfall fields forecast by the radar tracking algorithm HyRaTrac for both raw and merged radar data. For this purpose, several extreme events are selected and the respective performance is evaluated by cross-validation of continuous criteria (bias and RMSE) and categorical criteria (POD, FAR and GSS) for lead times up to 2 hours. The study area lies within the 128 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations in 5 min time steps for the period 2000-2012.

  10. A novel method for EMG decomposition based on matched filters

    Directory of Open Access Journals (Sweden)

    Ailton Luiz Dias Siqueira Júnior

    Full Text Available Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system’s performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prosthesis control and biofeedback systems.
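
    The matched-filter-bank idea can be sketched simply: correlate the EMG signal with each unit's MUAP template and report threshold-crossing peaks. The templates, the energy-fraction threshold and the toy signal below are illustrative assumptions, not the paper's filter bank.

```python
import numpy as np

rng = np.random.default_rng(2)
templates = [np.array([0., 1., 3., -2., -1., 0.]),   # assumed MUAP of unit A
             np.array([0., -1., 2., 4., -3., 0.])]   # assumed MUAP of unit B

emg = 0.2 * rng.standard_normal(500)     # background noise
emg[100:106] += templates[0]             # unit A fires at t = 100
emg[300:306] += templates[1]             # unit B fires at t = 300

for unit, tpl in enumerate(templates):
    mf = np.correlate(emg, tpl, mode="valid")   # matched filter output
    thr = 0.7 * np.dot(tpl, tpl)                # fraction of template energy
    peaks = np.nonzero(mf > thr)[0]             # threshold-crossing detections
    print(f"unit {unit}: firings near {peaks}")
```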

  11. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivate-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  12. Accurate position estimation methods based on electrical impedance tomography measurements

    International Nuclear Information System (INIS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T A

    2017-01-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivate-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  13. An Adaptive Filtering Method Based on Crowdsourced Big Trace Data

    Directory of Open Access Journals (Sweden)

    TANG Luliang

    2016-12-01

    Full Text Available Vehicles' GPS traces collected by crowds have become a new kind of big data and are widely applied to mine urban geographic information at low cost, with quick updates and rich information. However, the growing volume of vehicles' GPS traces causes difficulties in data processing, and their low quality adds uncertainty to information mining. Thus, extracting high-quality GPS data from crowdsourced traces at an expected accuracy is a hot topic. In this paper, we propose an efficient partition-and-filter model to filter trajectories to an expected accuracy according to the spatial features of high-precision GPS data and the error characteristics of GPS data. First, the proposed partition-and-filter model partitions a trajectory into sub-trajectories based on constrained distance and angle, and these are chosen as the basic units for the next processing step. Secondly, the proposed method collects high-quality GPS data from each sub-trajectory according to the similarity between GPS tracking points and reference baselines constructed using the random sample consensus algorithm. Experimental results demonstrate that the proposed method can effectively pick out high-quality GPS data from crowdsourced trace data sets at the expected accuracy.
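
    The baseline-filtering idea can be sketched as follows: fit a RANSAC reference baseline to a sub-trajectory and keep the points within an expected-accuracy band. The 15 m threshold, the linear baseline model and the synthetic track are our assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.default_rng(3)
x = np.linspace(0, 1000, 200)                  # along-track coordinate, m
y = 0.3 * x + rng.normal(0, 4, x.size)         # GPS track with ~4 m noise
y[::25] += rng.normal(0, 60, y[::25].size)     # occasional gross GPS errors

ransac = RANSACRegressor(LinearRegression(), residual_threshold=15.0)
ransac.fit(x.reshape(-1, 1), y)                # robust reference baseline
baseline = ransac.predict(x.reshape(-1, 1))

keep = np.abs(y - baseline) <= 15.0            # expected-accuracy band
print(f"kept {keep.sum()} of {keep.size} points")
```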

  14. THE FLUORBOARD A STATISTICALLY BASED DASHBOARD METHOD FOR IMPROVING SAFETY

    International Nuclear Information System (INIS)

    PREVETTE, S.S.

    2005-01-01

    The FluorBoard is a statistically based dashboard method for improving safety. Fluor Hanford has achieved significant safety improvements, including more than an 80% reduction in OSHA cases per 200,000 hours, during its work at the US Department of Energy's Hanford Site in Washington state. The massive project on the former nuclear materials production site is considered one of the largest environmental cleanup projects in the world. Fluor Hanford's safety improvements were achieved by a committed partnering of workers, managers, and statistical methodology. Safety achievements at the site have been due to a systematic approach to safety. This includes excellent cooperation between the field workers, the safety professionals, and management through OSHA Voluntary Protection Program principles. Fluor corporate values are centered around safety, and safety excellence is important for every manager in every project. In addition, Fluor Hanford has utilized a rigorous approach to using its safety statistics, based upon Dr. Shewhart's control charts and Dr. Deming's management and quality methods

  15. An Effective Conversation-Based Botnet Detection Method

    Directory of Open Access Journals (Sweden)

    Ruidong Chen

    2017-01-01

    Full Text Available A botnet is one of the most grievous threats to network security since it can evolve into many attacks, such as Denial-of-Service (DoS), spam, and phishing. However, current detection methods are inefficient at identifying unknown botnets, and the high-speed network environment makes botnet detection more difficult. To solve these problems, we build on advances in packet processing technologies such as the New Application Programming Interface (NAPI) and zero copy, and propose an efficient quasi-real-time intrusion detection system. Our work detects botnets using a supervised machine learning approach in the high-speed network environment. Our contributions are summarized as follows: (1) building a detection framework using PF_RING for sniffing and processing network traces to extract flow features dynamically; (2) using a random forest model to extract promising conversation features; (3) analyzing the performance of different classification algorithms. The proposed method is demonstrated on the well-known CTU13 dataset and on non-malicious applications. The experimental results show that our conversation-based detection approach can identify botnets with higher accuracy and a lower false positive rate than flow-based approaches.
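
    A minimal sketch of conversation-based classification follows: train a random forest on per-conversation features and inspect their importances. The feature names, the toy "beaconing" label rule and the synthetic data are our assumptions; the paper derives its features from CTU13 traffic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([
    rng.exponential(50, n),    # mean packet size per conversation (assumed)
    rng.exponential(2, n),     # mean inter-arrival time (assumed)
    rng.poisson(20, n),        # packets per conversation (assumed)
])
# Toy "botnet-like" label: short, regular, chatty conversations
y = (X[:, 1] < 1.0) & (X[:, 2] > 15)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
print("feature importances:", np.round(clf.feature_importances_, 3))
```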

  16. Quality control and analytical methods for baculovirus-based products.

    Science.gov (United States)

    Roldão, António; Vicente, Tiago; Peixoto, Cristina; Carrondo, Manuel J T; Alves, Paula M

    2011-07-01

    Recombinant baculoviruses (rBac) are used for many different applications, ranging from bio-insecticides to the production of heterologous proteins, high-throughput screening of gene functions, drug delivery, in vitro assembly studies, design of antiviral drugs, bio-weapons, building blocks for electronics, biosensors and chemistry, and recently as a delivery system in gene therapy. Independent of the application, the quality, quantity and purity of rBac-based products are pre-requisites demanded by regulatory authorities for product licensing. To maximize utility, it is necessary to delineate optimized production schemes, either through trial-and-error experimental setups (the "brute force" approach) or through rational design of experiments aided by in silico mathematical models (the Systems Biology approach). For that, one must define all the main steps in the overall process, identify the main bioengineering issues affecting each individual step and implement, if required, accurate analytical methods for product characterization. In this review, current challenges for quality control (QC) technologies for up- and downstream processing of rBac-based products are addressed. In addition, a collection of QC methods for monitoring and control of the production of rBac-derived products is presented, as well as innovative technologies for faster process optimization and more detailed product characterization. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. EP BASED PSO METHOD FOR SOLVING PROFIT BASED MULTI AREA UNIT COMMITMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    K. VENKATESAN

    2015-04-01

    Full Text Available This paper presents a new approach to solve the profit based multi area unit commitment problem (PBMAUCP) using an evolutionary programming based particle swarm optimization (EPPSO) method. The objective of this paper is to maximize the profit of generation companies (GENCOs) while considering system social benefit. The proposed method helps GENCOs to make a decision on how much power and reserve should be sold in markets, and how to schedule generators in order to receive the maximum profit. Joint operation of generation resources can result in significant operational cost savings. Power transfer between the areas through the tie lines depends upon the operating cost of generation at each hour and the tie line transfer limits. The tie line transfer limits were considered as a set of constraints during the optimization process to ensure system security and reliability. The overall algorithm can be implemented on an IBM PC, which can process a fairly large system in a reasonable period of time. A case study of four areas with different load patterns, each containing 7 units (NTPS), and 26 units connected via tie lines has been taken for analysis. Numerical results compare the profit of the evolutionary programming-based particle swarm optimization method (EPPSO) with conventional dynamic programming (DP), evolutionary programming (EP), and particle swarm optimization (PSO) methods. Experimental results show that this evolutionary programming based particle swarm optimization method has the potential to solve the profit based multi area unit commitment problem with less computation time.

  18. Description logic-based methods for auditing frame-based medical terminological systems.

    Science.gov (United States)

    Cornet, Ronald; Abu-Hanna, Ameen

    2005-07-01

    Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. This is not only important for the management of TSs but also for providing their users with confidence about the reliability of their contents. Formal methods have the potential to play an important role in the audit of TSs, although there are few empirical studies to assess the benefits of using these methods. In this paper we propose a method based on description logics (DLs) for the audit of TSs. This method is based on the migration of the medical TS from a frame-based representation to a DL-based one. Our method is characterized by a process in which initially stringent assumptions are made about concept definitions. The assumptions allow the detection of concepts and relations that might comprise a source of logical inconsistency. If the assumptions hold then definitions are to be altered to eliminate the inconsistency, otherwise the assumptions are revised. In order to demonstrate the utility of the approach in a real-world case study we audit a TS in the intensive care domain and discuss decisions pertaining to building DL-based representations. This case study demonstrates that certain types of inconsistencies can indeed be detected by applying the method to a medical terminological system. The added value of the method described in this paper is that it provides a means to evaluate the compliance to a number of common modeling principles in a formal manner. The proposed method reveals potential modeling inconsistencies, helping to audit and (if possible) improve the medical TS. In this way, it contributes to providing confidence in the contents of the terminological system.

  19. Molecular Phylogenetic: Organism Taxonomy Method Based on Evolution History

    Directory of Open Access Journals (Sweden)

    N.L.P Indi Dharmayanti

    2011-03-01

    Full Text Available Phylogenetics is described as the taxonomic classification of organisms based on their evolutionary history, namely their phylogeny, and is a part of systematic science whose objective is to determine the phylogeny of organisms according to their characteristics. Phylogenetic analysis of amino acid and protein sequences has become an important area of sequence analysis. Phylogenetic analysis can be used to follow the rapid change of a species such as a virus. The phylogenetic evolutionary tree is a two-dimensional graphic that shows relationships among organisms, or particularly among their gene sequences. The sequence divergences are referred to as taxa (singular: taxon), defined as phylogenetically distinct units on the tree. The tree consists of outer branches, or leaves, representing taxa, and nodes and branches representing correlations among taxa. When the nucleotide sequences from two different organisms are similar, they are inferred to be descended from a common ancestor. Three methods are commonly used in phylogenetics: (1) maximum parsimony, (2) distance, and (3) maximum likelihood. These methods are generally applied to construct the evolutionary tree, or the best tree for determining sequence variation in a group. Each method is usually used for a different analysis and kind of data.
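    As a concrete illustration of the distance method mentioned above (a sketch independent of the article, with invented taxa and distances), a UPGMA tree can be built from a pairwise distance matrix via average-linkage clustering in SciPy:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, dendrogram
        from scipy.spatial.distance import squareform

        taxa = ["A", "B", "C", "D"]
        # Hypothetical pairwise evolutionary distances (symmetric, zero diagonal)
        D = np.array([[0.0, 0.2, 0.5, 0.6],
                      [0.2, 0.0, 0.4, 0.5],
                      [0.5, 0.4, 0.0, 0.3],
                      [0.6, 0.5, 0.3, 0.0]])

        # UPGMA is average-linkage hierarchical clustering on the distances
        tree = linkage(squareform(D), method="average")
        print(tree)  # each row: merged clusters, merge height, cluster size
        dendrogram(tree, labels=taxa, no_plot=True)  # set no_plot=False to draw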

  20. Wavelet packet-based insufficiency murmurs analysis method

    Science.gov (United States)

    Choi, Samjin; Jiang, Zhongwei

    2007-12-01

    In this paper, an analysis method for aortic and mitral insufficiency murmurs using the wavelet packet technique is proposed for classifying valvular heart defects. Considering the different frequency distributions of normal sounds and insufficiency murmurs in the frequency domain, we used two properties, the relative wavelet energy and the Shannon wavelet entropy, which describe the energy information and the entropy information at the selected frequency band, respectively. Then, signal-to-murmur ratio (SMR) measures, defined as the ratio between the frequency bands for normal heart sounds (15.62-187.50 Hz) and for aortic and mitral insufficiency murmurs (187.50-703.12 Hz), were employed as a classification scheme to identify insufficiency murmurs. The proposed measures were validated by case studies. The 194 heart sound signals, with 48 normal and 146 abnormal cases acquired from 6 healthy volunteers and 30 patients, were tested. The normal sound signals were recorded by applying a self-produced wireless electric stethoscope system to subjects with no history of other heart complications. Insufficiency murmurs were grouped into two valvular heart defects, aortic insufficiency and mitral insufficiency; these murmur subjects had no other coexistent valvular defects. As a result, the proposed insufficiency murmur detection method showed very high classification efficiency. Therefore, the proposed heart sound classification method based on the wavelet packet was validated for the classification of valvular heart defects, especially insufficiency murmurs.
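    A minimal sketch of computing relative wavelet energy and Shannon wavelet entropy with PyWavelets (the sampling rate, wavelet choice, and test signal are assumptions, not the paper's data); an SMR-style measure would then sum the relative energies of the low band and divide by those of the high band:

        import numpy as np
        import pywt

        fs = 4000                      # assumed sampling rate (Hz)
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 100 * t) + 0.3 * np.random.randn(t.size)  # stand-in sound

        wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=5)
        nodes = wp.get_level(5, order="freq")   # terminal nodes, low to high frequency

        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        rel_energy = energies / energies.sum()  # relative wavelet energy per band
        entropy = -np.sum(rel_energy * np.log(rel_energy + 1e-12))  # wavelet entropy

        band_hz = fs / 2 / len(nodes)           # width of each frequency band
        for i, e in enumerate(rel_energy):
            print(f"{i * band_hz:7.1f}-{(i + 1) * band_hz:7.1f} Hz: {e:.3f}")
        print("Shannon wavelet entropy:", entropy)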

  1. The PLR-DTW method for ECG based biometric identification.

    Science.gov (United States)

    Shen, Jun; Bao, Shu-Di; Yang, Li-Cai; Li, Ye

    2011-01-01

    There has been a surge of research on electrocardiogram (ECG) signal based biometrics for person identification. Though most of the existing studies claimed that the ECG signal is unique to an individual and can be a viable biometric, one of the main difficulties for real-world applications of ECG biometrics is the accuracy performance. To address this problem, this study proposes a PLR-DTW method for ECG biometrics, where Piecewise Linear Representation (PLR) is used to keep the important information of an ECG signal segment while reducing the data dimension if necessary, and Dynamic Time Warping (DTW) is used for similarity measurement between two signal segments. The performance evaluation was carried out on three ECG databases, and an existing method using wavelet coefficients, which had been shown to have good accuracy performance, was selected for comparison. The analysis results show that the PLR-DTW method achieves an accuracy rate of 100% for identification, while the one using wavelet coefficients achieved only around 93%.
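    For reference, the core DTW similarity measure can be sketched as a short dynamic program (a generic implementation, not the paper's code; the PLR step is omitted here, since it would simply shorten the inputs first):

        import numpy as np

        def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
            """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    # extend the cheapest of match, insertion, or deletion
                    D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
            return D[n, m]

        # Toy heartbeat-like segments: identical shape, different time warp
        s1 = np.sin(np.linspace(0, 2 * np.pi, 60))
        s2 = np.sin(np.linspace(0, 2 * np.pi, 80))
        print(dtw_distance(s1, s2))   # stays small despite the length mismatch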

  2. Development of Cross-Assembly Phage PCR-Based Methods ...

    Science.gov (United States)

    Technologies that can characterize human fecal pollution in environmental waters offer many advantages over traditional general indicator approaches. However, many human-associated methods cross-react with non-human animal sources and lack suitable sensitivity for fecal source identification applications. The genome of a newly discovered bacteriophage (~97 kbp), the Cross-Assembly phage or “crAssphage”, assembled from a human gut metagenome DNA sequence library, is predicted to be both highly abundant and to occur predominantly in human feces, suggesting that this double-stranded DNA virus may be an ideal human fecal pollution indicator. We report the development of two human-associated crAssphage endpoint PCR methods (crAss056 and crAss064). A shotgun strategy was employed in which 384 candidate primers were designed to cover ~41 kbp of the crAssphage genome deemed favorable for method development based on a series of bioinformatics analyses. Candidate primers were subjected to three rounds of testing to evaluate assay optimization, specificity, limit of detection (LOD95), geographic variability, and performance in environmental water samples. The top two performing candidate primer sets exhibited 100% specificity (n = 70 individual samples from 8 different animal species), >90% sensitivity (n = 10 raw sewage samples from different geographic locations), an LOD95 of 0.01 ng/µL of total DNA per reaction, and successfully detected human fecal pollution in impaired environmental waters.

  3. Method for Solving LASSO Problem Based on Multidimensional Weight

    Directory of Open Access Journals (Sweden)

    Chen ChunRong

    2017-01-01

    Full Text Available In data mining, the analysis of high-dimensional data is a critical but thorny research topic. The LASSO (least absolute shrinkage and selection operator) algorithm avoids the limitations of traditional methods, which generally employ stepwise regression with information criteria to choose the optimal model. The improved LARS (Least Angle Regression) algorithm solves the LASSO effectively. This paper presents an improved LARS algorithm, constructed on the basis of multidimensional weights and intended to solve the problems in LASSO. Specifically, in order to distinguish the impact of each variable in the regression, we separately introduce partial principal component analysis (Part_PCA), independent weight evaluation, and CRITIC into our proposal. These methods change the regression track by weighting each individual variable, optimizing the approach direction as well as the approach to variable selection. As a consequence, our proposed algorithm can yield better results in the promising direction. Furthermore, we illustrate the excellent properties of the LARS algorithm based on multidimensional weights on the Pima Indians Diabetes dataset. The experimental results show an attractive performance improvement resulting from the proposed method, compared with the improved LARS, when they are subjected to the same threshold value.
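    A hedged sketch of LARS-based LASSO fitting with scikit-learn (using sklearn's built-in diabetes dataset as a stand-in for Pima Indians Diabetes, and invented per-variable weights to mimic the weighting idea, not the paper's actual weight construction):

        import numpy as np
        from sklearn.datasets import load_diabetes
        from sklearn.linear_model import LassoLars
        from sklearn.preprocessing import StandardScaler

        # Plain (unweighted) LARS-based LASSO for reference
        X, y = load_diabetes(return_X_y=True)   # stand-in dataset
        X = StandardScaler().fit_transform(X)

        model = LassoLars(alpha=0.1).fit(X, y)
        print("nonzero coefficients:", np.flatnonzero(model.coef_))
        print("coefficients:", np.round(model.coef_, 2))

        # A weighted variant can be emulated by rescaling columns before fitting
        w = np.linspace(0.5, 1.5, X.shape[1])   # hypothetical per-variable weights
        model_w = LassoLars(alpha=0.1).fit(X * w, y)
        print("weighted nonzeros:", np.flatnonzero(model_w.coef_))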

  4. A method for MREIT-based source imaging: simulation studies

    Science.gov (United States)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  5. Emotion Recognition of Speech Signals Based on Filter Methods

    Directory of Open Access Journals (Sweden)

    Narjes Yazdanian

    2016-10-01

    Full Text Available Speech is the basic means of communication among human beings. With the increase of interaction between humans and machines, the need for automatic dialogue that removes the human factor has been considered. The aim of this study was to determine a set of affective features of the speech signal that characterize emotions. A system was designed that includes three main sections: feature extraction, feature selection, and classification. After extraction of useful features such as mel frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPC), perceptual linear prediction coefficients (PLP), formant frequency, zero crossing rate, cepstral coefficients, pitch frequency, mean, jitter, shimmer, energy, minimum, maximum, amplitude, and standard deviation, filter methods such as the Pearson correlation coefficient, t-test, relief, and information gain were used to rank and select the features effective for emotion recognition. The result is then given to the classification system as a subset of the input. In the classification stage, a multiclass support vector machine is used to classify seven types of emotion. According to the results, the relief method, together with the multiclass support vector machine, has the highest classification accuracy, with an emotion recognition rate of 93.94%.
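    A minimal sketch of the extract-then-classify pipeline with librosa and scikit-learn (the synthetic tones and labels below are stand-ins for real labeled utterances, and only a small subset of the listed features is computed):

        import numpy as np
        import librosa
        from sklearn.svm import SVC

        def emotion_features(y: np.ndarray, sr: int) -> np.ndarray:
            """Mean MFCCs plus zero-crossing rate and energy: a small subset
            of the feature set listed above."""
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            zcr = librosa.feature.zero_crossing_rate(y)
            return np.concatenate([mfcc.mean(axis=1),
                                   [zcr.mean(), float(np.mean(y ** 2))]])

        # Synthetic stand-ins for labeled utterances (real work loads wav files)
        sr = 16000
        t = np.linspace(0, 1, sr, endpoint=False)
        utterances = [(np.sin(2 * np.pi * 220 * t), "calm"),
                      (np.sign(np.sin(2 * np.pi * 440 * t)), "angry")]

        X = np.array([emotion_features(y, sr) for y, _ in utterances])
        labels = [lab for _, lab in utterances]

        clf = SVC(kernel="rbf").fit(X, labels)   # multiclass SVM classifier
        print(clf.predict(X))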

  6. Celestial positioning method based on centroid correction of STAR trajectory

    Science.gov (United States)

    Liu, Dianjian; Zhang, Zhili; Zhou, Zhaofa; Zhao, Junyang; Liu, Xianyi

    2017-05-01

    In order to reduce the position deviations between the centroid extracted from the star points and the actual centroid, and to reduce the influence of coarse errors contained in the astronomical information on the astronomical positioning accuracy of digital zenith equipment, an astronomical locating method that corrects the star centroid location is proposed. Based on the star trajectory equation obtained from the imaging model of the star image points on the image plane of the zenith equipment, the centroid trajectory parameters are obtained by the least-squares method from the star point centroid positions extracted from multiple frames of star images shot at the same observation station. These parameters are used to identify the theoretical centroid position and correct the original centroid position, which then enters the astronomical positioning solution. For a simulated star map with Gaussian white noise of distribution N(0, 52), after the centroid correction the accuracy of the astronomical longitude is improved by 0.058″, the latitude by at most 0.176″, and the position by about 5 m. Experimental results show that this method has good applicability and can effectively improve the celestial positioning accuracy of digital zenith equipment.

  7. DNS Tunneling Detection Method Based on Multilabel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Ahmed Almusawi

    2018-01-01

    Full Text Available DNS tunneling is a method used by malicious users who intend to bypass the firewall to send or receive commands and data. This has a significant impact on revealing or releasing classified information. Several researchers have examined the use of machine learning in terms of detecting DNS tunneling. However, these studies have treated the problem of DNS tunneling as a binary classification where the class label is either legitimate or tunnel. In fact, there are different types of DNS tunneling, such as FTP-DNS tunneling, HTTP-DNS tunneling, HTTPS-DNS tunneling, and POP3-DNS tunneling. Therefore, there is a vital demand not only to detect DNS tunneling but also to classify the tunnel type. This study proposes a multilabel support vector machine in order to detect and classify DNS tunneling. The proposed method has been evaluated using a benchmark dataset that contains numerous DNS queries and is compared with a multilabel Bayesian classifier based on the number of correctly classified DNS tunneling instances. Experimental results demonstrate the efficacy of the proposed SVM classification method, which obtains an f-measure of 0.80.
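    A minimal sketch of the multilabel SVM idea with scikit-learn, training one binary SVM per tunnel type (the features and labels below are synthetic placeholders, not the benchmark dataset):

        import numpy as np
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.preprocessing import MultiLabelBinarizer
        from sklearn.svm import LinearSVC

        # Synthetic DNS-query features (e.g., query length, entropy, record type)
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 6))
        # Hypothetical labels: a query may be flagged with tunnel types, or none
        labels = [("ftp",), ("http",), ("https",), ("pop3",), ()] * 100
        Y = MultiLabelBinarizer().fit_transform(labels)

        clf = OneVsRestClassifier(LinearSVC())  # one binary SVM per tunnel type
        clf.fit(X, Y)
        print(clf.predict(X[:3]))               # binary indicator row per sample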

  8. An Automata Based Intrusion Detection Method for Internet of Things

    Directory of Open Access Journals (Sweden)

    Yulong Fu

    2017-01-01

    Full Text Available Internet of Things (IoT) transforms network communication to a Machine-to-Machine (M2M) basis and provides open access and new services to citizens and companies. It extends the border of the Internet and will be developed as one part of the future 5G networks. However, as the resources of IoT front-end devices are constrained, many security mechanisms are hard to implement to protect IoT networks. An intrusion detection system (IDS) is an efficient technique that can be used to detect attackers when cryptography is broken, and it can be used to enforce the security of IoT networks. In this article, we analyze the intrusion detection requirements of IoT networks and then propose a uniform intrusion detection method for the vast heterogeneous IoT networks based on an automata model. The proposed method can automatically detect and report three types of possible IoT attacks: jam-attack, false-attack, and reply-attack. We also design an experiment to verify the proposed IDS method and examine an attack on the RADIUS application.

  9. A seismic fault recognition method based on ant colony optimization

    Science.gov (United States)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important part of seismic interpretation, and although there are many methods for this task, none can recognize faults with sufficient accuracy. For this problem, we propose a new fault recognition method based on ant colony optimization, which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected component labeling algorithm; secondly, the fault locations are decided according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are considered as the nest and food, respectively, for the ant colony optimization algorithm. Besides that, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. After that, the optimal route from nest to food calculated by the ant colony in each block is judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and advancement of the proposed method were validated by the experimental results.
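    A toy sketch of the nest-to-food ant colony search on a small weighted graph (the graph, evaporation rate, and deposit rule are illustrative choices, not the paper's parameters; edges point one way toward the food so each walk terminates):

        import random

        # Tiny weighted graph: node -> {neighbor: distance}; search nest -> food
        graph = {
            "nest": {"a": 2.0, "b": 4.0},
            "a": {"food": 3.0, "b": 1.0},
            "b": {"food": 1.5},
            "food": {},
        }
        pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

        def walk():
            """One ant walks nest -> food, picking edges by pheromone/distance."""
            path, node = [], "nest"
            while node != "food":
                nbrs = list(graph[node])
                weights = [pheromone[(node, v)] / graph[node][v] for v in nbrs]
                nxt = random.choices(nbrs, weights=weights)[0]
                path.append((node, nxt))
                node = nxt
            return path

        for _ in range(200):                       # colony iterations
            path = walk()
            length = sum(graph[u][v] for u, v in path)
            for edge in pheromone:                 # evaporation
                pheromone[edge] *= 0.95
            for edge in path:                      # deposit, inverse to length
                pheromone[edge] += 1.0 / length

        best = max(pheromone, key=pheromone.get)
        print("strongest edge:", best, "pheromone:", round(pheromone[best], 2))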

  10. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    Full Text Available The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM), based on the "Skin-Frame" concept, and to explore the influence of fabrication defects on SLM parts of different sizes. Some standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. These samples were then tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples was also designed, according to the Chinese GB/T 228-2010 standard. These samples, which were made of iron-nickel alloy (IN718), were also processed by SLM and then tested in the universal material testing machine INSTRON 1346 to obtain their tensile strength. Furthermore, a lightweight redesign method was researched. Some common parts, such as a stopper and a connecting plate, were then redesigned using this method. These redesigned parts were fabricated, and some application tests have already been performed. The compression testing results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts were fractured by the universal material testing machine at about 1069.6 MPa. The redesigned parts worked well in application tests, with both the weight and fabrication time of these parts reduced by more than 20%.

  11. Selection of Construction Methods: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Ximena Ferrada

    2013-01-01

    Full Text Available The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects.

  12. Reevaluation of nonvolcanic tremor activity based on the hybrid method

    Science.gov (United States)

    Obara, K.; Tanaka, S.; Maeda, T.

    2009-12-01

    The nonvolcanic tremor in southwest Japan occurs with a regular recurrence interval, accompanying short-term slow slip events and very-low-frequency earthquakes. The source locations of tremors have been determined by the envelope correlation method (ECM), based on measuring the time differences of coherent envelope seismograms observed at neighboring stations (Obara, 2002). However, ECM has a problem in the spatial resolution of the tremor location and in the evaluation of radiation energy. During very active tremor stages, ECM sometimes fails to detect tremor due to complicated envelopes caused by multiple sources. In order to solve this problem, Maeda and Obara (2009) developed a new method based on ECM by adding the spatial distribution of envelope amplitude. The hybrid method enables the detection of not only weak-amplitude tremors but also complicated large-amplitude tremors, and the quantitative evaluation of tremor radiation energy. The depth of the tremor is assumed to be on the plate interface derived from receiver function analysis (Shiomi et al., 2008). The automatic processing of the hybrid method is carried out every minute. Then, in order to remove sources corresponding to regular earthquakes, we calculate the centroid position for closely located tremors and sum the radiation energy for every hour. We relocated the sources of tremors from 2001 to the present. The tremor is distributed in clusters within a belt-like zone along the strike of the subducting Philippine Sea plate. The hybrid method revealed minor tremor activity outside the belt-like zone. In the northeastern Shikoku area, isolated tremor activity is located 30 km away from the belt-like zone and occurs frequently, every one or two months. The most distinctive feature is the double peak alignments of tremor activity at the shallow and deep parts of the tremor zone in western Shikoku, northern Kii and Tokai regions. These alignments may define the width of the tremor active zone. In these regions

  13. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    Directory of Open Access Journals (Sweden)

    Henrik Karstoft

    2012-03-01

    Full Text Available Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves a good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision), and a reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species, and as such, may be used as an integrated part of a wildlife management system.

  14. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation of the colon and rectum to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  15. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering.

    Science.gov (United States)

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research.

  16. Method of estimation of cloud base height using ground-based digital stereophotography

    Science.gov (United States)

    Chulichkov, Alexey I.; Andreev, Maksim S.; Emilenko, Aleksandr S.; Ivanov, Victor A.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2015-11-01

    Errors in the retrieval of atmospheric composition using optical methods (DOAS et al.) are strongly influenced by cloudiness during the measurements. Information on cloud characteristics helps to adjust the optical model of the atmosphere used to interpret the measurements and to reduce the retrieval errors. For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and subsequently processing the obtained sequence of stereo frames to obtain the height of the cloud base. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In the second step, the method of morphological image analysis is used to determine the relative shift of the coordinates of a cloud fragment. The shift is used to estimate the sought cloud base height. The proposed method can be used for automatic processing of stereo data to obtain the cloud base height. The report describes a mathematical model of the stereophotography measurement, poses and solves the problem of adjusting the optical axes of the cameras, and describes the method of searching for cloud fragments in the other frame by morphological image analysis; the problem of estimating the cloud base height is formulated and solved. Theoretical investigation shows that for a stereo base of 60 m and shooting with a resolution of 1600x1200 pixels in a field of view of 60°, the errors do not exceed 10% for cloud base heights up to 4 km. Optimization of camera settings can further improve the accuracy. An available experimental setup with a stereo base of 17 m and a resolution of 640x480 pixels preliminarily confirmed the theoretical accuracy estimates in comparison with a laser rangefinder.
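    A small-angle sketch of how the cloud base height follows from a measured pixel shift, using the stereo base, field of view, and resolution quoted above (the disparity value and zenith-aligned geometry are assumptions for illustration):

        import math

        # Assumed setup, mirroring the figures quoted above
        baseline_m = 60.0          # distance between the two cameras
        fov_deg = 60.0             # horizontal field of view
        width_px = 1600            # horizontal resolution

        def cloud_base_height(disparity_px: float) -> float:
            """Estimate cloud base height from the pixel shift of a cloud
            fragment between the two adjusted (zenith-aligned) frames.

            Small-angle model: parallax angle ~ baseline / height.
            """
            rad_per_px = math.radians(fov_deg) / width_px
            parallax = disparity_px * rad_per_px
            return baseline_m / parallax

        print(round(cloud_base_height(disparity_px=22.0)))  # ~4.2 km for 22 px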

  17. Study of Biometric Identification Method Based on Naked Footprint

    Directory of Open Access Journals (Sweden)

    Raji Rafiu King

    2013-10-01

    Full Text Available The scale of deployment of biometric identity-verification systems has recently seen an enormous increase owing to the need for more secure and reliable ways of identifying people. Footprint identification, which can be defined as the measurement of footprint features for recognizing the identity of a user, has surfaced recently. This study is based on a biometric personal identification method using static footprint features, viz. friction ridge/texture and foot shape/silhouette. To begin with, naked footprints of users are captured; the images then undergo preprocessing followed by the extraction of two features: shape, using the Gradient Vector Flow (GVF) snake model, and minutiae, respectively. Matching is then effected based on these two features, followed by a fusion of the two results for either a reject or accept decision. Our shape matching feature is based on cosine similarity, while the texture one is based on minutiae score matching. The results from our research establish that the naked footprint is a credible biometric feature, as two barefoot impressions of an individual match perfectly, while those of two different persons show a great deal of dissimilarity.

  18. Technology Credit Scoring Based on a Quantification Method

    Directory of Open Access Journals (Sweden)

    Yonghan Ju

    2017-06-01

    Full Text Available Credit scoring models are usually formulated by fitting the probability of loan default as a function of individual evaluation attributes. Typically, these attributes are measured using a Likert-type scale, but are treated as interval scale explanatory variables to predict loan defaults. Existing models also do not distinguish between types of default, although they vary: default by an insolvent company and default by an insolvent debtor. This practice can bias the results. In this paper, we applied Quantification Method II, a categorical version of canonical correlation analysis, to determine the relationship between two sets of categorical variables: a set of default types and a set of evaluation attributes. We distinguished between two types of loan default patterns based on quantification scores. In the first set of quantification scores, we found knowledge management, new technology development, and venture registration as important predictors of default from non-default status. Based on the second quantification score, we found that the technology and profitability factors influence loan defaults due to an insolvent company. Finally, we proposed a credit-risk rating model based on the quantification score.

  19. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method, termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time than its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
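    A minimal sketch of window-based feature extraction over a time series (generic per-window statistics standing in for WTC's similarity scores; the trace and window sizes are invented):

        import numpy as np

        def window_features(x: np.ndarray, width: int, step: int) -> np.ndarray:
            """Slide a window over a series and emit simple per-window features
            (an illustrative stand-in for WTC's similarity-score features)."""
            rows = []
            for start in range(0, len(x) - width + 1, step):
                w = x[start:start + width]
                rows.append([w.mean(), w.std(), w.min(), w.max(), np.ptp(w)])
            return np.array(rows)

        # Toy ECG-like trace
        t = np.linspace(0, 4 * np.pi, 400)
        x = np.sin(t) + 0.1 * np.random.randn(t.size)
        F = window_features(x, width=50, step=25)
        print(F.shape)   # (n_windows, n_features) -- ready for any classifier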

  20. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method.

    Science.gov (United States)

    Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko

    2011-09-30

    Since the shift from a radiographic film-based system to a filmless system, the changes in radiographic examination costs and cost structure have been undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both the filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater value services directly to patients.

  1. Physics-Based Imaging Methods for Terahertz Nondestructive Evaluation Applications

    Science.gov (United States)

    Kniffin, Gabriel Paul

    Lying between the microwave and far infrared (IR) regions, the "terahertz gap" is a relatively unexplored frequency band in the electromagnetic spectrum that exhibits a unique combination of properties from its neighbors. As in IR, many materials have characteristic absorption spectra in the terahertz (THz) band, facilitating the spectroscopic "fingerprinting" of compounds such as drugs and explosives. In addition, non-polar dielectric materials such as clothing, paper, and plastic are transparent to THz, just as they are to microwaves and millimeter waves. These factors, combined with sub-millimeter wavelengths and non-ionizing energy levels, make sensing in the THz band uniquely suited to many NDE applications. In a typical nondestructive test, the objective is to detect a feature of interest within the object and provide an accurate estimate of some geometrical property of the feature. Notable examples include the thickness of a pharmaceutical tablet coating layer or the 3D location, size, and shape of a flaw or defect in an integrated circuit. While the material properties of the object under test are often tightly controlled and are generally known a priori, many objects of interest exhibit irregular surface topographies such as varying degrees of curvature over the extent of their surfaces. Common THz pulsed imaging (TPI) methods originally developed for objects with planar surfaces have been adapted for objects with curved surfaces through the use of mechanical scanning procedures in which measurements are taken at normal incidence over the extent of the surface. While effective, these methods often require expensive robotic arm assemblies, the cost and complexity of which would likely be prohibitive should a large volume of tests need to be carried out on a production line. This work presents a robust and efficient physics-based image processing approach based on the mature field of parabolic equation methods, common to undersea acoustics, seismology

  2. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    International Nuclear Information System (INIS)

    Blanford, E.; Keldrauk, E.; Laufer, M.; Mieler, M.; Wei, J.; Stojadinovic, B.; Peterson, P.F.

    2010-01-01

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using


  4. Diagnosis methods based on noise analysis at Cernavoda NPP, Romania

    International Nuclear Information System (INIS)

    Banica, Constantin; Dobrea, Dumitru

    1999-01-01

    This paper describes a recent noise analysis of the neutronic signals provided by in-core flux detectors (ICFD) and ion chambers (IC). This analysis is part of an on-going program developed for Unit 1 of the Cernavoda NPP, Romania, with the following main objectives: - prediction of detector failures based on pattern recognition; - determination of fast excursions from steady states; - detection of abnormal mechanical vibrations in the reactor core. The introduction briefly presents the reactor and the locations of the ICFDs and ICs. The second section presents the data acquisition systems and their capabilities. The paper continues with a brief presentation of the numerical methods used for the analysis (section 3). The most significant results can be found in section 4, while section 5 concludes with the useful information that can be obtained from the neutronic signals during high-power steady-state operation. (authors)

  5. Arts-based methods for storylistening and storytelling with prisoners

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    Storytelling and issues of listening are exemplified through written texts produced by young prisoners, and through subsequent reflexive narrative interviews I conducted at a Danish prison for youth; in this dialogic view there is no “final word” (Bakhtin 1981). The texts are poetry and prose from a collaborative creative writing workshop, Wordquake in Prison, and were published in an edited book (Frølunde, Søgaard, and Weise 2016). The analysis of texts and reflexive narrative interviews is inspired by arts-based, dialogic, narrative methods on the arts and storytelling (Cole and Knowles 2008; Reiter 2014; Boje 2001), storylistening in narrative medicine (DasGupta 2014), and aesthetic reflection on artistic expression in arts therapy and education. In my analysis, I explore active listening in terms of reflection and revision of stories with the young prisoners. I reflect on the tensions involved in listening in a sensitive prison

  6. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Full Text Available Abstract Background In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration, but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading-edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

  7. Hydrodynamic analysis of human swimming based on VOF method.

    Science.gov (United States)

    Zhan, Jie-Min; Li, Tian-Zeng; Chen, Xue-Bin; Li, Y S

    2017-05-01

    A 3-D numerical model, based on the Navier-Stokes equations and the RNG k-ε turbulence closure, for studying hydrodynamic drag on a swimmer with wave-making resistance taken into account is established. The volume of fluid method is employed to capture the undulation of the free surface. The simulation strategy is evaluated by comparison of the computed results with experimental data; the computed results are in good agreement with data from mannequin towing experiments. The effects of the swimmer's head position and gliding depth on the drag force at different velocities are then investigated. It is found that keeping the head aligned with the body is the optimal posture in streamlined gliding. Wave-making resistance is also significant within 0.3 m depth of the free surface.

  8. A Comparison of Moments-Based Logo Recognition Methods

    Directory of Open Access Journals (Sweden)

    Zili Zhang

    2014-01-01

    Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and among its components the descriptor is vital. The performance of moments as powerful descriptors had not previously been discussed in terms of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we investigate the relations between moments and logos under different transforms, i.e., which moments are suited to logos with different transforms. The open datasets from the University of Maryland are employed. The moment-based comparisons are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.
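    As one concrete moment descriptor relevant to this comparison, Hu moments are invariant to translation, rotation, and scale; a minimal OpenCV sketch (toy synthetic logo, not the Maryland dataset):

        import cv2
        import numpy as np

        def hu_descriptor(img_gray: np.ndarray) -> np.ndarray:
            """Log-scaled Hu moments: invariant to translation, scale and
            rotation, which is why they suit logos under such transforms."""
            m = cv2.moments(img_gray)
            hu = cv2.HuMoments(m).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        logo = np.zeros((128, 128), np.uint8)
        cv2.circle(logo, (64, 64), 30, 255, -1)           # toy "logo"
        rotated = cv2.rotate(logo, cv2.ROTATE_90_CLOCKWISE)

        # Descriptors match closely despite the rotation
        print(np.abs(hu_descriptor(logo) - hu_descriptor(rotated)).max())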

  9. Dealing with defaulting suppliers using behavioral based governance methods

    DEFF Research Database (Denmark)

    Prosman, Ernst Johannes; Scholten, Kirstin; Power, Damien

    2016-01-01

    Purpose: The aim of this paper is to explore factors influencing the effectiveness of buyer-initiated Behavioral Based Governance Methods (BBGMs). The ability of BBGMs to improve supplier performance is assessed considering power imbalances and the resource intensiveness of the BBGM. Agency Theory is used as an interpretive lens. Design/methodology/approach: An explorative multiple case study approach is used to collect qualitative and quantitative data from buying companies involved in 13 BBGMs. Findings: Drawing on Agency Theory, several factors are identified which can explain BBGM effectiveness. Originality/value: This study develops a series of propositions indicating that Agency Theory can provide valuable guidance on how to better understand the effectiveness of BBGMs. Underlying mechanisms are identified that explain how power imbalances do not necessarily make improvement initiatives unsuccessful.

  10. A method for establishing integrity in software-based systems

    International Nuclear Information System (INIS)

    Staple, B.D.; Berg, R.S.; Dalton, L.J.

    1997-01-01

    In this paper, the authors present a digital system requirements specification method that has demonstrated a potential for improving the completeness of requirements while reducing ambiguity. It assists with making proper digital system design decisions, including the defense against specific digital system failure modes. It also helps define the technical rationale for all of the component and interface requirements. This approach is a procedural method that abstracts key features, which are expanded in a partitioning that identifies and characterizes hazards and safety system function requirements. The key system features are subjected to a hierarchy that progressively defines their detailed characteristics and components. This process produces a set of requirements specifications for the system and all of its components. Based on application to nuclear power plants, the approach described here uses two ordered domains: plant safety followed by safety system integrity. Plant safety refers to those systems defined to meet the safety goals for the protection of the public. Safety system integrity refers to systems defined to ensure that the system can meet the safety goals. Within each domain, a systematic process is used to identify hazards and define the corresponding means of defense and mitigation. In both domains, the approach and structure are focused on the completeness of information and on eliminating ambiguities in the generation of safety system requirements that will achieve the plant safety goals.

  11. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and for each air route are computed. By assigning these aircraft counts to the vertices and the edges, a weighted graph model comes into being, and the airspace configuration problem is accordingly described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph-cuts algorithm, an optimal dynamic load-balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load-balancing algorithm, together with the heuristic algorithm, transfers aircraft counts to balance the workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload-balancing condition, but also constraints such as convexity, connectivity, and the minimum distance constraint.
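    A toy sketch of the weighted-graph partitioning step using NetworkX's Kernighan-Lin bisection (vertex names and traffic weights are invented; the paper's own mixed cuts/load-balancing/heuristic algorithm is not reproduced here):

        import networkx as nx
        from networkx.algorithms.community import kernighan_lin_bisection

        # Toy airspace graph: vertices are airports/waypoints, edges are air
        # routes weighted by aircraft counts (all values invented)
        G = nx.Graph()
        G.add_weighted_edges_from([
            ("APT1", "WPT1", 12), ("WPT1", "WPT2", 8), ("WPT2", "APT2", 15),
            ("APT1", "WPT3", 5),  ("WPT3", "WPT2", 9), ("WPT3", "APT2", 4),
        ])

        # Bisection that tries to balance edge weight (workload) across the cut
        part_a, part_b = kernighan_lin_bisection(G, weight="weight", seed=0)
        print("sector A:", sorted(part_a))
        print("sector B:", sorted(part_b))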

  12. Bacteria counting method based on polyaniline/bacteria thin film.

    Science.gov (United States)

    Zhihua, Li; Xuetao, Hu; Jiyong, Shi; Xiaobo, Zou; Xiaowei, Huang; Xucheng, Zhou; Tahir, Haroon Elrasheid; Holmes, Mel; Povey, Malcolm

    2016-07-15

    A simple and rapid bacteria counting method based on a polyaniline (PANI)/bacteria thin film is proposed. Because immobilized bacteria hinder the deposition of PANI on a glassy carbon electrode (GCE), PANI/bacteria thin films containing a decreased amount of PANI are obtained as the bacteria concentration increases. The prepared PANI/bacteria film was characterized with the cyclic voltammetry (CV) technique to provide a quantitative index for the determination of the bacteria count, and electrochemical impedance spectroscopy (EIS) was also performed to further investigate the differences in the PANI/bacteria films. A good linear relationship between the peak currents of the CVs and the log total count of bacteria (Bacillus subtilis) could be established using the equation Y = -30.413X + 272.560 (R^2 = 0.982) over the range of 5.3×10^4 to 5.3×10^8 CFU mL^-1, which also showed acceptable stability, reproducibility and switching ability. The proposed method is feasible for simple and rapid counting of bacteria. Copyright © 2016 Elsevier B.V. All rights reserved.
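    A worked inversion of the reported calibration line, mapping a measured CV peak current back to a bacteria count (the peak-current value and its units are assumptions following the paper's instrument scale):

        def count_from_peak_current(peak_current: float) -> float:
            """Invert the reported calibration Y = -30.413*X + 272.560, where
            Y is the CV peak current and X = log10(total count in CFU/mL)."""
            log_count = (272.560 - peak_current) / 30.413
            return 10 ** log_count

        # Example: a measured peak current of 100 (instrument units)
        print(f"{count_from_peak_current(100.0):.2e} CFU/mL")  # ~4.7e+05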

  13. A drainage data-based calculation method for coalbed permeability

    International Nuclear Information System (INIS)

    Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao

    2013-01-01

    This paper establishes a drainage data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during production. The dimensionless water production index is introduced into the production equation for the water production stage. In the subsequent stage, which produces both gas and water, the gas-water production ratio is introduced to eliminate the effects of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. By derivation, the relationship between permeability and surface cumulative liquid production can be described as a single-variable cubic equation. For ten wells in the southern Qinshui coalbed methane field, the permeability initially declines and then increases. The results show an exponential relationship between permeability and cumulative water production, a linear relationship between permeability and cumulative gas production, and a cubic polynomial relationship between permeability and surface cumulative liquid production. The regression result for permeability against surface cumulative liquid production agrees with the theoretical mathematical relationship. (paper)
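
    A minimal sketch of the cubic regression between permeability and surface cumulative liquid production described above; the data points and units below are synthetic placeholders, not the Qinshui field data.

    ```python
    # Fit the single-variable cubic relation k(Q) with a least-squares
    # polynomial; data are illustrative stand-ins for drainage records.
    import numpy as np

    cum_liquid = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0])   # assumed 10^4 m^3
    perm_md    = np.array([1.8, 1.2, 0.9, 1.0, 1.4, 2.2])   # assumed mD

    coeffs = np.polyfit(cum_liquid, perm_md, deg=3)          # cubic fit
    fit = np.poly1d(coeffs)
    print("k(Q) =", fit)
    print("predicted permeability at Q = 4.0:", fit(4.0))
    ```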

  14. A Mapping Method of SLAM Based on a Look-Up Table

    Science.gov (United States)

    Wang, Z.; Li, J.; Wang, A.; Wang, J.

    2017-09-01

    In recent years several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more information than required, a limitation that stems from processing each key-frame in its entirety. In this paper we present, for the first time, a mapping method for visual SLAM based on a look-up table (LUT) that improves mapping effectively. Because the method extracts features in each cell into which the image is divided, it obtains a camera pose that is more representative of the whole key-frame. The tracking direction of key-frames is obtained by counting the parallax directions of the feature points. The LUT stores, for each tracking direction, the numbers of the cells needed for mapping, which reduces redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe that the LUT's capacity to build maps efficiently makes it a good choice for the community to investigate in scene reconstruction problems.

  15. Task-Based Method for Designing Underactuated Mechanisms

    Directory of Open Access Journals (Sweden)

    Shoichiro Kamada

    2012-03-01

    Full Text Available In this paper we introduce a task-based method for designing underactuated multi-joint prosthetic hands for specific grasping tasks. The designed robotic or prosthetic hands contain fewer independent actuators than joints. We chose a few specific grasping tasks that are frequently repeated in everyday life and analysed the joint motions of the hand during the completion of each task, along with the level of participation of each joint. This information was used for the synthesis of dedicated underactuated mechanisms that can operate in a low-dimensional task coordinate space. We propose two methods for reducing the number of actuators. The kinematic parameters of the synthesized mechanism are determined using a numerical approach. In this study the joint angles of the synthesized hand are considered to be linearly dependent on the displacements of the actuators. We introduce a special error index that allows us to compare the original trajectory with the trajectory performed by the synthesized mechanism, and to select the kinematic parameters of the new kinematic structure so as to reduce the error. The approach allows the design of simple gripper mechanisms with good accuracy for the previously defined tasks.
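
    A minimal sketch of the linear-dependence assumption stated above: fit the map from actuator displacements to joint angles by least squares and score it with a simple RMS trajectory error (the paper's exact error index is not reproduced). Dimensions and data are synthetic placeholders.

    ```python
    # Fit q = A @ u (joint angles from actuator displacements) from recorded
    # task motions, then compute an RMS trajectory error as a stand-in index.
    import numpy as np

    rng = np.random.default_rng(0)
    U = rng.uniform(0.0, 1.0, size=(50, 2))          # 50 samples, 2 actuators
    A_true = np.array([[0.8, 0.1],
                       [0.4, 0.5],
                       [0.1, 0.9]])                  # 3 joints, 2 actuators
    Q = U @ A_true.T + 0.01 * rng.normal(size=(50, 3))   # observed joint angles

    # Least-squares estimate of the linear map (one column per joint).
    A_est, *_ = np.linalg.lstsq(U, Q, rcond=None)
    error_index = np.sqrt(np.mean((U @ A_est - Q) ** 2))
    print("estimated map:\n", A_est.T)
    print("RMS trajectory error:", error_index)
    ```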

  16. A random network based, node attraction facilitated network evolution method

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2016-03-01

    Full Text Available In the present study, I present a method of network evolution that is based on a random network and facilitated by node attraction. The initial network is assumed to be a random network, or a given initial network. When a node is ready to connect, it tends to link to the node already owning the most connections, which coincides with the general rule of node connection (Barabasi and Albert, 1999). In addition, a node may randomly drop a connection, i.e., the addition of connections in the network is accompanied by the pruning of some connections. The dynamics of network evolution are determined by the attraction factor Lamda of the nodes, the probability of node connection, the probability of node disconnection, and the expected initial connectance. The attraction factor, the connection probability, and the disconnection probability vary with time and node, so various dynamics can be achieved by adjusting these parameters; the effects of simplified parameters on network evolution are analyzed. Changes in the attraction factor Lamda reflect various effects of node degree on the connection mechanism, and changing Lamda alone will generate networks ranging from the random to the complex. The present algorithm can therefore be treated as a general model of network evolution. Modeling results show that to generate a power-law type of network, the likelihood of a node attracting connections must depend on a power function of the node's degree with a higher-order power. Matlab codes for a simplified version of the method are provided.
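
    The author provides Matlab code; the following is an illustrative Python rendering of the evolution loop under stated assumptions: attraction is modeled as degree**Lamda preferential attachment (with degree+1 so isolated nodes can still attract), and the parameters are fixed rather than time- and node-varying.

    ```python
    # Simplified node-attraction network evolution: with probability
    # p_connect a random node links to a target drawn in proportion to
    # (degree+1)**lamda; with probability p_disconnect a random edge is pruned.
    import random

    def evolve(n=100, steps=2000, lamda=1.0, p_connect=0.9,
               p_disconnect=0.1, init_edges=100, seed=1):
        random.seed(seed)
        nodes = list(range(n))
        edges = set()
        while len(edges) < init_edges:            # random initial network
            u, v = random.sample(nodes, 2)
            edges.add((min(u, v), max(u, v)))
        degree = [0] * n
        for u, v in edges:
            degree[u] += 1; degree[v] += 1
        for _ in range(steps):
            if random.random() < p_connect:
                u = random.choice(nodes)
                weights = [(degree[w] + 1) ** lamda if w != u else 0.0
                           for w in nodes]
                v = random.choices(nodes, weights=weights)[0]
                e = (min(u, v), max(u, v))
                if e not in edges:
                    edges.add(e); degree[u] += 1; degree[v] += 1
            if edges and random.random() < p_disconnect:   # prune an edge
                u, v = random.choice(sorted(edges))
                edges.discard((u, v)); degree[u] -= 1; degree[v] -= 1
        return edges, degree

    edges, degree = evolve()
    print("edges:", len(edges), "max degree:", max(degree))
    ```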

  17. The Principle-Based Method of Practical Ethics.

    Science.gov (United States)

    Spielthenner, Georg

    2017-09-01

    This paper is about the methodology of doing practical ethics. There is a variety of methods employed in ethics. One of them is the principle-based approach, which has an established place in ethical reasoning. In everyday life, we often judge the rightness and wrongness of actions by their conformity to principles, and the appeal to principles plays a significant role in practical ethics, too. In this paper, I try to provide a better understanding of the nature of principle-based reasoning. To accomplish this, I show in the first section that these principles can be applied to cases in a meaningful and sufficiently precise way. The second section discusses the question of how relevant the application of principles is to the resolution of ethical issues; this depends on their nature. I argue that the principles under consideration in this paper should be interpreted as presumptive principles, and I conclude that although they cannot be expected to bear the weight of definitively resolving ethical problems, these principles can nevertheless play a considerable role in ethical research.

  18. Geomorphometry-based method of landform assessment for geodiversity

    Science.gov (United States)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and key elements of natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world; however, recognition of the problem is still at an early stage, and little progress has been made on its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studying the Earth's surface. Very often, the main limitation in such analyses is the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a processing algorithm for assessing landform geodiversity using geomorphometric parameters, and to compare the final maps with those resulting from the thematic-layers method. The study area consists of two distinctive valleys characterized by diverse landscape units and a complex geological setting: Sucha Woda in the Polish part of the Tatra Mts. and Wrzosowka in the Sudetes Mts. Both valleys are located in National Park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index and Relative Heights. After data integration and the necessary geoinformation analysis, the next step, which carries a certain degree of subjectivity, is score classification of the input maps using an expert system and geostatistical analysis. The crucial point in generating the final geodiversity maps by multi-criteria evaluation (MCE) with the GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by
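
    A minimal sketch of the GIS weighted-sum (MCE) step over the seven scored factor maps; the tiny rasters, 1-5 scores and equal weights below are illustrative assumptions, since the abstract's weighting discussion is cut off above.

    ```python
    # Weighted-sum overlay of scored factor rasters into a geodiversity map.
    import numpy as np

    rng = np.random.default_rng(42)
    shape = (4, 4)                                 # tiny raster for illustration
    factors = {                                    # each map already scored 1..5
        "general_curvature": rng.integers(1, 6, shape),
        "topographic_openness": rng.integers(1, 6, shape),
        "solar_radiation": rng.integers(1, 6, shape),
        "topographic_position_index": rng.integers(1, 6, shape),
        "topographic_wetness_index": rng.integers(1, 6, shape),
        "convergence_index": rng.integers(1, 6, shape),
        "relative_heights": rng.integers(1, 6, shape),
    }
    weights = {name: 1.0 / len(factors) for name in factors}  # equal weights assumed

    geodiversity = sum(weights[n] * factors[n].astype(float) for n in factors)
    print(np.round(geodiversity, 2))
    ```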

  19. Accuracy of structure-based sequence alignment of automatic methods

    Directory of Open Access Journals (Sweden)

    Lee Byungkook

    2007-09-01

    similarity is low, structure-based methods produce better sequence alignments than by using sequence similarities alone. However, current structure-based methods still mis-align 11–19% of the conserved core residues when compared to the human-curated CDD alignments. The alignment quality of each program depends on the protein structural type and similarity, with DaliLite showing the most agreement with CDD on average.

  20. Data Bases in Writing: Method, Practice, and Metaphor.

    Science.gov (United States)

    Schwartz, Helen J.

    1985-01-01

    Points out the need for informed and experienced users of data bases. Discusses the definition of a data base, creating a data base for research, comparison use, and checking written text as a data base. (EL)

  1. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    DEFF Research Database (Denmark)

    Bohlin, Jon; Snipen, Lars; Cloeckaert, Axel

    2010-01-01

    in marker genes on the other. The proteome-based methods found greater similarity between Brucella species and Ochrobactrum species than between species within the genus Agrobacterium. In other words, proteome comparisons of species within the genus Agrobacterium were found to be more diverse...

  2. Iterative support detection-based split Bregman method for wavelet frame-based image inpainting.

    Science.gov (United States)

    He, Liangtian; Wang, Yilun

    2014-12-01

    The wavelet frame systems have been extensively studied due to their capability of sparsely approximating piecewise smooth functions, such as images, and the corresponding wavelet frame-based image restoration models are mostly based on penalizing the l1 norm of the wavelet frame coefficients to enforce sparsity. In this paper, we focus on the image inpainting problem based on the wavelet frame, propose a weighted sparse restoration model, and develop a corresponding efficient algorithm. The new algorithm combines the idea of the iterative support detection method, first proposed by Wang and Yin for sparse signal reconstruction, with the split Bregman method for the wavelet frame l1 model of image inpainting and, more importantly, naturally makes use of the specific multilevel structure of the wavelet frame coefficients to enhance recovery quality. This new algorithm can be considered as the incorporation of prior structural information about the wavelet frame coefficients into the traditional l1 model. Our numerical experiments show that the proposed method is superior to the original split Bregman method for the wavelet frame-based l1 norm image inpainting model, as well as to some typical l_p (0 ≤ p < 1) models on wavelet frame coefficients.

  3. Passive ranging using a filter-based non-imaging method based on oxygen absorption.

    Science.gov (United States)

    Yu, Hao; Liu, Bingqi; Yan, Zongqun; Zhang, Yu

    2017-10-01

    To solve the problem of poor real-time performance caused by a hyperspectral imaging system, and to simplify the design of passive ranging technology based on the oxygen absorption spectrum, a filter-based non-imaging ranging method is proposed. In this method, three bandpass filters are used to obtain the source radiation intensities in the oxygen absorption band near 762 nm and in the band's left and right non-absorbing shoulders, and a photomultiplier tube is used as the non-imaging sensor of the passive ranging system. Range is estimated by comparing the calculated values of the band-average transmission due to oxygen absorption, τ_O2, against the predicted curve of τ_O2 versus range. The method was tested under short-range conditions, and an accuracy of 6.5% was achieved with the designed experimental ranging system at a range of 400 m.
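
    A minimal sketch of the estimate under stated assumptions: the baseline at the band center is linearly interpolated from the two shoulder filters, and range is read off a precomputed τ_O2-versus-range curve by interpolation. All numbers are illustrative, not from the paper.

    ```python
    # Band-average O2 transmission from three filter intensities, then range
    # lookup on a calibration curve (illustrative values throughout).
    import numpy as np

    def band_transmission(i_left, i_center, i_right):
        """Measured in-band intensity over the interpolated shoulder baseline."""
        baseline = 0.5 * (i_left + i_right)     # linear baseline at band center
        return i_center / baseline

    # Precomputed tau_O2(range) curve; monotonically decreasing with range.
    ranges_m  = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
    tau_curve = np.array([0.93, 0.87, 0.82, 0.78, 0.74])

    tau = band_transmission(i_left=1.00, i_center=0.80, i_right=0.98)
    # np.interp needs increasing x, so interpolate on the reversed curve.
    estimated_range = np.interp(tau, tau_curve[::-1], ranges_m[::-1])
    print(f"tau_O2 = {tau:.3f}, estimated range = {estimated_range:.0f} m")
    ```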

  4. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    Science.gov (United States)

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  5. An Intelligent Fleet Condition-Based Maintenance Decision Making Method Based on Multi-Agent

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2012-01-01

    Full Text Available To meet the demand for online condition-based maintenance decision making in a mission-oriented fleet, an intelligent maintenance decision making method based on multi-agent systems and heuristic rules is proposed. The process of condition-based maintenance within an aircraft fleet (each aircraft containing one or more Line Replaceable Modules), based on multiple maintenance thresholds, is analyzed. The process is then abstracted into a multi-agent model: a two-layer model structure containing host negotiation and independent negotiation is established, and heuristic rules for global and local maintenance decision making are proposed. Based on the Contract Net Protocol and these heuristic rules, a maintenance decision making algorithm is put forward. Finally, a fleet consisting of 10 aircraft on a 3-wave continuous mission is used to verify the method. Simulation results indicate that this method can improve the availability of the fleet, meet mission demands, rationalize the utilization of support resources and provide support for online maintenance decision making in a mission-oriented fleet.

  6. Social network extraction based on Web: 1. Related superficial methods

    Science.gov (United States)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    The nature of a subject often shapes the methods used to resolve issues related to it. This is true of methods for extracting social networks from the Web, which involve differently structured data types. This paper discusses several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. We derive complexity inequalities between the methods and their computations. We find that different results from the same tools mark the difference between the more complex and the simpler approaches: extraction of a social network based on co-occurrence is more complex than extraction based on occurrences alone.

  7. Rough Precipitation Forecasts based on Analogue Method: an Operational System

    Science.gov (United States)

    Raffa, Mario; Mercogliano, Paola; Lacressonnière, Gwendoline; Guillaume, Bruno; Deandreis, Céline; Castanier, Pierre

    2017-04-01

    In the framework of the Climate KIC partnership, the Wat-Ener-Cast (WEC) project, coordinated by ARIA Technologies, was funded with the goal of adapting water and energy operations, through tailored weather-related forecasts, to increased weather fluctuation and to climate change. The WEC products provide high-quality forecasts suited to risk-and-opportunity assessment dashboards for operational water and energy decisions, addressing the needs of sewage/water distribution operators, energy transmission and distribution system operators, energy managers and wind energy producers. A common "energy-water" web platform, able to interface with the newest smart water-energy IT networks, has been developed. The main benefit of sharing resources through the WEC platform is the possibility of optimizing the cost and procedures of safety and maintenance teams in case of alerts and, ultimately, of reducing overflows. Among the different services implemented on the WEC platform, ARIA has developed a product to support sewage/water distribution operators, based on a graduated forecast information system (at 48-hour/24-hour/12-hour horizons) for heavy precipitation. For each deadline a different type of operation is implemented: 1) at the 48-hour horizon, organisation of the on-call team; 2) at the 24-hour horizon, update and confirmation of the on-call team; 3) at the 12-hour horizon, securing human resources and equipment (emptying storage basins, pipe manipulations, ...). More specifically, CMCC has provided a statistical downscaling method to produce a "rough" daily local precipitation forecast at 24 hours, especially when high precipitation values are expected. This statistical technique is an adaptation of the analogue method based on ECMWF data (analyses and 24-hour forecasts). One of the main advantages of this technique is its lower computational burden and cost compared to running a Numerical Weather Prediction (NWP) model, even if, of course, it provides only this
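
    A minimal sketch of an analogue forecast of the kind described above: find the k historical days whose large-scale predictor fields are closest to the forecast day and average their observed local precipitation. The archive, predictor choice and distance metric are illustrative assumptions, not CMCC's operational configuration.

    ```python
    # k-nearest-analogue precipitation estimate over a synthetic archive.
    import numpy as np

    rng = np.random.default_rng(7)
    archive_predictors = rng.normal(size=(1000, 5))   # 1000 past days, 5 predictors
    archive_precip = rng.gamma(2.0, 2.0, size=1000)   # observed rain (mm/day)

    def analogue_forecast(predictors_today, k=10):
        dist = np.linalg.norm(archive_predictors - predictors_today, axis=1)
        nearest = np.argsort(dist)[:k]                # indices of the k analogues
        return archive_precip[nearest].mean()

    today = rng.normal(size=5)                        # stand-in forecast fields
    print(f"analogue precipitation estimate: {analogue_forecast(today):.1f} mm/day")
    ```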

  8. Method of Heating a Foam-Based Catalyst Bed

    Science.gov (United States)

    Fortini, Arthur J.; Williams, Brian E.; McNeal, Shawn R.

    2009-01-01

    A method of heating a foam-based catalyst bed has been developed using silicon carbide as the catalyst support, owing to its readily accessible, high surface area that is oxidation-resistant and electrically conductive. The foam support may be resistively heated by passing an electric current through it. This allows the catalyst bed to be heated directly, requiring less power to reach the desired temperature more quickly. Designed for heterogeneous catalysis, the method can be used by the petrochemical, chemical processing, and power-generating industries, as well as in automotive catalytic converters. Catalyst beds must be heated to a light-off temperature before they catalyze the desired reactions. Typically this is done by heating the assembly that contains the catalyst bed, so the bed is heated indirectly, much of the power is wasted or lost to the surrounding environment, and excessive power is required. With the electrically heated catalyst bed, virtually all of the power is used to heat the support, and only a small fraction is lost to the surroundings. Although the light-off temperature of most catalysts is only a few hundred degrees Celsius, the electrically heated foam is able to reach temperatures of 1,200 °C; lower temperatures are achievable by supplying less electrical power to the foam. Furthermore, because of the foam's open-cell structure, the catalyst can be applied either directly to the foam ligaments or in the form of a catalyst-containing washcoat. This innovation would be very useful for heterogeneous catalysis where elevated temperatures are needed to drive the reaction.

  9. Scientific and methodical bases of transformation of agricultural land use

    Directory of Open Access Journals (Sweden)

    Kolisnyk H.

    2016-05-01

    Full Text Available Scientific and methodical bases for the transformation of agricultural land use are formulated, and the essence of the concepts of "transformation", "transformation of agricultural land use" and "evaluation of transformation" is determined. Agricultural land use should be understood as a natural-territorial complex in which an economic agent interacts with the environment, with the result that a certain social product (agricultural produce) is formed; the transformation of agricultural land use is the process of transforming the land use of the present social and economic system into a new land use characterized by a change from the previous characteristics and properties to new ones. At the same time, an important feature of the transformation is the withering away of the previous system of economic relations. To determine whether a transformation of agricultural land use has been effective, it is necessary to conduct an environmental and economic evaluation of it, where ecological and economic evaluation means the evaluation of changes in the qualitative and quantitative characteristics of agricultural land use resulting from the type of social and economic relations and the forms of land ownership. The study suggests a structural and logical scheme for the transformation of agricultural land use based on the concepts and statements formulated. Prerequisites for the transformation of agricultural land use based on private ownership of land are determined; chief among them are the change in the type of social and economic relations, the change in the forms of ownership of land and means of production, and the poor state of the environment. As a result of the transformation of agricultural land use under market conditions, a transition occurred from the land use of state and collective farms to agricultural formations of a market type based on different forms of ownership. The study also highlights the stages of transformation and a classification of the types of transformation of agricultural

  10. Efficient parsimony-based methods for phylogenetic network reconstruction.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-15

    Phylogenies--the evolutionary histories of groups of organisms--play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species, while HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference; roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets by Nakhleh et al. (2005) demonstrated the criterion's applicability to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed-parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (the rbcL gene in bacteria) and obtain very promising results.

  11. A prediction method based on grey system theory in equipment condition based maintenance

    International Nuclear Information System (INIS)

    Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming

    2007-01-01

    Grey prediction is a modeling method based on historical or present, known or indefinite information; it can be used to forecast the development of the eigenvalues of a targeted equipment system and can establish a model with limited information. In this paper, the postulates of grey system theory, including grey generating, the sorts of grey generating and the grey forecasting model, are introduced first. The concrete application process, which includes grey prediction modeling, grey prediction, error calculation, and the equal-dimension and new-information approach, is introduced second. The so-called 'Equal Dimension and New Information' (EDNI) technology of grey system theory is adopted in an application case, aiming at improving the accuracy of prediction without increasing the amount of calculation, by replacing old data with new ones. The proposed method offers an effective new way of solving the problem of growing eigenvalue data series at equal spacing, with short time intervals and real-time prediction. The method was verified by predicting the vibration of an induced-draft fan of a boiler at the Yantai Power Station in China, and the results show that it is simple and highly accurate; it is therefore useful and significant for control and manageability in safe production. (authors)
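
    A minimal GM(1,1) sketch with the EDNI window update described above (drop the oldest value when a new one arrives, keeping the series length fixed). The vibration-like data are illustrative placeholders.

    ```python
    # GM(1,1) grey forecast: accumulate (AGO), fit the grey coefficients by
    # least squares, evaluate the time-response function, then difference back.
    import numpy as np

    def gm11_forecast(x0, steps=1):
        """Fit GM(1,1) on series x0 and forecast `steps` values ahead."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey coefficients
        k = np.arange(len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time-response
        x0_hat = np.diff(x1_hat, prepend=x1_hat[0])       # inverse AGO
        x0_hat[0] = x0[0]
        return x0_hat[-steps:]

    # EDNI: append the newest observation, drop the oldest, re-fit each time.
    window = [2.87, 3.28, 3.34, 3.73, 3.86, 4.02]         # illustrative data
    window = window[1:] + [4.15]                          # EDNI window update
    print("next-step forecast:", gm11_forecast(window, steps=1))
    ```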

  12. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in pipeline blockage and consequently large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative flow assurance method is to heat the pipeline. This concept, known as an active heating system, aims at keeping the temperature of the produced fluid above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach to the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example illustrates how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature for the formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermophysical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
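
    A minimal scalar Kalman-filter sketch in the spirit of the Bayesian state estimation described above: track one cross-section temperature from noisy external-surface measurements. The random-walk model and noise levels are assumptions; the paper couples its Bayesian filter to a finite-volume thermal model, which is not reproduced here.

    ```python
    # Scalar Kalman filter: predict with a random-walk model, then update
    # with each noisy surface-temperature measurement.
    import numpy as np

    rng = np.random.default_rng(3)
    true_temp = 25.0
    q, r = 0.05, 0.5 ** 2        # process and measurement noise variances
    x, p = 20.0, 4.0             # initial state estimate and its variance

    for step in range(30):
        true_temp += rng.normal(0.0, np.sqrt(q))     # slowly drifting truth
        z = true_temp + rng.normal(0.0, np.sqrt(r))  # noisy surface measurement
        p += q                                       # predict (random walk)
        k_gain = p / (p + r)                         # Kalman gain
        x += k_gain * (z - x)                        # update state estimate
        p *= (1.0 - k_gain)                          # update variance

    print(f"estimated temperature: {x:.2f} C (truth {true_temp:.2f} C)")
    ```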

  13. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, with details on the aggregate and test methods employed, along with agency and co...

  14. Gene-based segregation method for identifying rare variants in family-based sequencing studies.

    Science.gov (United States)

    Qiao, Dandi; Lange, Christoph; Laird, Nan M; Won, Sungho; Hersh, Craig P; Morrow, Jarrett; Hobbs, Brian D; Lutz, Sharon M; Ruczinski, Ingo; Beaty, Terri H; Silverman, Edwin K; Cho, Michael H

    2017-05-01

    Whole-exome sequencing using family data has identified rare coding variants in Mendelian diseases or complex diseases with Mendelian subtypes, using filters based on variant novelty, functionality, and segregation with the phenotype within families. However, formal statistical approaches are limited. We propose a gene-based segregation test (GESE) that quantifies the uncertainty of the filtering approach. It is constructed using the probability of segregation events under the null hypothesis of Mendelian transmission. This test takes into account different degrees of relatedness in families, the number of functional rare variants in the gene, and their minor allele frequencies in the corresponding population. In addition, a weighted version of this test allows incorporating additional subject phenotypes to improve statistical power. We show via simulations that the GESE and weighted GESE tests maintain appropriate type I error rate, and have greater power than several commonly used region-based methods. We apply our method to whole-exome sequencing data from 49 extended pedigrees with severe, early-onset chronic obstructive pulmonary disease (COPD) in the Boston Early-Onset COPD study (BEOCOPD) and identify several promising candidate genes. Our proposed methods show great potential for identifying rare coding variants of large effect and high penetrance for family-based sequencing data. The proposed tests are implemented in an R package that is available on CRAN (https://cran.r-project.org/web/packages/GESE/). © 2017 WILEY PERIODICALS, INC.

  15. Evaluation of medical students of teacher-based and student-based teaching methods in Infectious diseases course.

    Science.gov (United States)

    Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F

    2015-01-01

    Introduction: In recent years, medical education has changed dramatically and many medical schools in the world have been trying to expand modern training methods. The purpose of this research is to appraise medical students' views of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan University of Medical Sciences. Methods: In this interventional study, a total of 52 medical students taking the Infectious diseases course were included. About 50% of the course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). The students' satisfaction with these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed with SPSS 19 using paired t-tests. Results: Student satisfaction with the student-based teaching method (problem-based learning) was more positive than with the teacher-based teaching method (lecture). The mean score of students under the teacher-based teaching method was 12.03 (SD=4.08) and under the student-based teaching method it was 15.50 (SD=4.26), a significant difference (p<0.001). Conclusion: Using the student-based teaching method (problem-based learning) rather than the teacher-based teaching method (lecture) to present the Infectious diseases course led to student satisfaction and provided additional learning opportunities.

  16. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN]; Reed, Joel W [Knoxville, TN]; Elmore, Mark T [Oak Ridge, TN]; Treadwell, Jim N [Louisville, TN]

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
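
    A minimal sketch of the routing decision at the heart of this agent hierarchy: compare a new document vector with each cluster agent's centroid and either forward the document to the most similar agent or spawn a new one. The cosine-similarity measure, threshold and vectors are illustrative assumptions, not the patent's exact evaluation.

    ```python
    # Route a document vector to the best-matching cluster agent, or create
    # a new agent when no centroid is similar enough.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    cluster_centroids = {                     # one centroid per cluster agent
        "agent-1": np.array([0.9, 0.1, 0.0]),
        "agent-2": np.array([0.1, 0.8, 0.3]),
    }
    THRESHOLD = 0.75                          # below this, spawn a new agent

    def route(doc_vector):
        best_agent, best_sim = max(
            ((name, cosine(doc_vector, c)) for name, c in cluster_centroids.items()),
            key=lambda pair: pair[1])
        if best_sim >= THRESHOLD:
            return best_agent, best_sim
        new_name = f"agent-{len(cluster_centroids) + 1}"
        cluster_centroids[new_name] = np.asarray(doc_vector, dtype=float)
        return new_name, best_sim

    print(route(np.array([0.85, 0.15, 0.05])))   # routed to agent-1
    print(route(np.array([0.00, 0.10, 0.95])))   # spawns a new agent
    ```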

  17. Residual Stress Analysis Based on Acoustic and Optical Methods

    Directory of Open Access Journals (Sweden)

    Sanichiro Yoshida

    2016-02-01

    Full Text Available Co-application of acoustoelasticity and optical interferometry to residual stress analysis is discussed. The underlying idea is to combine the advantages of both methods. Acoustoelasticity is capable of evaluating a residual stress absolutely, but it is a single-point measurement. Optical interferometry is able to measure deformation, yielding two-dimensional, full-field data, but it is not suitable for absolute evaluation of residual stresses. By theoretically relating the deformation data to residual stresses, and calibrating them with the absolute residual stress evaluated at a reference point, it is possible to measure residual stresses quantitatively, nondestructively and two-dimensionally. The feasibility of the idea has been tested with a butt-jointed dissimilar-plate specimen: a steel plate 18.5 mm wide, 50 mm long and 3.37 mm thick is braze-jointed to a cemented carbide plate of the same dimensions along the 18.5 mm side. Acoustoelasticity evaluates the elastic modulus at reference points via acoustic velocity measurement. A tensile load is applied to the specimen at a constant pulling rate in a stress range substantially lower than the yield stress, and optical interferometry measures the resulting acceleration field. Based on the theory of harmonic oscillation, the acceleration field is qualitatively correlated to compressive and tensile residual stresses. The acoustic and optical results show reasonable agreement for the compressive and tensile residual stresses, indicating the feasibility of the idea.

  18. Opinion data mining based on DNA method and ORA software

    Science.gov (United States)

    Tian, Ru-Ya; Wu, Lei; Liang, Xiao-He; Zhang, Xue-Fu

    2018-01-01

    Public opinion, especially online public opinion, is a critical subject when it comes to mining its characteristics, because it can form directly and intensely in a short time and may lead to the outbreak of online group events and the formation of an online public-opinion crisis. This may become the pushing hand of a public crisis event, or even have negative social impacts, which brings great challenges to government management. Data from the mass media, which reveal implicit, previously unknown, and potentially valuable information, can effectively help us understand the laws governing the evolution of public opinion and provide a useful reference for rumor intervention. Based on the Dynamic Network Analysis method, this paper uses ORA software to mine the characteristics of public opinion information, opinion topics, and public opinion agents through a series of indicators, and quantitatively analyzes the relationships between them. The results show that, through the analysis of the 8 indexes associated with opinion data mining, we can gain a basic understanding of the public opinion characteristics of an event, such as who is important in the opinion-spreading process, how well information is grasped, and how opinion topics are released.

  19. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Full Text Available Dissolved oxygen (DO) is the oxygen dissolved in water, an important factor for aquaculture. A BP neural network method combining the purelin, logsig, and tansig activation functions is proposed for predicting dissolved oxygen in aquaculture. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used in the experiments; the ponds are located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the remaining 3 days are used for testing. Compared with common prediction models such as curve fitting (CF), autoregression (AR), the grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, with all predicted values within the 5% error limit, which can meet the needs of practical applications; it is followed by AR, GM, SVM, and CF. The prediction model can help improve the water-quality monitoring level of aquaculture, helping to prevent the deterioration of water quality and outbreaks of disease.
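
    A minimal sketch of such a predictor using scikit-learn, assuming it is available. MATLAB's tansig and logsig correspond to the 'tanh' and 'logistic' hidden activations, and purelin to MLPRegressor's built-in linear output; the input features, network size and data below are synthetic assumptions, since the paper's exact architecture and variables are not given above.

    ```python
    # BP-network DO predictor on synthetic pond data (7 days train, 3 test).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Assumed predictors: water temperature, pH, salinity; target: DO (mg/L).
    X = rng.uniform([20.0, 7.0, 25.0], [32.0, 9.0, 35.0], size=(240, 3))
    y = 14.6 - 0.25 * X[:, 0] + 0.5 * (X[:, 1] - 8.0) + rng.normal(0, 0.1, 240)

    X_train, X_test, y_train, y_test = X[:168], X[168:], y[:168], y[168:]

    model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                         max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    rel_err = np.abs(model.predict(X_test) - y_test) / np.abs(y_test)
    print(f"share of predictions within 5% error: {(rel_err < 0.05).mean():.0%}")
    ```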

  20. Perovskite-Based Solar Cells: Materials, Methods, and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Di Zhou

    2018-01-01

    Full Text Available A novel all-solid-state, hybrid solar cell based on organic-inorganic metal halide perovskite (CH3NH3PbX3) materials has attracted great attention from researchers all over the world and is considered one of the top 10 scientific breakthroughs of 2013. The perovskite materials can be used not only as the light-absorbing layer, but also as an electron/hole transport layer, owing to their high extinction coefficient, high charge mobility, long carrier lifetime, and long carrier diffusion distance. The photoelectric power conversion efficiency of perovskite solar cells has increased from 3.8% in 2009 to 22.1% in 2016, making perovskite solar cells the best potential candidate among the new generation of solar cells to replace traditional silicon solar cells in the future. In this paper, we introduce the development and mechanism of perovskite solar cells, describe the specific function of each layer, and focus on improvements in the function of such layers and their influence on cell performance. Next, the synthesis methods of the perovskite light-absorbing layer and its performance characteristics are discussed. Finally, the challenges and prospects for the development of perovskite solar cells are briefly presented.

  1. Quantitative Method for Network Security Situation Based on Attack Prediction

    Directory of Open Access Journals (Sweden)

    Hao Hu

    2017-01-01

    Full Text Available Multistep attack prediction and security situation awareness are two big challenges for network administrators because the future is generally unknown. Many investigations have been made in recent years, but they are not sufficient. To improve the comprehensiveness of prediction, in this paper we quantitatively convert attack threat into security situation. Two algorithms are proposed: an attack prediction algorithm using a dynamic Bayesian attack graph, and a security situation quantification algorithm based on attack prediction. The first algorithm aims to provide richer information about future attack behaviors by simulating incremental network penetration. By evaluating in a timely manner the attack capacity of the intruder and the defense strategies of the defender, the likely attack goal, path, probability and time cost are predicted dynamically along with the ongoing security events. Furthermore, in combination with the Common Vulnerability Scoring System (CVSS) metric and network asset information, the second algorithm quantifies the concealed attack threat into surfaced security risk on two levels: host and network. Examples show that our method is feasible and flexible for attack-defense adversarial network environments, helping the administrator infer the security situation in advance and pre-repair critical compromised hosts to maintain normal network communication.
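
    A minimal sketch of one plausible quantification step over an attack graph: each node's compromise probability is a noisy-OR over its parents, with per-edge exploit probabilities derived from a CVSS-like score (score/10). This is a simplified stand-in, not the paper's dynamic Bayesian formulation; the graph, scores and prior are invented for illustration.

    ```python
    # Noisy-OR threat propagation over a small acyclic attack graph.
    graph = {                      # node -> list of (parent, cvss_score)
        "web_server": [],
        "app_server": [("web_server", 7.5)],
        "database":   [("app_server", 9.0), ("web_server", 5.0)],
    }
    prior = {"web_server": 0.8}    # attacker's foothold probability

    def compromise_probability(node, cache={}):
        if node in cache:
            return cache[node]
        parents = graph[node]
        if not parents:
            p = prior.get(node, 0.0)
        else:
            p_safe = 1.0
            for parent, score in parents:          # noisy-OR combination
                p_safe *= 1.0 - compromise_probability(parent, cache) * (score / 10.0)
            p = 1.0 - p_safe
        cache[node] = p
        return p

    for node in graph:
        print(f"P(compromise {node}) = {compromise_probability(node):.3f}")
    ```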

  2. Test results judgment method based on BIT faults

    Directory of Open Access Journals (Sweden)

    Wang Gang

    2015-12-01

    Full Text Available Built-in test (BIT) is responsible for equipment fault detection, so the correctness of test data directly influences diagnosis results. Equipment suffers all kinds of environmental stresses, such as temperature, vibration, and electromagnetic stress. As an embedded testing facility, BIT also suffers from these stresses, which cause interference and faults that disturb the test process and yield unreliable results. It is therefore necessary to monitor test data and judge test failures. Stress monitoring and BIT self-diagnosis would improve BIT reliability, but existing anti-jamming research focuses mainly on safeguard design and signal processing. This paper focuses on monitoring test results and judging BIT equipment (BITE) failures, and a series of improved approaches is proposed. First, the stress influences on components are illustrated and their effects on diagnosis results are summarized. Second, a composite BIT program with information integration is proposed, and a stress monitoring program is given. Third, based on a detailed analysis of system faults and the forms of BIT results, a test sequence control method is proposed; it assists BITE failure judgment and reduces error probability. Finally, validation cases prove that these approaches enhance credibility.

  3. Community Mining Method of Label Propagation Based on Dense Pairs

    Directory of Open Access Journals (Sweden)

    WENG Wei

    2014-03-01

    Full Text Available In recent years, with the popularity of handheld Internet devices such as mobile phones, increasing numbers of people have become involved in virtual social networks. Because of their large amounts of data and complex structure, these networks pose new challenges for community mining. A label propagation algorithm, with low time complexity and no prior parameters, deals easily with large networks. This study explores a new method of community mining based on label propagation in two stages. The first stage identifies closely linked nodes according to their local adjacency relations, giving rise to micro-communities. The second stage expands and adjusts these communities through a label propagation algorithm (LPA) to finally obtain the community structure of the entire social network. This algorithm reduces the number of initial labels and avoids the merging of small communities seen in general LPAs. Thus, the quality of community discovery is improved while the linear time complexity of the LPA is maintained.
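
    A minimal sketch of the label-propagation stage using networkx, assuming it is available; the paper's first stage (seeding micro-communities from densely linked node pairs) is not reproduced, so plain LPA is run on a toy graph instead.

    ```python
    # Label propagation community detection on a small two-community graph.
    import networkx as nx
    from networkx.algorithms.community import label_propagation_communities

    G = nx.Graph()
    G.add_edges_from([
        (1, 2), (1, 3), (2, 3),        # a dense triangle
        (4, 5), (4, 6), (5, 6),        # another dense triangle
        (3, 4),                        # a weak bridge between them
    ])

    for i, community in enumerate(label_propagation_communities(G), 1):
        print(f"community {i}: {sorted(community)}")
    ```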

  4. SPRi-based adenovirus detection using a surrogate antibody method.

    Science.gov (United States)

    Abadian, Pegah N; Yildirim, Nimet; Gu, April Z; Goluch, Edgar D

    2015-12-15

    Adenovirus infection, a waterborne viral disease, is one of the most prevalent causes of human morbidity in the world. Thus, methods for rapid detection of this infectious virus in the environment are urgently needed for public health protection. In this study, we developed a rapid, real-time, label-free SPRi-based biosensor for sensitive and highly selective detection of adenoviruses. The sensing protocol consists of mixing the sample containing adenovirus with a predetermined concentration of adenovirus antibody. The mixture was filtered to remove the free antibodies from the sample. A secondary antibody, specific to the adenovirus antibody, was covalently immobilized onto the SPRi chip surface, and the filtrate was flowed over the sensor surface. When the free adenovirus antibodies bound to the surface-immobilized secondary antibodies, the binding was observed as a change in reflectivity. In this approach, a higher amount of adenovirus results in fewer free adenovirus antibodies and thus smaller reflectivity changes. A dose-response curve was generated, and the linear detection range was determined to be from 10 PFU/mL to 5000 PFU/mL with an R^2 value greater than 0.9. The results also showed that the developed biosensing system had a high specificity towards adenovirus (less than 20% signal change when tested in a sample matrix containing rotavirus and lentivirus). Copyright © 2015 Elsevier B.V. All rights reserved.

  5. An Effective and Practical Method for Solving Hydro-Thermal Unit Commitment Problems Based on Lagrangian Relaxation Method

    Science.gov (United States)

    Sakurai, Takayoshi; Kusano, Takashi; Saito, Yutaka; Hirato, Kota; Kato, Masakazu; Murai, Masahiko; Nagata, Junichi

    This paper presents an effective and practical method, based on the Lagrangian relaxation method (LRM), for solving the hydro-thermal unit commitment problem in which the operational constraints include spinning reserve requirements for thermal units and the prohibition of simultaneous unit start-up/shut-down at the same plant. These constraints are processed in each iteration step of the LRM, which enables a direct solution. To improve convergence, the method applies an augmented Lagrangian relaxation method. Its effectiveness is demonstrated on a real power system.

  6. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications, including spectroscopic, chromatographic, colorimetric, and electrochemical techniques. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described, and specific applications from the literature are also presented.

  7. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    Science.gov (United States)

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  8. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    Science.gov (United States)

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  9. Novel ultrasonic distance measuring system based on correlation method

    Directory of Open Access Journals (Sweden)

    Gądek K.

    2014-09-01

    Full Text Available This paper presents an innovative method for measuring the time delay of ultrasonic waves. Pulse methods used in previous studies were characterized by latency; the phase correlation method presented in this article is free from this disadvantage. Thanks to phase encoding with Walsh functions, the presented method achieves better precision than previous methods. An algorithm for measuring the delay of the reflected wave, running on an ARM Cortex-M4 microprocessor linked to a PC, has been developed and tested. The method uses the signal from the ultrasonic probe to precisely determine the time delay caused by propagation in the medium. To verify the effectiveness of the method, part of the measuring system was implemented in LabVIEW. The presented method proved effective, as shown by the simulation results presented.
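
    A minimal sketch of the correlation core of such a system: estimate the reflected wave's delay from the cross-correlation peak, then convert to distance via the speed of sound. The Walsh-function phase coding from the paper is not reproduced; the emitted burst here is a plain tone and all parameters are illustrative.

    ```python
    # Cross-correlation time-delay estimation for an ultrasonic echo.
    import numpy as np

    fs = 1_000_000                      # 1 MHz sampling (illustrative)
    c = 343.0                           # speed of sound in air, m/s
    t = np.arange(0, 200e-6, 1 / fs)
    burst = np.sin(2 * np.pi * 40e3 * t)          # 40 kHz emitted burst

    true_delay = 1500                   # echo delay in samples (illustrative)
    rx = np.zeros(4000)
    rx[true_delay:true_delay + burst.size] += 0.4 * burst
    rx += 0.05 * np.random.default_rng(0).normal(size=rx.size)   # noise

    corr = np.correlate(rx, burst, mode="valid")  # slide burst along rx
    lag = int(np.argmax(corr))                    # estimated delay in samples
    distance = lag / fs * c / 2                   # round trip halved
    print(f"estimated delay: {lag} samples, distance: {distance:.3f} m")
    ```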

  10. Overlay design method based on visual pavement distress.

    Science.gov (United States)

    1978-01-01

    A method for designing the thickness of overlays for bituminous concrete pavements in Virginia is described. In this method the thickness is calculated by rating the amount and severity of observed pavement distress and determining the total accumula...

  11. Differential evolution based method for total transfer capability ...

    African Journals Online (AJOL)

    The performance of the proposed method is tested on the modified IEEE 30-bus system, and the results are compared with those of the Particle Swarm Optimization (PSO) method. The results are further compared with other published results using CPF and EP. It is found that DE provides more reliable results than the other methods.
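
    A minimal sketch of differential evolution with SciPy, assuming it is available. A real total-transfer-capability study would evaluate a power-flow model inside the objective; here a stand-in penalized quadratic marks where that model would go, and all bounds and coefficients are invented for illustration.

    ```python
    # Differential evolution on a stand-in "transfer capability" objective.
    import numpy as np
    from scipy.optimize import differential_evolution

    def negative_ttc(x):
        # Maximize a concave transfer surface under a soft line-limit penalty
        # by minimizing its negative (a power-flow model would replace this).
        transfer = 10.0 - (x[0] - 1.5) ** 2 - (x[1] - 0.5) ** 2
        penalty = 100.0 * max(0.0, x[0] + x[1] - 2.5) ** 2
        return -transfer + penalty

    bounds = [(0.0, 3.0), (0.0, 2.0)]      # generation/load scaling bounds
    result = differential_evolution(negative_ttc, bounds, seed=1, tol=1e-8)
    print("best point:", np.round(result.x, 3), "TTC estimate:", -result.fun)
    ```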

  12. A METHOD BASED ON FUZZY SYSTEM FOR ASSESSING THE RELIABILITY OF SOFTWARE BASED ASPECTS

    Directory of Open Access Journals (Sweden)

    Mohammad Zavvar

    2015-08-01

    Full Text Available Reliability, as a quality metric, is the probability that a system or a set of tasks will work without failure within specified constraints of time and space, under the specified design and operating conditions (temperature, humidity, vibration, and so on). A relatively new methodology for developing complex software systems is aspect-oriented software engineering, which provides new methods for separating multiple cross-cutting concerns into modules and automatically integrating them with a system. In this paper, a method using fuzzy logic to measure the reliability of aspect-based software is presented. Owing to its use of appropriate metrics and its low estimation errors, the proposed approach performs better than other methods.

  13. An improved segmentation-based HMM learning method for Condition-based Maintenance

    International Nuclear Information System (INIS)

    Liu, T; Lemeire, J; Cartella, F; Meganck, S

    2012-01-01

    In the domain of condition-based maintenance (CBM), persistence of machine states is a valid assumption. Based on this assumption, we present an improved Hidden Markov Model (HMM) learning algorithm for the assessment of equipment states. With a good estimation of the initial parameters, more accurate learning can be achieved than with regular HMM learning methods, which start from randomly chosen initial parameters; the approach is also better at avoiding local maxima. The data are segmented with a change-point analysis method that combines cumulative sum charts (CUSUM) and bootstrapping techniques and determines a confidence level that a state change has happened. After the data are segmented, in order to label and combine the segments corresponding to the same states, a clustering technique is applied based on a low-pass filter or on the root mean square (RMS) values of the features. The segments with their labelled hidden states are taken as 'evidence' to estimate the parameters of an HMM. The estimated parameters then serve as initial parameters for the traditional Baum-Welch (BW) learning algorithm, which is used to improve the parameters and train the model. Experiments on simulated and real data demonstrate that both performance and convergence speed are improved.
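
    A minimal sketch of the CUSUM-with-bootstrap segmentation used above to initialize the HMM: the confidence that a change occurred is estimated as the fraction of shuffled series whose CUSUM range is smaller than the observed one. The synthetic series and bootstrap size are illustrative.

    ```python
    # CUSUM change detection with a bootstrap confidence level.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(0.0, 1.0, 100),    # state 1
                        rng.normal(2.0, 1.0, 100)])   # state 2 (mean shift)

    def cusum_range(series):
        s = np.cumsum(series - series.mean())
        return s.max() - s.min()

    observed = cusum_range(x)
    n_boot = 1000
    smaller = sum(cusum_range(rng.permutation(x)) < observed
                  for _ in range(n_boot))
    confidence = smaller / n_boot                 # bootstrap confidence level

    s = np.cumsum(x - x.mean())
    change_point = int(np.argmax(np.abs(s)))      # most likely change index
    print(f"confidence a change occurred: {confidence:.3f} at index {change_point}")
    ```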

  14. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available This work highlights the most important principles of software reliability management (SRM). The SRM concept provides a basis for developing a method of improving requirements correctness. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a new metric to evaluate requirements complexity and a double-sorting technique to evaluate the priority and complexity of a particular requirement. The method improves requirements correctness by enabling the identification of a larger number of defects with restricted resources. Practical application of the proposed method during requirements review yielded a tangible technical and economic effect.

  15. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    Full Text Available The IEC 60044-8 standard defines two synchronization methods for electronic transformers: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, the data synchronization of electronic transformers can be realized using interpolation. Typical interpolation methods include piecewise linear interpolation, quadratic interpolation, and cubic spline interpolation. In this paper, the influences of piecewise linear, quadratic and cubic spline interpolation on the data synchronization of electronic transformers are computed; the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are then analyzed and compared, which can serve as a guide for practical applications.
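
    A minimal sketch comparing the three interpolation methods named above when resynchronizing sampled values of a 50 Hz signal to shifted instants; the signal, sampling rate and shift are illustrative, not the paper's test setup.

    ```python
    # Compare linear, quadratic and cubic spline interpolation for
    # resampling a power-frequency signal to new time instants.
    import numpy as np
    from scipy.interpolate import interp1d

    f = 50.0                                   # power frequency, Hz
    t = np.arange(0, 0.04, 1 / 4000)           # two cycles sampled at 4 kHz
    x = np.sin(2 * np.pi * f * t)

    t_new = t[:-1] + 0.5 / 4000                # shifted synchronization instants
    truth = np.sin(2 * np.pi * f * t_new)

    for kind in ("linear", "quadratic", "cubic"):
        resampled = interp1d(t, x, kind=kind)(t_new)
        rms_err = np.sqrt(np.mean((resampled - truth) ** 2))
        print(f"{kind:9s} RMS error: {rms_err:.2e}")
    ```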

  16. Cyclotron operating mode determination based on intelligent methods

    International Nuclear Information System (INIS)

    Ouda, M.M.E.M.

    2011-01-01

    Particle accelerators are generators that produce beams of charged particles with energies depending on the accelerator type. The MGC-20 cyclotron is a cyclic particle accelerator used to accelerate protons, deuterons, alpha particles, and helium-3 to different energies. Its main applications are isotope production, nuclear reaction studies, mass spectroscopy studies, and other industrial applications. The cyclotron is a complicated machine that uses a strong magnetic field and a high-frequency, high-voltage electric field together to accelerate and bend charged particles inside the accelerating chamber. It consists of the following main parts: the radio frequency system; the main magnet with the auxiliary concentric and harmonic coils; the electrostatic deflector; the ion source; the beam transport system; and high-precision, high-stability DC power supplies. To accelerate a particle to a certain energy, one has to adjust the cyclotron operating parameters so that they are suitable for accelerating this particle to that energy. If the cyclotron operating parameters are adjusted together to accelerate a charged particle to a certain energy, then these parameters together are called the operating mode for accelerating this particle to that energy; for example, the operating mode to accelerate protons to 18 MeV is named the 18 MeV proton operating mode. The operating mode includes many parameters that must be adjusted together to successfully accelerate, extract, focus and steer a particle from the ion source to the experiment. Owing to the large number of parameters in the operating modes, 19 parameters have been selected in this thesis for use in an intelligent system, based on a feed-forward back-propagation neural network, that determines the parameters for new operating modes. The new intelligent system depends on the available information about the currently used operating modes. The classic way to determine a new operating mode depended on the trial and error method to

  17. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    Full Text Available In order to improve the optimization accuracy and convergence rate of trajectory optimization for an air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method makes the mesh endpoints converge to the practical nonsmooth points and decreases the overall number of collocation points, improving the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover between midcourse and terminal guidance, and the optimization model was then built. The multi-interval mesh refinement Radau pseudospectral method, with different numbers of collocation points in each mesh interval, was used to solve the trajectory optimization model and was compared with the traditional h method. Simulation results show that this method decreases the dimensionality of the nonlinear programming (NLP) problem and therefore improves the efficiency of pseudospectral methods for solving trajectory optimization problems.

  18. Validation of internal dosimetry protocols based on stochastic method

    International Nuclear Information System (INIS)

    Mendes, Bruno M.; Fonseca, Telma C.F.; Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R.

    2015-01-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators, to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of the simulation results, and intercomparison of data from the literature with those produced by the IDPs is a suitable validation method. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated with the IDPs and compared with the reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF obtained in the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low-energy photons at 30 keV. The adrenals and thyroid, i.e. the organs of lowest mass, had the highest SAF discrepancies from the RV, 7.2% and 3.8% respectively. The statistical differences between the SAF obtained with our IDPs and the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies found in low-mass organs was our source definition methodology; improvements in the spatial distribution of the source over the voxels may provide outputs more consistent with the reference values for low-mass organs. (author)

  19. Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.

    Science.gov (United States)

    Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan

    2017-08-08

    Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations, using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but even the best methods still produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.
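
    As one concrete instance of the ingredients above, the hedged sketch below trains a random forest on lagged values of a synthetic CGM trace to forecast 30 minutes ahead; the synthetic generator, lag and horizon choices are illustrative, not the authors' data or pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        glucose = 120 + 30 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 5, 600)

        LAGS, HORIZON = 6, 6    # 30 min of history -> 30 min ahead at 5-min sampling
        X = np.array([glucose[i:i + LAGS] for i in range(len(glucose) - LAGS - HORIZON)])
        y = glucose[LAGS + HORIZON:]

        split = int(0.8 * len(X))
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[:split], y[:split])
        rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
        print(f"test RMSE: {rmse:.1f} mg/dL")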

  20. Method of predicting Splice Sites based on signal interactions

    Directory of Open Access Journals (Sweden)

    Deogun Jitender S

    2006-04-01

    Full Text Available Abstract Background Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities. Any progress in SS recognition will lead to better understanding of the splicing mechanism. We introduce several new approaches for combining a priori knowledge for improved SS detection. First, we design our new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine elements found with sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved than other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools on the test set of human genes. SpliceScan outperforms all contemporary ab initio gene structural prediction tools on the set of 5' UTR gene fragments. Conclusion The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand.

  1. Validation of internal dosimetry protocols based on stochastic method

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Bruno M.; Fonseca, Telma C.F., E-mail: bmm@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R., E-mail: tprcampos@yahoo.com.br [Universidade Federal de Minas Gerais (DEN/UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators, to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of the simulation results, and intercomparison of data from the literature with those produced by the IDPs is a suitable validation method. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated with the IDPs and compared with the reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF obtained in the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low-energy photons at 30 keV. The adrenals and thyroid, i.e. the organs of lowest mass, had the highest SAF discrepancies from the RV, 7.2% and 3.8% respectively. The statistical differences between the SAF obtained with our IDPs and the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies found in low-mass organs was our source definition methodology; improvements in the spatial distribution of the source over the voxels may provide outputs more consistent with the reference values for low-mass organs. (author)

  2. A novel blink detection method based on pupillometry noise.

    Science.gov (United States)

    Hershman, Ronen; Henik, Avishai; Cohen, Noga

    2018-02-01

    Pupillometry (or the measurement of pupil size) is commonly used as an index of cognitive load and arousal. Pupil size data are recorded using eyetracking devices that provide an output containing pupil size at various points in time. During blinks the eyetracking device loses track of the pupil, resulting in missing values in the output file. The missing-sample time window is preceded and followed by a sharp change in the recorded pupil size, due to the opening and closing of the eyelids. This eyelid signal can create artificial effects if it is not removed from the data. Thus, accurate detection of the onset and the offset of blinks is necessary for pupil size analysis. Although there are several approaches to detecting and removing blinks from the data, most of these approaches do not remove the eyelid signal or can result in a relatively large amount of data loss. The present work suggests a novel blink detection algorithm based on the fluctuations that characterize pupil data. These fluctuations ("noise") result from measurement error produced by the eyetracker device. Our algorithm finds the onset and offset of the blinks on the basis of this fluctuation pattern and its distinctiveness from the eyelid signal. By comparing our algorithm to three other common blink detection methods and to results from two independent human raters, we demonstrate the effectiveness of our algorithm in detecting blink onset and offset. The algorithm's code and example files for processing multiple eye blinks are freely available for download ( https://osf.io/jyz43 ).
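
    In outline, the published idea is to anchor blink onset/offset detection in the tracker's own measurement noise. The sketch below is a simplified reimplementation of that idea, not the authors' algorithm: it locates missing-sample gaps, estimates the noise scale from sample-to-sample fluctuations, and then extends each gap backwards and forwards while the slope exceeds what noise alone would explain. The threshold factor noise_k is an assumption.

        import numpy as np

        def trim_blinks(pupil, noise_k=3.0):
            """Return a copy with blink gaps plus eyelid artifacts set to NaN."""
            out = pupil.astype(float).copy()
            missing = np.isnan(out)
            noise = noise_k * np.nanstd(np.diff(out))   # tracker fluctuation scale
            starts = np.flatnonzero(~missing[:-1] & missing[1:]) + 1
            ends = np.flatnonzero(missing[:-1] & ~missing[1:]) + 1
            for s in starts:                 # walk back over the eyelid-closing slope
                i = s - 1
                while i > 0 and out[i - 1] - out[i] > noise:
                    i -= 1
                out[i:s] = np.nan
            for e in ends:                   # walk forward over the eyelid-opening slope
                i = e
                while i < len(out) - 1 and out[i + 1] - out[i] > noise:
                    i += 1
                out[e:i + 1] = np.nan
            return out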

  3. RESEARCH ON KNOWLEDGE-BASED OPTIMIZATION METHOD OF INDOOR LOCATION BASED ON LOW ENERGY BLUETOOTH

    Directory of Open Access Journals (Sweden)

    C. Li

    2017-09-01

    Full Text Available With the rapid development of LBS (Location-based Service), the demand for commercialization of indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithm, and the cost of positioning are hard to balance simultaneously, and this still restricts the adoption and application of mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor location based on low energy Bluetooth. The main steps include: 1) the establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning in the simulation data. The proposed scheme is a dynamic knowledge accumulation rather than a single positioning process. The scheme uses cheap equipment and provides a new idea for the theory and method of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value for commercial deployment.

  4. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    Science.gov (United States)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of LBS (Location-based Service), the demand for commercialization of indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithm, and the cost of positioning are hard to balance simultaneously, and this still restricts the adoption and application of mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor location based on low energy Bluetooth. The main steps include: 1) the establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning in the simulation data. The proposed scheme is a dynamic knowledge accumulation rather than a single positioning process. The scheme uses cheap equipment and provides a new idea for the theory and method of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value for commercial deployment.

  5. Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael

    We want to test the applicability of kernel-based eigen-decomposition methods compared to traditional eigen-decomposition methods. We have implemented and tested three kernel-based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral
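
    For orientation, the snippet below contrasts ordinary PCA with its Gaussian-kernel counterpart using scikit-learn; the 9-band pixel matrix is a random stand-in for the multispectral data, and only the PCA member of the tested trio is shown.

        import numpy as np
        from sklearn.decomposition import PCA, KernelPCA

        rng = np.random.default_rng(1)
        pixels = rng.normal(size=(500, 9))      # 500 pixels x 9 spectral bands

        linear_scores = PCA(n_components=3).fit_transform(pixels)
        kernel_scores = KernelPCA(n_components=3, kernel="rbf",
                                  gamma=0.1).fit_transform(pixels)
        print(linear_scores.shape, kernel_scores.shape)   # (500, 3) (500, 3)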

  6. Ground-based ULF methods of monitoring the magnetospheric plasma

    Science.gov (United States)

    Romanova, Natalia; Pilipenko, Viacheslav; Stepanova, Marina; Kozyreva, Olga; Kawano, Hideaki

    The terrestrial magnetosphere is a giant natural MHD resonator. The magnetospheric Alfven resonator is formed by the geomagnetic field lines terminated by the conductive ionospheres. Though the source of Pc3-5 waves is not reliably known, identification of the resonant frequency enables one to determine the magnetospheric plasma density and ionospheric conductance from ground magnetometer observations. However, a spectral peak does not necessarily correspond to a local resonant frequency, and the width of a spectral peak cannot be directly used to determine the quality factor of the magnetospheric resonator. This ambiguity can be resolved with the help of various gradient and polarization methods, reviewed in this presentation: the Gradient Method (GM), the Amplitude-Phase Gradient Method (APGM), polarization methods (including the H/D method), and the Hodograph (H) method. These methods can be regarded as tools for "hydromagnetic spectroscopy" to diagnose the magnetosphere. The H-method has additional possibilities compared with the gradient method: one can determine a continuous distribution of the magnetospheric resonant frequencies and Q-factors in a range of latitudes beyond the observation baseline. These methods are illustrated by the results of their application to data from the SAMBA magnetometer array.

  7. Development of 3-D FBR heterogeneous core calculation method based on characteristics method

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Maruyama, Manabu; Hamada, Yuzuru; Nishi, Hiroshi; Ishibashi, Junichi; Kitano, Akihiro

    2002-01-01

    A new 3-D transport calculation method taking into account the heterogeneity of fuel assemblies has been developed by combining the characteristics method and the nodal transport method. In the axial direction the nodal transport method is applied, and the characteristics method is applied to take into account the radial heterogeneity of fuel assemblies. The numerical calculations have been performed to verify 2-D radial calculations of FBR assemblies and partial core calculations. Results are compared with the reference Monte-Carlo calculations. A good agreement has been achieved. It is shown that the present method has an advantage in calculating reaction rates in a small region

  8. Neutron detection methods based on fission fragment track counting

    International Nuclear Information System (INIS)

    Posta, S.

    2004-12-01

    The report deals with the development and application of a simple computer-controlled videomicroscope for fission fragment track counting. The principle of the SSFTD (Solid State Fission Track Detector) method, development of image processing methods, and track counting principles are outlined. Focus is on the application of the method to neutron flux density and fluence determination in reactor dosimetry. The procedures developed were applied to neutron parameter measurements in a VVER-1000 mockup in the LR-0 research reactor. (author)

  9. A review of formal orthogonality in Lanczos-based methods

    Science.gov (United States)

    Brezinski, C.; Zaglia, M. Redivo; Sadok, H.

    2002-03-01

    Krylov subspace methods and their variants are presently the favorite iterative methods for solving a system of linear equations. Although it is a purely linear algebra problem, it can be tackled by the theory of formal orthogonal polynomials. This theory helps to understand the origin of the algorithms for the implementation of Krylov subspace methods and, moreover, the use of formal orthogonal polynomials brings a major simplification in the treatment of some numerical problems related to these algorithms. This paper reviews this approach in the case of Lanczos method and its variants, the novelty being the introduction of a preconditioner.
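
    To make the Krylov connection concrete, here is a bare-bones symmetric Lanczos tridiagonalization in NumPy; it is a textbook sketch rather than anything from the paper, and production codes add reorthogonalization to fight the loss of orthogonality that the formal-orthogonality view helps explain.

        import numpy as np

        def lanczos(A, v, m):
            """Tridiagonal T (m x m) and V whose columns span the Krylov subspace."""
            n = len(v)
            V = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            V[:, 0] = v / np.linalg.norm(v)
            for j in range(m):
                w = A @ V[:, j]
                alpha[j] = V[:, j] @ w
                w -= alpha[j] * V[:, j]
                if j > 0:
                    w -= beta[j - 1] * V[:, j - 1]
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)
                    V[:, j + 1] = w / beta[j]
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            return T, V

        A = np.diag(np.arange(1.0, 101.0))    # symmetric test matrix
        T, V = lanczos(A, np.ones(100), 10)
        print(np.linalg.eigvalsh(T)[-3:])     # Ritz values near the largest eigenvalues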

  10. Fast subcellular localization by cascaded fusion of signal-based and homology-based methods

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2011-10-01

    Full Text Available Abstract Background The functions of proteins are closely related to their subcellular locations. In the post-genomics era, the amount of gene and protein data grows exponentially, which necessitates the prediction of subcellular localization by computational means. Results This paper proposes mitigating the computational burden of alignment-based approaches to subcellular localization prediction by a cascaded fusion of cleavage site prediction and profile alignment. Specifically, the informative segments of protein sequences are identified by a cleavage site predictor using the information in their N-terminal sorting signals. The sequences are then truncated at the cleavage site positions, and the shortened sequences are passed to PSI-BLAST for computing their profiles. Subcellular localization is subsequently predicted by a profile-to-profile alignment support vector machine (SVM) classifier. To further reduce the training and recognition time of the classifier, the SVM classifier is replaced by a new kernel method based on perturbational discriminant analysis (PDA). Conclusions Experimental results on a new dataset based on Swiss-Prot Release 57.5 show that the method can exploit the best properties of signal- and homology-based approaches and can attain an accuracy comparable to that achieved by using full-length sequences. Analysis of profile-alignment score matrices suggests that both profile creation time and profile alignment time can be reduced without significant reduction in subcellular localization accuracy. PDA was found to enjoy a short training time compared to the conventional SVM. We advocate that the method will be important for biologists conducting large-scale protein annotation and for bioinformaticians performing preliminary investigations on new algorithms that involve pairwise alignments.

  11. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  12. Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling

    Energy Technology Data Exchange (ETDEWEB)

    Randall S. Seright

    2007-09-30

    This final technical progress report describes work performed from October 1, 2004, through May 16, 2007, for the project, 'Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling'. We explored the potential of pore-filling gels for reducing excess water production from both fractured and unfractured production wells. Several gel formulations were identified that met the requirements--i.e., providing water residual resistance factors greater than 2,000 and ultimate oil residual resistance factors (F{sub rro}) of 2 or less. Significant oil throughput was required to achieve low F{sub rro} values, suggesting that gelant penetration into porous rock must be small (a few feet or less) for existing pore-filling gels to provide effective disproportionate permeability reduction. Compared with adsorbed polymers and weak gels, strong pore-filling gels can provide greater reliability and behavior that is insensitive to the initial rock permeability. Guidance is provided on where relative-permeability-modification/disproportionate-permeability-reduction treatments can be successfully applied for use in either oil or gas production wells. When properly designed and executed, these treatments can be successfully applied to a limited range of oilfield excessive-water-production problems. We examined whether gel rheology can explain behavior during extrusion through fractures. The rheology behavior of the gels tested showed a strong parallel to the results obtained from previous gel extrusion experiments. However, for a given aperture (fracture width or plate-plate separation), the pressure gradients measured during the gel extrusion experiments were much higher than anticipated from rheology measurements. Extensive experiments established that wall slip and first normal stress difference were not responsible for the pressure gradient discrepancy. To explain the discrepancy, we noted that the aperture for gel flow (for mobile gel wormholing through concentrated

  13. Study of biomaterials by ion-beam based methods

    International Nuclear Information System (INIS)

    Racolta, Petru; Craciun, Liviu; Cincu, Emanuela; Voiculescu, Dana; Muresan, Ofelia; Serban, Alin; Filip, Andrei Ilie; Bunea, Danil; Antoniac, Vasile; Tudor, Tiberiu Laurian; Visan, Teodor; Visan, Sanda; Ibris, Neluta

    2002-01-01

    Extending the lifetime of prosthetic devices, dental materials and orthodontic devices is a main goal of the international medical supply community. In the frame of an interdisciplinary national project, IFIN-HH has started experimenting with alternative procedures for studying the wear/corrosion phenomena of biological materials using ion-beam based techniques. Since joint prostheses are mechanical bearings, there are concerns over friction and wear just as there are with any bearing. These concerns date back to the early introduction of total hip prostheses and were shown to be justified by the early failures due to wear. Subsequently, changes in materials and designs reduced the incidence of wear failure to a low level at which failures due to other mechanisms became dominant. Interest turned to preventing femoral component fracture, reducing the rates of infection, and reducing the rates of loosening. Attention to wear as a failure mechanism has recently increased. The failure rate for joint replacement at the hip or knee has been progressively reduced. The biological effects of wear debris have been recognized; wearing out of the prosthesis is no longer a prerequisite for an adverse outcome. There is an active search for new materials with increased wear resistance. For the metallic components of hip and knee prostheses and for dental alloys, we present the optimum nuclear reactions according to the main parameters of our U-120 Cyclotron (p, d, E max = 13 MeV and α particle, E max = 26 MeV). In the case of polymers, one member of the articulating couple of the prosthetic devices, direct activation causes severe changes in surface morphology and structure (formation of defects and free radicals). We have therefore developed an indirect activation mode using the principle of recoil ion implantation, applied to 56 Co radioactive ions generated by proton beams on an Fe target (thickness ∼ 10 mm). A thin target of elementary composition A is bombarded by

  14. Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes

    International Nuclear Information System (INIS)

    Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.

    2005-01-01

    An inverse method for blade design based on the Navier-Stokes equations on adaptive unstructured meshes has been developed. Unlike methods based on inviscid equations, the effect of viscosity is directly taken into account. The pressure (or pressure loading) is prescribed, and the design method computes the blade shape that would accomplish the target pressure distribution. The method is implemented using a cell-centered finite volume scheme, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh method based on grid subdivision together with a local adaptive mesh method is utilized to increase the accuracy. (author)

  15. Iris Recognition Method Based on Natural Open Eyes | Latha ...

    African Journals Online (AJOL)

    ... encodes the feature points and characterizes the iris pattern by iris codes. Finally, it classifies the different iris patterns with an auto-adapted pattern matching method and gives the recognition results. Many experiments show that the recognition rate of this method can reach 99.687%, which meets the demands of iris recognition.

  16. An efficient planar inverse acoustic method based on Toeplitz matrices

    NARCIS (Netherlands)

    Wind, Jelmer; de Boer, Andries; Ellenbroek, Marcellinus Hermannus Maria

    2011-01-01

    This article proposes a new, fast method to solve inverse acoustic problems for planar sources. This problem is often encountered in practice and methods such as planar nearfield acoustic holography (PNAH) and statistically optimised nearfield acoustic holography (SONAH) are widely used to solve it.

  17. Optimal layout of radiological environment monitoring based on TOPSIS method

    International Nuclear Information System (INIS)

    Li Sufen; Zhou Chunlin

    2006-01-01

    TOPSIS is a method for multi-objective decision making that can be applied to the comprehensive assessment of environmental quality. This paper adopts it to obtain an optimal layout of radiological environment monitoring. The method is shown to be correct, simple, convenient and practical, and helps supervision departments to lay out radiological environment monitoring sites scientifically and reasonably. (authors)
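
    For readers unfamiliar with TOPSIS, the core computation fits in a few lines: normalize and weight the decision matrix, form the ideal and anti-ideal alternatives, and rank by relative closeness. The sketch below uses NumPy with an invented decision matrix (candidate monitoring sites versus criteria) and invented weights.

        import numpy as np

        def topsis(X, weights, benefit):
            """Closeness scores; benefit[j] is True if criterion j is maximized."""
            V = X / np.linalg.norm(X, axis=0) * weights   # weighted normalized matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_plus = np.linalg.norm(V - ideal, axis=1)
            d_minus = np.linalg.norm(V - anti, axis=1)
            return d_minus / (d_plus + d_minus)

        X = np.array([[0.8, 3.0, 120.0],    # e.g. coverage, dose gradient, cost per site
                      [0.6, 4.5,  90.0],
                      [0.9, 2.0, 150.0]])
        scores = topsis(X, np.array([0.5, 0.3, 0.2]), np.array([True, True, False]))
        print(scores.argsort()[::-1])       # site indices ranked best to worst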

  18. Integrative health care method based on combined complementary ...

    African Journals Online (AJOL)

    The article presents a systemic approach to health care with complementary medicines such as rehabilitative acupuncture, homeopathy and chiropractic, through the application of a method of holistic care and an integrated approach. Materials and Methods: A participatory action research study was carried out from January 2012 to January ...

  19. Mathematical foundation of the optimization-based fluid animation method

    DEFF Research Database (Denmark)

    Erleben, Kenny; Misztal, Marek Krzysztof; Bærentzen, Jakob Andreas

    2011-01-01

    We present the mathematical foundation of a fluid animation method for unstructured meshes. Key contributions not previously treated are the extension to include diffusion forces and higher order terms of non-linear force approximations. In our discretization we apply a fractional step method...

  20. A calibration method for PLLs based on transient response

    DEFF Research Database (Denmark)

    Cassia, Marco; Shah, Peter Jivan; Bruun, Erik

    2004-01-01

    A novel method to calibrate the frequency response of a Phase-Locked Loop is presented. The method requires just an additional digital counter and an auxiliary Phase-Frequency Detector (PFD) to measure the natural frequency of the PLL. The measured value can be used to tune the PLL response...

  1. Computation Method Comparison for Th Based Seed-Blanket Cores

    International Nuclear Information System (INIS)

    Kolesnikov, S.; Galperin, A.; Shwageraus, E.

    2004-01-01

    This work compares two methods for calculating a given nuclear fuel cycle in the WASB configuration. Both methods use the ELCOS code system (the 2-D transport code BOXER and the 3-D nodal code SILWER) [4]. In the first method, the cross-sections of the Seed and Blanket needed for the 3-D nodal code are generated separately for each region by the 2-D transport code. In the second method, the cross-sections of the Seed and Blanket needed for the 3-D nodal code are generated from Seed-Blanket colorsets (Fig. 1) calculated by the 2-D transport code. The evaluation of the error introduced by the first method is the main objective of the present study.

  2. A Novel Global Path Planning Method for Mobile Robots Based on Teaching-Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Zongsheng Wu

    2016-07-01

    Full Text Available The Teaching-Learning-Based Optimization (TLBO) algorithm has been proposed in recent years. It is a new swarm intelligence optimization algorithm simulating the teaching-learning phenomenon of a classroom. In this paper, a novel global path planning method for mobile robots is presented, based on an improved TLBO algorithm called Nonlinear Inertia Weighted Teaching-Learning-Based Optimization (NIWTLBO), introduced in our previous work. Firstly, the NIWTLBO algorithm is introduced. Then, a new map model of the path between start point and goal point is built by coordinate system transformation. Lastly, utilizing the NIWTLBO algorithm, the objective function of the path is optimized and a global optimal path is obtained. The simulation results show that the proposed method has a faster convergence rate and higher accuracy in searching for the path than the basic TLBO and some other algorithms, and can effectively solve the optimization problem of mobile robot global path planning.
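
    The sketch below is a plain TLBO loop (teacher phase followed by learner phase) minimizing a toy cost; the paper's nonlinear inertia weight and the path-specific objective function are not reproduced, so treat it only as an illustration of the base algorithm.

        import numpy as np

        def tlbo(f, dim, pop=30, iters=200, lo=-5.0, hi=5.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (pop, dim))
            fit = np.apply_along_axis(f, 1, X)
            for _ in range(iters):
                teacher = X[fit.argmin()]
                Tf = rng.integers(1, 3)                  # teaching factor: 1 or 2
                new = np.clip(X + rng.random((pop, dim)) *
                              (teacher - Tf * X.mean(axis=0)), lo, hi)
                new_fit = np.apply_along_axis(f, 1, new)
                better = new_fit < fit
                X[better], fit[better] = new[better], new_fit[better]
                for i in range(pop):                     # learner phase: learn from a peer
                    j = rng.integers(pop)
                    step = X[i] - X[j] if fit[i] < fit[j] else X[j] - X[i]
                    cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
                    c = f(cand)
                    if c < fit[i]:
                        X[i], fit[i] = cand, c
            return X[fit.argmin()], fit.min()

        best, cost = tlbo(lambda x: float(np.sum(x ** 2)), dim=4)
        print(best, cost)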

  3. Pressure-based impact method to count bedload particles

    Science.gov (United States)

    Antico, Federica; Mendes, Luís; Aleixo, Rui; Ferreira, Rui M. L.

    2017-04-01

    -channel flow, was analysed. All tests featured a 90 s data collection period. For a detailed description of the laboratory facilities and test conditions see Mendes et al. (2016). Results from the MiCas system were compared with those obtained from the analysis of high-speed video footage, and showed good agreement between the two techniques. The measurements carried out showed that the MiCas system is able to track particle impacts in real time within an error margin of 2.0%. From different tests under the same conditions it was possible to determine the repeatability of the MiCas system. Derived quantities such as bedload transport rates, Eulerian autocorrelation functions and structure functions are also in close agreement with measurements based on optical methods. The main advantages of the MiCas system relative to digital image processing methods are: a) independence from optical access, thus avoiding problems with light intensity variations and oscillating free surfaces; b) the small volume of data associated with particle counting, which allows very long data series (hours, days) of particle impacts to be acquired. In the considered cases, it would take more than two hours to generate 1 MB of data; for the current validation tests, 90 s of acquisition generated 25 Gb of images but only 11 kB of MiCas data, while the time necessary to process the digital images may amount to days, effectively limiting their usage to short time series. c) the possibility of real-time measurements, allowing detection of problems during the experiments and minimizing some post-processing steps. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT). References Mendes L., Antico F., Sanches P., Alegria F., Aleixo R., and Ferreira RML. (2016). A particle counting system for

  4. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper mainly applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them, and we apply it to quantify the similarity of different stock markets. We report the similarity of the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the stock markets differs across time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, which means the method can distinguish the markets of different areas from the phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method. The information categorization method can be used not only on physiologic time series, but also on financial time series.

  5. Identification Method of Mud Shale Fractures Base on Wavelet Transform

    Science.gov (United States)

    Xia, Weixu; Lai, Fuqiang; Luo, Han

    2018-01-01

    In recent years, inspired by seismic analysis technology, a new method has emerged for analysing fractured mud shale oil and gas reservoirs through logging attributes. By extracting the high-frequency component of the wavelet transform of a logging attribute, formation information hidden in the logging signal is recovered, identifying fractures that are not recognized by conventional logging; within the identified fracture segments, responses such as "cycle jump", "high value" and "spike" are more obvious. A complete wavelet denoising method and a wavelet high-frequency fracture identification method were finally formed.
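
    As a pointer to the mechanics, the fragment below extracts the high-frequency detail of a logging-style signal with a discrete wavelet decomposition (PyWavelets); the wavelet choice, decomposition level and synthetic "fracture" spike are assumptions, not the paper's settings.

        import numpy as np
        import pywt

        depth = np.linspace(0, 100, 1024)
        log = np.sin(0.3 * depth)           # smooth background trend
        log[500] += 2.0                     # synthetic fracture-like spike

        coeffs = pywt.wavedec(log, "db4", level=3)
        detail = coeffs[-1]                 # finest-scale (high-frequency) band
        print(np.argmax(np.abs(detail)))    # largest coefficient sits near the spike
                                            # (at half the original resolution)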

  6. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To get a fast calculation and an accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performances on reference benchmarks. We also suggest four new datasets to evaluate the scalability of interactive segmentation methods, using images from some thousand to several million pixels. We conclude with two applications of SαF.

  7. [Comparison of sustainable development status in Heilongjiang Province based on traditional ecological footprint method and emergy ecological footprint method].

    Science.gov (United States)

    Chen, Chun-feng; Wang, Hong-yan; Xiao, Du-ning; Wang, Da-qing

    2008-11-01

    By using the traditional ecological footprint method and its modification, the emergy ecological footprint method, the sustainable development status of Heilongjiang Province in 2005 was analyzed. The results showed that the ecological deficits of Heilongjiang Province in 2005 based on the emergy and conventional ecological footprint methods were 1.919 and 0.6256 hm2 per capita, respectively. The ecological footprint values based on the two methods both exceeded the carrying capacity, which indicated that the social and economic development of the study area was not sustainable. The emergy ecological footprint method was used to discuss the relationships between human material demand and ecosystem resource supply, and more stable parameters such as emergy transformity and emergy density were introduced into the method, which overcame some of the shortcomings of the conventional ecological footprint method.

  8. A Generalized Demodulation and Hilbert Transform Based Signal Decomposition Method

    Directory of Open Access Journals (Sweden)

    Zhi-Xiang Hu

    2017-01-01

    Full Text Available This paper proposes a new signal decomposition method that aims to decompose a multicomponent signal into monocomponent signals. The main procedure is to extract the components with frequencies higher than a given bisecting frequency in three steps: (1) the generalized demodulation is used to project the components with lower frequencies onto the negative frequency domain, (2) the Hilbert transform is performed to eliminate the negative frequency components, and (3) the inverse generalized demodulation is used to obtain a signal which contains only the components with higher frequencies. By running the procedure recursively, all monocomponent signals can be extracted efficiently. A comprehensive derivation of the decomposition method is provided. The validity of the proposed method has been demonstrated by extensive numerical analysis. The proposed method is also applied to decompose the dynamic strain signal of a cable-stayed bridge and the echolocation signal of a bat.
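
    The three-step split can be demonstrated on a toy two-tone signal. In the frequency-domain sketch below (illustrative, not the paper's implementation), a linear-phase demodulation shifts the spectrum down by the bisecting frequency fb, the negative half of the spectrum is zeroed (the role the Hilbert transform plays above), and the inverse demodulation restores the high-frequency component; fs, fb and the tones are invented.

        import numpy as np

        fs, fb = 1000.0, 30.0                # sample rate, bisecting frequency (Hz)
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

        demod = x * np.exp(-1j * 2 * np.pi * fb * t)     # (1) components < fb go negative
        spec = np.fft.fft(demod)
        spec[np.fft.fftfreq(len(t), 1 / fs) < 0] = 0.0   # (2) drop negative frequencies
        high = np.fft.ifft(spec) * np.exp(1j * 2 * np.pi * fb * t)   # (3) re-modulate

        high_part = 2 * np.real(high)        # recovered 60 Hz component
        print(np.max(np.abs(high_part - 0.5 * np.sin(2 * np.pi * 60 * t))))  # ~0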

  9. A GPS-Based Decentralized Control Method for Islanded Microgrids

    DEFF Research Database (Denmark)

    Golsorkhi, Mohammad; Lu, Dylan; Guerrero, Josep M.

    2017-01-01

    Coordinated control of distributed energy resources (DER) is essential for the operation of islanded microgrids (MGs). Conventionally, such coordination is achieved by drooping the frequency of the reference voltage versus active (or reactive) power. The conventional droop method ensures synchronized operation and even power sharing without any communication link. However, that method produces unwanted frequency fluctuations, which degrade the power quality. In order to improve the power quality of islanded MGs, a novel decentralized control method is proposed in this paper. In this method, GPS timing technology is utilized to synchronize the DERs to a common reference frame, rotating at nominal frequency. In addition, an adaptive Q-f droop controller is introduced as a backup to ensure stable operation during GPS signal interruptions. In the context of the common reference frame, even

  10. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

    Full Text Available The photovoltaic (PV) system has great potential and is nowadays installed more than other renewable energy sources. However, the PV system cannot perform optimally due to its strong dependence on climate conditions; because of this dependency, the PV system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost and fast tracking, but it deviates from the MPP in continuously changing weather conditions, especially under rapidly changing irradiance. A new MPP tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), is proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
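
    For context, the classic P&O loop that TPPO builds on can be written in a few lines. The sketch below is the textbook algorithm, not the proposed four-point variant, and the single-maximum pv_power curve is a crude stand-in for a real panel model.

        import math

        def pv_power(v):
            """Toy P-V curve with a single maximum (illustrative only)."""
            i = max(0.0, 5.0 - 0.0002 * math.exp(v / 3.0))
            return v * i

        def perturb_and_observe(v=15.0, step=0.2, iters=100):
            p_prev = pv_power(v)
            direction = 1.0
            for _ in range(iters):
                v += direction * step
                p = pv_power(v)
                if p < p_prev:          # power dropped: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v, p_prev

        v_mpp, p_mpp = perturb_and_observe()
        print(f"MPP estimate: {v_mpp:.1f} V, {p_mpp:.1f} W")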

  11. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    Science.gov (United States)

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  12. Reconstruction of Banknote Fragments Based on Keypoint Matching Method.

    Science.gov (United States)

    Gwo, Chih-Ying; Wei, Chia-Hung; Li, Yue; Chiu, Nan-Hsing

    2015-07-01

    Banknotes may be shredded by a scrap machine, ripped up by hand, or damaged in accidents. This study proposes an image registration method for reconstruction of multiple sheets of banknotes. The proposed method first constructs different scale spaces to identify keypoints in the underlying banknote fragments. Next, the features of those keypoints are extracted to represent their local patterns around keypoints. Then, similarity is computed to find the keypoint pairs between the fragment and the reference banknote. The banknote fragments can determine the coordinate and amend the orientation. Finally, an assembly strategy is proposed to piece multiple sheets of banknote fragments together. Experimental results show that the proposed method causes, on average, a deviation of 0.12457 ± 0.12810° for each fragment while the SIFT method deviates 1.16893 ± 2.35254° on average. The proposed method not only reconstructs the banknotes but also decreases the computing cost. Furthermore, the proposed method can estimate relatively precisely the orientation of the banknote fragments to assemble. © 2015 American Academy of Forensic Sciences.

  13. MAKING DECISIONS ON THE BASE OF ANALYSIS HIERARCHY METHOD

    Directory of Open Access Journals (Sweden)

    ERSHOVA N. M.

    2015-09-01

    Full Text Available Problem statement. The analytic hierarchy process (AHP) provides a simple and reasonable solution to multi-criteria problems, including quantitative and qualitative factors of different dimensions. AHP is used to solve semi-structured and unstructured problems, and is only now beginning to be applied in Ukraine. The first works [2, 10, 14] have appeared, in which the essence of the method is revealed and technologies for implementing it on a computer are shown. In [14] an attempt was made to determine theoretically the eigenvalues of a reciprocal symmetric matrix; however, starting from the wrongly accepted premise that the sum of the eigenvalues of a matrix equals its order n, the authors conclude that for a perfectly consistent matrix "all eigenvalues are zeros, except for one, equal to n". In fact, the sum of the eigenvalues of a matrix A equals the sum of the diagonal elements of the matrix, i.e. its trace Sp A [5]. The implementation technology shown in [10] for Excel indicates that the authors do not use the matrix functions of the function wizard. There is no clear calculation methodology for the AHP in the literature. Purpose. To develop a methodology for applying the AHP to solve unstructured problems, and a technology for implementing the method in Excel. Conclusion. The proposed methodology reveals the possibilities of the AHP and is quite simply realized in Excel using the matrix functions of the function wizard.
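
    The computational heart of AHP is small enough to show inline. The sketch below, written with NumPy's matrix functions rather than Excel, derives priority weights from the principal eigenvector of an illustrative pairwise comparison matrix and checks Saaty's consistency ratio; the judgments in A are invented.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],      # illustrative pairwise judgments
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                        # priority vector

        n = A.shape[0]
        ci = (vals.real[k] - n) / (n - 1)   # consistency index
        cr = ci / 0.58                      # random index RI = 0.58 for n = 3
        print(w, f"CR = {cr:.3f}")          # CR < 0.1 is conventionally acceptable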

  14. On the synchronization of a class of chaotic systems based on backstepping method

    International Nuclear Information System (INIS)

    Wang Bo; Wen Guangjun

    2007-01-01

    This Letter focuses on the synchronization problem for a class of chaotic systems. A synchronization method is presented based on the Lyapunov method and the backstepping method. Finally, some typical numerical examples are given to show the effectiveness of the theoretical results.

  15. An entropy-based improved k-top scoring pairs (TSP) method for ...

    African Journals Online (AJOL)

    An entropy-based improved k-top scoring pairs (TSP) (Ik-TSP) method was presented in this study for the classification and prediction of human cancers based on gene-expression data. We compared Ik-TSP classifiers with 5 different machine learning methods and the k-TSP method based on 3 different feature selection ...

  16. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end results of data mining are learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets in order to extract features. Classification models were constructed that manipulate the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiment (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
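
    The bagging-over-CART setup translates directly into scikit-learn (the study itself used Weka). In the hedged sketch below the tweet features are random stand-ins, so the scores only illustrate the mechanics, not the reported 85.20%.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.random((300, 50))        # e.g. 300 tweets x 50 text features
        y = rng.integers(0, 3, 300)      # computers / mobile phones / cameras

        cart = DecisionTreeClassifier(random_state=0)
        bagged = BaggingClassifier(cart, n_estimators=50, random_state=0)
        print("CART   :", cross_val_score(cart, X, y, cv=5).mean())
        print("bagging:", cross_val_score(bagged, X, y, cv=5).mean())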

  17. Network Forensics Method Based on Evidence Graph and Vulnerability Reasoning

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2016-11-01

    Full Text Available As the Internet becomes larger in scale, more complex in structure and more diversified in traffic, the number of crimes that utilize computer technologies is also increasing at a phenomenal rate. To react to the increasing number of computer crimes, the field of computer and network forensics has emerged. The general purpose of network forensics is to find malicious users or activities by gathering and dissecting firm evidence about computer crimes, e.g., hacking. However, due to the large volume of Internet traffic, not all the traffic captured and analyzed is valuable for investigation or confirmation. After analyzing some existing network forensics methods to identify common shortcomings, we propose in this paper a new network forensics method that uses a combination of network vulnerability and network evidence graphs. In our proposed method, we use vulnerability evidence and a reasoning algorithm to reconstruct attack scenarios and then backtrack the network packets to find the original evidence. Our proposed method can reconstruct attack scenarios effectively and identify multi-staged attacks through evidential reasoning. Results of experiments show that the evidence graph constructed using our method is more complete and credible while possessing reasoning capability.

  18. Seamless Method- and Model-based Software and Systems Engineering

    Science.gov (United States)

    Broy, Manfred

    Today engineering software intensive systems is still more or less handicraft or at most at the level of manufacturing. Many steps are done ad-hoc and not in a fully systematic way. Applied methods, if any, are not scientifically justified, not justified by empirical data and as a result carrying out large software projects still is an adventure. However, there is no reason why the development of software intensive systems cannot be done in the future with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first one aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods coming together with a comprehensive support by tools as well as deep insights into the obstacles of developing software intensive systems and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when which method is ready for application. In the following we argue that there is scientific evidence and enough research results so far to be confident that solid engineering of software intensive systems can be achieved in the future. However, yet quite a number of scientific research problems have to be solved.

  19. Robust and Adaptive Block Tracking Method Based on Particle Filter

    Directory of Open Access Journals (Sweden)

    Bin Sun

    2015-10-01

    Full Text Available In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management, digital surveillance and so on. However, problems such as abrupt object motion, occlusion and complex target structures bring difficulties to academic study and engineering application. In this paper, a fragments-based tracking method using a block relationship coefficient is proposed. In this method, we use a particle filter algorithm, and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted from a single block alone; the relationships between the current block and its neighboring blocks are extracted to describe the variation of the block. Each block is weighted according to the block relationship coefficient when the block votes for the best-matching region in the next frame. This method can make full use of the relationships between blocks. The experimental results demonstrate that our method provides good performance under occlusion and abrupt posture variation.

  20. Simple Radiowave-Based Method for Measuring Peripheral Blood Flow

    Data.gov (United States)

    National Aeronautics and Space Administration — Project objective is to design small radio frequency based flow probes for the measurement of blood flow velocity in peripheral arteries such as the femoral artery...

  1. Visual tracking method based on cuckoo search algorithm

    Science.gov (United States)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
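
    A bare-bones CS loop with Mantegna-style Lévy steps is sketched below against a toy objective; a tracker would instead score candidate target states against the image. The population size, step scale and abandonment fraction pa are textbook defaults, not the paper's settings.

        import numpy as np
        from math import gamma, pi, sin

        def levy(rng, dim, beta=1.5):
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

        def cuckoo_search(f, dim, n=15, iters=300, pa=0.25, seed=0):
            rng = np.random.default_rng(seed)
            nests = rng.uniform(-5, 5, (n, dim))
            fit = np.apply_along_axis(f, 1, nests)
            for _ in range(iters):
                best = nests[fit.argmin()]
                for i in range(n):               # Lévy flight biased toward the best nest
                    cand = nests[i] + 0.01 * levy(rng, dim) * (nests[i] - best)
                    c = f(cand)
                    if c < fit[i]:
                        nests[i], fit[i] = cand, c
                worst = rng.random(n) < pa       # abandon a fraction of nests
                if worst.any():
                    nests[worst] = rng.uniform(-5, 5, (worst.sum(), dim))
                    fit[worst] = np.apply_along_axis(f, 1, nests[worst])
            return nests[fit.argmin()], fit.min()

        print(cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=3))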

  2. Silicon-Based Anode and Method for Manufacturing the Same

    Science.gov (United States)

    Yushin, Gleb Nikolayevich (Inventor); Luzinov, Igor (Inventor); Zdyrko, Bogdan (Inventor); Magasinski, Alexandre (Inventor)

    2017-01-01

    A silicon-based anode comprising silicon, a carbon coating that coats the surface of the silicon, a polyvinyl acid that binds to at least a portion of the silicon, and vinylene carbonate that seals the interface between the silicon and the polyvinyl acid. Because of its properties, polyvinyl acid binders offer improved anode stability, tunable properties, and many other attractive attributes for silicon-based anodes, which enable the anode to withstand silicon cycles of expansion and contraction during charging and discharging.

  3. An Emotion-Based Method to Perform Algorithmic Composition

    OpenAIRE

    Huang, Chih-Fang; Lin, En-Ju

    2013-01-01

    Generative music using algorithmic composition techniques has been developed over many years. However, it usually lacks an emotion-based mechanism for generating music with specific affective features. In this article an automated music algorithm is performed based on Prof. Phil Winsor's "MusicSculptor" software, with proper emotion-parameter mapping to drive the music content in a specific context, using distributions of various music parameters under different probability controls, in order to...

  4. Development of a biosensor based on a prism coupler method

    Science.gov (United States)

    Couvignou, Stephen; Trudel, Luc; Lessard, Roger A.; Brisson, Louise

    2004-09-01

    Agar is a complex mixture of polysaccharides. It is widely used in microbiology as a jellifying agent to obtain solid culture media that can support the growth of bacteria, yeasts and other microorganisms. Although this substrate is in daily use in biology and microbiology, its physico-chemical properties are still not well known. Consequently, a characterisation of a solid bacteriological medium, infected or not by Escherichia coli type bacteria, can be realized on thin films obtained by spin coating. The present study, using a prism coupler method, provides a sensitive tool to detect photo-physical changes in the medium. This method could allow us to detect the presence of bacteria. The results obtained are reproducible on different samples and may be qualitatively compared with conventional imagery. This method should enable rapid detection of bacterial growth on the studied medium.

  5. METHODICAL BASES OF ESTIMATION OF GLOMERULAR FILTRATION RATE IN UROLOGICAL PRACTICE

    Directory of Open Access Journals (Sweden)

    M. M. Batiushin

    2017-01-01

    Full Text Available The article presents a review of methodological issues in the estimation of glomerular filtration rate in urological practice. The author examines the current international and national recommendations, in particular those of KDIGO, the Scientific Society of Nephrologists of Russia and the Association of Urologists of Russia, and the results of comparative analyses of different methods of assessing glomerular filtration rate. It is shown that current calculation-based methods of assessing glomerular filtration rate have advantages over clearance techniques. The advantages and disadvantages of calculating the glomerular filtration rate with the Cockcroft-Gault and MDRD formulas are discussed. The author lists the pathological conditions in urological practice in which there is a need to assess glomerular filtration rate, and gives nomograms and links to online calculators for quick and easy calculation of the glomerular filtration rate.
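
    One of the two formulas named above is simple enough to state inline. This is the widely published Cockcroft-Gault estimate of creatinine clearance (inputs in years, kg and mg/dL, yielding mL/min); the example values are arbitrary and the snippet is for orientation, not clinical use.

        def cockcroft_gault(age, weight_kg, serum_creatinine_mg_dl, female=False):
            """Estimated creatinine clearance in mL/min."""
            crcl = (140 - age) * weight_kg / (72.0 * serum_creatinine_mg_dl)
            return crcl * 0.85 if female else crcl

        print(f"{cockcroft_gault(65, 70, 1.2):.0f} mL/min")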

  6. Statistical treatment of experimental data: bases and elementary methods

    International Nuclear Information System (INIS)

    Zuppiroli, Libero.

    1976-12-01

The most elementary statistical methods for experimental data treatment in current use in research laboratories are examined from a critical point of view. An introductory discussion of some basic concepts of probability theory is presented. A few estimation methods are examined and applied to error calculations; the fit of a curve to experimental data is also examined as an estimation problem. A few examples are shown to point out the dangers of misusing the mean-squares method. Successions of events randomly distributed in time, or sets of points randomly distributed in space (Poisson processes), are also studied. The results of this study are used to solve problems related to the calculation of errors in counting.

  7. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
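
    A 2D toy of the alternation described above, under simplifying assumptions (a circle instead of a general surface, a wrap-around Laplacian, nearest-grid interpolation for the closest-point extension); this is a schematic sketch, not the authors' implementation:

```python
import numpy as np

h = 0.05                                    # grid spacing of the embedding volume
x = np.arange(-1.5, 1.5 + h, h)
X, Y = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)
R[R == 0] = 1e-12
CPx, CPy = X / R, Y / R                     # closest point on the unit circle
theta = np.arctan2(CPy, CPx)

u = np.cos(3 * theta) + 0.2 * np.random.randn(*theta.shape)  # noisy surface data

dt = 0.1 * h**2
for _ in range(100):
    # one explicit heat-equation (Gaussian diffusion) step in the volume
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    u = u + dt * lap
    # closest-point extension: re-sample u at the closest surface points
    i = np.clip(np.round((CPy + 1.5) / h).astype(int), 0, len(x) - 1)
    j = np.clip(np.round((CPx + 1.5) / h).astype(int), 0, len(x) - 1)
    u = u[i, j]
```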

  8. Identifying Method of Drunk Driving Based on Driving Behavior

    Directory of Open Access Journals (Sweden)

    Xiaohua Zhao

    2011-05-01

Full Text Available Drunk driving is one of the leading causes of traffic crashes, and numerous issues remain to be resolved with the current approach to identifying it. Driving behavior, which can be observed in real time, has been extensively researched as a way to identify impaired driving. In this paper, drives with BACs above 0.05% were defined as the drunk-driving state. A detailed comparison was made between normal driving and drunk driving, and a driving-simulator experiment was designed to collect the driving performance data of both groups. Based on an analysis of the characteristic effects of alcohol on driving performance, seven significant indicators were extracted, and drunk driving was identified by the Fisher discriminant method. The discriminant function demonstrated a high classification accuracy, and the optimal critical score to differentiate the normal from the drinking state was found to be 0. The evaluation result verifies the accuracy of the classification method.
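
    A small sketch of the classification step under stated assumptions: seven synthetic driving-performance indicators per drive, Fisher discriminant analysis via scikit-learn, and a critical score of 0 as the decision threshold. The data and effect sizes are invented, not the paper's.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical indicators (e.g., lane deviation, speed variability, ...).
sober = rng.normal(0.0, 1.0, size=(100, 7))
drunk = rng.normal(0.8, 1.2, size=(100, 7))   # shifted, noisier distribution
X = np.vstack([sober, drunk])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)             # discriminant score per drive
pred = (scores > 0).astype(int)               # critical score 0 as threshold
print("training accuracy:", (pred == y).mean())
```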

  9. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast.Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets.Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  10. Segmentation of MRI Volume Data Based on Clustering Method

    Directory of Open Access Journals (Sweden)

    Ji Dongsheng

    2016-01-01

Full Text Available Here we analyze the difficulties of segmenting left ventricle MR images without tag lines, and propose an algorithm for automatic segmentation of the left ventricle (LV) internal and external profiles: an Incomplete K-means and Category Optimization (IKCO) method. Initially, the algorithm uses a Hough transform to automatically locate the initial contour of the LV, together with a simple approach to complete data subsampling and initial center determination. Next, according to the clustering rules, the proposed algorithm completes the MR image segmentation. Finally, the algorithm uses a category optimization method to improve the segmentation results. Experiments show that the algorithm provides good segmentation results.
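
    A generic K-means stand-in for the clustering step, assuming scikit-learn; the IKCO refinements described above (incomplete subsampling, initial-centre determination, category optimization) are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster pixel intensities of a synthetic "MR slice" into 3 tissue classes.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 5, 2000) for m in (30, 90, 160)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
print([int(img[labels == k].mean()) for k in range(3)])  # recovered class means
```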

  11. High-Throughput Sequencing Based Methods of RNA Structure Investigation

    DEFF Research Database (Denmark)

    Kielpinski, Lukasz Jan

In this thesis we describe the development of four related methods for RNA structure probing that utilize massive parallel sequencing. Using them, we were able to gather structural data for multiple, long molecules simultaneously. First, we have established an easy to follow experimental and comp... with known priming sites.

  12. Kernel method for clustering based on optimal target vector

    International Nuclear Information System (INIS)

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and anti-ferromagnetic and (ii) dependent on the whole data set, not only on pairs of samples. Couplings are determined by exploiting the notion of an optimal target vector, introduced here, which links kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris data set and in benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering.

  13. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    Science.gov (United States)

    Tickner

    2000-10-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal.

  14. Experiences with a compound method for estimating the time since death. II. Integration of non-temperature-based methods.

    Science.gov (United States)

    Henssge, C; Althaus, L; Bolt, J; Freislederer, A; Haffner, H T; Henssge, C A; Hoppe, B; Schneider, V

    2000-01-01

The period since death was estimated at the scene in 72 consecutive cases using the temperature-based nomogram method as the primary method, supplemented by examination of criteria such as lividity, rigor mortis, mechanical and electrical excitability of skeletal muscle, and chemical excitability of the iris. A case-oriented, computer-assisted selection of the non-temperature-based methods and integration of the results into a common result of the compound method was made following a dedicated logic. The limits of the period since death as estimated by the nomogram were improved in 49 cases by including the non-temperature-based methods, which also provided results in 4 cases where the temperature method could not be used. In a further 6 cases the non-temperature-based methods confirmed the limits estimated by the temperature method, but in 14 cases a useful result could not be obtained. In only one of the cases investigated was the upper limit of the period since death, as estimated by the criterion of re-establishment of rigor (8 h post-mortem), in contradiction with the period determined by the police investigations (9.4 h post-mortem).

  15. Comparison of Gas Dehydration Methods based on Energy ...

    African Journals Online (AJOL)

    PROF HORSFALL


  16. Prediction of epitopes using neural network based methods

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2011-01-01

    In this paper, we describe the methodologies behind three different aspects of the NetMHC family for prediction of MHC class I binding, mainly to HLAs. We have updated the prediction servers, NetMHC-3.2, NetMHCpan-2.2, and a new consensus method, NetMHCcons, which, in their previous versions, hav...

  17. Office-based sperm concentration: A simplified method for ...

    African Journals Online (AJOL)

    Methods: Semen samples from 51 sperm donors were used. Following swim-up separation, the sperm concentration of the retrieved motile fraction was counted, as well as progressive motile sperm using a standardised wet preparation. The number of sperm in a 10 μL droplet covered with a 22 × 22 mm coverslip was ...

  18. Biometric and Emotion Identification: An ECG Compression Based Method

    Directory of Open Access Journals (Sweden)

    Susana Brás

    2018-04-01

Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. The method therefore adequately identifies the person and his/her emotion. The proposed method is also flexible and may be adapted to different problems by altering the templates used to train the model.

  19. Biometric and Emotion Identification: An ECG Compression Based Method.

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J

    2018-01-01

We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. The method therefore adequately identifies the person and his/her emotion. The proposed method is also flexible and may be adapted to different problems by altering the templates used to train the model.
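
    A minimal sketch of the compression-based flavor of steps (2)-(3), using the normalized compression distance with zlib as a stand-in for the paper's conditional compressor; the symbolic records below are invented for illustration.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance between two symbolic records."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def classify_1nn(query: bytes, references: dict) -> str:
    # references: class label -> stored symbolic (quantized) ECG record
    return min(references, key=lambda label: ncd(query, references[label]))

refs = {"person_A": b"abcabcabcabc" * 50, "person_B": b"xyzxyzxyzxyz" * 50}
print(classify_1nn(b"abcabcabc" * 40, refs))   # -> person_A
```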

  20. A Sector Capacity Assessment Method Based on Airspace Utilization Efficiency

    Science.gov (United States)

    Zhang, Jianping; Zhang, Ping; Li, Zhen; Zou, Xiang

    2018-02-01

Sector capacity is one of the core factors affecting the safety and efficiency of the air traffic system. Most previous sector capacity assessment methods considered only the air traffic controller's (ATCO's) workload; such methods are limited, since they concern only safety, and are not accurate enough. In this paper, we employ the integrated quantitative index system proposed in one of our previous publications. We use principal component analysis (PCA) to find the principal indicators among the indicators, and use them to calculate the airspace utilization efficiency. In addition, we use a series of fitting functions to test and define the correlation between the density of air traffic flow and the airspace utilization efficiency. The sector capacity is then defined as the air traffic flow density corresponding to the maximum airspace utilization efficiency. We also use the same series of fitting functions to test the correlation between the density of air traffic flow and the ATCOs' workload. We examine our method with a large amount of empirical operating data from the Chengdu Control Center and obtain a reliable sector capacity value. Experimental results also show the superiority of our method over those that consider only the ATCO's workload, in terms of a better correlation between the airspace utilization efficiency and the density of air traffic flow.
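
    A hedged sketch of the capacity-from-fitted-maximum idea: fit a quadratic to hypothetical efficiency-versus-density samples and read off the density at the fitted peak. The paper's indicator system and family of fitting functions are not reproduced; the numbers are invented.

```python
import numpy as np

density = np.array([5, 10, 15, 20, 25, 30, 35], dtype=float)      # flights in sector
efficiency = np.array([0.30, 0.52, 0.68, 0.78, 0.80, 0.74, 0.60])  # utilization

c2, c1, c0 = np.polyfit(density, efficiency, 2)  # quadratic fit, highest degree first
capacity = -c1 / (2 * c2)                        # vertex of the parabola (c2 < 0)
print(f"estimated sector capacity ~ {capacity:.1f} flights")
```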

  1. Novel curcumin–based method of fluoride determination – the ...

    African Journals Online (AJOL)

    Fluoride concentrations above and below certain limits are deleterious to human health, having been implicated in diseases such as fluorosis, arthritis, caries, cancers etc. Better methods of its qualitative and quantitative determination are therefore necessary. A novel, less toxic and more environmentally friendly analytical ...

  2. Early diagnostic method for sepsis based on neutrophil MR imaging

    Directory of Open Access Journals (Sweden)

    Shanhua Han

    2015-06-01

Conclusion: Mouse and human neutrophils were labelled more effectively by Mannan-coated SPIONs in vitro than by Feridex. Sepsis-analog neutrophils labelled with Mannan-coated SPIONs could be efficiently detected on MR images, which may serve as an early diagnostic method for sepsis.

  3. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoded imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
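
    As a rough illustration of TV-regularized restoration (a gradient-descent sketch on a smoothed TV functional, not the Vogel-Oman fixed point solver used in the paper; parameters are illustrative):

```python
import numpy as np

def tv_denoise(f, lam=0.1, tau=0.01, eps=1e-3, iters=300):
    """Minimize 0.5*||u - f||^2 + lam * TV_eps(u) by gradient descent,
    where TV_eps is a smoothed total variation."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, 1) - u              # forward differences
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)   # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))  # divergence
        u = u - tau * ((u - f) - lam * div)     # gradient step
    return u

denoised = tv_denoise(np.random.rand(64, 64))
```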

  4. Objective, Way and Method of Faculty Management Based on Ergonomics

    Science.gov (United States)

    WANG, Hong-bin; Liu, Yu-hua

    2008-01-01

The core problem that influences the educational quality of talents in colleges and universities is faculty management. Without an advanced faculty, it is difficult to cultivate excellent talents. With regard to some problems in the present faculty construction of colleges and universities, this paper puts forward new objectives, ways and methods of…

  5. Fringe image analysis based on the amplitude modulation method.

    Science.gov (United States)

    Gai, Shaoyan; Da, Feipeng

    2010-05-10

A novel phase-analysis method is proposed. To obtain the fringe order of a fringe image, an amplitude-modulation fringe pattern is employed in combination with the phase-shift method. The primary phase value is obtained by a phase-shift algorithm, and the fringe-order information is encoded in the amplitude-modulation fringe pattern. Different from other methods, the amplitude-modulation fringe identifies the fringe order by the amplitude of the fringe pattern. In an amplitude-modulation fringe pattern, each fringe has its own amplitude; thus, the order information is integrated into one fringe pattern, and the absolute fringe phase can be calculated correctly and quickly from the amplitude-modulation fringe image. The detailed algorithm is given, and the error analysis of this method is also discussed. Experimental results are presented from a full-field shape measurement system in which the data were processed using the proposed algorithm. (c) 2010 Optical Society of America.
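
    The phase-shift part of the pipeline is standard; here is a minimal sketch of four-step phase retrieval on synthetic fringes. The amplitude-modulation encoding of fringe order, which is the paper's contribution, is not reproduced.

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 512)                 # true phase ramp
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [100 + 50 * np.cos(x + s) for s in shifts]     # four phase-shifted patterns

# Four-step algorithm: wrapped primary phase in (-pi, pi]
wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
```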

  6. Piezoelectric Accelerometers Modification Based on the Finite Element Method

    DEFF Research Database (Denmark)

    Liu, Bin; Kriegbaum, B.

    2000-01-01

    The paper describes the modification of piezoelectric accelerometers using a Finite Element (FE) method. Brüel & Kjær Accelerometer Type 8325 is chosen as an example to illustrate the advanced accelerometer development procedure. The deviation between the measurement and FE simulation results...

  7. A Hybrid Positioning Method Based on Hypothesis Testing

    DEFF Research Database (Denmark)

    Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed

    2012-01-01

    We consider positioning in the scenario where only two reliable range estimates, and few less reliable power observations are available. Such situations are difficult to handle with numerical maximum likelihood methods which require a very accurate initialization to avoid being stuck into local...

  8. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

Image denoising is one of the major research topics in image processing. An efficient image denoising method is one in which a compromise has to be found between noise reduction ...

  9. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)


with MATLAB Simulink Power System Toolbox. The simulation results of this novel technique, compared to other similar methods, are found quite satisfactory, assuring good filtering characteristics and high system stability. Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic ...

  10. Solar spectra analysis based on the statistical moment method

    Czech Academy of Sciences Publication Activity Database

    Druckmüller, M.; Klvaňa, Miroslav; Druckmüllerová, Z.

    2007-01-01

Vol. 31, No. 1 (2007), p. 297-307 ISSN 1845-8319. [Dynamical processes in the solar atmosphere. Hvar, 24.09.2006-29.09.2006] R&D Projects: GA ČR GA205/04/2129 Institutional research plan: CEZ:AV0Z10030501 Keywords: spectral analysis * method Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  11. New device based on the super spatial resolution (SSR) method

    International Nuclear Information System (INIS)

    Soluri, A.; Atzeni, G.; Ucci, A.; Bellone, T.; Cusanno, F.; Rodilossi, G.; Massari, R.

    2013-01-01

It has recently been described that innovative methods, namely Super Spatial Resolution (SSR), can be used to improve scintigraphic imaging. The aim of SSR techniques is to enhance the resolution of an imaging system using information from several images. In this paper we describe a new experimental apparatus that could be used for molecular imaging and small-animal imaging: a new, completely automated device that uses the SSR method and provides images with better spatial resolution than the original. Preliminary small-animal imaging studies confirm the feasibility of a very-high-resolution scintigraphic imaging system and the possibility of gamma cameras using the SSR method for functional imaging applications. -- Highlights: • Super spatial resolution yields a high-resolution image from scintigraphic images. • The resolution improvement depends on the signal-to-noise ratio of the original images. • SSR shows a significant improvement in spatial resolution in scintigraphic images. • The SSR method is potentially usable with all scintigraphic devices

  12. New method of contour-based mask-shape compiler

    Science.gov (United States)

    Matsuoka, Ryoichi; Sugiyama, Akiyuki; Onizawa, Akira; Sato, Hidetoshi; Toyoda, Yasutaka

    2007-10-01

We have developed a new method of accurately profiling a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the Mask CD-SEM, adopting an edge detection algorithm as the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with a conventional image processing method for contour profiling, it is possible to create profiles with much higher accuracy, comparable with CD-SEM for semiconductor device CD measurement. In this report, we introduce the algorithm in general, the experimental results and the application in practice. As shrinkage of the design rule for semiconductor devices has advanced further, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase of the data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge of mask making cost have become a big concern to device manufacturers. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with the problem, we propose a method for a DFM solution in which two-dimensional data are extracted for an error-free practical simulation by precise reproduction of a real mask shape, in addition to the mask data simulation. The flow, centered around the design data, is fully automated and provides an environment for optimization and verification of fully automated model calibration with much less error. It also allows complete consolidation of input and output functions with an EDA system by constructing a design-data-oriented system structure. This method can therefore be regarded as a strategic DFM approach in semiconductor metrology.

  13. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    Science.gov (United States)

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

The development of new methods for age estimation has become an urgent issue over time because of increasing immigration, in order to accurately estimate the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars, and the results provided by the different observers were compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57% for the right lower third molar (M48) and between 23 and 49% for the left lower third molar (M38). The chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply when used by personnel without adequate training, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  14. Assessment of Cultivation Method for Energy Beet Based on LCA Method

    OpenAIRE

    Zhang, Chunfeng; Liu, Feng; Zu, Yuangang; Meng, Qingying; Zhu, Baoguo; Wang, Nannan

    2014-01-01

In order to establish an energy supply system coupled with the environment, the production technology of sugar beets was explored as a biological energy source. Low-humic andosol was used as the experimental soil, the planting method was direct planting, and the cultivation technique was the minimum-tillage direct-planting method. The controls were conventional-tillage transplanting and no-tillage direct planting. The results demonstrated that the energy cost of no tillage and a d...

  15. Method for secure electronic voting system: face recognition based approach

    Science.gov (United States)

    Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran

    2017-06-01

In this paper, we propose a framework for a low-cost, secure electronic voting system based on face recognition. Essentially, Local Binary Patterns (LBP) are used for face feature characterization in texture format, followed by a chi-square measure for image classification. Two parallel systems, based on smartphone and web applications, are developed for the face learning and verification modules. The proposed system has two-tier security, using a person ID followed by face verification, and a class-specific threshold is associated with face verification to control the security level. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracies. Consequently, our proposed system provides a secure, hassle-free voting system that is less intrusive compared with other biometrics.
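
    A hedged sketch of the LBP-plus-chi-square pipeline, assuming scikit-image is available; the acceptance threshold is illustrative, and the paper's class-specific thresholds and smartphone/web integration are not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram as a face-texture descriptor."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Verification: accept if the distance to the enrolled template is small enough.
enrolled = lbp_histogram(np.random.rand(64, 64))   # stand-in face crops
probe = lbp_histogram(np.random.rand(64, 64))
print("match" if chi_square(enrolled, probe) < 0.05 else "reject")
```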

  16. Application of risk-based inspection methods for cryogenic equipment

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Risk-based Inspection (RBI) is widely applied across the world as part of Pressure Equipment Integrity Management, especially in the oil and gas industry, to generally reduce costs compared with time-based approaches and assist in assigning resources to the most critical equipment. One of the challenges in RBI is to apply it for low temperature and cryogenic applications, as there are usually no degradation mechanisms by which to determine a suitable probability of failure in the overall risk assessment. However, the assumptions used for other degradation mechanisms can be adopted to determine, qualitatively and semi-quantitatively, a consequence of failure within the risk assessment. This can assist in providing a consistent basis for the assumptions used in ensuring adequate process safety barriers and determining suitable sizing of relief devices. This presentation will discuss risk-based inspection in the context of cryogenic safety, as well as present some of the considerations for the risk assessme...

  17. An indicator-based method for quantifying farm multifunctionality

    DEFF Research Database (Denmark)

    Andersen, Peter Stubkjær; Vejre, Henrik; Dalgaard, Tommy

    2013-01-01

multifunctionality at farm level. Four main farm functions – production, residence, provision of wildlife habitats, and recreation – are selected to describe multifunctionality. In the quantification process, indicators are identified to produce four aggregated function scores based on farm characteristics and activities. ... The farm data that support the indicators are derived from an interview survey conducted in 2008. The aggregated function scores vary with farm size as well as farm type; smaller, hobby-based farms in general score highest in the residence function, whereas bigger, full-time farms score highest...

  18. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  19. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
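
    A minimal numerical sketch of this kind of image-based correction, assuming SciPy is available: the scatter component is approximated by convolving the attenuation-corrected image with a Gaussian kernel and scaling by a constant scatter fraction. The paper's calibrated scatter function and spatially varying scatter-fraction image are replaced by constants here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_like_correction(img_ac, scatter_fraction=0.3, sigma=8.0):
    """Schematic image-based scatter correction: estimate scatter by blurring
    the attenuation-corrected image and subtract a fraction of it."""
    scatter = scatter_fraction * gaussian_filter(img_ac, sigma)
    return np.clip(img_ac - scatter, 0, None)

corrected = ibsc_like_correction(np.random.poisson(100, (128, 128)).astype(float))
```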

  20. Influence of crossover methods used by genetic algorithm-based ...

    Indian Academy of Sciences (India)

    This paper deals with solving of the selective harmonic equations (SHE) using binary coded GA specific to knowledge based neighbourhood multipoint crossover technique. This is directly related to the switching moments of the multilevel inverter under consideration. Although the previous root-finding techniques such as ...

  1. Multicriteria decision-making method based on a cosine similarity ...

    African Journals Online (AJOL)

The cosine similarity measure is often used in information retrieval, citation analysis, and automatic classification; however, it scarcely deals with trapezoidal fuzzy information or multicriteria decision-making problems. For this purpose, a cosine similarity measure between trapezoidal fuzzy numbers is proposed based on ...
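
    For intuition, here is a plain vector cosine over trapezoidal fuzzy numbers written as 4-tuples (a1, a2, a3, a4); the measure actually proposed in the paper may be defined or weighted differently.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two trapezoidal fuzzy numbers as 4-vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity((0.1, 0.2, 0.3, 0.4), (0.2, 0.3, 0.4, 0.5)))
```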

  2. A Methods-Based Biotechnology Course for Undergraduates

    Science.gov (United States)

    Chakrabarti, Debopam

    2009-01-01

    This new course in biotechnology for upper division undergraduates provides a comprehensive overview of the process of drug discovery that is relevant to biopharmaceutical industry. The laboratory exercises train students in both cell-free and cell-based assays. Oral presentations by the students delve into recent progress in drug discovery.…

  3. Intrusion detection method based on nonlinear correlation measure

    NARCIS (Netherlands)

    Ambusaidi, Mohammed A.; Tan, Zhiyuan; He, Xiangjian; Nanda, Priyadarsi; Lu, Liang Fu; Jamdagni, Aruna

    2014-01-01

Cyber crimes and malicious network activities have posed serious threats to the entire internet and its users. This issue is becoming more critical, as network-based services are more widespread and closely related to our daily life. Thus, it has raised a serious concern in individual internet

  4. A method for designing IRT-based item banks

    NARCIS (Netherlands)

    Boekkooi-Timminga, Ellen

    1990-01-01

    Since 1985 several procedures for computerized test construction using linear programing techniques have been described in the literature. To apply these procedures successfully, suitable item banks are needed. The problem of designing item banks based on item response theory (IRT) is addressed. A

  5. Spare Parts Demand Analysis Method Based on Field Replaceable Unit

    Science.gov (United States)

    Zhu, Min; Xu, Zijian; Guo, Ming; Li, Biao

    2017-10-01

The approach is based on reliability-oriented spare parts optimization modeling; the influence of failure rate, repairability, importance and availability on spare parts reserves is added to the model, and the weight of each spare part is calculated by the fuzzy analytic hierarchy process.

  6. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  7. Air Base Defense: Different Times Call for Different Methods

    Science.gov (United States)

    2006-12-01

    value.27 As the war rages on in Iraq, Matthew Levitt argues that the U.S. cannot afford to be distracted by the situation there, as terrorists may...serious and more difficult to defend.101 Air bases typically employ infrared and thermal imagers, security sentries, canine patrols and motion-tracking

  8. Novel nanorods based on PANI / PEO polymers using electrospinning method

    International Nuclear Information System (INIS)

    Al-Hazeem, Nabeel Z.; Ahmed, Naser M.; Matjafri, M. Z.; Sabah, Fayroz A.; Rasheed, Hiba S.

    2016-01-01

In this work, we fabricated nanorods by applying an electric potential to a polymeric solution of poly(ethylene oxide) (PEO) and polyaniline (PANI) using the electrospinning method. The samples were tested by field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence. The results showed the emergence of nanorods in the sample on a glass substrate, with diameters ranging between 52.78 and 122.40 nm and lengths between 1.15 and 1.32 μm. These results are reported for the first time; nanorods of these polymers had never before been fabricated by the method used in this research.

  9. Realization of Chinese word segmentation based on deep learning method

    Science.gov (United States)

    Wang, Xuefei; Wang, Mingjiang; Zhang, Qiquan

    2017-08-01

In recent years, with its rapid development, deep learning has been widely used in the field of natural language processing. In this paper, I use deep learning to achieve Chinese word segmentation with a large-scale corpus, eliminating the need to construct additional manual features. In the process of Chinese word segmentation, the first step is to process the corpus: word2vec is used to obtain word embeddings of the corpus, with each character represented by a 50-dimensional vector. The embedding features are then fed to a bidirectional LSTM; a linear layer is added on top of the hidden-layer output, and a CRF layer is added to obtain the model implemented in this paper. Experimental results on the 2014 People's Daily corpus show that the method achieves a satisfactory accuracy.
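
    A minimal sketch of the embedding-BiLSTM-linear stack under stated assumptions (PyTorch, random character IDs, four BMES tags); the CRF layer and word2vec pre-training described above are omitted for brevity.

```python
import torch
import torch.nn as nn

# BMES tagging for segmentation: B=begin, M=middle, E=end, S=single character.
class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden=128, n_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):             # (batch, seq_len)
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)                   # (batch, seq_len, n_tags)

model = BiLSTMSegmenter(vocab_size=5000)
scores = model(torch.randint(0, 5000, (2, 10)))
tags = scores.argmax(-1)                     # predicted BMES tag per character
```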

  10. A tree-based method of analysis for prospective studies.

    Science.gov (United States)

    Zhang, H; Holford, T; Bracken, M B

    1996-01-15

    Prospective studies often involve rare events as study outcomes, and a primary concern is to identify risk factors and risk groups associated with the outcomes. We discuss practical solutions to risk factor analyses in prospective studies and address strategies to determine tree structures, to estimate relative risks, and to manage missing data in connection with some important epidemiologic problems. Some of the basic ideas for our strategies follow from work of Breiman, Friedman, Olshen, and Stone, although we propose extensions to their methods to resolve some practical problems that arise in implementation of these methods in epidemiologic studies. To illustrate these ideas, we analyse low birthweight associated risk factors with use of a data set from the Yale Pregnancy Outcome Study.
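
    A toy illustration of tree-based risk-group identification in the CART spirit (Breiman et al.), assuming scikit-learn; the features and outcome are synthetic stand-ins for epidemiologic risk factors, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                        # e.g. age, exposure, BMI
risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))  # risk depends on two factors
y = rng.random(500) < risk                           # binary study outcome

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1", "f2"]))  # the risk groups
```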

  11. Novel nanorods based on PANI / PEO polymers using electrospinning method

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hazeem, Nabeel Z., E-mail: nabeelnano333@gmail.com [Nano-Optoelectronics Research and Technology Laboratory, School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Ministry of Education, the General Directorate for Educational Anbar (Iraq); Ahmed, Naser M.; Matjafri, M. Z. [Nano-Optoelectronics Research and Technology Laboratory, School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Sabah, Fayroz A. [Nano-Optoelectronics Research and Technology Laboratory, School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Department of Electrical Engineering, College of Engineering, Al-Mustansiriya University, Baghdad (Iraq); Rasheed, Hiba S. [Nano-Optoelectronics Research and Technology Laboratory, School of Physics, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia); Department of Physics, College of Education, Al-Mustansiriya University, Baghdad (Iraq)

    2016-07-06

In this work, we fabricated nanorods by applying an electric potential to a polymeric solution of poly(ethylene oxide) (PEO) and polyaniline (PANI) using the electrospinning method. The samples were tested by field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD) and photoluminescence. The results showed the emergence of nanorods in the sample on a glass substrate, with diameters ranging between 52.78 and 122.40 nm and lengths between 1.15 and 1.32 μm. These results are reported for the first time; nanorods of these polymers had never before been fabricated by the method used in this research.

  12. Simplified theory of plastic zones based on Zarka's method

    CERN Document Server

    Hübel, Hartwig

    2017-01-01

    The present book provides a new method to estimate elastic-plastic strains via a series of linear elastic analyses. For a life prediction of structures subjected to variable loads, frequently encountered in mechanical and civil engineering, the cyclically accumulated deformation and the elastic plastic strain ranges are required. The Simplified Theory of Plastic Zones (STPZ) is a direct method which provides the estimates of these and all other mechanical quantities in the state of elastic and plastic shakedown. The STPZ is described in detail, with emphasis on the fact that not only scientists but engineers working in applied fields and advanced students are able to get an idea of the possibilities and limitations of the STPZ. Numerous illustrations and examples are provided to support the reader's understanding.

  13. A study of compositional verification based IMA integration method

    Science.gov (United States)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While this improves avionics system integration, it also increases the complexity of system test, so the IMA system test method needs to be simplified. The IMA system supports a modular platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, failures in an IMA system are difficult to isolate. Therefore, IMA system verification faces the critical problem of how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system; for a complex system, however, it is hard to test a huge, integrated avionics system completely. This paper therefore proposes using compositional verification theory in IMA system test, reducing the test effort, improving efficiency, and consequently economizing the costs of IMA system integration.

  14. A Vision-Based Method For Analyzing Yarn Evenness

    Directory of Open Access Journals (Sweden)

    SN Niles

    2017-02-01

Full Text Available Yarn evenness is a key factor in a yarn's performance and in the properties of the material produced from it, and the presence of defects in a yarn results in the deterioration of its quality and usability. While many methods are available to ascertain yarn evenness, many of them are tedious and operator-dependent in their results, while others, though less subjective and of high speed, are prohibitively expensive. This paper outlines a method which uses a cost-effective image capture device and image processing algorithms to process the captured images, generate a diameter variation plot, and analyse it to count the number of thick and thin places in the yarn.

  15. Assessment of walking speed by a goniometer-based method.

    Science.gov (United States)

    Maranesi, E; Barone, V; Fioretti, S

    2014-01-01

A quantitative gait analysis is essential to evaluate kinematic, kinetic and electromyographic gait patterns. These patterns are strongly related to the individual spatio-temporal parameters that characterize each subject. In particular, gait speed is one of the most important spatio-temporal gait parameters: it influences kinematic and kinetic parameters, and muscle activity too. The aim of the present study is to propose a new method to assess stride speed using only 1-degree-of-freedom electrogoniometers positioned on the hip and knee joints. The model validation is performed by comparing the model results with those automatically obtained from another gait analysis system, GAITRite. The results underline the model's reliability and show that essential spatio-temporal gait parameters, in particular the speed of each stride, can be determined during normal walking using only two 1-dof electrogoniometers. The method is easy to use and does not interfere with regular walking patterns.

  16. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  17. A Game-Based Method for Teaching Entrepreneurship

    OpenAIRE

    Sidhu, Ikhlaq; Johnsson, Charlotta; Singer, Ken; Suoranta, Mari

    2015-01-01

    Entrepreneurship is often thought of as the act of commercializing an innovation. In modern open economies, entrepreneurship is one of the key aspects for economic growth. Teaching and learning entrepreneurship is therefore of importance and schools, colleges and universities can play an important role by including entrepreneurship and innovation in their curricula. The Berkeley Method of Entrepreneurship is a holistic and student-centered teaching and learning approach that is hypothesized t...

  18. Housing Value Forecasting Based on Machine Learning Methods

    OpenAIRE

    Mu, Jingyi; Wu, Fang; Zhang, Aihua

    2014-01-01

In the era of big data, many urgent issues in all walks of life can be solved via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether to develop...

  19. Methods and apparatus for hydrogen based biogas upgrading

    DEFF Research Database (Denmark)

    2013-01-01

The present invention relates to an anaerobic process for biogas upgrading and hydrogen utilization comprising the use of acidic waste as co-substrate. In this process, H2 and CO2 will be converted to CH4, which will result in a lower CO2 content in the biogas. The invention relates to both in situ and ex situ methods of biogas upgrading. The invention further relates to a bioreactor comprising hollow fibre membranes.
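
    For reference, the hydrogenotrophic methanogenesis reaction behind this H2/CO2 conversion is the standard stoichiometry (not specific to this invention):

```latex
4\,\mathrm{H_2} + \mathrm{CO_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
```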

  20. A New Information Hiding Method Based on Improved BPCS Steganography

    OpenAIRE

    Sun, Shuliang

    2015-01-01

Bit-plane complexity segmentation (BPCS) steganography is advantageous in its capacity and imperceptibility. The important step in BPCS steganography is how to locate noisy regions in a cover image exactly. The regular method, black-and-white border complexity, is simple and easy, but it is not always useful, especially for periodic patterns. Run-length irregularity and border noisiness are introduced in this paper to work out this problem. Canonical Gray coding (CGC) is also used to ...
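
    A sketch of the black-and-white border complexity measure that BPCS uses to locate noisy regions; blocks above a threshold (commonly around 0.3) are treated as noise-like and usable for embedding. The run-length irregularity and border noisiness measures proposed in the paper are not reproduced.

```python
import numpy as np

def border_complexity(block):
    """Border complexity of a square binary block: the number of adjacent
    cell pairs with differing bits, normalized by the maximum possible."""
    block = np.asarray(block, dtype=int)
    changes = (np.sum(block[:, 1:] != block[:, :-1]) +
               np.sum(block[1:, :] != block[:-1, :]))
    n = block.shape[0]
    return changes / (2 * n * (n - 1))

checkerboard = np.indices((8, 8)).sum(axis=0) % 2
print(border_complexity(checkerboard))        # 1.0, maximal complexity
print(border_complexity(np.zeros((8, 8))))    # 0.0, flat (informative) region
```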

  1. Robot Path Planning Method Based on Improved Genetic Algorithm

    OpenAIRE

    Mingyang Jiang; Xiaojing Fan; Zhili Pei; Jingqing Jiang; Yulan Hu; Qinghu Wang

    2014-01-01

This paper presents an improved genetic algorithm for mobile robot path planning. The algorithm uses the artificial potential field method to establish the initial population and adds value weights to the fitness function, which increases the controllability of robot path length and path smoothness. In the new algorithm, a flip mutation operator is added, which helps ensure that individuals in the population represent collision-free paths. Simulation results show that the proposed algorithm can produce a smooth, collision-free pa...

  2. Housing Value Forecasting Based on Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Jingyi Mu

    2014-01-01

Full Text Available In the era of big data, many urgent issues in all walks of life can be solved via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether or not to develop real estate in the corresponding regions. In this paper, support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast the home values, and these algorithms are compared according to the predicted results. Experiments show that although the data set exhibits serious nonlinearity, SVM and LSSVM are superior to PLS in dealing with the problem of nonlinearity. The global optimal solution can be found and the best forecasting effect can be achieved by SVM because it solves a quadratic programming problem. Finally, the computational efficiencies of the algorithms are compared according to their computing times.
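
    An illustrative SVM-regression baseline, assuming scikit-learn; synthetic nonlinear data stands in for the Boston suburbs data set used in the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                     # housing-style features
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(0, 1, 500)   # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```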

  3. A fast, physically based method for mixing computations

    Science.gov (United States)

    Meunier, Patrice; Villermaux, Emmanuel

    2008-11-01

    We introduce a new numerical method for the study of diffusing scalar filaments in a 2D advection field. The position of the advected filament is computed kinematically, and the associated convection-diffusion problem is solved using the computed local stretching rate, assuming that the diffusing filament thickness is smaller than its local radius of curvature. This assumption reduces the numerical problem to the computation of a single variable along the filament, thus making the method extremely fast and applicable to any Peclet number. This method is then used for the mixing of a scalar in the chaotic regime of a Sine Flow, for which we relate the global quantities (spectra, concentration PDF) to the distributed stretching of the convoluted filament. The numerical results indicate that the PDF of the filament elongation is log-normal, a signature of random multiplicative processes. This property leads to exact analytical predictions for the spectrum of the field and for the PDF of the scalar concentration, in good agreement with the numerical results. These are thought to be generic of the chaotic mixing of scalars in the Batchelor regime.

  4. Polyvinylidene fluoride sensor-based method for unconstrained snoring detection.

    Science.gov (United States)

    Hwang, Su Hwan; Han, Chung Min; Yoon, Hee Nam; Jung, Da Woon; Lee, Yu Jin; Jeong, Do-Un; Park, Kwang Suk

    2015-07-01

    We established and tested a snoring detection method using a polyvinylidene fluoride (PVDF) sensor for accurate, fast, and motion-artifact-robust monitoring of snoring events during sleep. Twenty patients with obstructive sleep apnea participated in this study. The PVDF sensor was located between a mattress cover and mattress, and the patients' snoring signals were unconstrainedly measured with the sensor during polysomnography. The power ratio and peak frequency from the short-time Fourier transform were used to extract spectral features from the PVDF data. A support vector machine was applied to the spectral features to classify the data into either the snore or non-snore class. The performance of the method was assessed using manual labelling by three human observers as a reference. For event-by-event snoring detection, PVDF data that contained 'snoring' (SN), 'snoring with movement' (SM), and 'normal breathing' epochs were selected for each subject. As a result, the overall sensitivity and the positive predictive values were 94.6% and 97.5%, respectively, and there was no significant difference between the SN and SM results. The proposed method can be applied in both residential and ambulatory snoring monitoring systems.

  5. Methods for risk-based planning of O&M of wind turbines

    DEFF Research Database (Denmark)

    Nielsen, Jannie Sønderkær; Sørensen, John Dalsgaard

    2014-01-01

    In order to make wind energy more competitive, the big expenses for operation and maintenance must be reduced. Consistent decisions that minimize the expected costs can be made based on risk-based methods. Such methods have been implemented for maintenance planning for oil and gas structures..., a method based on limited memory influence diagrams and a method based on the partially observable Markov decision process. The methods with decision rules based on observed variables are easy to use, but can only take the most recent observation into account, when a decision is made. The other methods can... take more information into account, and especially, the method based on the Markov decision process is very flexible and accurate. A case study shows that the Markov decision process and decision rules based on the probability of failure are equally good and give lower costs compared to decision rules...
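
    As a toy illustration of the Markov-decision-process formulation favoured above, here is a minimal Python sketch (not the paper's model): deterioration states 0..3 (3 = failed), actions "wait" or "repair", solved by value iteration for minimum expected discounted cost. All transition probabilities, costs and the discount factor are illustrative assumptions:

        import numpy as np

        n_states, gamma = 4, 0.95
        P_wait = np.array([[0.90, 0.10, 0.00, 0.00],
                           [0.00, 0.85, 0.15, 0.00],
                           [0.00, 0.00, 0.75, 0.25],
                           [0.00, 0.00, 0.00, 1.00]])        # failure is absorbing
        P_repair = np.tile([1.0, 0.0, 0.0, 0.0], (n_states, 1))  # repair -> as new
        cost = {"wait": np.array([0., 0., 0., 50.]),          # failure penalty per step
                "repair": np.full(n_states, 5.0)}             # fixed repair cost

        V = np.zeros(n_states)
        for _ in range(500):                                  # value iteration
            V = np.minimum(cost["wait"] + gamma * P_wait @ V,
                           cost["repair"] + gamma * P_repair @ V)
        Q_wait = cost["wait"] + gamma * P_wait @ V
        Q_repair = cost["repair"] + gamma * P_repair @ V
        policy = np.where(Q_wait <= Q_repair, "wait", "repair")
        print(policy)   # e.g. repair once deterioration is advanced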

  6. An algebra-based method for inferring gene regulatory networks.

    Science.gov (United States)

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process, prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the
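
    For readers unfamiliar with the modelling framework, the following minimal Python sketch simulates a tiny Boolean polynomial dynamical system of the kind the algorithm infers; the three-node network and its update polynomials are illustrative assumptions, not the paper's inferred model:

        import itertools

        def step(state):
            """One synchronous update of a BPDS over F2 (XOR is +, AND is *)."""
            x1, x2, x3 = state
            return (x2 ^ (x1 & x3),   # f1 = x2 + x1*x3  (mod 2)
                    x1,               # f2 = x1
                    x1 ^ x2)          # f3 = x1 + x2     (mod 2)

        # Exhaustive sweep of the 2^3 state space: read off fixed points / cycles
        for s in itertools.product((0, 1), repeat=3):
            print(s, "->", step(s))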

  7. Using Problem Based Learning Methods from Engineering Education in Company Based Development

    DEFF Research Database (Denmark)

    Kofoed, Lise B.; Jørgensen, Frances

    2007-01-01

    This paper discusses how Problem-Based Learning (PBL) methods were used to support a Danish company in its efforts to become more of a 'learning organisation', characterized by sharing of knowledge and experiences. One of the central barriers to organisational learning in this company involved...... a lack of understanding for the work processes from one shift to another, and across departments. Through facilitated workshops focusing on problem-analysis and development of solutions, members of the organisation gained a greater understanding of the need for learning, as well as an increased...... motivation for sharing of experiences across organisational boundaries. The case also emphasises the importance of management involvement and support when attempting to develop a learning environment....

  9. Evaluation index system of steel industry sustainable development based on entropy method and topsis method

    Science.gov (United States)

    Ronglian, Yuan; Mingye, Ai; Qiaona, Jia; Yuxuan, Liu

    2018-03-01

    Sustainable development is the only way forward for human society. As an important part of the national economy, the steel industry is energy-intensive and still has far to go towards sustainable development. In this paper, we use the entropy method and the TOPSIS method to evaluate the development of China’s steel industry during the “12th Five-Year Plan” from four aspects: resource utilization efficiency, main energy and material consumption, pollution status and resource reuse rate. We also put forward some suggestions for the development of China’s steel industry.
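
    A minimal Python sketch of the entropy-weight plus TOPSIS procedure on a made-up decision matrix (rows for years, columns for indicators, all treated as benefit-type); the numbers are illustrative assumptions, not the paper's indicator data:

        import numpy as np

        X = np.array([[0.62, 0.71, 0.55, 0.48],
                      [0.66, 0.74, 0.58, 0.52],
                      [0.70, 0.73, 0.61, 0.57],
                      [0.75, 0.78, 0.63, 0.60],
                      [0.79, 0.80, 0.66, 0.64]])     # illustrative data

        # Entropy weights: low-entropy (more informative) indicators weigh more
        p = X / X.sum(axis=0)
        k = 1.0 / np.log(X.shape[0])
        entropy = -k * (p * np.log(p)).sum(axis=0)
        weights = (1 - entropy) / (1 - entropy).sum()

        # TOPSIS: closeness to the ideal vs anti-ideal solution
        V = (X / np.sqrt((X ** 2).sum(axis=0))) * weights
        ideal, anti = V.max(axis=0), V.min(axis=0)
        d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
        closeness = d_minus / (d_plus + d_minus)
        print(np.round(closeness, 3))   # higher = closer to the ideal year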

  10. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    ... peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity with several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...

  11. A new method of cardiographic image segmentation based on grammar

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate its area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The concept of "language" is projected onto the "image" in order to perform structural analysis and parsing of the image. We show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardiographic image processing.

  12. Symmetry breaking in occupation number based slave-particle methods

    Science.gov (United States)

    Georgescu, Alexandru B.; Ismail-Beigi, Sohrab

    2017-10-01

    We describe a theoretical approach to finding spontaneously symmetry-broken electronic phases due to strong electronic interactions when using recently developed slave-particle (slave-boson) approaches based on occupation numbers. We describe why, to date, spontaneous symmetry breaking has proven difficult to achieve in such approaches. We then provide a total energy based approach for introducing auxiliary symmetry-breaking fields into the solution of the slave-particle problem that leads to lowered total energies for symmetry-broken phases. We point out that not all slave-particle approaches yield energy lowering: the slave-particle model being used must explicitly describe the degrees of freedom that break symmetry. Finally, our total energy approach permits us to greatly simplify the formalism used to achieve a self-consistent solution between spinon and slave modes while increasing the numerical stability and greatly speeding up the calculations.

  13. Method of polishing nickel-base alloys and stainless steels

    Science.gov (United States)

    Steeves, Arthur F.; Buono, Donald P.

    1981-01-01

    A chemical attack polish and polishing procedure for use on metal surfaces such as nickel-base alloys and stainless steels. The chemical attack polish comprises Fe(NO3)3, concentrated CH3COOH, concentrated H2SO4 and H2O. The polishing procedure includes saturating a polishing cloth with the chemical attack polish and submicron abrasive particles and buffing the metal surface.

  14. Quality Management in the Knowledge Based Economy – Kaizen Method

    OpenAIRE

    Popa Liliana Viorica

    2010-01-01

    In the knowledge based economy, organisations are influenced by the quality movement, Kaizen, which plays a strategic role for optimization of organizational capabilities of managers as well as of all the employees. Kaizen represents the philosophy of continuous improvement, until the economic activities inside the organisation reach zero deficiencies. In order to implement in a proper manner Kaizen into the organization, managers must decide what to improve, why to improve, who shall improve...

  15. Availability-based payback method for energy efficiency measures

    OpenAIRE

    Kasprowicz, Robert; Schulz, Carolin

    2015-01-01

    Energy-efficient technologies can lead to high energy and monetary savings in numerous industries. However, a lot of potential identified in industries remains untapped due to comparatively short requested payback periods. Usually, companies base the calculation of their payback period on initial investment costs in conjunction with annual monetary energy savings. Energy efficiency measures, however, often lead to synergy effects which are not taken into account. Against this background, we i...

  16. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Science.gov (United States)

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.

  17. The Operation Method of Smarter City Based on Ecological Theory

    Science.gov (United States)

    Fan, C.; Fan, H. Y.

    2017-10-01

    As urbanization accelerates and the population grows rapidly, the urban framework is extending and social problems are becoming increasingly complex. Urban management tends to become complicated and governance more difficult to pursue, so exploring new models of urban management has attracted the urgent attention of local governments. This paper combines the guiding ideology and the practices of management based on ecological theory, explains the formation of the smarter city ecology management model, makes a comparative analysis of modern urban management, and further defines the conceptual model of this management mode. Based on the ecological carrying capacity of smarter city system theory, the author uses a mathematical model to prove the coordination relationship between the subsystems of the smarter city ecology management mode and demonstrates that it can improve the overall level of urban management. The model emphasizes the integrity of smarter city management: the optimization of the urban system rests on each subsystem being optimized, with importance attached to the elements, the structure, and the balance between subsystems and between their internal elements. Through the establishment of the conceptual model and its theoretical argumentation, the paper provides a theoretical basis and technical guidance for the innovation of this management model.

  18. A Surface-Based Spatial Registration Method Based on Sense Three-Dimensional Scanner.

    Science.gov (United States)

    Fan, Yifeng; Xu, Xiufang; Wang, Manning

    2017-01-01

    The purpose of this study was to investigate the feasibility of a surface-based registration method based on a low-cost, hand-held Sense three-dimensional (3D) scanner in an image-guided neurosurgery system. The scanner was calibrated in advance and fixed on a tripod before registration. During registration, a part of the head surface was scanned first and the spatial position of the adapter was recorded. Then the scanner was taken off the tripod and the entire head surface was scanned by moving the scanner around the patient's head. All the scan points were aligned to the recorded spatial position to form a unique point cloud of the head by the automatic mosaic function of the scanner. The coordinates of the scan points were transformed from the device space to the adapter space by a calibration matrix, and then to the patient space. A 2-step patient-to-image registration method was then performed to register the patient space to the image space. The experimental results showed that the mean target registration error of 15 targets on the surface of the phantom was 1.61±0.09 mm. In a clinical experiment, the mean target registration error of 7 targets on the patient's head surface was 2.50±0.31 mm, which was sufficient to meet clinical requirements. It is feasible to use the Sense 3D scanner for patient-to-image registration, and the low-cost Sense 3D scanner can take the place of the currently used scanner in the image-guided neurosurgery system.
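
    The core of any such point-based alignment is a rigid transform estimate. Below is a generic Python sketch using the Kabsch/SVD method on matched point pairs, offered as a stand-in for (not a reproduction of) the paper's two-step surface registration:

        import numpy as np

        def rigid_register(P, Q):
            """Find R, t minimizing ||R @ P + t - Q|| for 3xN matched point sets."""
            cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            H = (P - cp) @ (Q - cq).T
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
            R = Vt.T @ D @ U.T
            t = cq - R @ cp
            return R, t

        rng = np.random.default_rng(1)
        P = rng.random((3, 15))                        # points in scanner space
        theta = 0.4
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0, 0, 1]])
        Q = R_true @ P + np.array([[1.0], [2.0], [0.5]])  # same points in image space
        R, t = rigid_register(P, Q)
        residual = np.linalg.norm(R @ P + t - Q, axis=0).mean()
        print(f"mean residual: {residual:.2e}")        # ~0 for noise-free data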

  19. Internet-based versus traditional teaching and learning methods.

    Science.gov (United States)

    Guarino, Salvatore; Leopardi, Eleonora; Sorrenti, Salvatore; De Antoni, Enrico; Catania, Antonio; Alagaratnam, Swethan

    2014-10-01

    The rapid and dramatic incursion of the Internet and social networks in everyday life has revolutionised the methods of exchanging data. Web 2.0 represents the evolution of the Internet as we know it. Internet users are no longer passive receivers, and actively participate in the delivery of information. Medical education cannot evade this process. Increasingly, students are using tablets and smartphones to instantly retrieve medical information on the web or are exchanging materials on their Facebook pages. Medical educators cannot ignore this continuing revolution, and therefore the traditional academic schedules and didactic schemes should be questioned. Analysing opinions collected from medical students regarding old and new teaching methods and tools has become mandatory, with a view towards renovating the process of medical education. A cross-sectional online survey was created with Google® docs and administrated to all students of our medical school. Students were asked to express their opinion on their favourite teaching methods, learning tools, Internet websites and Internet delivery devices. Data analysis was performed using spss. The online survey was completed by 368 students. Although textbooks remain a cornerstone for training, students also identified Internet websites, multimedia non-online material, such as the Encyclopaedia on CD-ROM, and other non-online computer resources as being useful. The Internet represented an important aid to support students' learning needs, but textbooks are still their resource of choice. Among the websites noted, Google and Wikipedia significantly surpassed the peer-reviewed medical databases, and access to the Internet was primarily through personal computers in preference to other Internet access devices, such as mobile phones and tablet computers. Increasingly, students are using tablets and smartphones to instantly retrieve medical information. © 2014 John Wiley & Sons Ltd.

  20. Vehicle target detection method based on the average optical flow

    Science.gov (United States)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    Moving target detection in image sequences of dynamic scenes is an important research topic in the field of computer vision. Block projection and matching are utilized for global motion estimation. Then, the background image is compensated by applying the estimated motion parameters so as to stabilize the image sequence. Background subtraction is then employed in the stabilized image sequence to extract moving targets. Finally, the difference image is divided into uniform grids and the average optical flow is employed for motion analysis. Experiments show that the proposed average optical flow method can efficiently extract vehicle targets from dynamic scenes while decreasing false alarms.
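
    A hedged Python sketch of the grid-averaged optical flow step, using OpenCV's Farneback estimator (a common dense-flow choice; the paper does not specify its estimator) on two synthetic frames; the grid size and motion threshold are illustrative assumptions:

        import numpy as np
        import cv2

        prev = np.zeros((120, 160), np.uint8)
        curr = prev.copy()
        cv2.rectangle(prev, (40, 50), (60, 70), 255, -1)   # toy "vehicle"
        cv2.rectangle(curr, (46, 50), (66, 70), 255, -1)   # moved 6 px right

        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        g = 20  # grid cell size in pixels
        for y in range(0, flow.shape[0], g):
            for x in range(0, flow.shape[1], g):
                cell = flow[y:y+g, x:x+g]
                mag = np.linalg.norm(cell.mean(axis=(0, 1)))  # average flow per cell
                if mag > 1.0:                                 # motion threshold
                    print(f"moving target in cell ({y},{x}), |flow|={mag:.1f}")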

  1. Resonant fiber based aerosol particle sensor and method

    DEFF Research Database (Denmark)

    2013-01-01

    The present invention relates to methods and devices for determining the weight of small particles, typically being nano-sized particles, by use of resonating fibers in the form of elongate members being driven into resonance by an actuator or e.g. thermal noise/fluctuation. The frequency shift... in resonance frequency due to depositing of nano-sized particles is correlated with the mass deposited on the elongate member, and the vibration frequency of the elongate member is determined by a detector. The read-out from the detector is transformed into a mass deposited on the elongate member. Particles...
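
    As a back-of-the-envelope illustration of the read-out step: for a resonator of effective mass m_eff, a small added mass dm shifts the resonance from f0 to f = f0 * sqrt(m_eff / (m_eff + dm)), so dm is approximately 2 * m_eff * (f0 - f) / f0 for small shifts. The numbers in the Python sketch below are illustrative assumptions, not values from the patent:

        m_eff = 1.0e-12          # assumed effective resonator mass (kg)
        f0 = 2.0e6               # unloaded resonance frequency (Hz)
        f = 1.9998e6             # measured frequency after deposition (Hz)
        dm = 2.0 * m_eff * (f0 - f) / f0   # small-shift approximation
        print(f"deposited mass = {dm:.2e} kg")   # ~2e-16 kg for a 200 Hz shift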

  2. Formation of Reflecting Surfaces Based on Spline Methods

    Science.gov (United States)

    Zamyatin, A. V.; Zamyatina, E. A.

    2017-11-01

    The article deals with the problem of generating reflecting barrier surfaces by spline methods. The cases of reflection when a geometric model is applied are considered. The surfaces of the reflecting barriers are formed in such a way that they contain given points, and the rays reflected at these points hit defined points of a specified surface. The reflecting barrier surface is formed by cubic splines, which enables a comparatively simple implementation of the proposed algorithms in the form of software applications. The algorithms developed in the article can be applied in architectural and construction design for reflecting surface generation in optics and acoustics, provided the geometric model of the reflection processes is used correctly.
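
    A minimal Python sketch of the geometric core, assuming a 2D profile y = s(x) built from a cubic spline and the standard reflection rule r = d - 2(d.n)n with the normal taken from the spline derivative; the profile points and the incoming ray are illustrative:

        import numpy as np
        from scipy.interpolate import CubicSpline

        x_pts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y_pts = np.array([0.0, 0.3, 0.2, 0.5, 0.1])        # sampled barrier profile
        spline = CubicSpline(x_pts, y_pts)

        x0 = 2.5                                            # reflection point abscissa
        slope = spline(x0, 1)                               # first derivative ds/dx
        n = np.array([-slope, 1.0]) / np.hypot(slope, 1.0)  # unit surface normal
        d = np.array([0.0, -1.0])                           # incoming ray direction
        r = d - 2.0 * np.dot(d, n) * n                      # reflected direction
        print("reflected ray direction:", np.round(r, 3))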

  3. Differentiated protection method in passive optical networks based on OPEX

    Science.gov (United States)

    Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2011-12-01

    Reliable service delivery is becoming more significant due to society's increasing dependency on electronic services. As the capability of PONs increases, both residential and business customers may be included in one PON. Meanwhile, operational expenditures (OPEX) have proven to be a very important factor in the total cost for a telecommunication operator. Thus, in this paper, we present a partial-protection PON architecture and compare the OPEX of fully duplicated protection and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.

  4. The Methods of Information Security Based on Blurring of System

    Directory of Open Access Journals (Sweden)

    Mikhail Andreevich Styugin

    2016-03-01

    Full Text Available The paper presents a model of a researched system with known input, output and a set of discrete internal states. Theoretical objects such as a system absolutely protected from research and an absolutely indiscernible data transfer channel are defined, and a generalization of Shannon's principle of secrecy is made. The method of system blurring is defined. The cryptographic strength of the absolutely indiscernible data transfer channel is proved theoretically, and its practical unbreakability in the presence of an unreliable pseudo-random number generator is shown. The paper presents a system with channel blurring named Pseudo IDTC and shows the asymptotic complexity of breaking this system compared with AES and GOST.

  5. Robot Path Planning Method Based on Improved Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mingyang Jiang

    2014-03-01

    Full Text Available This paper presents an improved genetic algorithm for mobile robot path planning. The algorithm uses the artificial potential field method to establish the initial population and adds value weights to the fitness function, which increases the controllability of robot path length and path smoothness. In the new algorithm, a flip mutation operator is added, which ensures that the individuals of the population encode collision-free paths. Simulation results show that the proposed algorithm can obtain a smooth, collision-free path to the global optimum, and that the path planning algorithm is effective and feasible.
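
    A hedged Python sketch of the kind of weighted fitness described above, combining path length with a turning-angle smoothness penalty; the path encoding and the weights w1, w2 are illustrative assumptions, not the paper's values:

        import numpy as np

        def fitness(path, w1=1.0, w2=0.5):
            """path: (N, 2) array of waypoints; lower fitness = better path."""
            seg = np.diff(path, axis=0)
            length = np.linalg.norm(seg, axis=1).sum()
            # turning angle between consecutive segments as the smoothness term
            u, v = seg[:-1], seg[1:]
            cosang = (u * v).sum(axis=1) / (
                np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
            angles = np.arccos(np.clip(cosang, -1.0, 1.0))
            return w1 * length + w2 * angles.sum()

        straight = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], float)
        zigzag = np.array([[0, 0], [1, 2], [2, 0], [3, 3]], float)
        print(f"straight: {fitness(straight):.2f}  zigzag: {fitness(zigzag):.2f}")
        # the straight path scores lower (better) on both terms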

  6. An aquaculture-based method for calibrated bivalve isotope paleothermometry

    Science.gov (United States)

    Wanamaker, Alan D.; Kreutz, Karl J.; Borns, Harold W.; Introne, Douglas S.; Feindel, Scott; Barber, Bruce J.

    2006-09-01

    To quantify species-specific relationships between bivalve carbonate isotope geochemistry (δ18Oc) and water conditions (temperature and salinity, related to water isotopic composition [δ18Ow]), an aquaculture-based methodology was developed and applied to Mytilus edulis (blue mussel). The four-by-three factorial design consisted of four circulating temperature baths (7, 11, 15, and 19°C) and three salinity ranges (23, 28, and 32 parts per thousand (ppt); monitored for δ18Ow weekly). In mid-July of 2003, 4800 juvenile mussels were collected in Salt Bay, Damariscotta, Maine, and were placed in each configuration. The size distribution of harvested mussels, based on 105 specimens, ranged from 10.9 mm to 29.5 mm with a mean size of 19.8 mm. The mussels were grown in controlled conditions for up to 8.5 months, and a paleotemperature relationship based on juvenile M. edulis from Maine was developed from animals harvested at months 4, 5, and 8.5. This relationship [T (°C) = 16.19 (±0.14) - 4.69 (±0.21) {δ18Oc VPDB - δ18Ow VSMOW} + 0.17 (±0.13) {δ18Oc VPDB - δ18Ow VSMOW}²; r² = 0.99; N = 105; P < 0.0001] is nearly identical to the Kim and O'Neil (1997) abiogenic calcite equation over the entire temperature range (7-19°C), and it closely resembles the commonly used paleotemperature equations of Epstein et al. (1953) and Horibe and Oba (1972). Further, the comparison of the M. edulis paleotemperature equation with the Kim and O'Neil (1997) equilibrium-based equation indicates that M. edulis specimens used in this study precipitated their shell in isotopic equilibrium with ambient water within the experimental uncertainties of both studies. The aquaculture-based methodology described here allows similar species-specific isotope paleothermometer calibrations to be performed with other bivalve species and thus provides improved quantitative paleoenvironmental reconstructions.
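
    Evaluating the quoted calibration is a one-liner; the Python sketch below uses the central coefficients from the abstract (uncertainties omitted), with illustrative input values:

        def mussel_temperature(d18o_calcite, d18o_water):
            """T (deg C) from the M. edulis calibration quoted above."""
            x = d18o_calcite - d18o_water
            return 16.19 - 4.69 * x + 0.17 * x ** 2

        # illustrative inputs: d18Oc = 0.5 (VPDB), d18Ow = -1.0 (VSMOW)
        print(f"{mussel_temperature(0.5, -1.0):.1f} degC")   # x = 1.5 -> ~9.5 degC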

  7. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    Science.gov (United States)

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-09-10

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method can contribute to a low-cost, convenient and safe way of recharging implantable biosensors.
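
    A heavily simplified Python sketch of the Monte Carlo idea: photons crossing a layered skin model are absorbed with per-layer probabilities, and the tally approximates the depth distribution of deposited energy. The layer list and coefficients are illustrative assumptions, not tissue data from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        layers = ["epidermis", "dermis", "subcutis"]
        p_absorb = np.array([0.15, 0.45, 0.80])   # absorption prob. per layer crossing

        n_photons = 50_000
        deposited = np.zeros(len(layers))
        for _ in range(n_photons):
            for i in range(len(layers)):
                if rng.random() < p_absorb[i]:
                    deposited[i] += 1            # photon absorbed in layer i
                    break                        # photon terminated
        for name, frac in zip(layers, deposited / n_photons):
            print(f"{name}: {frac:.3f} of incident photons absorbed")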

  8. New methods of magnet-based instrumentation for NOTES.

    Science.gov (United States)

    Magdeburg, Richard; Hauth, Daniel; Kaehler, Georg

    2013-12-01

    Laparoscopic surgery has displaced open surgery as the standard of care for many clinical conditions. NOTES has been described as the next surgical frontier, with the objective of incision-free abdominal surgery. The principal challenge of NOTES procedures is the loss of triangulation and instrument rigidity, which are fundamental concepts of laparoscopic surgery. Overcoming these problems necessitates the development of new instrumentation. Material and methods: We aimed to assess the use of a very simple combination of internal and external magnets that might allow the vigorous multiaxial traction/counter-traction required in NOTES procedures. The magnet retraction system consisted of an external magnetic assembly and either small internal magnets attached by endoscopic clips to the designated tissue (magnet-clip approach) or an endoscopic grasping forceps in a magnetic deflector roll (magnet-trocar approach). We compared both methods regarding precision, time and efficacy by performing transgastric partial uterus resections, with better results for the magnet-trocar approach. This proof-of-principle animal study showed that the combination of external and internal magnets generates sufficient coupling forces at clinically relevant abdominal wall thicknesses, making them suitable for use and evaluation in NOTES procedures, and provides the vigorous multiaxial traction/counter-traction required by the lack of additional abdominal trocars.

  9. Imaging Method Based on Time Reversal Channel Compensation

    Directory of Open Access Journals (Sweden)

    Bing Li

    2015-01-01

    Full Text Available The conventional time reversal imaging (TRI) method builds its imaging function from the maximal value of the signal amplitude. In this circumstance, some remote targets are missed (the near-far problem) or low resolution is obtained in lossy and/or dispersive media, and too many transceivers are employed to locate targets, which increases the complexity and cost of the system. To solve these problems, a novel TRI algorithm is presented in this paper. In order to achieve a high resolution, the signal amplitude corresponding to the focal time observed at the target position is used to reconstruct the target image. To handle the near-far problem and suppress spurious images, a channel compensation function (CCF) is introduced that combines the cross-correlation property with amplitude compensation. Moreover, the complexity and cost of the system are reduced by employing only five transceivers to detect four targets, a number of targets close to the number of transceivers. For the sake of demonstrating the practicability of the proposed analytical framework, numerical experiments are carried out in both nondispersive-lossless (NDL) media and dispersive-conductive (DPC) media. Results show that the performance of the proposed method is superior to that of the conventional TRI algorithm even with few echo signals.

  10. Spectral imaging-based methods for quantifying autophagy and apoptosis.

    Science.gov (United States)

    Dolloff, Nathan G; Ma, Xiahong; Dicker, David T; Humphreys, Robin C; Li, Lin Z; El-Deiry, Wafik S

    2011-08-15

    Spectral imaging systems are capable of detecting and quantifying subtle differences in light quality. In this study we coupled spectral imaging with fluorescence and white light microscopy to develop new methods for quantifying autophagy and apoptosis. For autophagy, we employed multispectral imaging to examine spectral changes in the fluorescence of LC3-GFP, a chimeric protein commonly used to track autophagosome formation. We found that punctate autophagosome-associated LC3-GFP exhibited a spectral profile that was distinctly different from diffuse cytosolic LC3-GFP. We then exploited this shift in spectral quality to quantify the amount of autophagosome-associated signal in single cells. Hydroxychloroquine (CQ), an anti-malarial agent that increases autophagosomal number, significantly increased the punctate LC3-GFP spectral signature, providing proof-of-principle for this approach. For studying apoptosis, we employed the Prism and Reflector Imaging Spectroscopy System (PARISS) hyperspectral imaging system to identify a spectral signature for active caspase-8 immunostaining in ex vivo tumor samples. This system was then used to rapidly quantify apoptosis induced by lexatumumab, an agonistic TRAIL-R2/DR5 antibody, in histological sections from a preclinical mouse model. We further found that the PARISS could accurately distinguish apoptotic tumor regions in hematoxylin and eosin-stained sections, which allowed us to quantify death receptor-mediated apoptosis in the absence of an apoptotic marker. These spectral imaging systems provide unbiased, quantitative and fast means for studying autophagy and apoptosis and complement the existing methods in their respective fields.

  11. Adaptive designs based on the truncated product method

    Directory of Open Access Journals (Sweden)

    Neuhäuser Markus

    2005-09-01

    Full Text Available Abstract Background Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods Alternatively to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability of stopping the trial early with rejection of the null hypothesis is increased when the TPM is applied. Therefore, the expected total sample size is decreased. This decrease in sample size is not connected with a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible, owing to a decreased probability of stopping the trial early. Conclusion It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
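
    A minimal Python sketch of the truncated product statistic with a Monte Carlo null distribution (independent uniform p-values under the global null); Fisher's combination corresponds to the special case tau = 1:

        import numpy as np

        def truncated_product(pvals, tau=0.05):
            """Product of the p-values not exceeding tau (1.0 if none qualifies)."""
            p = np.asarray(pvals)
            kept = p[p <= tau]
            return kept.prod() if kept.size else 1.0

        rng = np.random.default_rng(0)
        observed = truncated_product([0.03, 0.20], tau=0.05)   # two-stage example
        # Monte Carlo p-value of the statistic under the global null
        null = np.array([truncated_product(rng.random(2), tau=0.05)
                         for _ in range(100_000)])
        print("combined p =", np.mean(null <= observed))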

  12. Robotic fish tracking method based on suboptimal interval Kalman filter

    Science.gov (United States)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous Underwater Vehicle (AUV) research has focused on tracking, positioning, precise guidance, return to dock and other fields. The robotic fish is a type of AUV that has become a hot application in intelligent education and in civil and military fields. In the nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimized suboptimal interval Kalman filter. The suboptimal scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state equation and measurement equation more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the optimal trajectory of the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard filter method.

  13. Dimensionality reduction method based on a tensor model

    Science.gov (United States)

    Yan, Ronghua; Peng, Jinye; Ma, Dongmei; Wen, Desheng

    2017-04-01

    Dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis reduces the spectral dimension but does not utilize the spatial information of an HSI. Both spatial and spectral information are used when an HSI is modeled as a tensor; that is, the noise in the spatial dimensions is decreased and the spectral dimension is reduced simultaneously. However, this model does not consider the factors affecting the spectral signatures of ground objects, which makes further improvement of the classification very difficult. The authors propose that the spectral signatures of ground objects are the composite result of multiple factors, such as illumination, mixture, atmospheric scattering and radiation. Since these factors are very difficult to distinguish, they are synthesized as within-class factors. Within-class factors, class factors, and pixels are selected to model a third-order tensor. Experimental results indicate that the classification accuracy of the new method is higher than that of previous methods.

  14. Unsupervised Segmentation of Greenhouse Plant Images Based on Statistical Method.

    Science.gov (United States)

    Zhang, Ping; Xu, Lihong

    2018-03-13

    The complicated scenes in agricultural greenhouse plant images make precise manual labeling very difficult, which hampers the construction of accurate training sets for the conditional random field (CRF). Considering this problem, this paper proposes an unsupervised conditional random field image segmentation algorithm, ULCRF (Unsupervised Learning Conditional Random Field), which can perform fast unsupervised segmentation of greenhouse plant images and further segment the plant organs in the image, i.e., fruits, leaves and stems. The main idea of this algorithm is to calculate the unary potential, namely the initial label of the Dense CRF, with the unsupervised learning model LDA (Latent Dirichlet Allocation). In view of the ever-changing image features at different stages of fruit growth, a multi-resolution ULCRF is proposed to improve the accuracy of image segmentation in the middle and late stages of fruit growth. An image is down-sampled twice to obtain three layers of images at different resolutions, and the features of each layer are interrelated with each other. Experimental results show that the proposed method can segment greenhouse plant images automatically in an unsupervised manner and obtain a high segmentation accuracy together with a high extraction precision of the fruit part.

  15. Evaluating bull fertility based on non-return method

    Directory of Open Access Journals (Sweden)

    Prka Igor

    2012-01-01

    Full Text Available In order to evaluate the reproductive results of cows and heifers, different parameters of fertility are used, such as the service period, insemination index, intercalving time and others; for breeding bulls, the values are obtained through the non-return method. An ejaculate is taken up for further processing by veterinary centres only if it meets the prescribed quality parameters. Rating semen parameters includes a macroscopic evaluation (volume, colour, consistency, smell and pH) and a microscopic evaluation (motility, density, percentage of live sperm, and abnormal and damaged sperm). In addition to sperm quality and the fertility of the female animal, the results of the non-return method are also influenced by a number of exogenous factors (season, age, breed, insemination technique) that have no small impact on the end result of insemination - pregnancy. In order to obtain more objective results on the fertility of bulls, the following tasks were undertaken: 1. to calculate, with the non-return method, the fertility of bulls from over 10,000 cows inseminated for the first time during a period of 6 years; and 2. to analyze the impact of semen quality, season, age of cow and bull, and bull breed on the fertility results.

  16. Method for forming bismuth-based superconducting ceramics

    Science.gov (United States)

    Maroni, Victor A.; Merchant, Nazarali N.; Parrella, Ronald D.

    2005-05-17

    A method for reducing the concentration of non-superconducting phases during the heat treatment of Pb doped Ag/Bi-2223 composites having Bi-2223 and Bi-2212 superconducting phases is disclosed. A Pb doped Ag/Bi-2223 composite having Bi-2223 and Bi-2212 superconducting phases is heated in an atmosphere having an oxygen partial pressure not less than about 0.04 atmospheres and the temperature is maintained at the lower of a non-superconducting phase take-off temperature and the Bi-2223 superconducting phase grain growth take-off temperature. The oxygen partial pressure is varied and the temperature is varied between about 815°C and about 835°C to produce not less than 80 percent conversion to Pb doped Bi-2223 superconducting phase and not greater than about 20 volume percent non-superconducting phases. The oxygen partial pressure is preferably varied between about 0.04 and about 0.21 atmospheres. A product by the method is disclosed.

  17. Sparse representation-based color visualization method for hyperspectral imaging

    Science.gov (United States)

    Wang, Li-Guo; Liu, Dan-Feng; Zhao, Liang

    2013-06-01

    In this paper, we designed a color visualization model for sparse representation of the whole hyperspectral image, in which, not only the spectral information in the sparse representation but also the spatial information of the whole image is retained. After the sparse representation, the color labels of the effective elements of the sparse coding dictionary are selected according to the sparse coefficient and then the mixed images are displayed. The generated images maintain spectral distance preservation and have good separability. For local ground objects, the proposed single-pixel mixed array and improved oriented sliver textures methods are integrated to display the specific composition of each pixel. This avoids the confusion of the color presentation in the mixed-pixel color display and can also be used to reconstruct the original hyperspectral data. Finally, the model effectiveness was proved using real data. This method is promising and can find use in many fields, such as energy exploration, environmental monitoring, disaster warning, and so on.

  18. Speckle reduction methods in laser-based picture projectors

    Science.gov (United States)

    Akram, M. Nadeem; Chen, Xuyuan

    2016-02-01

    For many years, laser sources have promised to be better light sources for projectors than traditional lamps or light-emitting diodes (LEDs): they enable a wide colour gamut for vivid images, high brightness and high contrast for the best picture quality, long, maintenance-free lifetimes, mercury-free operation, and low power consumption. A major technological obstacle to using lasers for projection has been the speckle noise caused by the coherent nature of the laser light. For speckle reduction, current state-of-the-art solutions apply moving parts with large physical space demands. Solutions beyond the state of the art need to be developed, such as integrated optical components, hybrid MOEMS devices, and active phase modulators for compact speckle reduction. In this article, the major methods reported in the literature for speckle reduction in laser projectors are presented and explained. With the advancement of semiconductor lasers, with greatly reduced cost, for the red, green and blue primary colours, and with the methods developed for their speckle reduction, it is hoped that lasers will be widely utilized in different projector applications in the near future.

  19. RONI-based steganographic method for 3D scene

    Science.gov (United States)

    Li, Xiao-Wei; Wang, Qiong-Hua

    2017-06-01

    Image steganography is a form of data hiding that provides data security in digital images. The aim is to embed and deliver secret data in digital images without arousing suspicion. However, most of the existing optical image hiding methods sacrifice the visual quality of the stego-image in order to improve the robustness of the secret image. To address this issue, in this paper we present a Region of Non-Interest (RONI) steganographic algorithm that enhances the visual quality of the stego-image. In the proposed method, the carrier image is segmented into a Region of Interest (ROI) and a RONI. To enhance the visual quality, the 3D image information is embedded into the RONI of the digital images. In order to find appropriate regions for embedding, we use a visual attention model as a means of measuring the ROI of the digital images. The algorithm employs the computational integral imaging (CII) technique to hide the 3D scene in the carrier image. Comparison results show that the proposed technique performs better than some existing state-of-the-art techniques.

  20. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.

  1. Genetic-evolution-based optimization methods for engineering design

    Science.gov (United States)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.

  2. Reliability-Based Shape Optimization using Stochastic Finite Element Methods

    DEFF Research Database (Denmark)

    Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.

    1991-01-01

    ... stochastic fields (e.g. loads and material parameters such as Young's modulus and the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian & Ke (6) and Liu & Der Kiureghian (7). In this paper a reliability-based shape optimization problem is formulated with the total expected cost as objective function and some requirements for the reliability measures (element or systems reliability measures) as constraints, see section 2. As design variables, sizing variables...

  3. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    Science.gov (United States)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D production. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that can realize the interaction between AutoCAD and a digital photogrammetry system is offered via ObjectARX and pipeline technology. An experiment was carried out to verify the feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation); the experimental results show that this scheme is feasible and is of great significance for realizing the integration of acquisition and editing.

  5. PRETTY: Grazing altimetry measurements based on the interferometric method

    DEFF Research Database (Denmark)

    Høeg, Per; Fragner, Heinrich; Dielacher, Andreas

    2017-01-01

    dimensions can be avoided, makes them suitable for small satellite missions. Applications where a continuous high coverage is needed, as for example disaster warning, have the demand for a large number of satellites in orbit, which in turn requires small and relatively low cost satellites. The proposed PRETTY (Passive Reflectometry and Dosimetry) mission includes a demonstrator payload for passive reflectometry and scatterometry focusing on very low incidence angles whereby the direct and reflected signal will be received via the same antenna. The correlation of both signals will be done by a specific FPGA based...

  6. Harbourscape Aalborg - Design Based Methods in Waterfront Development

    DEFF Research Database (Denmark)

    Kiib, Hans

    2012-01-01

    How can city planners and developers gain knowledge and develop new sustainable concepts for waterfront developments? The waterfront is far too often threatened by new privatisation, lack of public access and bad architecture. And in a time where low growth rates and crises in the building... an independent contribution to the visioning process on urban development, city life planning and landscaping, but this has to be based on a "non-dogmatic" approach to architectural and urban space design. This involves, among other aspects, the combination of independent evaluation and discourse analyses...

  7. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    Science.gov (United States)

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user, and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI = 97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE = 2.5 ± 0.8%; average symmetric surface distance, ASD = 1.4 ± 0.5 mm) than the 2D region growing method (SI = 94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE = 6.5 ± 3.7%; ASD = 6.7 ± 3.8 mm). The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and the computer is also significantly shorter for the hybrid method (28 ± 4 s) than for the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferable for liver segmentation in preoperative virtual liver surgery planning. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
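
    The overlap metrics quoted above are easy to compute for two binary masks; the Python sketch below uses common definitions (Dice for the similarity index, and errors normalized by the reference volume), which may differ in detail from the paper's:

        import numpy as np

        def overlap_metrics(seg, ref):
            seg, ref = seg.astype(bool), ref.astype(bool)
            si = 2.0 * (seg & ref).sum() / (seg.sum() + ref.sum())   # Dice / SI
            fpe = (seg & ~ref).sum() / ref.sum()                     # false positive
            fne = (~seg & ref).sum() / ref.sum()                     # false negative
            return si, fpe, fne

        ref = np.zeros((100, 100), int); ref[20:80, 20:80] = 1       # "true" liver
        seg = np.zeros((100, 100), int); seg[22:82, 20:80] = 1       # segmentation
        print("SI=%.3f  FPE=%.3f  FNE=%.3f" % overlap_metrics(seg, ref))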

  8. Spectral methods and cluster structure in correlation-based networks

    Science.gov (United States)

    Heimo, Tapio; Tibély, Gergely; Saramäki, Jari; Kaski, Kimmo; Kertész, János

    2008-10-01

    We investigate how in complex systems the eigenpairs of the matrices derived from the correlations of multichannel observations reflect the cluster structure of the underlying networks. For this we use daily return data from the NYSE and focus specifically on the spectral properties of weight matrices W, with W_ij = |C_ij| - δ_ij, and diffusion matrices D, with D_ij = W_ij/s_j - δ_ij, where C is the correlation matrix and s_i = Σ_j W_ij is the strength of node i. The eigenvalues (and corresponding eigenvectors) of the weight matrix are ranked in descending order. As in the earlier observations, the first eigenvector stands for a measure of the market correlations. Its components are, to first approximation, equal to the strengths of the nodes and there is a second order, roughly linear, correction. The high ranking eigenvectors, excluding the highest ranking one, are usually assigned to market sectors and industrial branches. Our study shows that both for weight and diffusion matrices the eigenpair analysis is not capable of easily deducing the cluster structure of the network without a priori knowledge. In addition we have studied the clustering of stocks using the asset graph approach with and without spectrum based noise filtering. It turns out that asset graphs are quite insensitive to noise and there is no sharp percolation transition as a function of the ratio of bonds included, thus no natural threshold value for that ratio seems to exist. We suggest that these observations can be of use for other correlation based networks as well.
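
    A minimal Python sketch of the construction just defined, on toy return data (the NYSE data are not reproduced); it builds W and D from the correlation matrix and ranks the eigenpairs of W:

        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.standard_normal((250, 6))          # toy daily returns
        C = np.corrcoef(returns, rowvar=False)
        W = np.abs(C) - np.eye(C.shape[0])               # W_ij = |C_ij| - delta_ij
        s = W.sum(axis=1)                                # node strengths
        D = W / s[np.newaxis, :] - np.eye(W.shape[0])    # D_ij = W_ij/s_j - delta_ij

        evals, evecs = np.linalg.eigh(W)
        order = np.argsort(evals)[::-1]                  # descending rank
        print("leading eigenvalue:", round(evals[order[0]], 3))
        print("first eigenvector vs strengths, correlation:",
              np.round(np.corrcoef(np.abs(evecs[:, order[0]]), s)[0, 1], 3))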

  9. Measuring Care Continuity: A Comparison of Claims-based Methods.

    Science.gov (United States)

    Pollack, Craig E; Hussey, Peter S; Rudin, Robert S; Fox, D Steven; Lai, Julie; Schneider, Eric C

    2016-05-01

    Assessing care continuity is important in evaluating the impact of health care reform and changes to health care delivery. Multiple measures of care continuity have been developed for use with claims data. This study examined whether alternative continuity measures provide distinct assessments of coordination within predefined episodes of care. This was a retrospective cohort study using 2008-2009 claims files for a national 5% sample of beneficiaries with congestive heart failure, chronic obstructive pulmonary disease, and diabetes mellitus. Correlations among 4 measures of care continuity (the Bice-Boxerman Continuity of Care Index, Herfindahl Index, usual provider of care, and Sequential Continuity of Care Index) were derived at the provider and practice levels. Across the 3 conditions, results on the 4 claims-based care coordination measures were highly correlated at the provider level (Pearson correlation coefficient r=0.87-0.98) and practice level (r=0.75-0.98). Correlation of the results was also high for the same measures between the provider and practice levels (r=0.65-0.92). Claims-based care continuity measures are all highly correlated with one another within episodes of care.
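
    The four measures have simple closed forms over a patient's visit sequence; the Python sketch below computes them for a toy example, using the standard definitions (the study's exact operationalization may differ):

        import numpy as np
        from collections import Counter

        visits = ["A", "A", "B", "A", "C", "A", "B"]        # provider per visit
        counts = np.array(list(Counter(visits).values()), float)
        N = counts.sum()

        upc = counts.max() / N                               # usual provider of care
        herfindahl = ((counts / N) ** 2).sum()
        bice_boxerman = ((counts ** 2).sum() - N) / (N * (N - 1))
        # sequential continuity: share of consecutive visits to the same provider
        secon = np.mean([a == b for a, b in zip(visits, visits[1:])])
        print(f"UPC={upc:.2f}  HI={herfindahl:.2f}  "
              f"COC={bice_boxerman:.2f}  SECON={secon:.2f}")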

  10. A thyroid nodule classification method based on TI-RADS

    Science.gov (United States)

    Wang, Hao; Yang, Yang; Peng, Bo; Chen, Qin

    2017-07-01

    Thyroid Imaging Reporting and Data System (TI-RADS) is a valuable tool for differentiating benign from malignant thyroid nodules. In clinical practice, doctors can use TI-RADS classes to determine the degree of benignity or malignancy of a nodule; the classification represents the degree of malignancy of thyroid nodules. As a classification standard, TI-RADS can guide the ultrasound doctor in examining thyroid nodules more accurately and reliably. In this paper, we aim to classify thyroid nodules with the help of TI-RADS. To this end, four ultrasound signs, i.e., cystic and solid composition, echo pattern, boundary features and calcification of thyroid nodules, are extracted and converted into feature vectors. Then a semi-supervised fuzzy C-means ensemble (SS-FCME) model is applied to obtain the classification results. The experimental results demonstrate that the proposed method can help doctors diagnose thyroid nodules effectively.
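
    A minimal Python sketch of plain (unsupervised) fuzzy c-means on the kind of 4-dimensional ultrasound-sign feature vectors described above; the semi-supervised ensemble extension of the paper is not reproduced, and the toy feature clusters are illustrative:

        import numpy as np

        def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
            """Standard FCM: returns membership matrix U and cluster centers."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))       # inverse-distance memberships
                U /= U.sum(axis=1, keepdims=True)
            return U, centers

        rng = np.random.default_rng(1)
        benign = rng.normal(0.2, 0.05, (30, 4))      # toy "benign" feature vectors
        malignant = rng.normal(0.8, 0.05, (30, 4))   # toy "malignant" features
        U, centers = fuzzy_cmeans(np.vstack([benign, malignant]))
        labels = U.argmax(axis=1)
        print(labels[:5], labels[-5:])               # the two groups separate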

  11. Method for preparing dioxyheterocycle-based electrochromic polymers

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, John R.; Estrada, Leandro; Deininger, James; Arroyave-Mondragon, Frank Antonio

    2017-10-17

    A method for preparing a conjugated polymer involves a DHAP polymerization of a 3,4-dioxythiophene, 3,4-dioxyfuran, or 3,4-dioxypyrrole and, optionally, at least one second conjugated monomer in the presence of a Pd or Ni comprising catalyst, an aprotic solvent, and a carboxylic acid at a temperature in excess of 120°C. At least one of the monomers is substituted with hydrogen reactive functionalities and at least one of the monomers is substituted with a Cl, Br, and/or I. The polymerization can be carried out at a temperature of 140°C or more, and the DHAP polymerization can be carried out without a phosphine ligand or a phase transfer agent. The resulting polymer can display a dispersity less than 2 and have a degree of polymerization in excess of 10.

  12. Fourier-Based Fast Multipole Method for the Helmholtz Equation

    KAUST Repository

    Cecka, Cris

    2013-01-01

    The fast multipole method (FMM) has had great success in reducing the computational complexity of solving the boundary integral form of the Helmholtz equation. We present a formulation of the Helmholtz FMM that uses Fourier basis functions rather than spherical harmonics. By modifying the transfer function in the precomputation stage of the FMM, time-critical stages of the algorithm are accelerated by causing the interpolation operators to become straightforward applications of fast Fourier transforms, retaining the diagonality of the transfer function, and providing a simplified error analysis. Using Fourier analysis, constructive algorithms are derived to a priori determine an integration quadrature for a given error tolerance. Sharp error bounds are derived and verified numerically. Various optimizations are considered to reduce the number of quadrature points and reduce the cost of computing the transfer function. © 2013 Society for Industrial and Applied Mathematics.
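The payoff of the Fourier basis is that interpolation between FMM levels reduces to zero-padding in the frequency domain. The toy sketch below is not the FMM itself, just this FFT-based interpolation step, which is exact for band-limited periodic input:

```python
import numpy as np

def fft_interpolate(f, M):
    """Resample N uniform samples of a band-limited periodic function
    onto M >= N points by zero-padding the (shifted) spectrum."""
    N = len(f)
    F = np.fft.fftshift(np.fft.fft(f))
    pad = (M - N) // 2
    F = np.pad(F, (pad, M - N - pad))                # widen the spectrum with zeros
    return np.fft.ifft(np.fft.ifftshift(F)).real * (M / N)

x = np.linspace(0, 2 * np.pi, 16, endpoint=False)
y = fft_interpolate(np.cos(3 * x), 64)
xf = np.linspace(0, 2 * np.pi, 64, endpoint=False)
print(np.max(np.abs(y - np.cos(3 * xf))))            # ~1e-15: interpolation is exact
```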

  13. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than its classical counterpart. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, owing to quantum parallelism; the proposed algorithm exhibits an exponential speed-up over the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of √N operations to find the target palmprint, whereas the traditional method needs on the order of N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.
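The √N scaling of the matching stage is the standard Grover bound, and it can be checked with a classical statevector simulation. The toy sketch below (not the paper's implementation) marks one item out of N=2^10 and recovers it with near-certain probability after about (π/4)√N iterations:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classical statevector simulation of Grover search: about
    (pi/4)*sqrt(N) iterations suffice, versus ~N classical probes."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))       # uniform superposition over N items
    steps = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(steps):
        psi[marked] *= -1                  # oracle flips the marked amplitude
        psi = 2 * psi.mean() - psi         # diffusion: inversion about the mean
    return steps, abs(psi[marked]) ** 2

steps, p = grover_search(10, marked=137)
print(steps, round(p, 4))                  # 25 iterations, success probability ~1
```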

  14. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

Every consumer wants fresh ham, and the way we decide whether the meat is fresh is by looking at its color. The producers of ham want a long shelf life, meaning they want the ham to look fresh for a long time. The Danish company Danisco is therefore trying to develop optimal storing conditions and to find useful additives that hinder the color from changing rapidly. To be able to prove which storing methods and additives work, Danisco wants to monitor the development of the color of the meat in a slice of ham as a function of time, environment and ingredients. We have chosen to use multispectral images to monitor the change in color. We therefore have to be able to segment the ham into the different categories of which it consists. These categories include fat, gristle and two different types of meat. This segmentation is difficult when using the traditional orthogonal transformation...
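A kernel-based eigenvalue decomposition such as kernel PCA replaces that linear transformation with a nonlinear one. The sketch below runs on synthetic stand-in data with illustrative parameters (it is not the thesis' pipeline) and shows the shape of such an analysis with scikit-learn:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Synthetic stand-in for per-pixel multispectral measurements of a ham
# slice: rows are pixels, columns are spectral bands, three fake classes.
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(mu, 0.05, size=(1000, 9))
                    for mu in (0.2, 0.5, 0.8)])

# A Gaussian-kernel eigendecomposition can separate categories (meat
# types, fat, gristle) that stay mixed under a linear transformation.
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1.0)
scores = kpca.fit_transform(pixels)    # nonlinear component scores per pixel
print(scores.shape)                    # (3000, 3)
```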

  15. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as of the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
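In the set partitioning model, each column is a feasible schedule with a cost, each row is a task, and binary variables select schedules so that every task is covered exactly once at minimum total cost. A toy instance in PuLP, with hypothetical data far smaller than a real crew problem:

```python
import pulp

# Toy instance: 4 tasks, 4 candidate schedules as (covered tasks, cost).
schedules = {0: ([0, 1], 9), 1: ([2, 3], 8), 2: ([0, 2], 7), 3: ([1, 3], 6)}
tasks = range(4)

prob = pulp.LpProblem("set_partitioning", pulp.LpMinimize)
x = {j: pulp.LpVariable(f"x_{j}", cat="Binary") for j in schedules}
prob += pulp.lpSum(cost * x[j] for j, (_, cost) in schedules.items())
for t in tasks:
    prob += pulp.lpSum(x[j] for j, (cover, _) in schedules.items()
                       if t in cover) == 1          # cover each task exactly once
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in schedules if x[j].value() == 1])  # -> [2, 3], total cost 13
```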

  16. An experiment teaching method based on the Optisystem simulation platform

    Science.gov (United States)

    Zhu, Jihua; Xiao, Xuanlu; Luo, Yuan

    2017-08-01

The experiment teaching of optical communication systems is difficult to carry out because of the expensive equipment required. OptiSystem, an optical communication system design package, can provide such a simulation platform. Drawing on the characteristics of OptiSystem, this paper puts forward an approach to experiment teaching organised in three progressive levels: the basics, the deeper looks, and the practices. The basics give a brief overview of the technology; the deeper looks include demos and example analyses; and the practices proceed through team seminars and comments. A variety of teaching forms are thus implemented in class. Experience shows that this method can not only stand in for the laboratory but also stimulate the students' interest in learning and improve their practical abilities, cooperation abilities and creative spirit. On the whole, it greatly improves the teaching effect.

  17. Control and Driving Methods for LED Based Intelligent Light Sources

    DEFF Research Database (Denmark)

    Beczkowski, Szymon

High power light-emitting diodes allow the creation of luminaires capable of generating saturated colour light at very high efficacies. Contrary to traditional light sources like incandescent and high-intensity discharge lamps, where colour is generated using filters, LEDs use additive light mixing, where the intensity of each primary colour diode has to be adjusted to the intensity needed to generate a specified colour. The function of the LED driver is to supply the diode with the power needed to achieve the desired intensity. Typically, the drivers operate as a current source, and the intensity of the diode is controlled either by varying the magnitude of the current or by driving the LED with a pulsed current and regulating the width of the pulse. It has been shown previously that these two methods have different effects on the diode's efficacy and colour point. A hybrid dimming strategy has been...
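A hybrid scheme typically combines amplitude (AM) dimming over most of the range with pulse-width modulation (PWM) below some current floor. The split rule and the numbers in this sketch are illustrative assumptions, not the thesis' strategy:

```python
def hybrid_dimming(target, i_max=0.7, am_floor=0.1):
    """Map a relative intensity target in [0, 1] to a (current in A,
    duty cycle) pair: amplitude dimming above the floor, PWM below it.
    All values here are illustrative, not from the thesis."""
    if not 0.0 <= target <= 1.0:
        raise ValueError("target must lie in [0, 1]")
    if target >= am_floor:
        return target * i_max, 1.0                  # AM region: full duty cycle
    return am_floor * i_max, target / am_floor      # PWM region: fixed current

print(hybrid_dimming(0.5))    # AM region
print(hybrid_dimming(0.05))   # PWM region
```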

  18. Multiattribute Grey Target Decision Method Based on Soft Set Theory

    Directory of Open Access Journals (Sweden)

    Xia Wang

    2014-01-01

With respect to multiattribute decision-making problems in which the evaluation attribute sets differ between decision makers and the evaluation values of the alternatives are interval grey numbers, a multiattribute grey target decision-making method for differing attribute sets is proposed. The concept of a grey soft set is defined, and its "AND" operation is specified by combining it with the intersection operation on grey numbers. By applying the "AND" operation of grey soft sets, an expression is obtained for the new grey soft set over the attribute sets considered by all decision makers, and the weights of the synthesized attributes are derived. The alternatives are then ranked according to their distance to the bull's eye under the synthesized attribute sets. A green supplier selection problem illustrates the effectiveness of the proposed model.
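As a rough illustration of the ranking step, the sketch below takes interval-grey-number evaluations, forms a per-attribute ideal (the bull's eye), and ranks alternatives by weighted distance to it. The distance formula, data and weights are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np

# Rows: alternatives (suppliers); columns: attributes; innermost pair:
# an interval grey number [lower, upper].  All values are made up.
A = np.array([[[0.6, 0.8], [0.5, 0.7], [0.3, 0.5]],
              [[0.7, 0.9], [0.4, 0.6], [0.4, 0.6]],
              [[0.5, 0.7], [0.6, 0.8], [0.2, 0.4]]])
w = np.array([0.5, 0.3, 0.2])          # synthesized attribute weights

bullseye = A.max(axis=0)               # per-attribute ideal interval (benefit type)
# Weighted Euclidean distance between each alternative's intervals and
# the bull's eye; the smallest distance ranks first.
d = np.sqrt((w * ((A - bullseye) ** 2).sum(axis=2)).sum(axis=1))
print(d.argsort())                     # ranking of alternatives, best first
```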

  19. Recognition method of construction conflict based on driver's eye movement.

    Science.gov (United States)

    Xu, Yi; Li, Shiwu; Gao, Song; Tan, Derong; Guo, Dong; Wang, Yuqiong

    2018-04-01

Drivers' eye movement data in simulated construction conflicts at different speeds were collected and analyzed to find the relationship between drivers' eye movements and the construction conflict. On the basis of this relationship, the peak point of the wavelet-processed pupil diameter, the first point to the left of the peak point, and the first blink point after the peak point are selected as key points for locating construction conflict periods. On the basis of these key points and the GSA, a construction conflict recognition method, termed the CCFRM, is proposed, and its recognition speed and location accuracy are verified. The good performance of the CCFRM confirms the feasibility of the proposed key points for construction conflict recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
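One way to realise the wavelet step is to smooth the pupil-diameter trace with a discrete wavelet decomposition and pick the dominant peak. The sketch below (PyWavelets and SciPy, with an illustrative wavelet and level) is an assumption about the processing, not the authors' exact pipeline:

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def locate_pupil_peak(pupil, wavelet="db4", level=4):
    """Smooth a pupil-diameter trace by keeping only the wavelet
    approximation coefficients, then return the index of the highest
    peak, i.e. the first of the three key points described above.
    Wavelet choice and decomposition level are illustrative."""
    coeffs = pywt.wavedec(pupil, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]   # drop detail scales
    smooth = pywt.waverec(coeffs, wavelet)[: len(pupil)]
    peaks, _ = find_peaks(smooth)
    return int(peaks[np.argmax(smooth[peaks])]) if peaks.size else None
```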

  20. Experimental method to predict avalanches based on neural networks

    Directory of Open Access Journals (Sweden)

    V. V. Zhdanov

    2016-01-01

The article presents results of the experimental use of currently available statistical methods to classify avalanche-dangerous precipitations and snowfalls in the Kishi Almaty river basin. The avalanche service of Kazakhstan uses graphical methods for the prediction of avalanches developed by I.V. Kondrashov and E.I. Kolesnikov. The main objective of this work was to develop a modern model that could be used directly at the avalanche stations. Classification of winter precipitations into dangerous snowfalls and non-dangerous ones was performed in two ways: with a linear discriminant function (canonical analysis) and with artificial neural networks. Observational data on weather and avalanches in the Kishi Almaty gorge were used as a training sample. Coefficients for the canonical variables were calculated with the software «Statistica» (Russian version 6.0), and the necessary formula was then constructed. The accuracy of this classification was 96%. The simulator by L.N. Yasnitsky and F.M. Cherepanov was used to train the neural networks; the trained neural network demonstrated 98% classification accuracy. The prepared statistical models are recommended for testing at the snow-avalanche stations. The results of the tests will be used to estimate the quality of the models and their readiness for operational work. In the future, we plan to apply these models to classify the avalanche danger on the five-point international scale.
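As a rough illustration of the neural-network variant, the sketch below trains a small feed-forward classifier on synthetic meteorological features; the data, labels and architecture are invented stand-ins, not the authors' model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Invented stand-in for the training sample: five weather features per
# snowfall (precipitation amount, new-snow depth, temperature, etc.),
# label 1 if the snowfall released avalanches.
rng = np.random.default_rng(1)
X = rng.random((500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # held-out classification accuracy
```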