WorldWideScience

Sample records for measured algorithm performance

  1. A comparison of performance measures for online algorithms

    DEFF Research Database (Denmark)

    Boyar, Joan; Irani, Sandy; Larsen, Kim Skak

    2009-01-01

    is to balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order … Analysis and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also … provide the first proof of optimality of an algorithm under Relative Worst Order Analysis. …

  2. Performance Evaluation of Proportional Fair Scheduling Algorithm with Measured Channels

    DEFF Research Database (Denmark)

    Sørensen, Troels Bundgaard; Pons, Manuel Rubio

    2005-01-01

    subjected to measured channel traces. Specifically, we applied measured signal fading recorded from GSM cell phone users making calls on an indoor wireless office system. Different from reference channel models, these measured channels have much more irregular fading between users, which as we show...

  3. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations-leading to high resolution-while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 µm measurement range.
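
    The measurement principle can be illustrated with a small numerical sketch (this is not the authors' supplementary code): two superimposed grids with slightly different periods are analysed with single-bin, lock-in style phase estimates; the fine phase gives sub-pixel resolution modulo one period, and the phase difference between the two periods resolves the integer ambiguity over the much longer beat period. The periods, displacement and signal model below are illustrative assumptions.

```python
import numpy as np

def lockin_phase(signal, x, period):
    """Phase of the signal component at the given period (single-bin, lock-in style DFT)."""
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * x / period)))

def estimate_displacement(signal, x, p1, p2):
    """Estimate a 1-D displacement from a twin-grid pattern.

    The phase at period p1 gives a high-resolution estimate that is ambiguous
    modulo p1; the phase difference between the two periods gives a coarse but
    unambiguous estimate over the synthetic beat period p1*p2/(p2-p1), which is
    used to pick the correct integer number of fine periods.
    """
    phi1 = lockin_phase(signal, x, p1)
    phi2 = lockin_phase(signal, x, p2)
    d_fine = (-phi1 * p1 / (2 * np.pi)) % p1
    beat = p1 * p2 / (p2 - p1)
    d_coarse = (-(phi1 - phi2) * beat / (2 * np.pi)) % beat
    k = np.round((d_coarse - d_fine) / p1)
    return d_fine + k * p1

# toy line of 780 pixels with assumed grid periods (in pixels) and displacement
x = np.arange(780, dtype=float)
p1, p2 = 9.0, 10.0
d_true = 23.37
signal = np.cos(2 * np.pi * (x - d_true) / p1) + np.cos(2 * np.pi * (x - d_true) / p2)
print(estimate_displacement(signal, x, p1, p2))   # close to 23.37
```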

  4. Performance measurement, modeling, and evaluation of integrated concurrency control and recovery algorithms in distributed data base systems

    Energy Technology Data Exchange (ETDEWEB)

    Jenq, B.C.

    1986-01-01

    The performance evaluation of integrated concurrency-control and recovery mechanisms for distributed database systems is studied using a distributed testbed system. In addition, a queueing network model was developed to analyze the two-phase locking scheme in the distributed testbed system. The combination of testbed measurement and analytical modeling provides an effective tool for understanding the performance of integrated concurrency control and recovery algorithms in distributed database systems. The design and implementation of the distributed testbed system, CARAT, are presented. The concurrency control and recovery algorithms implemented in CARAT include: a two-phase locking scheme with distributed deadlock detection, a distributed version of an optimistic approach, before-image and after-image journaling mechanisms for transaction recovery, and a two-phase commit protocol. Many performance measurements were conducted using a variety of workloads. A queueing network model is developed to analyze the performance of the CARAT system using the two-phase locking scheme with before-image journaling. The combination of testbed measurements and analytical modeling provides significant improvements in understanding the performance impacts of the concurrency control and recovery algorithms in distributed database systems.

  5. Performance of the Falling Snow Retrieval Algorithms for the Global Precipitation Measurement (GPM) Mission

    Science.gov (United States)

    Skofronick-Jackson, Gail; Munchak, Stephen J.; Ringerud, Sarah

    2016-01-01

    Retrievals of falling snow from space represent an important data set for understanding the Earth's atmospheric, hydrological, and energy cycles, especially during climate change. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new and retrievals are still undergoing development with challenges remaining. This work reports on the development and testing of retrieval algorithms for the Global Precipitation Measurement (GPM) mission Core Satellite, launched February 2014.

  6. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the data rates achieved by two well-known algorithms with simulated and real measured data is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm could be used … in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results. …

  7. Learning Analytics Through Serious Games: Data Mining Algorithms for Performance Measurement and Improvement Purposes

    Directory of Open Access Journals (Sweden)

    Abdelali Slimani

    2018-01-01

    Learning analytics is an emerging discipline focused on the measurement, collection, analysis and reporting of learner interaction data generated through e-learning content. Serious games provide a potential source of relevant educational user data; they can offer an interactive environment for training and support an effective learning process. This paper presents educational data mining methods such as EM and K-Means to discuss learning analytics through serious games, and then provides an analysis of the player experience data collected from the educational game “ELISA”, used to teach biology students the immunological technique for the determination of anti-HIV antibodies. Finally, we critically evaluate our results, including the limitations of our study, and make suggestions for future research linking learning analytics and serious gaming.
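
    As a hedged illustration of the kind of data mining step mentioned above (not the ELISA study's actual pipeline), the sketch below clusters hypothetical per-player interaction features with K-Means; the feature names, values and number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-player interaction features logged by a serious game:
# columns = [completion_time_s, errors, hints_used, score]
X = np.array([
    [310, 2, 0, 92],
    [580, 7, 3, 61],
    [295, 1, 0, 95],
    [640, 9, 4, 55],
    [400, 3, 1, 80],
    [520, 6, 2, 66],
])

# Standardise the features so that no single scale dominates, then group the
# players into two performance clusters (the cluster count is an assumption).
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)   # which numeric label denotes which group is arbitrary
```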

  8. Measurement of the Jet Vertex Charge algorithm performance for identified $b$-jets in $t\\bar{t}$ events in $pp$ collisions with the ATLAS detector

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The Jet Vertex Charge algorithm, developed recently within the ATLAS collaboration, discriminates between jets resulting from the hadronisation of a bottom quark or bottom antiquark. This note describes a measurement of the performance of the algorithm and the extraction of data-to-simulation scale factors, made using $b$-tagged jets in candidate single lepton $t\\bar{t}$ events. The data sample was collected by the ATLAS detector at the LHC using $pp$ collisions at $\\sqrt{s}$ = 13 TeV in 2015 and 2016 and corresponds to a total integrated luminosity of 36.1 fb$^{-1}$ . Overall, good agreement is found between data and the simulation.

  9. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    International Nuclear Information System (INIS)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  10. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    Science.gov (United States)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the

  11. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  12. On the performance of pre-microRNA detection algorithms

    DEFF Research Database (Denmark)

    Saçar Demirci, Müşerref Duygu; Baumbach, Jan; Allmer, Jens

    2017-01-01

    assess 13 ab initio pre-miRNA detection approaches using all relevant, published, and novel data sets while judging algorithm performance based on ten intrinsic performance measures. We present an extensible framework, izMiR, which allows for the unbiased comparison of existing algorithms, adding new...

  13. Quantum learning algorithms for quantum measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bisio, Alessandro, E-mail: alessandro.bisio@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); D'Ariano, Giacomo Mauro, E-mail: dariano@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Perinotti, Paolo, E-mail: paolo.perinotti@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Sedlak, Michal, E-mail: michal.sedlak@unipv.it [QUIT Group, Dipartimento di Fisica 'A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)

    2011-09-12

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  14. Quantum learning algorithms for quantum measurements

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Sedlak, Michal

    2011-01-01

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  15. Performance of Jet Algorithms in CMS

    CERN Document Server

    CMS Collaboration

    The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: \fastkt, \siscone, \midpoint, and \itcone. We present several studies comparing the performance of these algorithms using QCD dijet and \ttbar Monte Carlo samples. We specifically observe that the \siscone algorithm performs as well as or better than the \midpoint algorithm in all presented studies and propose that \siscone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared- and collinear-safety to all orders of perturbative QCD. We furthermore encourage the use of the \fastkt algorithm which is found to perform as well as any other algorithm under study, features dramatically reduc...

  16. Assessment of various supervised learning algorithms using different performance metrics

    Science.gov (United States)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as Accuracy, F-Measure, G-Measure, Precision, Misclassification Rate, False Positive Rate, True Positive Rate, Specificity, and Prevalence.
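
    For reference, the listed metrics can all be derived from the four confusion-matrix counts. The sketch below computes them for a toy confusion matrix; note that the G-Measure is taken here as the geometric mean of precision and recall, which may differ from the definition used in the paper.

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Common binary-classification metrics computed from confusion-matrix counts."""
    total = tp + fp + fn + tn
    tpr = tp / (tp + fn)                   # recall / sensitivity / true positive rate
    fpr = fp / (fp + tn)                   # false positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / total
    f_measure = 2 * precision * tpr / (precision + tpr)
    g_measure = math.sqrt(precision * tpr)   # geometric mean of precision and recall
    return {
        "Accuracy": accuracy,
        "Misclassification Rate": 1 - accuracy,
        "Precision": precision,
        "True Positive Rate": tpr,
        "False Positive Rate": fpr,
        "Specificity": specificity,
        "Prevalence": (tp + fn) / total,
        "F-Measure": f_measure,
        "G-Measure": g_measure,
    }

print(binary_metrics(tp=40, fp=10, fn=5, tn=45))
```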

  17. Flavour Tagging Algorithms and Performances in LHCb

    CERN Document Server

    Calvi, M; Musy, M

    2007-01-01

    In this note we describe the general characteristics of the LHCb flavour tagging algorithms and summarize the tagging performances on the Monte Carlo samples generated for the Data Challenge 2004 in different decay channels. We also discuss some systematic effects and possible methods to extract the mistag fraction in real data.

  18. High performance deformable image registration algorithms for manycore processors

    CERN Document Server

    Shackleford, James; Sharp, Gregory

    2013-01-01

    High Performance Deformable Image Registration Algorithms for Manycore Processors develops highly data-parallel image registration algorithms suitable for use on modern multi-core architectures, including graphics processing units (GPUs). Focusing on deformable registration, we show how to develop data-parallel versions of the registration algorithm suitable for execution on the GPU. Image registration is the process of aligning two or more images into a common coordinate frame and is a fundamental step to be able to compare or fuse data obtained from different sensor measurements.

  19. THE HARPS-TERRA PROJECT. I. DESCRIPTION OF THE ALGORITHMS, PERFORMANCE, AND NEW MEASUREMENTS ON A FEW REMARKABLE STARS OBSERVED BY HARPS

    Energy Technology Data Exchange (ETDEWEB)

    Anglada-Escude, Guillem; Butler, R. Paul, E-mail: anglada@dtm.ciw.edu [Carnegie Institution of Washington, Department of Terrestrial Magnetism, 5241 Broad Branch Rd. NW, Washington, DC 20015 (United States)

    2012-06-01

    Doppler spectroscopy has uncovered or confirmed all the known planets orbiting nearby stars. Two main techniques are used to obtain precision Doppler measurements at optical wavelengths. The first approach is the gas cell method, which consists of least-squares matching of the spectrum of iodine imprinted on the spectrum of the star. The second method relies on the construction of a stabilized spectrograph externally calibrated in wavelength. The most precise stabilized spectrometer in operation is the High Accuracy Radial velocity Planet Searcher (HARPS), operated by the European Southern Observatory in La Silla Observatory, Chile. The Doppler measurements obtained with HARPS are typically obtained using the cross-correlation function (CCF) technique. This technique consists of multiplying the stellar spectrum by a weighted binary mask and finding the minimum of the product as a function of the Doppler shift. It is known that CCF is suboptimal in exploiting the Doppler information in the stellar spectrum. Here we describe an algorithm to obtain precision radial velocity measurements using least-squares matching of each observed spectrum to a high signal-to-noise ratio template derived from the same observations. This algorithm is implemented in our software HARPS-TERRA (Template-Enhanced Radial velocity Re-analysis Application). New radial velocity measurements on a representative sample of stars observed by HARPS are used to illustrate the benefits of the proposed method. We show that, compared with CCF, template matching provides a significant improvement in accuracy, especially when applied to M dwarfs.
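
    A minimal sketch of the two ideas on a toy one-dimensional spectrum (not the HARPS-TERRA code): a binary-mask CCF statistic that is minimised at the correct velocity, and a least-squares match of the observed spectrum to a Doppler-shifted template. Line positions, widths and the velocity grid are illustrative assumptions.

```python
import numpy as np

c = 299792.458  # speed of light in km/s

def doppler_shift(wave, rv):
    """Shift a wavelength grid by a radial velocity rv (km/s), non-relativistic."""
    return wave * (1.0 + rv / c)

def ccf(wave, flux, mask_centers, rv):
    """CCF-style statistic: sum the flux falling on a binary mask shifted by rv;
    for absorption lines the minimum marks the best-matching velocity."""
    idx = np.clip(np.searchsorted(wave, doppler_shift(mask_centers, rv)), 0, len(wave) - 1)
    return float(np.sum(flux[idx]))

def template_rv(wave, flux, template_wave, template_flux, rv_grid):
    """Least-squares template matching: scan candidate velocities and return the one
    minimising the residuals between the observed spectrum and the shifted template."""
    chi2 = [np.sum((flux - np.interp(wave, doppler_shift(template_wave, rv), template_flux)) ** 2)
            for rv in rv_grid]
    return rv_grid[int(np.argmin(chi2))]

# toy spectrum: three Gaussian absorption lines, true radial velocity 3.2 km/s
lines = np.array([5000.0, 5005.0, 5012.0])
wave = np.linspace(4995.0, 5020.0, 5000)

def make_flux(rv):
    f = np.ones_like(wave)
    for centre in doppler_shift(lines, rv):
        f -= 0.5 * np.exp(-0.5 * ((wave - centre) / 0.05) ** 2)
    return f

flux = make_flux(3.2)
print(ccf(wave, flux, lines, 0.0), ccf(wave, flux, lines, 3.2))   # lower at the true rv
rv_grid = np.linspace(-20.0, 20.0, 801)
print(template_rv(wave, flux, wave, make_flux(0.0), rv_grid))     # close to 3.2
```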

  20. Vectorised Spreading Activation algorithm for centrality measurement

    Directory of Open Access Journals (Sweden)

    Alexander Troussov

    2011-01-01

    Spreading Activation is a family of graph-based algorithms widely used in areas such as information retrieval, epidemic models, and recommender systems. In this paper we introduce a novel Spreading Activation (SA) method that we call Vectorised Spreading Activation (VSA). VSA algorithms, like “traditional” SA algorithms, iteratively propagate the activation from the initially activated set of nodes to the other nodes in a network through outward links. The level of a node’s activation can be used as a centrality measure in accordance with the dynamic, model-based view of centrality that focuses on the outcomes for nodes in a network where something is flowing from node to node across the edges. Representing the activation by vectors allows the use of information about the various dimensionalities of the flow and the dynamics of the flow. In this capacity, VSA algorithms can model a multitude of complex multidimensional network flows. We present the results of numerical simulations on small synthetic social networks and multidimensional network models of folksonomies which show that the results of VSA propagation are more sensitive to the positions of the initial seed and to the community structure of the network than the results produced by traditional SA algorithms. We tentatively conclude that the VSA methods could be instrumental in developing scalable and computationally efficient algorithms which could achieve synergy between the computation of centrality indexes and the detection of community structures in networks. Based on our preliminary results and on improvements made over previous studies, we foresee advances and applications in the current state of the art of this family of algorithms and their applications to centrality measurement.
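
    A minimal sketch of the vectorised idea (not the authors' implementation): each node carries an activation vector rather than a scalar, activation is pushed along outward links with a decay factor at every iteration, and the norm of the accumulated vector is used as a centrality-like score. The decay factor, number of iterations and the tiny network are assumptions.

```python
import numpy as np

def vector_spreading_activation(adj, seeds, dims=3, decay=0.7, iterations=10):
    """Toy vectorised spreading activation.

    adj[i][j] = 1 if there is an outward link i -> j.  Each node carries an
    activation vector (e.g. one component per dimension of the flow); at each
    iteration every node spreads its current activation equally over its
    outgoing links, attenuated by `decay`.  The vector norm of the accumulated
    activation serves as a centrality-like score.
    """
    n = len(adj)
    adj = np.asarray(adj, dtype=float)
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1.0                   # avoid division by zero for sink nodes
    transfer = adj / out_deg                      # row-stochastic spreading matrix

    activation = np.zeros((n, dims))
    for node, vec in seeds.items():
        activation[node] = vec
    total = activation.copy()

    for _ in range(iterations):
        activation = decay * transfer.T @ activation   # push activation along outward edges
        total += activation
    return np.linalg.norm(total, axis=1)               # per-node centrality score

# small synthetic network; seed node 0 with a 3-dimensional activation vector
adj = [[0, 1, 1, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
print(vector_spreading_activation(adj, seeds={0: [1.0, 0.5, 0.0]}))
```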

  1. Performance of biometric quality measures.

    Science.gov (United States)

    Grother, Patrick; Tabassi, Elham

    2007-04-01

    We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
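
    The error-versus-reject characteristic mentioned above can be sketched as follows (a simplified illustration, not the evaluation code of the paper): comparisons are sorted by the quality value, the lowest-quality fraction is rejected, and the false non-match rate of the remaining genuine scores is recomputed at a fixed threshold. The threshold and the synthetic quality-score relationship are assumptions.

```python
import numpy as np

def error_versus_reject(quality, genuine_scores, threshold, reject_fractions):
    """Error-versus-reject characteristic for a sample quality measure.

    For each rejection fraction, discard that fraction of comparisons with the
    lowest quality values and recompute the false non-match rate (FNMR) of the
    remaining genuine scores at a fixed decision threshold.  A quality measure
    that predicts matching performance yields an FNMR that drops quickly as
    low-quality samples are rejected.
    """
    order = np.argsort(quality)            # worst quality first
    scores = np.asarray(genuine_scores)[order]
    n = len(scores)
    fnmr = []
    for r in reject_fractions:
        kept = scores[int(round(r * n)):]  # drop the r*n lowest-quality comparisons
        fnmr.append(float(np.mean(kept < threshold)) if len(kept) else 0.0)
    return fnmr

# toy data: genuine comparison scores loosely correlated with a quality value
rng = np.random.default_rng(0)
quality = rng.uniform(0, 1, 2000)
genuine = 0.5 + 0.4 * quality + rng.normal(0, 0.15, 2000)
print(error_versus_reject(quality, genuine, threshold=0.6,
                          reject_fractions=[0.0, 0.05, 0.10, 0.20]))
```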

  2. Developing Effective Performance Measures

    Science.gov (United States)

    2014-10-14

    Slide excerpts from "Developing Effective Performance Measures" (Kasunic, Carnegie Mellon University, October 14, 2014). The recoverable fragments cover "When Performance Measurement Goes Bad" and its list of pitfalls (laziness, vanity, narcissism, "too many", pettiness, inanity), with narcissism described as measuring performance from the organization's point of view, rather than from …

  3. Performance Measurement und Environmental Performance Measurement

    OpenAIRE

    Sturm, Anke

    2000-01-01

    The objective of this dissertation is to develop a systematised procedure, a controlling model, for company-internal environmental performance measurement. The Environmental Performance Measurement (EPM) model developed here comprises five stages: defining the objectives of environmental performance measurement (stage 1), recording environmental impacts according to the ecological splitting of results (stage 2), evaluating the environmental impacts on the basis of the quality-target-related...

  4. Jet algorithms performance in 13 TeV data

    CERN Document Server

    CMS Collaboration

    2017-01-01

    The performance of jet algorithms with data collected by the CMS detector at the LHC in 2015 with a center-of-mass energy of 13 TeV, corresponding to 2.3 fb$^{-1}$ of integrated luminosity, is reported. The criteria used to reject jets originating from detector noise are discussed and the efficiency and noise jet rejection rate are measured. A likelihood discriminant designed to differentiate jets initiated by light-quark partons from jets initiated from gluons is studied. A multivariate discriminator is built to distinguish jets initiated by a single high $p_{\\mathrm{T}}$ quark or gluon from jets originating from the overlap of multiple low $p_{\\mathrm{T}}$ particles from non-primary vertices (pileup jets). Algorithms used to identify large radius jets reconstructed from the decay products of highly Lorentz boosted W bosons and top quarks are discussed, and the efficiency and background rejection rates of these algorithms are measured.

  5. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

  6. A New Filtering Algorithm Utilizing Radial Velocity Measurement

    Institute of Scientific and Technical Information of China (English)

    LIU Yan-feng; DU Zi-cheng; PAN Quan

    2005-01-01

    Pulse Doppler radar measurements consist of range, azimuth, elevation and radial velocity. Most radar tracking algorithms in engineering only utilize the position measurement. The extended Kalman filter with radial velocity measurement is presented; then a new filtering algorithm utilizing the radial velocity measurement is proposed to improve tracking results, and a theoretical analysis is also given. Simulation results of the new algorithm, the converted measurement Kalman filter, and the extended Kalman filter are compared. The effectiveness of the new algorithm is verified by the simulation results.
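
    A hedged sketch of how a radial velocity measurement enters an extended Kalman filter update for a 2-D constant-velocity state (this is a generic EKF measurement update, not necessarily the paper's algorithm; the state layout, noise levels and the use of a numerical Jacobian are assumptions made for brevity).

```python
import numpy as np

def h(x):
    """Measurement model: range, azimuth and radial velocity from state [px, py, vx, vy]."""
    px, py, vx, vy = x
    r = np.hypot(px, py)
    return np.array([r, np.arctan2(py, px), (px * vx + py * vy) / r])

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x))
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def ekf_update(x, P, z, R):
    """One extended-Kalman-filter measurement update using the nonlinear model h."""
    H = numerical_jacobian(h, x)
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# toy update: prior state and covariance, plus a noisy [range, azimuth, radial velocity] measurement
x = np.array([1000.0, 2000.0, 30.0, -10.0])
P = np.diag([100.0, 100.0, 25.0, 25.0])
R = np.diag([5.0**2, 0.002**2, 0.5**2])
z = h(np.array([1010.0, 1995.0, 31.0, -11.0])) + np.array([2.0, 0.001, 0.2])
x_new, P_new = ekf_update(x, P, z, R)
print(x_new)
```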

  7. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as the others and performs better in some cases.

  8. Performance Measurement at Universities

    DEFF Research Database (Denmark)

    Lueg, Klarissa

    2014-01-01

    This paper proposes empirical approaches to testing the reliability, validity, and organizational effectiveness of student evaluations of teaching (SET) as a performance measurement instrument in knowledge management at the institutional level of universities. Departing from Weber’s concept...

  9. Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm

    Science.gov (United States)

    Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.

    2016-05-01

    In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.
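
    The consensus-averaging idea can be sketched as follows. In this toy version the snapshots (rather than the array elements) are distributed across nodes, which differs from the sensor-distributed setting analysed in the paper, and Metropolis weights on a ring graph are assumed; it still shows the key point that a finite number of consensus iterations only approximates the global average inside each power iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

# ring network of K nodes with a doubly stochastic (Metropolis) consensus weight matrix
K = 8
W = np.zeros((K, K))
for i in range(K):
    for j in ((i - 1) % K, (i + 1) % K):
        W[i, j] = 1 / 3
    W[i, i] = 1 - W[i].sum()

def consensus_average(values, iterations):
    """Approximate the network-wide average of per-node values by repeatedly
    mixing each node's value with its neighbours' (averaging consensus)."""
    v = np.array(values, dtype=float)
    for _ in range(iterations):
        v = W @ v
    return v                                  # every row approaches the true mean

# each node holds a block of snapshots of the same 4-dimensional random process
d, snapshots_per_node = 4, 50
A = rng.normal(size=(d, d))
data = [A @ rng.normal(size=(d, snapshots_per_node)) for _ in range(K)]

def decentralized_power_method(power_iters=50, consensus_iters=10):
    x = rng.normal(size=d)
    for _ in range(power_iters):
        # per-node contributions to the covariance-vector product C @ x
        local = np.array([(blk @ (blk.T @ x)) / blk.shape[1] for blk in data])
        x = consensus_average(local, consensus_iters)[0]   # node 0's estimate of the average
        x /= np.linalg.norm(x)
    return x

C = sum(blk @ blk.T / blk.shape[1] for blk in data) / K     # centralized sample covariance
true_vec = np.linalg.eigh(C)[1][:, -1]
est = decentralized_power_method()
print(abs(true_vec @ est))   # close to 1, but biased when consensus_iters is small
```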

  10. The performance measurement manifesto.

    Science.gov (United States)

    Eccles, R G

    1991-01-01

    The leading indicators of business performance cannot be found in financial data alone. Quality, customer satisfaction, innovation, market share--metrics like these often reflect a company's economic condition and growth prospects better than its reported earnings do. Depending on an accounting department to reveal a company's future will leave it hopelessly mired in the past. More and more managers are changing their company's performance measurement systems to track nonfinancial measures and reinforce new competitive strategies. Five activities are essential: developing an information architecture; putting the technology in place to support this architecture; aligning bonuses and other incentives with the new system; drawing on outside resources; and designing an internal process to ensure the other four activities occur. New technologies and more sophisticated databases have made the change to nonfinancial performance measurement systems possible and economically feasible. Industry and trade associations, consulting firms, and public accounting firms that already have well-developed methods for assessing market share and other performance metrics can add to the revolution's momentum--as well as profit from the business opportunities it presents. Every company will have its own key measures and distinctive process for implementing the change. But making it happen will always require careful preparation, perseverance, and the conviction of the CEO that it must be carried through. When one leading company can demonstrate the long-term advantage of its superior performance on quality or innovation or any other nonfinancial measure, it will change the rules for all its rivals forever.

  11. Enterprise performance measurement systems

    Directory of Open Access Journals (Sweden)

    Milija Bogavac

    2014-10-01

    Performance measurement systems are an extremely important part of control and management actions, because in this way a company can determine its business potential, its market power, and its potential and current level of business efficiency. The significance of measurement lies in relating the results of reproduction (total volume of production, value of production, total revenue and profit) to the investments made to achieve these results (spending on factors of production and hiring of capital), in order to achieve the highest possible quality of the economy. (The relationship between the results of reproduction and the investment made to achieve them quantitatively determines economic success as the quality of the economy.) Measuring performance allows the identification of the economic resources the company has, so looking at the key factors that affect its performance can help to determine the appropriate course of action.

  12. Queue and stack sorting algorithm optimization and performance analysis

    Science.gov (United States)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting is one of the basic operations in software development, and data structures courses cover a variety of sorting algorithms. The performance of the sorting algorithm is directly related to the efficiency of the software. A great deal of research has gone into optimising sorting algorithms to make them as efficient as possible. Here the authors further study sorting algorithms that combine a queue with stacks. The algorithm mainly exploits the complementary storage properties of the queue and the stack through alternating operations, thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, the approach is improved and optimised, with a focus on reducing time complexity. The experimental results show that the improvement is effective; the time complexity, space complexity, and stability of the algorithm are also studied. The improved and optimised algorithm is more practical.

  13. Using internal evaluation measures to validate the quality of diverse stream clustering algorithms

    NARCIS (Netherlands)

    Hassani, M.; Seidl, T.

    2017-01-01

    Measuring the quality of a clustering algorithm has been shown to be as important as the algorithm itself. It is a crucial part of choosing the clustering algorithm that performs best for given input data. Streaming input data have many features that make them much more challenging than static ones. They

  14. Productivity and Performance Measurement

    DEFF Research Database (Denmark)

    Hald, Kim Sundtoft; Spring, Martin

    This study explores conceptually how performance measurement, as discussed in the literature, enables or constrains the ability to manage and improve productivity. It uses an inter-disciplinary literature review to identify five areas of concern relating productivity accounting to the ability … to improve productivity: “productivity representation”; “productivity incentives”; “productivity intervention”; “productivity trade-off or synergy”; and “productivity strategy and context”. The paper discusses these areas of concern and expands our knowledge of how productivity and performance measurement …

  15. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
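
    For orientation, the conventionally used error reduction algorithm alternates between enforcing the measured Fourier magnitude and an object-domain constraint. The sketch below is that classic scheme only (the paper's MEM-based generalisation is not reproduced); the toy object and the assumption of an exactly known support keep the example well behaved.

```python
import numpy as np

def error_reduction(measured_magnitude, support, iterations=200, seed=0):
    """Classic error-reduction phase retrieval: alternately enforce the measured
    Fourier magnitude and an object-domain support/non-negativity constraint."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(measured_magnitude.shape))
    g = np.fft.ifft2(measured_magnitude * phase).real
    for _ in range(iterations):
        G = np.fft.fft2(g)
        G = measured_magnitude * np.exp(1j * np.angle(G))   # Fourier-domain constraint
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)             # object-domain constraint
    return g

# toy object with an exactly known support (an assumption made for this sketch)
obj = np.zeros((64, 64))
obj[24:40, 20:36] = 1.0
support = obj > 0
magnitude = np.abs(np.fft.fft2(obj))
recon = error_reduction(magnitude, support)
print(np.corrcoef(obj.ravel(), recon.ravel())[0, 1])   # correlation with the true object
```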

  16. Measuring Firm Performance

    DEFF Research Database (Denmark)

    Assaf, A. George; Josiassen, Alexander; Gillen, David

    2014-01-01

    Set in the airport industry, this paper measures firm performance using both desirable and bad outputs (i.e. airport delays). We first estimate a model that does not include the bad outputs and then a model that includes bad outputs. The results show important differences in the efficiency...

  17. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. Performance of the "CCS Algorithm" in real world patients.

    Science.gov (United States)

    LaHaye, Stephen A; Olesen, Jonas B; Lacombe, Shawn P

    2015-06-01

    With the publication of the 2014 Focused Update of the Canadian Cardiovascular Society Guidelines for the Management of Atrial Fibrillation, the Canadian Cardiovascular Society Atrial Fibrillation Guidelines Committee has introduced a new triage and management algorithm; the so-called "CCS Algorithm". The CCS Algorithm is based upon expert opinion of the best available evidence; however, the CCS Algorithm has not yet been validated. Accordingly, the purpose of this study is to evaluate the performance of the CCS Algorithm in a cohort of real world patients. We compared the CCS Algorithm with the European Society of Cardiology (ESC) Algorithm in 172 hospital inpatients who were at risk of stroke due to non-valvular atrial fibrillation and in whom anticoagulant therapy was being considered. The CCS Algorithm and the ESC Algorithm were concordant in 170/172 patients (99% of the time). There were two patients (1%) with vascular disease, but no other thromboembolic risk factors, who were classified as requiring oral anticoagulant therapy using the ESC Algorithm, but for whom ASA was recommended by the CCS Algorithm. The CCS Algorithm appears to be unnecessarily complicated in so far as it does not appear to provide any additional discriminatory value above and beyond the use of the ESC Algorithm, and its use could result in undertreatment of patients, specifically female patients with vascular disease, whose real risk of stroke has been understated by the Guidelines.

  19. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high level perception. This book will first introduce classic graph-cut segmentation algorithms and then discuss state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods will be provided

  20. Generic algorithms for high performance scalable geocomputing

    Science.gov (United States)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done is separated from the model-specific logic representing the modeled system
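
    The design idea, hiding the distribution of work behind a single grid operation, can be sketched in Python (this is an illustration of the concept only, not the Fern C++ API): a focal-mean filter is applied to row blocks in a worker pool, with a one-row halo so that the result equals the serial computation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.ndimage import uniform_filter

def smooth_block(args):
    """Apply a 3x3 mean filter to one row block, using a one-row halo so that
    values at block boundaries are identical to a whole-grid computation."""
    block, has_top_halo, has_bottom_halo = args
    out = uniform_filter(block, size=3, mode="nearest")
    top = 1 if has_top_halo else 0
    bottom = block.shape[0] - (1 if has_bottom_halo else 0)
    return out[top:bottom]

def parallel_smooth(grid, n_workers=4):
    """Split the grid into row blocks and filter them concurrently; the caller
    sees a single 'focal mean' operation, as a Fern-style library might expose it."""
    bounds = np.linspace(0, grid.shape[0], n_workers + 1, dtype=int)
    tasks = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        lo_h, hi_h = max(lo - 1, 0), min(hi + 1, grid.shape[0])
        tasks.append((grid[lo_h:hi_h], lo_h < lo, hi_h > hi))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(smooth_block, tasks))
    return np.vstack(parts)

grid = np.random.default_rng(0).random((1000, 1000))
assert np.allclose(parallel_smooth(grid), uniform_filter(grid, size=3, mode="nearest"))
```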

  1. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    The intercarrier interference (ICI) problem of cognitive radio (CR) is severe. In this paper, the machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU). Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, the parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirement of CR. Simulation results show that the data transmission rate threshold of un-LU can be set, the data transmission quality of un-LU can be ensured, the ICI of a licensed user (LU) is suppressed, and the bit error rate (BER) performance of LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure, and the communication performance of CR is enhanced.

  2. Density-independent algorithm for sensing moisture content of sawdust based on reflection measurements

    Science.gov (United States)

    A density-independent algorithm for moisture content determination in sawdust, based on a one-port reflection measurement technique, is proposed for the first time. The performance of this algorithm is demonstrated through measurement of the dielectric properties of sawdust with an open-ended half-mode s...

  3. High Performance Parallel Multigrid Algorithms for Unstructured Grids

    Science.gov (United States)

    Frederickson, Paul O.

    1996-01-01

    We describe a high performance parallel multigrid algorithm for a rather general class of unstructured grid problems in two and three dimensions. The algorithm PUMG, for parallel unstructured multigrid, is related in structure to the parallel multigrid algorithm PSMG introduced by McBryan and Frederickson, for they both obtain a higher convergence rate through the use of multiple coarse grids. Another reason for the high convergence rate of PUMG is its smoother, an approximate inverse developed by Baumgardner and Frederickson.

  4. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  5. Predicting Students’ Performance using Modified ID3 Algorithm

    OpenAIRE

    Ramanathan L; Saksham Dhanda; Suresh Kumar D

    2013-01-01

    The ability to predict the performance of students is crucial in our present education system, and data mining concepts can be used for this purpose. The ID3 algorithm is one of the best-known algorithms for generating decision trees, but it has the shortcoming of being biased towards attributes with many values. This research aims to overcome this shortcoming of the algorithm by using gain ratio (instead of information gain) as well as by giving weights to each attribute at every...
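
    The gain ratio correction can be shown with a small sketch (hypothetical student data, not the paper's data set): information gain is divided by the split information of the attribute, which penalises attributes with many distinct values.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain_and_gain_ratio(rows, labels, attribute_index):
    """Information gain favours attributes with many distinct values; gain ratio
    divides it by the split information to penalise such attributes (the
    C4.5-style correction that the paper adopts in place of plain ID3)."""
    n = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute_index], []).append(label)
    remainder = sum(len(part) / n * entropy(part) for part in partitions.values())
    gain = entropy(labels) - remainder
    split_info = entropy([row[attribute_index] for row in rows])
    return gain, (gain / split_info if split_info > 0 else 0.0)

# tiny hypothetical student data set: (attendance, assignments) -> pass/fail
rows = [("high", "done"), ("high", "late"), ("low", "late"),
        ("low", "missing"), ("high", "done"), ("low", "missing")]
labels = ["pass", "pass", "fail", "fail", "pass", "fail"]
for i, name in enumerate(["attendance", "assignments"]):
    print(name, info_gain_and_gain_ratio(rows, labels, i))
```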

  6. Using Technical Performance Measures

    Science.gov (United States)

    Garrett, Christopher J.; Levack, Daniel J. H.; Rhodes, Russel E.

    2011-01-01

    All programs have requirements. For these requirements to be met, there must be a means of measurement. A Technical Performance Measure (TPM) is defined to produce a measured quantity that can be compared to the requirement. In practice, the TPM is often expressed as a maximum or minimum and a goal. Example TPMs for a rocket program are: vacuum or sea level specific impulse (lsp), weight, reliability (often expressed as a failure rate), schedule, operability (turn-around time), design and development cost, production cost, and operating cost. Program status is evaluated by comparing the TPMs against specified values of the requirements. During the program many design decisions are made and most of them affect some or all of the TPMs. Often, the same design decision changes some TPMs favorably while affecting other TPMs unfavorably. The problem then becomes how to compare the effects of a design decision on different TPMs. How much failure rate is one second of specific impulse worth? How many days of schedule is one pound of weight worth? In other words, how to compare dissimilar quantities in order to trade and manage the TPMs to meet all requirements. One method that has been used successfully and has a mathematical basis is Utility Analysis. Utility Analysis enables quantitative comparison among dissimilar attributes. It uses a mathematical model that maps decision maker preferences over the tradeable range of each attribute. It is capable of modeling both independent and dependent attributes. Utility Analysis is well supported in the literature on Decision Theory. It has been used at Pratt & Whitney Rocketdyne for internal programs and for contracted work such as the J-2X rocket engine program. This paper describes the construction of TPMs and describes Utility Analysis. It then discusses the use of TPMs in design trades and to manage margin during a program using Utility Analysis.
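
    As a hedged illustration of the utility-analysis idea (an additive model with independent attributes; the actual preference model, weights and ranges used on the programs mentioned above are not given in this abstract, and the numbers below are invented): each TPM value is mapped to a 0-1 utility over its tradeable range, and the weighted sum lets dissimilar attributes be compared in a single trade.

```python
def linear_utility(value, worst, best):
    """Map an attribute value onto a 0..1 utility over its tradeable range
    (1 at the goal, 0 at the minimum acceptable value); works for attributes
    where either larger or smaller values are better."""
    u = (value - worst) / (best - worst)
    return max(0.0, min(1.0, u))

# Hypothetical TPMs, tradeable ranges and decision-maker weights for a rocket engine.
tpm_ranges = {"isp_s": (440, 455), "weight_kg": (2600, 2400), "failure_rate": (1e-3, 1e-4)}
weights = {"isp_s": 0.4, "weight_kg": 0.3, "failure_rate": 0.3}

def total_utility(design):
    return sum(weights[k] * linear_utility(design[k], *tpm_ranges[k]) for k in weights)

# Design B trades one second of Isp for ten kilograms of weight relative to design A:
design_a = {"isp_s": 450.0, "weight_kg": 2500, "failure_rate": 5e-4}
design_b = {"isp_s": 449.0, "weight_kg": 2490, "failure_rate": 5e-4}
print(total_utility(design_a), total_utility(design_b))   # the higher utility wins the trade
```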

  7. Strategic Measures of Teacher Performance

    Science.gov (United States)

    Milanowski, Anthony

    2011-01-01

    Managing the human capital in education requires measuring teacher performance. To measure performance, administrators need to combine measures of practice with measures of outcomes, such as value-added measures, and three measurement systems are needed: classroom observations, performance assessments or work samples, and classroom walkthroughs.…

  8. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  9. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm has already been proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in periodically incremental environments dealing with bulk updates. In this paper a performance evaluation of this incremental K-means clustering algorithm is carried out using an air pollution database. The paper also compares the performance evaluations of the existing K-means clustering and i...

  10. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  11. Improved collaborative filtering recommendation algorithm of similarity measure

    Science.gov (United States)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used recommendation algorithms in personalized recommender systems. The key is to find the nearest-neighbour set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the items commonly rated by two users, but ignore the relationship between those commonly rated items and all the items each user rates. And because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, this paper presents an improved similarity measure method based on the common preference between users, the difference in rating scale, and the scores of common items; based on this method, a collaborative filtering recommendation algorithm based on similarity improvement is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendations, thus alleviating the impact of data sparseness.
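
    A minimal sketch of the general idea (the paper's exact formula is not reproduced): a conventional Pearson similarity computed over the co-rated items is damped by the share of co-rated items among everything the two users rated, so that users with only a tiny overlap do not appear spuriously similar. The weighting factor and the toy ratings are assumptions.

```python
import math

def pearson_on_common(r_u, r_v, common):
    """Pearson correlation of two users' ratings, restricted to co-rated items."""
    mu_u = sum(r_u[i] for i in common) / len(common)
    mu_v = sum(r_v[i] for i in common) / len(common)
    num = sum((r_u[i] - mu_u) * (r_v[i] - mu_v) for i in common)
    den = math.sqrt(sum((r_u[i] - mu_u) ** 2 for i in common)) * \
          math.sqrt(sum((r_v[i] - mu_v) ** 2 for i in common))
    return num / den if den else 0.0

def adjusted_similarity(r_u, r_v):
    """Damp the common-item similarity by the share of co-rated items among all
    items the two users rated (a Jaccard-style factor), so a tiny overlap does
    not yield a spuriously high similarity."""
    common = set(r_u) & set(r_v)
    if len(common) < 2:
        return 0.0
    overlap_weight = len(common) / len(set(r_u) | set(r_v))
    return overlap_weight * pearson_on_common(r_u, r_v, common)

u = {"i1": 5, "i2": 3, "i3": 4, "i4": 4}
v = {"i1": 4, "i2": 2, "i5": 5, "i6": 1, "i7": 3}
print(adjusted_similarity(u, v))
```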

  12. Convergence of Algorithms for Reconstructing Convex Bodies and Directional Measures

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus; Milanfar, Peyman

    2006-01-01

    We investigate algorithms for reconstructing a convex body $K$ in $\mathbb{R}^n$ from noisy measurements of its support function or its brightness function in $k$ directions $u_1, \dots, u_k$. The key idea of these algorithms is to construct a convex polytope $P_k$ whose support function (or brightness function) best...

  13. Freight performance measures : approach analysis.

    Science.gov (United States)

    2010-05-01

    This report reviews the existing state of the art and also the state of the practice of freight performance measurement. Most performance measures at the state level have aimed at evaluating highway or transit infrastructure performance with an empha...

  14. Winter maintenance performance measure.

    Science.gov (United States)

    2016-01-01

    The Winter Performance Index is a method of quantifying winter storm events and the DOT's response to them. It is a valuable tool for evaluating the State's maintenance practices, performing post-storm analysis, and training maintenance personnel...

  15. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if they are implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general purpose graphical processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
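
    For context, the baseline that locality-aware schemes such as DiamondTorre improve on is the stepwise-synchronised cross-stencil update; a minimal 1-D second-order version is sketched below (the paper works with higher-order schemes on GPUs, so this is only the reference update, with illustrative grid and pulse parameters).

```python
import numpy as np

def wave_step(u_prev, u_curr, c, dt, dx):
    """One time step of the 1-D wave equation u_tt = c^2 u_xx with the standard
    second-order cross stencil; this is the naive stepwise-synchronised update
    whose memory-bandwidth limit motivates locality-aware schemes like DiamondTorre."""
    u_next = np.empty_like(u_curr)
    r2 = (c * dt / dx) ** 2
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0          # fixed (Dirichlet) boundaries
    return u_next

# toy run: a Gaussian pulse propagating on a string
n, dx, c = 400, 1.0, 1.0
dt = 0.5 * dx / c                          # respects the CFL stability condition
x = np.arange(n) * dx
u_prev = np.exp(-((x - 200.0) / 10.0) ** 2)
u_curr = u_prev.copy()                     # zero initial velocity
for _ in range(200):
    u_prev, u_curr = u_curr, wave_step(u_prev, u_curr, c, dt, dx)
print(float(u_curr.max()))
```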

  16. Improved Ant Colony Clustering Algorithm and Its Performance Study

    Science.gov (United States)

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  17. Convergence Performance of Adaptive Algorithms of L-Filters

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2003-01-01

    Full Text Available This paper deals with determining the convergence parameters of the adaptive algorithms used in adaptive L-filter design. The stability of the adaptation process, the convergence rate (or adaptation time) and the behaviour of the convergence curve are among the basic properties of adaptive algorithms, and L-filters with a variety of adaptive algorithms were used to determine them. Establishing the convergence performance of adaptive filters is important mainly for hardware applications, where real-time filtration or adaptation of the filter coefficients with a low volume of input data is required.
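
    The record does not give the filter equations. As a rough sketch of the kind of adaptive L-filter being characterized — under the usual definition of an L-filter as a weighted sum of the order statistics of a sliding window, with the weights adapted by an LMS-type rule — one could write the following; the window length, step size and test signal are assumptions.

```python
import numpy as np

def adaptive_l_filter(x, d, window=5, mu=0.01):
    """LMS-adapted L-filter sketch: the output is a weighted sum of the sorted
    samples (order statistics) in a sliding window; the weights track the
    desired signal d. This follows the usual L-filter formulation, not
    necessarily the exact variants compared in the paper."""
    w = np.full(window, 1.0 / window)          # start from a moving-average filter
    y = np.zeros(len(x))
    for n in range(window - 1, len(x)):
        s = np.sort(x[n - window + 1:n + 1])   # order statistics of the window
        y[n] = w @ s
        e = d[n] - y[n]                        # error against the desired signal
        w += mu * e * s                        # LMS weight update
    return y, w

# toy usage: recover a slow sine from impulsive noise
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + (np.random.rand(500) < 0.05) * np.random.randn(500) * 5
filtered, weights = adaptive_l_filter(noisy, clean)
print("final weights:", np.round(weights, 3))
```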

  18. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, each possessed its own merits and claimed to outperform the others. However, such claims are prone to subjectivity because each algorithm was compared with only a few other algorithms, using a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on the proposed performance indices, to conduct a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted or proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. The ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects and weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted or proposed eight performance indices to make a comprehensive performance evaluation of the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new

  19. Evaluation of Activity Recognition Algorithms for Employee Performance Monitoring

    OpenAIRE

    Mehreen Mumtaz; Hafiz Adnan Habib

    2012-01-01

    Successful human resource management plays a key role in the success of any organization. Traditionally, human resource managers rely on various information technology solutions such as Payroll and Work Time Systems incorporating RFID and biometric technologies. This research evaluates activity recognition algorithms for employee performance monitoring. An activity recognition algorithm has been implemented that categorizes the activity of an employee into the following classes: job activities and...

  20. Performance Measurement Baseline Change Request

    Data.gov (United States)

    Social Security Administration — The Performance Measurement Baseline Change Request template is used to document changes to scope, cost, schedule, or operational performance metrics for SSA's Major...

  1. Analysis of ANSI N13.11: the performance algorithm

    International Nuclear Information System (INIS)

    Roberson, P.L.; Hadley, R.T.; Thorson, M.R.

    1982-06-01

    The method of performance testing for personnel dosimeters specified in draft ANSI N13.11, Criteria for Testing Personnel Dosimetry Performance is evaluated. Points addressed are: (1) operational behavior of the performance algorithm; (2) dependence on the number of test dosimeters; (3) basis for choosing an algorithm; and (4) other possible algorithms. The performance algorithm evaluated for each test category is formed by adding the calibration bias and its standard deviation. This algorithm is not optimal due to a high dependence on the standard deviation. The dependence of the calibration bias on the standard deviation is significant because of the low number of dosimeters (15) evaluated per category. For categories with large standard deviations the uncertainty in determining the performance criterion is large. To have a reasonable chance of passing all categories in one test, we required a 95% probability of passing each category. Then, the maximum permissible standard deviation is 30% even with a zero bias. For test categories with standard deviations <10%, the bias can be as high as 35%. For intermediate standard deviations, the chance of passing a category is improved by using a 5 to 10% negative bias. Most multipurpose personnel dosimetry systems will probably require detailed calibration adjustments to pass all categories within two rounds of testing
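
    For concreteness, the bias-plus-standard-deviation performance quantity analyzed in the report reduces to a short computation per test category. In the sketch below the tolerance level is a configurable assumption, not necessarily the limit specified in ANSI N13.11, and the dose values are invented.

```python
import statistics

def category_performance(reported, delivered, tolerance=0.5):
    """Bias-plus-standard-deviation performance quantity for one test category.
    reported/delivered are paired dose lists; tolerance is an assumed limit,
    not necessarily the value specified in ANSI N13.11."""
    p = [(r - d) / d for r, d in zip(reported, delivered)]  # per-dosimeter relative error
    bias = statistics.mean(p)
    spread = statistics.stdev(p)
    performance = abs(bias) + spread
    return bias, spread, performance, performance <= tolerance

# toy example: 15 dosimeters with roughly 10% scatter and a small positive bias
delivered = [5.0] * 15
reported = [5.0 * (1.05 + 0.1 * ((i % 5) - 2) / 2) for i in range(15)]
print(category_performance(reported, delivered))
```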

  2. Performance analysis of manufacturing systems : queueing approximations and algorithms

    NARCIS (Netherlands)

    Vuuren, van M.

    2007-01-01

    Performance Analysis of Manufacturing Systems Queueing Approximations and Algorithms This thesis is concerned with the performance analysis of manufacturing systems. Manufacturing is the application of tools and a processing medium to the transformation of raw materials into finished goods for sale.

  3. A high accuracy algorithm of displacement measurement for a micro-positioning stage

    Directory of Open Access Journals (Sweden)

    Xiang Zhang

    2017-05-01

    Full Text Available A high accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed, based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations is conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability, and that its resolution can theoretically attain 0.01 pixel. In addition, the computing time is reduced by a factor of about 6.7 compared with the classical normalized cross correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) is built. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
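
    The paper's exact matching procedure is not reproduced here. As a generic sketch of the two-stage idea it describes — integer-pixel template matching followed by sub-pixel refinement — the following NumPy code performs an exhaustive normalized cross-correlation search along one axis and then a three-point parabolic fit around the peak; the image contents and patch size are placeholders.

```python
import numpy as np

def ncc_displacement(ref, cur, patch=32):
    """Integer-pixel NCC search plus 1D parabolic sub-pixel refinement along x.
    A sketch only; the paper's own integer/sub-pixel procedure may differ."""
    t = ref[:patch, :patch] - ref[:patch, :patch].mean()
    best, best_x = -2.0, 0
    scores = {}
    for x in range(cur.shape[1] - patch):
        w = cur[:patch, x:x + patch] - cur[:patch, x:x + patch].mean()
        s = float((t * w).sum() / (np.linalg.norm(t) * np.linalg.norm(w) + 1e-12))
        scores[x] = s
        if s > best:
            best, best_x = s, x
    # parabolic interpolation around the integer peak for sub-pixel accuracy
    if 0 < best_x < cur.shape[1] - patch - 1:
        sl, sc, sr = scores[best_x - 1], scores[best_x], scores[best_x + 1]
        denom = sl - 2 * sc + sr
        delta = 0.5 * (sl - sr) / denom if abs(denom) > 1e-12 else 0.0
    else:
        delta = 0.0
    return best_x + delta

rng = np.random.default_rng(0)
ref = rng.random((32, 128))
cur = np.roll(ref, 7, axis=1)          # simulated 7-pixel stage displacement
print("estimated displacement:", ncc_displacement(ref, cur))
```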

  4. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    Science.gov (United States)

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary. The circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement; however, the latter introduces significant errors for small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, maintaining the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
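
    As an illustration of the simplest of the three methods discussed, the circle-fitting algorithm, the sketch below performs an algebraic least-squares circle fit to drop-edge points and derives the contact angle from the circle's intersection with the baseline y = 0. The synthetic profile and noise level are assumptions for demonstration; the ellipse-fitting and ADSA-P algorithms are not reproduced.

```python
import numpy as np

def circle_fit_contact_angle(x, y):
    """Algebraic (Kasa) least-squares circle fit, then contact angle at the
    baseline y = 0. Sketch of the circle-fitting approach only; ellipse
    fitting and ADSA-P are not reproduced here."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc ** 2 + yc ** 2)
    # For a circular cap sitting on y = 0, theta = arccos(-yc / r):
    # yc = 0 gives 90 deg, yc < 0 gives < 90 deg, yc > 0 gives > 90 deg.
    return np.degrees(np.arccos(np.clip(-yc / r, -1.0, 1.0)))

# synthetic 70-degree sessile drop profile with mild noise
theta_true = np.radians(70)
r0, yc0 = 1.0, -np.cos(theta_true)           # unit circle centred below the baseline
phi = np.linspace(np.pi / 2 - theta_true, np.pi / 2 + theta_true, 200)
x = r0 * np.cos(phi)
y = yc0 + r0 * np.sin(phi) + 0.002 * np.random.randn(phi.size)
print("estimated contact angle (deg):", circle_fit_contact_angle(x, y))
```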

  5. Transit performance measures in California.

    Science.gov (United States)

    2016-04-01

    This research is the result of a California Department of Transportation (Caltrans) request to assess the most commonly : available transit performance measures in California. Caltrans wanted to understand performance measures and data used by : Metr...

  6. A Feedback Optimal Control Algorithm with Optimal Measurement Time Points

    Directory of Open Access Journals (Sweden)

    Felix Jost

    2017-02-01

    Full Text Available Nonlinear model predictive control has been established over the last decades as a powerful methodology for providing feedback for dynamic processes. In practice it is usually combined with parameter and state estimation techniques, which makes it possible to cope with uncertainty on many levels. To reduce the uncertainty it has also been suggested to include optimal experimental design in the sequential process of estimation and control calculation. Most of the focus so far has been on dual control approaches, i.e., on using the controls to simultaneously excite the system dynamics (learning) and minimize a given objective (performing). We propose a new algorithm, which sequentially solves robust optimal control, optimal experimental design, and state and parameter estimation problems. Thus, we decouple the control and the experimental design problems. This has the advantages that we can analyze the impact of measurement timing (sampling) independently, and it is practically relevant for applications with either an ethical limitation on system excitation (e.g., chemotherapy treatment) or the need for fast feedback. The algorithm shows promising results, with a 36% reduction of parameter uncertainties for the Lotka-Volterra fishing benchmark example.

  7. Dentate Gyrus circuitry features improve performance of sparse approximation algorithms.

    Directory of Open Access Journals (Sweden)

    Panagiotis C Petrantonakis

    Full Text Available Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in the DG, which are estimated to engage 2-4% of the total population. This sparsity is assumed to enhance the ability of the DG to perform pattern separation, one of the most valuable contributions of the DG during memory formation. In this work, we investigate how features of the DG, such as its excitatory and inhibitory connectivity diagram, can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the signal processing field. Sparse approximation stands for the algorithmic identification of a few components from a dictionary that approximate a certain signal. The ability of the DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm Iterative Soft Thresholding (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect via mossy cells, is shown to enhance the performance of IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG.
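
    The DG-inspired modifications (lateral inhibition, mossy-cell pathways) are not reproduced here, but the baseline Iterative Soft Thresholding iteration that the paper builds on can be sketched as follows; the dictionary, signal and regularization weight are arbitrary assumptions.

```python
import numpy as np

def ist(D, s, lam=0.1, n_iter=200):
    """Baseline Iterative Soft Thresholding for sparse approximation:
    find a sparse a with s ~= D @ a. The DG-inspired lateral-inhibition
    variants described in the paper are not included in this sketch."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # gradient step size
    for _ in range(n_iter):
        a = a + step * D.T @ (s - D @ a)            # gradient step on 0.5*||s - Da||^2
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
a_true = np.zeros(256)
a_true[rng.choice(256, 6, replace=False)] = rng.standard_normal(6)
s = D @ a_true
a_hat = ist(D, s, lam=0.05)
print("recovered support size:", int(np.sum(np.abs(a_hat) > 1e-2)))
```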

  8. Performance Analysis of Binary Search Algorithm in RFID

    Directory of Open Access Journals (Sweden)

    Xiangmei SONG

    2014-12-01

    Full Text Available The binary search algorithm (BS) is an important anti-collision algorithm in Radio Frequency Identification (RFID) and one of the key technologies that determine whether the information in a tag can be identified by the reader quickly and reliably. The performance of BS directly affects the quality of service in the Internet of Things. This paper adopts an automated formal technique, probabilistic model checking, to analyze the performance of the BS algorithm formally. First, according to the working principle of the BS algorithm, its dynamic behavior is abstracted into a Discrete Time Markov Chain, which can describe deterministic, discrete-time and probabilistic choices. On this model we then calculate the probability of the data being sent successfully and the expected time for tags to complete the data transmission. Compared to S-ALOHA, another typical anti-collision protocol in RFID, experimental results show that as the number of tags increases the BS algorithm has lower space and time consumption, its average number of conflicts grows more slowly than under the S-ALOHA protocol, it needs less expected time to complete the data transmission, and its average data transmission speed is 1.6 times that of the S-ALOHA protocol.

  9. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the case where the shortest-path-based algorithm is used. In this work, we studied virus spreading in a complex network using the efficient-path and the global dynamic routing algorithms, as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeds in eradicating the virus better than a purely random intervention under the performant routing algorithm strategies.

  10. Measuring the Company Performance

    Directory of Open Access Journals (Sweden)

    Ion Stancu

    2006-03-01

    Full Text Available According to the logic of efficient capital investment, the management of saving capital invested in the company's assets must conclude, at the end of the financial year, with a surplus of real value (NPV > 0). From this point of view, in this paper we suggest the use of an investment valuation model for the assessment of the company's managerial and technological performance. Assuming the book value is a proxy for fair value (of assets and operational results) and assuming the cost of capital is correctly estimated, we evaluate the company's performance both by the net present value model and by the company's ability to create a surplus over the invested capital (NPV > 0). Our paper also aims to identify the financial break-even point (for which NPV is at least equal to zero) as the minimum acceptable level for the company's activity. Below this critical sales point, shareholder wealth is destroyed even if the company's sales are greater than the accounting break-even point. The performing activity level is one at which managers recover and surpass the cost of capital, which stands as the benchmark for normal activity. The risks of applying our suggested model come down to the reliability of the accounting data and of the cost-of-capital estimate. In spite of all this, the use of a sensitivity analysis to search for an average NPV would lead to a valuation of the company's performance within an investment logic with high informational power.

  11. Measuring the Company Performance

    Directory of Open Access Journals (Sweden)

    Ion Stancu

    2006-01-01

    Full Text Available According to the logic of efficient capital investment, the management of saving capital invested in the company's assets must conclude, at the end of the financial year, with a surplus of real value (NPV > 0). From this point of view, in this paper we suggest the use of an investment valuation model for the assessment of the company's managerial and technological performance. Assuming the book value is a proxy for fair value (of assets and operational results) and assuming the cost of capital is correctly estimated, we evaluate the company's performance both by the net present value model and by the company's ability to create a surplus over the invested capital (NPV > 0). Our paper also aims to identify the financial break-even point (for which NPV is at least equal to zero) as the minimum acceptable level for the company's activity. Below this critical sales point, shareholder wealth is destroyed even if the company's sales are greater than the accounting break-even point. The performing activity level is one at which managers recover and surpass the cost of capital, which stands as the benchmark for normal activity. The risks of applying our suggested model come down to the reliability of the accounting data and of the cost-of-capital estimate. In spite of all this, the use of a sensitivity analysis to search for an average NPV would lead to a valuation of the company's performance within an investment logic with high informational power.
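
    For concreteness, the NPV-based performance test described above reduces to a simple computation; the cash flows and cost of capital in this sketch are invented numbers, not data from the paper.

```python
def npv(cost_of_capital, cash_flows, initial_investment):
    """Net present value of an investment: discounted cash flows minus the
    initial outlay. NPV > 0 marks performance above the cost of capital,
    NPV = 0 the financial break-even point discussed in the paper."""
    discounted = sum(cf / (1 + cost_of_capital) ** t
                     for t, cf in enumerate(cash_flows, start=1))
    return discounted - initial_investment

# illustrative figures only
print(npv(0.10, [300, 350, 400, 420], initial_investment=1000))  # > 0: value created
print(npv(0.10, [250, 250, 250, 250], initial_investment=1000))  # < 0: value destroyed
```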

  12. Massively parallel performance of neutron transport response matrix algorithms

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1993-01-01

    Massively parallel red/black response matrix algorithms for the solution of within-group neutron transport problems are implemented on the Connection Machine-2, -200 and -5. The response matrices are derived from the diamond-difference and linear-linear nodal discrete ordinates and variational nodal P3 approximations. The unaccelerated performance of the iterative procedure is examined relative to the maximum rated performances of the machines. The effects of processor partition size, of virtual processor ratio and of problem size are examined in detail. For the red/black algorithm, the ratio of inter-node communication to computing time is found to be quite small, normally of the order of ten percent or less. Performance increases with problem size and with virtual processor ratio, within the memory-per-physical-processor limitation. Algorithm adaptation to coarser-grain machines is straightforward, with total computing time being virtually inversely proportional to the number of physical processors. (orig.)

  13. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available Advanced encryption standard (AES algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of AES algorithm using key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance than the existing VLSI architectures in terms of power, throughput and critical path delay.

  14. Segmentation of Mushroom and Cap width Measurement using Modified K-Means Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Eser Sert

    2014-01-01

    Full Text Available The mushroom is one of the commonly consumed foods, and image processing is an effective way to examine its visual features and detect the size of a mushroom. We developed software that segments a mushroom in a picture and measures the cap width of the mushroom. The K-Means clustering method is used for the segmentation; K-Means is one of the most successful clustering methods, and in our study we customized the algorithm to get the best result and tested it. In the system, the mushroom picture is first filtered and its histogram equalized, and after that the segmentation is performed. The results showed that the customized algorithm performed better segmentation than the classical K-Means algorithm. Tests performed on the designed software showed that segmentation of pictures with complex backgrounds is performed with high accuracy, and 20 mushroom caps were measured with a 2.281% relative error.
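
    As a rough sketch of the pipeline described — cluster the pixels, keep the mushroom cluster, then measure the cap width — a plain two-cluster K-means on gray levels followed by a widest-row measurement might look like the following. The synthetic image is an assumption and the paper's customized K-Means is not reproduced.

```python
import numpy as np

def kmeans_1d(values, k=2, n_iter=20):
    """Plain K-means on pixel intensities (not the customized variant of the paper)."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def cap_width_pixels(image):
    """Segment the bright object from the dark background, return widest row extent."""
    labels, centers = kmeans_1d(image.ravel())
    fg = int(np.argmax(centers))                    # brighter cluster = mushroom
    mask = (labels == fg).reshape(image.shape)
    widths = mask.sum(axis=1)                       # object pixels per row
    return int(widths.max())

# synthetic image: dark background with a bright elliptical "cap"
yy, xx = np.mgrid[0:120, 0:160]
img = 0.1 + 0.05 * np.random.rand(120, 160)
img[((xx - 80) / 50.0) ** 2 + ((yy - 60) / 25.0) ** 2 < 1] = 0.9
print("cap width in pixels:", cap_width_pixels(img))   # expected about 100
```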

  15. Economic measures of performance

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Cogeneration systems can reduce the total cost of utility service and, in some instances where power is sold to an electric utility, can even produce a positive net revenue stream; that is, the total cogeneration revenue is greater than the cogeneration system's operating cost plus the cost of supplemental fuel and power. Whether it is sited at an existing facility or is new construction, a cogeneration system does require an incremental investment over and above that which would be required if the end user were to utilize more conventional utility services. While the decision as to whether or not one should invest in cogeneration may consider such intangibles as the predictability of future utility costs, the reliability of the electrical supply and the quality of that supply, the decision ultimately becomes one of basic economics. This chapter briefly reviews several economic measures with regard to ease of use, accuracy and financial objective.

  16. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
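
    The study's disorientation metric itself is not reproduced here, but its core ingredient — a Needleman-Wunsch global alignment between a user's navigation path and an ideal path — can be sketched as follows; the scoring values are generic defaults, not those used in the study.

```python
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two page-visit sequences.
    Scoring parameters are generic defaults, not those of the cited study."""
    n, m = len(seq_a), len(seq_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

ideal_path = ["home", "unit1", "quiz1", "unit2"]
user_path = ["home", "unit1", "home", "unit2", "quiz1"]
# a lower alignment score relative to the ideal path suggests more disorientation
print(needleman_wunsch(user_path, ideal_path))
```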

  17. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed, for which a parallel overhead model was also derived. The predictions of the parallel performance models were compared with applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  18. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed, for which a parallel overhead model was also derived. The predictions of the parallel performance models were compared with applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  19. Performance of genetic algorithms in search for water splitting perovskites

    DEFF Research Database (Denmark)

    Jain, A.; Castelli, Ivano Eligio; Hautier, G.

    2013-01-01

    We examine the performance of genetic algorithms (GAs) in uncovering solar water light splitters over a space of almost 19,000 perovskite materials. The entire search space was previously calculated using density functional theory to determine solutions that fulfill constraints on stability, band...

  20. THE MEASURABILITY OF CONTROLLING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    V. Laval

    2017-04-01

    Full Text Available The urge to increase the performance of company processes is ongoing. Surveys indicate however, that many companies do not measure the controlling performance with a defined set of key performance indicators. This paper will analyze three categories of controlling key performance indicators based on their degree of measurability and their impact on the financial performance of a company. Potential measures to optimize the performance of the controlling department will be outlined and put in a logical order. The aligning of the controlling activity with the respective management expectation will be discussed as a key success factor of this improvement project.

  1. A Novel Attitude Measurement Algorithm in Magnetic Interference Environment

    Directory of Open Access Journals (Sweden)

    Lingxia Li

    2014-07-01

    Full Text Available In the current multi-sensor pedestrian navigation approach based on a Magnetic Angular Rate Gravity (MARG) sensor, the magnetometers are susceptible to external magnetic interference, and the attitude result is affected by many factors, such as the drift of the low-precision MEMS gyro and large body linear accelerations. In this paper, we propose an anti-interference algorithm based on a four-element Extended Kalman Filter (EKF). To reduce the impact of carrier linear acceleration and the local magnetic field on attitude measurement, an adaptive covariance matrix structure is considered. Moreover, a heading-angle correction threshold method is used for magnetic field compensation in interference environments. The experimental results show that the proposed algorithm effectively suppresses the influence of external magnetic interference on the heading angle and improves the accuracy of the system's attitude measurement.

  2. New Algorithm for Evaluating the Green Supply Chain Performance in an Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Pan Liu

    2016-09-01

    Full Text Available An effective green supply chain (GSC) can help an enterprise obtain more benefits and reduce costs. Therefore, developing an effective evaluation method for GSC performance is becoming increasingly important. In this study, the advantages and disadvantages of current performance evaluation approaches and algorithms for GSC performance evaluation were discussed and assessed. Based on these findings, an improved five-dimensional balanced scorecard was proposed in which the green performance indicators were revised to facilitate their measurement. A model based on Rough Set theory, the Genetic Algorithm, and the Levenberg-Marquardt Back Propagation (LMBP) neural network algorithm was proposed. Next, using Matlab, the Rosetta tool, and the practical data of company F, a case study was conducted. The results indicate that the proposed model has a high convergence speed and an accurate prediction ability, and its credibility and effectiveness were validated. In comparison with the normal Back Propagation neural network algorithm and the LMBP neural network algorithm, the proposed model has greater credibility and effectiveness. In practice, this method provides a more suitable indicator system and algorithm for enterprises to implement GSC performance evaluations in an uncertain environment. Academically, the proposed method addresses the lack of a theoretical basis for GSC performance evaluation, thus representing a new development in GSC performance evaluation theory.

  3. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization...... is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....

  4. Timing measurements of some tracking algorithms and suitability of FPGA's to improve the execution speed

    CERN Document Server

    Khomich, A; Kugel, A; Männer, R; Müller, M; Baines, J T M

    2003-01-01

    Some of the track reconstruction algorithms which are common to all B-physics channels and standard RoI processing have been tested for execution time and assessed for suitability for speed-up using an FPGA coprocessor. The studies presented in this note were performed in the C/C++ framework CTrig, which was the fullest set of algorithms available at the time of the study. To investigate the possible speed-up of the algorithms, the most time-consuming parts of TRT-LUT were implemented in VHDL for running on the FPGA coprocessor board MPRACE. MPRACE (Reconfigurable Accelerator / Computing Engine) is an FPGA coprocessor based on a Xilinx Virtex-2 FPGA, built as a 64-bit/66-MHz PCI card and developed at the University of Mannheim. Timing measurement results for a TRT Full Scan algorithm executed on the MPRACE are presented here as well. The measurement results show a speed-up factor of ~2 for this algorithm.

  5. Measuring and improving infrastructure performance

    National Research Council Canada - National Science Library

    Committee on Measuring and Improving Infrastructure Performance, National Research Council

    .... Developing a framework for guiding attempts at measuring the performance of infrastructure systems and grappling with the concept of defining good performance are the major themes of this book...

  6. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    Science.gov (United States)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.

  7. Performance of the ALICE secondary vertex b-tagging algorithm

    CERN Document Server

    INSPIRE-00262232

    2016-11-04

    The identification of jets originating from beauty quarks in heavy-ion collisions is important to study the properties of the hot and dense matter produced in such collisions. A variety of algorithms for b-jet tagging was elaborated at the LHC experiments. They rely on the properties of B hadrons, i.e. their long lifetime, large mass and large multiplicity of decay products. In this work, the b-tagging algorithm based on displaced secondary-vertex topologies is described. We present Monte Carlo based performance studies of the algorithm for charged jets reconstructed with the ALICE tracking system in p-Pb collisions at $\\sqrt{s_\\text{NN}}$ = 5.02 TeV. The tagging efficiency, rejection rate and the correction of the smearing effects of non-ideal detector response are presented.

  8. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    OpenAIRE

    Jonathan Bruce Shepherd; Tomohito Wada; David Rowlands; Daniel Arthur James

    2016-01-01

    With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though there are a variety of algorithms that have been proposed to monito...

  9. A RECURSIVE ALGORITHM SUITABLE FOR REAL-TIME MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Giovanni Bucci

    1995-12-01

    Full Text Available This paper deals with a recursive algorithm suitable for real-time measurement applications, based on an indirect technique, useful in those applications where the required quantities cannot be measured in a straightforward way. To cope with time constraints, a parallel formulation suitable for implementation on multiprocessor systems is presented; the adopted concurrent implementation is based on factorization techniques. Some experimental results related to the application of the system to measurements on synchronous motors are included.
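
    The abstract does not spell out the recursive estimator. As a generic example of a recursive algorithm suited to indirect real-time measurement, a recursive least-squares sketch is shown below; it is purely illustrative and is not the algorithm, factorization or synchronous-motor application of the cited paper.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator for indirect measurement: estimates parameters
    theta of a linear model y = phi @ theta from streaming samples. This is
    an illustrative recursive algorithm, not the one in the cited paper."""
    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta += k * (y - phi @ self.theta)             # innovation correction
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# toy usage: recover theta = [2.0, -1.0] from noisy streaming data
rng = np.random.default_rng(2)
rls = RecursiveLeastSquares(2)
for _ in range(500):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal()
    est = rls.update(phi, y)
print("estimated parameters:", np.round(est, 3))
```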

  10. The Politics of Performance Measurement

    DEFF Research Database (Denmark)

    Bjørnholt, Bente; Larsen, Flemming

    2014-01-01

    Performance measurements are meant to improve public decision making and organizational performance. But performance measurements are far from always rational tools for problem solving; they are also political instruments. The central question addressed in this article is how performance measurement affects public policy. The aim is to conceptualize the political consequences of performance measurement, and of special concern is how performance systems influence how political decisions are made, what kind of political decisions are conceivable, and how they are implemented. Performance measurement can have an impact on the political decision-making process, as the focus on performance goals entails a kind of reductionism (complex problems are simplified), sequential decision-making processes (with a division into separate policy issues) and short-sighted decisions (based on the need for making operational goals). The literature...

  11. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    Directory of Open Access Journals (Sweden)

    Jonathan Bruce Shepherd

    2016-08-01

    Full Text Available With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294).
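
    The published AHRS-based algorithm is not reproduced here. As a heavily simplified sketch of the kinematics it exploits — a wheel-mounted inertial sensor whose rotation rate about the axle, combined with the wheel radius, gives linear speed, which integrates to distance — one could write the following; sensor placement, wheel radius and the push profile are assumptions.

```python
import numpy as np

def wheel_speed_and_distance(gyro_axle_rad_s, dt, wheel_radius_m=0.3):
    """Linear speed and cumulative distance from a wheel-mounted gyro's
    rotation rate about the axle. A simplified kinematic sketch under
    assumed sensor placement and wheel radius; the published AHRS-based
    algorithm additionally corrects for camber and sensor orientation."""
    speed = np.abs(gyro_axle_rad_s) * wheel_radius_m     # v = omega * r
    distance = np.cumsum(speed) * dt                     # rectangular integration
    return speed, distance

# toy push sequence: bursts of wheel rotation sampled at 100 Hz
dt = 0.01
t = np.arange(0, 5, dt)
omega = 4.0 * np.maximum(np.sin(2 * np.pi * 0.5 * t), 0.0)   # rad/s, intermittent pushes
speed, dist = wheel_speed_and_distance(omega, dt)
print("peak speed (m/s):", round(float(speed.max()), 2),
      "total distance (m):", round(float(dist[-1]), 2))
```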

  12. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image; such interesting structures often correspond to discontinuities (edges). The proposed algorithm has been tested using a variety of standard images and its performance has been compared against several de-noising algorithms known from the prior art. Experimental results show that the proposed algorithm preserves the edges better and, in most cases, improves the measured visual quality of the denoised images in comparison to the existing methods known from the literature. The improvement is obtained without excessive computational cost, and the algorithm works well on a wide range of different types of noise.

  13. Quality measures for HRR alignment based ISAR imaging algorithms

    CSIR Research Space (South Africa)

    Janse van Rensburg, V

    2013-05-01

    Full Text Available Some Inverse Synthetic Aperture Radar (ISAR) algorithms form the image in a two-step process of range alignment and phase conjugation. This paper discusses a comprehensive set of measures used to quantify the quality of range alignment, with the aim...

  14. Diagnostic colonoscopy: performance measurement study.

    Science.gov (United States)

    Kuznets, Naomi

    2002-07-01

    This is the fifth of a series of best practices studies undertaken by the Performance Measurement Initiative (PMI), the centerpiece of the Institute for Quality Improvement (IQI), a not-for-profit quality improvement subsidiary of the Accreditation Association for Ambulatory Health Care (AAAHC) (Performance Measurement Initiative, 1999a, 1999b, 2000a, 2000b). The IQI was created to offer clinical performance measurement and improvement opportunities to ambulatory health care organizations and others interested in quality patient care. The purpose of the study was to provide opportunities to initiate clinical performance measurement on key processes and outcomes for this procedure and use this information for clinical quality improvement. This article provides performance measurement information on how organizations that have demonstrated and validated differences in clinical practice can have similar outcomes, but at a dramatically lower cost. The intent of the article is to provide organizations with alternatives in practice to provide a better value to their patients.

  15. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
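
    The architecture-specific models of the paper are not reproduced. As a generic illustration of this kind of performance model — computation time that shrinks with processor count plus a communication overhead term, from which speedup and parallel efficiency follow — consider the toy model below; its parameters are invented.

```python
def predicted_speedup(p, t_comp, t_comm_per_proc):
    """Toy performance model: serial work t_comp divided among p processors
    plus a per-processor communication overhead. Purely illustrative; the
    paper's models are specific to the iPSC/860 and Sequent Balance 8000."""
    t_parallel = t_comp / p + t_comm_per_proc * p
    speedup = t_comp / t_parallel
    efficiency = speedup / p
    return speedup, efficiency

# efficiency deteriorates as communication grows with processor count
for p in (1, 4, 16, 64, 256):
    s, e = predicted_speedup(p, t_comp=100.0, t_comm_per_proc=0.05)
    print(f"p={p:4d}  speedup={s:7.2f}  efficiency={e:5.2f}")
```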

  16. Filtering algorithm for radial displacement measurements of a dented pipe

    International Nuclear Information System (INIS)

    Hojjati, M.H.; Lukasiewicz, S.A.

    2008-01-01

    Experimental measurements are always affected by some noise and errors caused by inherent inaccuracies and deficiencies of the experimental techniques and measuring devices used. In some fields, such as strain calculations in a dented pipe, the results are very sensitive to the errors. This paper presents a filtering algorithm to remove noise and errors from experimental measurements of radial displacements of a dented pipe. The proposed filter eliminates the errors without harming the measured data. The filtered data can then be used to estimate membrane and bending strains. The method is very effective and easy to use and provides a helpful practical measure for inspection purposes
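
    The paper's filter is not given in the abstract. As a generic example of removing measurement noise from a sampled radial-displacement profile before strain estimation, a centered moving-average smoother over the circumferential samples might look like this; the window length and synthetic dent profile are assumptions.

```python
import numpy as np

def smooth_radial_displacement(w, window=9):
    """Centered moving-average filter for a sampled radial-displacement
    profile w (circumferential direction assumed periodic). Illustrative
    only; the cited paper uses its own filtering algorithm."""
    kernel = np.ones(window) / window
    padded = np.concatenate([w[-(window // 2):], w, w[:window // 2]])  # periodic padding
    return np.convolve(padded, kernel, mode="valid")

# synthetic dent: smooth inward displacement plus measurement noise
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
dent = -2.0 * np.exp(-((theta - np.pi) / 0.3) ** 2)        # mm
measured = dent + 0.1 * np.random.randn(theta.size)
smoothed = smooth_radial_displacement(measured)
print("noise std before/after:", round(float(np.std(measured - dent)), 3),
      round(float(np.std(smoothed - dent)), 3))
```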

  17. Algorithmic randomness, physical entropy, measurements, and the second law

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic information content is equal to the size -- in the number of bits -- of the shortest program for a universal Turing machine which can reproduce a state of a physical system. In contrast to the statistical Boltzmann-Gibbs-Shannon entropy, which measures ignorance, the algorithmic information content is a measure of the available information. It is defined without recourse to probabilities and can be regarded as a measure of randomness of a definite microstate. I suggest that the physical entropy S -- that is, the quantity which determines the amount of work ΔW which can be extracted in the cyclic isothermal expansion process through the equation ΔW = k_B TΔS -- is a sum of two contributions: the missing information measured by the usual statistical entropy and the known randomness measured by the algorithmic information content. The sum of these two contributions is a ''constant of motion'' in the process of a dissipationless measurement on an equilibrium ensemble. This conservation under a measurement, which can be traced back to the noiseless coding theorem of Shannon, is necessary to rule out the existence of a successful Maxwell's demon. 17 refs., 3 figs

  18. Performance study of LMS based adaptive algorithms for unknown system identification

    International Nuclear Information System (INIS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-01-01

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment
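
    For reference, the basic LMS and NLMS updates compared in the study can be sketched in an adaptive system identification setting as follows; the filter length, step sizes and the unknown system are arbitrary assumptions, not the configurations used in the paper.

```python
import numpy as np

def identify(x, d, n_taps=8, mu=0.05, normalized=False, eps=1e-6):
    """LMS / NLMS adaptive system identification: adapt w so that w*x tracks
    the measured (noisy) output d of an unknown system. Parameter values are
    illustrative, not those of the cited comparison."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # most recent input samples, newest first
        e = d[n] - w @ u                      # a-priori error
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e * u
    return w

rng = np.random.default_rng(3)
h_true = np.array([0.6, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])   # unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
for flag in (False, True):
    w_hat = identify(x, d, normalized=flag)
    print("NLMS" if flag else "LMS ", "misalignment:",
          round(float(np.linalg.norm(w_hat - h_true) / np.linalg.norm(h_true)), 4))
```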

  19. Performance study of LMS based adaptive algorithms for unknown system identification

    Energy Technology Data Exchange (ETDEWEB)

    Javed, Shazia; Ahmad, Noor Atinah [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Penang (Malaysia)

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  20. Performance of the ATLAS primary vertex reconstruction algorithms

    CERN Document Server

    Zhang, Matt

    2017-01-01

    The reconstruction of primary vertices in the busy, high pile-up environment of the LHC is a challenging task. The challenges and novel methods developed by the ATLAS experiment to reconstruct vertices in such environments will be presented. Advances in vertex seeding, including methods taken from medical imaging which allow for the reconstruction of very nearby vertices, will be highlighted. The performance of the current vertexing algorithms using early Run-2 data will be presented and compared to results from simulation.

  1. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as the auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
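
    The fragment describes the recursive Towers of Hanoi procedure: move N0 disks aside, move the next disk directly, then move the N0 disks back. A complete version of that move_disk recursion, written here in Python rather than the article's Logo-like language, is:

```python
def move_tower(n, source, target, auxiliary, moves):
    """Move n disks from source to target using auxiliary as temporary storage.
    This fills in the recursive structure sketched in the record: move the top
    n-1 disks aside, move the largest disk directly, then move the n-1 back."""
    if n == 0:
        return
    move_tower(n - 1, source, auxiliary, target, moves)
    moves.append((source, target))            # the nth disk goes directly
    move_tower(n - 1, auxiliary, target, source, moves)

moves = []
move_tower(3, "A", "C", "B", moves)
print(len(moves), "moves:", moves)            # 2**3 - 1 = 7 moves
```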

  2. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al, Ishida et al, and Giger et al, have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms

  3. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-29

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with

  4. Facilities projects performance measurement system

    International Nuclear Information System (INIS)

    Erben, J.F.

    1979-01-01

    The two DOE-owned facilities at Hanford, the Fuels and Materials Examination Facility (FMEF), and the Fusion Materials Irradiation Test Facility (FMIT), are described. The performance measurement systems used at these two facilities are next described

  5. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
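
    The full statistical and compressed-sensing reconstruction algorithms compared in the paper are beyond a short sketch, but the total-variation penalty at the heart of the CS method can be illustrated with a simple smoothed-TV denoising step solved by plain gradient descent; all parameters and the test phantom are illustrative assumptions, and this is not the paper's reconstruction code.

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.1, n_iter=100, eps=1e-6):
    """Gradient descent on 0.5*||u - img||^2 + lam * TV_eps(u), where TV_eps is
    a smoothed (isotropic) total variation. Illustrates the TV penalty used by
    CS-type reconstructions; it is not the paper's full algorithm."""
    u = img.copy()
    for _ in range(n_iter):
        gx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)            # data term + TV gradient
    return u

rng = np.random.default_rng(4)
phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
den = tv_denoise(noisy)
print("RMSE before/after:", round(float(np.sqrt(np.mean((noisy - phantom) ** 2))), 3),
      round(float(np.sqrt(np.mean((den - phantom) ** 2))), 3))
```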

  6. Performance Measures, Benchmarking and Value.

    Science.gov (United States)

    McGregor, Felicity

    This paper discusses performance measurement in university libraries, based on examples from the University of Wollongong (UoW) in Australia. The introduction highlights the integration of information literacy into the curriculum and the outcomes of a 1998 UoW student satisfaction survey. The first section considers performance indicators in…

  7. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...
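
    Since the record cites Euclid's algorithm as the canonical example of an algorithm, the standard remainder-based form is sketched below in Python rather than in the pseudo-language used in the article series.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21
```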

  8. Performance evaluation of PCA-based spike sorting algorithms.

    Science.gov (United States)

    Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George

    2008-09-01

    Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts.
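
    The evaluation pipeline described — PCA feature extraction on spike waveforms followed by clustering — can be sketched, for example, with scikit-learn; the synthetic spike shapes, the number of principal components and the use of k-means in place of nev2lkit's own clustering are assumptions, not the study's exact setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic spike waveforms from two units plus noise (stand-in for real recordings).
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 40)
unit_a = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2)
unit_b = 0.7 * np.exp(-((t - 0.4) / 0.07) ** 2) - 0.8 * np.exp(-((t - 0.6) / 0.08) ** 2)
spikes = np.vstack([unit_a + 0.05 * rng.standard_normal(40) for _ in range(100)] +
                   [unit_b + 0.05 * rng.standard_normal(40) for _ in range(100)])

# PCA feature extraction: the paper's question is how many components to keep.
features = PCA(n_components=4).fit_transform(spikes)

# Cluster in the reduced feature space (the number of units is assumed known here).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
true = np.array([0] * 100 + [1] * 100)
agreement = max(np.mean(labels == true), np.mean(labels != true))  # label permutation
print("clustering agreement:", round(float(agreement), 3))
```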

  9. Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.

    Science.gov (United States)

    Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu

    2017-01-01

    The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases. The Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have proved to be performance guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there is no theoretical work on the performance of EAs for STP. In this article, we reveal that the (1+1) EA achieves a 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime [Formula: see text], where [Formula: see text], [Formula: see text], and [Formula: see text] are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1+1) EA is better than two other heuristics on two GSTP instances, and the (1+1) EA may be inefficient on a constructed GSTP instance.
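
    To make the (1+1) EA concrete, the sketch below evolves a bit string that selects Steiner nodes and scores it by the weight of a minimum spanning tree over the special nodes plus the selected Steiner nodes. The toy graph, the penalty for disconnected selections and the iteration budget are assumptions for illustration only.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

rng = np.random.default_rng(2)

# Toy weighted graph: nodes 0-2 are "special" (terminals), nodes 3-7 are Steiner candidates.
n, special, steiner = 8, [0, 1, 2], [3, 4, 5, 6, 7]
W = np.zeros((n, n))
edges = [(0, 3, 1), (1, 3, 1), (2, 4, 1), (3, 4, 1), (0, 1, 5),
         (1, 2, 5), (0, 2, 6), (3, 5, 2), (4, 6, 2), (5, 7, 3)]
for i, j, w in edges:
    W[i, j] = W[j, i] = w

def fitness(bits):
    """Weight of the MST over special nodes + selected Steiner nodes (penalized if disconnected)."""
    nodes = special + [v for v, b in zip(steiner, bits) if b]
    sub = csr_matrix(W[np.ix_(nodes, nodes)])
    n_comp, _ = connected_components(sub, directed=False)
    if n_comp > 1:
        return 1e6  # heavy penalty: this selection does not connect the terminals
    return minimum_spanning_tree(sub).sum()

# (1+1) EA: flip each bit with probability 1/m, keep the offspring if it is not worse.
m = len(steiner)
parent = rng.integers(0, 2, m)
best = fitness(parent)
for _ in range(2000):
    child = parent.copy()
    flips = rng.random(m) < 1.0 / m
    child[flips] ^= 1
    f = fitness(child)
    if f <= best:
        parent, best = child, f
print("selected Steiner nodes:", [v for v, b in zip(steiner, parent) if b], "MST weight:", best)
```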

  10. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent from the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
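
    The covariance-based selection idea can be sketched in a linear Kalman-filter setting: among the candidate measurements, process first the one whose update would most reduce the trace of the state covariance. The observation models and noise values below are hypothetical; the article applies the criterion to EIF, EKF and UKF variants.

```python
import numpy as np

def best_measurement(P, candidates):
    """Pick the candidate (H, R) whose Kalman update most reduces trace(P)."""
    best_idx, best_trace = None, np.inf
    for idx, (H, R) in enumerate(candidates):
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        P_post = (np.eye(P.shape[0]) - K @ H) @ P # posterior covariance after this update
        if np.trace(P_post) < best_trace:
            best_idx, best_trace = idx, np.trace(P_post)
    return best_idx

# Two hypothetical sensors observing a 2-D state; sensor 0 is much noisier than sensor 1.
P = np.diag([4.0, 4.0])
candidates = [
    (np.array([[1.0, 0.0]]), np.array([[2.0]])),   # observes x[0], high noise
    (np.array([[0.0, 1.0]]), np.array([[0.1]])),   # observes x[1], low noise
]
print("most informative measurement:", best_measurement(P, candidates))  # expected: 1
```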

  11. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA) Wormhole Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Jonny Karlsson

    2013-05-01

    Full Text Available Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.

  12. Failure and Redemption of Multifilter Rotating Shadowband Radiometer (MFRSR)/Normal Incidence Multifilter Radiometer (NIMFR) Cloud Screening: Contrasting Algorithm Performance at Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) and Southern Great Plains (SGP) Sites

    Directory of Open Access Journals (Sweden)

    James Barnard

    2013-09-01

    Full Text Available Well-known cloud-screening algorithms, which are designed to remove cloud-contaminated aerosol optical depths (AOD) from Multifilter Rotating Shadowband Radiometer (MFRSR) and Normal Incidence Multifilter Radiometer (NIMFR) measurements, have exhibited excellent performance at many middle-to-low latitude sites around the world. However, they may occasionally fail under challenging observational conditions, such as when the sun is low (near the horizon) and when optically thin clouds with small spatial inhomogeneity occur. Such conditions have been observed quite frequently at the high-latitude Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) sites. A slightly modified cloud-screening version of the standard algorithm is proposed here with a focus on the ARM-supported MFRSR and NIMFR data. The modified version uses approximately the same techniques as the standard algorithm, but it additionally examines the magnitude of the slant-path line of sight transmittance and eliminates points when the observed magnitude is below a specified threshold. Substantial improvement of the multi-year (1999–2012) aerosol product (AOD and its Angstrom exponent) is shown for the NSA sites when the modified version is applied. Moreover, this version reproduces the AOD product at the ARM Southern Great Plains (SGP) site, which was originally generated by the standard cloud-screening algorithms. The proposed minor modification is easy to implement and its application to existing and future cloud-screening algorithms can be particularly beneficial for challenging observational conditions.
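
    A minimal sketch of the extra screening step described above: in addition to a simple variability-based cloud screen, points whose slant-path line-of-sight transmittance falls below a threshold are rejected. The threshold, window length and variability test are illustrative assumptions and not the ARM production code.

```python
import numpy as np

def screen_aod(aod, transmittance, window=9, var_limit=0.02, tmin=0.2):
    """Return a boolean mask of samples kept after a simple two-stage cloud screen.

    Stage 1 (standard idea): reject points in windows where the AOD varies too much.
    Stage 2 (modification):  reject points whose slant-path transmittance < tmin.
    """
    aod = np.asarray(aod, dtype=float)
    keep = np.ones(aod.size, dtype=bool)
    half = window // 2
    for i in range(aod.size):
        lo, hi = max(0, i - half), min(aod.size, i + half + 1)
        if np.std(aod[lo:hi]) > var_limit:     # temporal inhomogeneity -> likely cloud
            keep[i] = False
    keep &= np.asarray(transmittance) >= tmin  # low direct-beam transmittance -> reject
    return keep

# Hypothetical time series with a cloudy, strongly attenuated segment in the middle.
aod = 0.08 + 0.005 * np.random.default_rng(3).normal(size=120)
trans = np.full(120, 0.7)
aod[50:70] += 0.4          # cloud-contaminated optical depths
trans[50:70] = 0.05        # very low slant-path transmittance
mask = screen_aod(aod, trans)
print("kept", int(mask.sum()), "of", mask.size, "points")
```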

  13. ASSESSMENT OF PERFORMANCES OF VARIOUS MACHINE LEARNING ALGORITHMS DURING AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-07-01

    Full Text Available Automation of descriptive answers evaluation is the need of the hour because of the huge increase in the number of students enrolling each year in educational institutions and the limited staff available to spare their time for evaluations. In this paper, we use a machine learning workbench called LightSIDE to accomplish auto evaluation and scoring of descriptive answers. We attempted to identify the best supervised machine learning algorithm given a limited training set sample size scenario. We evaluated the performances of the Bayes, SVM, Logistic Regression, Random Forest, Decision Stump and Decision Tree algorithms. We confirmed SVM as the best performing algorithm based on quantitative measurements of accuracy, kappa, training speed and prediction accuracy on the supplied test set.
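
    As an illustration of the kind of supervised pipeline compared in this study (sketched here with scikit-learn rather than LightSIDE), the example below trains a linear SVM on bag-of-words features of short answers. The toy answers and labels are invented for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented training set: answer text -> score band ("good" / "poor").
answers = [
    "photosynthesis converts light energy into chemical energy in chloroplasts",
    "plants make food using sunlight water and carbon dioxide producing oxygen",
    "the plant eats soil to grow bigger",
    "photosynthesis is when animals breathe oxygen",
]
labels = ["good", "good", "poor", "poor"]

# TF-IDF features (unigrams and bigrams) feeding a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(answers, labels)

print(model.predict(["light energy is converted to chemical energy by the chloroplast"]))
```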

  14. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  15. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
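
    The perimeter computation described above can be sketched as follows: given the 2-D cross-section points recovered from the laser line, take the convex hull and report its boundary length. The synthetic cross-section and the pixel-to-millimetre scale factor are assumptions standing in for the calibrated VIP3D set-up.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_perimeter(points_px, mm_per_px=0.5):
    """Perimeter (convex hull boundary length) of a 2-D cross-section, in millimetres."""
    hull = ConvexHull(points_px)
    return hull.area * mm_per_px   # for 2-D input, ConvexHull.area is the perimeter

# Synthetic noisy cross-section roughly shaped like a loaf profile (half-ellipse plus a flat base).
rng = np.random.default_rng(4)
theta = np.linspace(0, np.pi, 200)
profile = np.column_stack([120 * np.cos(theta), 60 * np.sin(theta)]) + rng.normal(0, 0.3, (200, 2))
base = np.column_stack([np.linspace(-120, 120, 100), np.zeros(100)])
points = np.vstack([profile, base])

print(f"estimated perimeter: {hull_perimeter(points):.1f} mm")
```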

  16. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2013-01-01

    Full Text Available Teaching-Learning-based optimization (TLBO) is a recently proposed population-based algorithm, which simulates the teaching-learning process of the classroom. This algorithm requires only the common control parameters and does not require any algorithm-specific control parameters. In this paper, the effect of elitism on the performance of the TLBO algorithm is investigated while solving unconstrained benchmark problems. The effects of common control parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 76 unconstrained benchmark functions with different characteristics and the performance of the algorithm is compared with that of other well known optimization algorithms. A statistical test is also performed to investigate the results obtained using different algorithms. The results have proved the effectiveness of the proposed elitist TLBO algorithm.
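
    A compact sketch of the basic teacher and learner phases of TLBO on the sphere benchmark, with a simple elitism step that restores the best solutions over the current worst ones each generation. Population size, number of generations and the number of elites are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

def tlbo(fobj, bounds, pop_size=20, gens=100, n_elite=2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    X = rng.uniform(lo, hi, (pop_size, dim))
    F = np.apply_along_axis(fobj, 1, X)
    for _ in range(gens):
        elite_X = X[np.argsort(F)[:n_elite]].copy()    # keep copies of the best solutions
        # Teacher phase: move everyone toward the best solution, away from the mean.
        teacher = X[np.argmin(F)]
        TF = rng.integers(1, 3)                        # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random((pop_size, dim)) * (teacher - TF * X.mean(axis=0)), lo, hi)
        F_new = np.apply_along_axis(fobj, 1, X_new)
        improve = F_new < F
        X[improve], F[improve] = X_new[improve], F_new[improve]
        # Learner phase: each learner interacts with a random partner.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fc = fobj(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
        # Elitism (simplified): overwrite the current worst solutions with the stored elites.
        worst = np.argsort(F)[-n_elite:]
        X[worst] = elite_X
        F[worst] = np.apply_along_axis(fobj, 1, elite_X)
    return X[np.argmin(F)], F.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = tlbo(sphere, (np.full(5, -10.0), np.full(5, 10.0)))
print("best objective:", best_f)
```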

  17. Inversion algorithms for large-scale geophysical electromagnetic measurements

    International Nuclear Information System (INIS)

    Abubakar, A; Habashy, T M; Li, M; Liu, J

    2009-01-01

    Low-frequency surface electromagnetic prospecting methods have been gaining a lot of interest because of their capabilities to directly detect hydrocarbon reservoirs and to complement seismic measurements for geophysical exploration applications. There are two types of surface electromagnetic surveys. The first is an active measurement where we use an electric dipole source towed by a ship over an array of seafloor receivers. This measurement is called the controlled-source electromagnetic (CSEM) method. The second is the Magnetotelluric (MT) method driven by natural sources. This passive measurement also uses an array of seafloor receivers. Both surface electromagnetic methods measure electric and magnetic field vectors. In order to extract maximal information from these CSEM and MT data we employ a nonlinear inversion approach in their interpretation. We present two types of inversion approaches. The first approach is the so-called pixel-based inversion (PBI) algorithm. In this approach the investigation domain is subdivided into pixels, and by using an optimization process the conductivity distribution inside the domain is reconstructed. The optimization process uses the Gauss–Newton minimization scheme augmented with various forms of regularization. To automate the algorithm, the regularization term is incorporated using a multiplicative cost function. This PBI approach has demonstrated its ability to retrieve reasonably good conductivity images. However, the reconstructed boundaries and conductivity values of the imaged anomalies are usually not quantitatively resolved. Nevertheless, the PBI approach can provide useful information on the location, the shape and the conductivity of the hydrocarbon reservoir. The second method is the so-called model-based inversion (MBI) algorithm, which uses a priori information on the geometry to reduce the number of unknown parameters and to improve the quality of the reconstructed conductivity image. This MBI approach can

  18. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    Science.gov (United States)

    Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.

    2015-06-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements become too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user-background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is able to be restored in accordance with its hydrodynamical basis. The use of this is not dependent on types of flow, types of gaps or noise in measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, will contaminate the measurements.

  19. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    International Nuclear Information System (INIS)

    Vlasenko, Andrey; Steele, Edward C C; Nimmo-Smith, W Alex M

    2015-01-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements become too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user-background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is able to be restored in accordance with its hydrodynamical basis. The use of this is not dependent on types of flow, types of gaps or noise in measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, will contaminate the measurements. (paper)

  20. Optimization of externalities using DTM measures: a Pareto optimal multi objective optimization using the evolutionary algorithm SPEA2+

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart

    2010-01-01

    Multi objective optimization of externalities of traffic is performed by solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.

  1. Sensitivity of SWOT discharge algorithm to measurement errors: Testing on the Sacramento River

    Science.gov (United States)

    Durand, Micheal; Andreadis, Konstantinos; Yoon, Yeosang; Rodriguez, Ernesto

    2013-04-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes, globally, as well as characterize storage change in lakes and ocean surface dynamics with a spatial resolution ranging from 10 - 70 m, with temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov Chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River to first, characterize expected discharge algorithm accuracy on the Sacramento River, and second to explore the required AirSWOT measurements needed to perform a successful inverse with the discharge algorithm. We focus on the sensitivity of the algorithm accuracy to the uncertainty in AirSWOT measurements of height, width, and slope.

  2. Optimization of diesel engine performance by the Bees Algorithm

    Science.gov (United States)

    Azfanizam Ahmad, Siti; Sunthiram, Devaraj

    2018-03-01

    Biodiesel has recently been receiving great attention in the world market due to the depletion of existing fossil fuels. Biodiesel is also an alternative to diesel No. 2 fuel, possessing characteristics such as being biodegradable and oxygenated. However, there is evidence that biodiesel does not have features fully equivalent to diesel No. 2 fuel, as the use of biodiesel has been claimed to increase the brake specific fuel consumption (BSFC). The objective of this study is to find the maximum brake power and brake torque as well as the minimum BSFC to optimize the operating condition of a diesel engine when using biodiesel fuel. This optimization was conducted using the Bees Algorithm (BA) under specific values of the biodiesel percentage in the fuel mixture, engine speed and engine load. The result showed that 58.33 kW of brake power, 310.33 N.m of brake torque and 200.29/(kW.h) of BSFC were the optimum values. Compared to the ones obtained by other algorithms, the BA produced comparable brake power and better brake torque and BSFC. This finding proved that the BA can be used to optimize the performance of a diesel engine based on the optimum values of the brake power, brake torque and BSFC.

  3. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.

  4. Evaluation of the performance of different firefly algorithms to the ...

    African Journals Online (AJOL)

    of firefly algorithms are applied to solve the nonlinear ELD problem. ... problem using those recent variants and the classical firefly algorithm for different test cases. Efficiency ... International Journal of Machine Learning and Computing, Vol.

  5. Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2014-03-01

    Full Text Available IDEAL is an efficient segregated algorithm for the fluid flow and heat transfer problems. This algorithm has now been extended to the 3D nonorthogonal curvilinear coordinates. Highly skewed grids in the nonorthogonal curvilinear coordinates can decrease the convergence rate and deteriorate the calculating stability. In this study, the feasibility of the IDEAL algorithm on highly skewed grid system is analyzed by investigating the lid-driven flow in the inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for the highly skewed and fine grid system. For example, at θ = 5° and grid number = 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm can converge almost at any time step multiple.

  6. Performances of new reconstruction algorithms for CT-TDLAS (computer tomography-tunable diode laser absorption spectroscopy)

    International Nuclear Information System (INIS)

    Jeon, Min-Gyu; Deguchi, Yoshihiro; Kamimoto, Takahiro; Doh, Deog-Hee; Cho, Gyeong-Rae

    2017-01-01

    Highlights: • The measured data were successfully used for generating absorption spectra. • Four different reconstruction algorithms, ART, MART, SART and SMART, were evaluated. • The SMART algorithm showed the fastest convergence. • SMART was the most reliable algorithm for reconstructing the multiple signals. - Abstract: The recent advent of tunable lasers has made it possible to measure the temperature and concentration fields of gases simultaneously. CT-TDLAS (computed tomography-tunable diode laser absorption spectroscopy) is one of the leading techniques for the measurement of the temperature and concentration fields of gases. In CT-TDLAS, the accuracy of the measurement results is strongly dependent upon the reconstruction algorithm. In this study, four different reconstruction algorithms have been tested numerically using experimental data sets measured by thermocouples for combustion fields. Three reconstruction algorithms, the MART (multiplicative algebraic reconstruction technique) algorithm, the SART (simultaneous algebraic reconstruction technique) algorithm and the SMART (simultaneous multiplicative algebraic reconstruction technique) algorithm, are newly proposed for CT-TDLAS in this study. The calculation results obtained by the three algorithms have been compared with those of the previous algorithm, the ART (algebraic reconstruction technique) algorithm. Phantom data sets have been generated from thermocouple data obtained in an actual experiment. The Harvard HITRAN table, in which the thermodynamic properties and the light spectrum of H_2O are listed, was used for the numerical test. The reconstructed temperature and concentration fields were compared with the original HITRAN data, through which the constructed methods are validated. The performances of the four reconstruction algorithms were demonstrated. This method is expected to enhance the practicality of CT-TDLAS.
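
    To make the difference between additive and multiplicative update rules concrete, the sketch below implements plain ART and a simplified SMART-style update for a generic nonnegative linear system Ax = b. The toy system, relaxation factors and iteration counts are assumptions, and no spectroscopic modelling is attempted.

```python
import numpy as np

def art(A, b, iters=50, lam=0.5):
    """Additive algebraic reconstruction: row-by-row Kaczmarz-style updates."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):
            ai = A[i]
            x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

def smart(A, b, iters=500, lam=0.5):
    """Simplified multiplicative (SMART-style) update; assumes A, b, x nonnegative."""
    x = np.ones(A.shape[1])
    col_sum = A.sum(axis=0)
    for _ in range(iters):
        ratio = b / np.maximum(A @ x, 1e-12)
        # each unknown is scaled by a weighted geometric mean of the measurement ratios
        x *= np.exp(lam * (A.T @ np.log(ratio)) / np.maximum(col_sum, 1e-12))
    return x

rng = np.random.default_rng(5)
x_true = rng.random(16)
A = rng.random((24, 16))
b = A @ x_true
print("ART   error:", np.linalg.norm(art(A, b) - x_true))
print("SMART error:", np.linalg.norm(smart(A, b) - x_true))
```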

  7. Synthesis of Algorithm for Range Measurement Equipment to Track Maneuvering Aircraft Using Data on Its Dynamic and Kinematic Parameters

    Science.gov (United States)

    Pudovkin, A. P.; Panasyuk, Yu N.; Danilov, S. N.; Moskvitin, S. P.

    2018-05-01

    The problem of improving automated air traffic control systems is considered through the example of synthesizing an operation algorithm for a range measurement channel to track an aircraft using its kinematic and dynamic parameters. The choice of the state and observation models is justified, computer simulations have been performed, and the results of the investigated algorithms are presented.

  8. Measuring performance at trade shows

    DEFF Research Database (Denmark)

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity to many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built… that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship building, image building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence...

  9. Performance Measurement of Research Activities

    DEFF Research Database (Denmark)

    Jakobsen, Morten; Jensen, Tina Blegind; Peyton, Margit Malmmose

    Performance measurements have made their entry into the world of universities. Every research activity is registered in a database and the output measures form the foundation for managerial decisions. The purpose of this chapter is to investigate the registration practices among researchers...... the registrations as a way to be promoted and to legitimise and account for their work; on the other hand, the economic incentives behind ranking lists and bibliographic research indicators threaten the individual researcher's freedom. The findings also show how managers have difficulties in translating back...

  10. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  11. From performance measurement to learning

    DEFF Research Database (Denmark)

    Lewis, Jenny; Triantafillou, Peter

    2012-01-01

    Over the last few decades accountability has accommodated an increasing number of different political, legal and administrative goals. This article focuses on the administrative aspect of accountability and explores the potential perils of a shift from performance measurement to learning. While...... overload. We conclude with some comments on limiting the undesirable consequences of such a move. Points for practitioners Public administrators need to identify and weigh the (human, political and economic) benefits and costs of accountability regimes. While output-focused performance measurement regimes...... to comply with accountability requirements, because of the first point. Third, the costs of compliance are likely to increase because learning requires more participation and dialogue. Fourth, accountability as learning may generate a ‘change for the sake of change’ mentality, creating further government...

  12. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats

    Directory of Open Access Journals (Sweden)

    Vito De Feo

    2017-05-01

    Full Text Available Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  13. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats.

    Science.gov (United States)

    De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro

    2017-01-01

    Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  14. Opcode counting for performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Gara, Alan; Satterfield, David L.; Walkup, Robert E.

    2018-03-20

    Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.

  15. A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.

    Science.gov (United States)

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei

    2014-10-01

    Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then selects suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets. The experimental data are then evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. Copyright © 2014. Published by Elsevier Inc.
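
    A minimal sketch of the idea behind PWCF: a user-based collaborative-filtering prediction in which each neighbour's rating is additionally weighted by the performance level at which that rating was made. The similarity values, the weighting function and the toy data are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def predict_difficulty(target_user, case, ratings, performance, sim):
    """Predict ratings[target_user, case] from other trainees' ratings.

    ratings     : (n_users, n_cases) difficulty ratings, np.nan where missing
    performance : (n_users, n_cases) performance level at the time of each rating (0..1)
    sim         : (n_users,) similarity of each user to the target user
    """
    num = den = 0.0
    for u in range(ratings.shape[0]):
        if u == target_user or np.isnan(ratings[u, case]):
            continue
        w = sim[u] * performance[u, case]   # performance-weighted contribution
        num += w * ratings[u, case]
        den += abs(w)
    return num / den if den > 0 else np.nan

# Toy data: 4 trainees, 3 cases; trainee 0's rating of case 2 is unknown.
ratings = np.array([[2.0, 4.0, np.nan],
                    [2.0, 4.0, 5.0],
                    [3.0, 3.0, 4.0],
                    [1.0, 5.0, 2.0]])
performance = np.array([[0.5, 0.5, 0.5],
                        [0.9, 0.8, 0.9],   # experienced trainee -> ratings weighted up
                        [0.6, 0.6, 0.6],
                        [0.2, 0.3, 0.2]])  # novice -> ratings weighted down
sim = np.array([1.0, 0.9, 0.7, 0.4])       # similarity to trainee 0 (assumed precomputed)
print("predicted difficulty:", predict_difficulty(0, 2, ratings, performance, sim))
```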

  16. The ATLAS Trigger algorithms upgrade and performance in Run 2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken impr...

  17. Analysis for Performance of Symbiosis Co-evolutionary Algorithm

    OpenAIRE

    根路銘, もえ子; 遠藤, 聡志; 山田, 孝治; 宮城, 隼夫; Nerome, Moeko; Endo, Satoshi; Yamada, Koji; Miyagi, Hayao

    2000-01-01

    In this paper, we analyze the behavior of the symbiotic evolution algorithm for the N-Queens problem as a benchmark problem for search methods in the field of artificial intelligence. It is shown that this algorithm improves the ability of the evolutionary search method. When the problem is solved by Genetic Algorithms (GAs), an ordinal representation is often used as one of the gene conversion methods which convert from phenotype to genotype and reconvert. The representation can hinder the occurrence of leth...

  18. Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of the prognostic system development. As capabilities of prediction algorithms evolve, research...

  19. Performance indices and evaluation of algorithms in building energy efficient design optimization

    International Nuclear Information System (INIS)

    Si, Binghui; Tian, Zhichao; Jin, Xing; Zhou, Xin; Tang, Peng; Shi, Xing

    2016-01-01

    Building energy efficient design optimization is an emerging technique that is increasingly being used to design buildings with better overall performance and a particular emphasis on energy efficiency. To achieve building energy efficient design optimization, algorithms are vital to generate new designs and thus drive the design optimization process. Therefore, the performance of algorithms is crucial to achieving effective energy efficient design techniques. This study evaluates algorithms used for building energy efficient design optimization. A set of performance indices, namely, stability, robustness, validity, speed, coverage, and locality, is proposed to evaluate the overall performance of algorithms. A benchmark building and a design optimization problem are also developed. Hooke–Jeeves algorithm, Multi-Objective Genetic Algorithm II, and Multi-Objective Particle Swarm Optimization algorithm are evaluated by using the proposed performance indices and benchmark design problem. Results indicate that no algorithm performs best in all six areas. Therefore, when facing an energy efficient design problem, the algorithm must be carefully selected based on the nature of the problem and the performance indices that matter the most. - Highlights: • Six indices of algorithm performance in building energy optimization are developed. • For each index, its concept is defined and the calculation formulas are proposed. • A benchmark building and benchmark energy efficient design problem are proposed. • The performance of three selected algorithms are evaluated.

  20. A Performance Evaluation of Lightning-NO Algorithms in CMAQ

    Science.gov (United States)

    In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...

  1. Performance measurement and pay for performance

    NARCIS (Netherlands)

    Tuijl, van H.F.J.M.; Kleingeld, P.A.M.; Algera, J.A.; Rutten, M.L.; Sonnentag, S.

    2002-01-01

    This chapter, which takes a (re)design perspective, focuses on the management of employees’ contributions to organisational goal attainment. The control loop for the self-regulation of task performance is used as a frame of reference. Several subsets of design requirements are described and related

  2. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    Science.gov (United States)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As a non-radiant, non-intrusive, rapid-response and low-cost measurement technique, electrical tomography (ET) has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural ‘soft field’ effect and ‘ill-posed solution’ problems; thus its applicable ranges are greatly limited. In this paper, an original data decomposition method is proposed, in which each ET measurement is decomposed into two independent new measurements based on the positive and negative sensing areas of that measurement. Consequently, the total number of measurements is extended to twice the number of the original measurements, thus effectively reducing the ‘ill-posed solution’ problem. On the other hand, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect of the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments are used to validate the proposed algorithm in terms of its real-time performance and the improvement in spatial resolution.
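
    For reference, the baseline LBP step that the proposed decomposition builds on can be sketched as a normalized back projection of the boundary measurements through the sensitivity matrix. The sensitivity matrix and measurements below are random stand-ins for a real ET sensor model.

```python
import numpy as np

def lbp(S, b):
    """Linear back projection: image = S^T b, normalized element-wise by S^T 1.

    S : (n_meas, n_pixels) sensitivity matrix of the ET sensor (assumed known)
    b : (n_meas,) normalized boundary measurements
    """
    raw = S.T @ b
    norm = S.T @ np.ones_like(b)
    return raw / np.maximum(norm, 1e-12)

# Random stand-in sensitivity matrix and a "true" two-blob conductivity image.
rng = np.random.default_rng(6)
n_meas, n_pix = 104, 32 * 32
S = rng.random((n_meas, n_pix))
x_true = np.zeros(n_pix)
x_true[200:230] = 1.0
x_true[700:740] = 0.6
b = S @ x_true
img = lbp(S, b).reshape(32, 32)
print("reconstructed image range:", float(img.min()), float(img.max()))
```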

  3. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    International Nuclear Information System (INIS)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-01-01

    As a non-radiant, non-intrusive, rapid-response and low-cost measurement technique, electrical tomography (ET) has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural ‘soft field’ effect and ‘ill-posed solution’ problems; thus its applicable ranges are greatly limited. In this paper, an original data decomposition method is proposed, in which each ET measurement is decomposed into two independent new measurements based on the positive and negative sensing areas of that measurement. Consequently, the total number of measurements is extended to twice the number of the original measurements, thus effectively reducing the ‘ill-posed solution’ problem. On the other hand, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect of the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments are used to validate the proposed algorithm in terms of its real-time performance and the improvement in spatial resolution. (paper)

  4. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    Science.gov (United States)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air massflows to between 1% and 9% of numerically simulated values depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10⁶. The method is also shown to be able to identify beneficial values for particular alleles when they exist while showing the ability to handle cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air breathing engine based on a hydrogen fuelled rocket an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.

  5. Performance Analysis of the Consensus-Based Distributed LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Gonzalo Mateos

    2009-01-01

    Full Text Available Low-cost estimation of stationary signals and reduced-complexity tracking of nonstationary processes are well motivated tasks that can be accomplished using ad hoc wireless sensor networks (WSNs). To this end, a fully distributed least mean-square (D-LMS) algorithm is developed in this paper, in which sensors exchange messages with single-hop neighbors to consent on the network-wide estimates adaptively. The novel approach does not require a Hamiltonian cycle or a special bridge subset of sensors, while communications among sensors are allowed to be noisy. A mean-square error (MSE) performance analysis of D-LMS is conducted in the presence of a time-varying parameter vector, which adheres to a first-order autoregressive model. For sensor observations that are related to the parameter vector of interest via a linear Gaussian model and after adopting simplifying independence assumptions, exact closed-form expressions are derived for the global and sensor-level MSE evolution as well as its steady-state (s.s.) values. Mean and MSE-sense stability of D-LMS are also established. Interestingly, extensive numerical tests demonstrate that for small step-sizes the results accurately extend to the pragmatic setting whereby sensors acquire temporally correlated, not necessarily Gaussian data.
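
    A simplified simulation in the spirit of the algorithm analysed here: each sensor runs a local LMS update on its own noisy linear observations and then averages its estimate with single-hop neighbours in a consensus step. The ring topology, step size and uniform mixing weights are illustrative; the paper's D-LMS uses a different, bridge-free consensus mechanism and models noisy inter-sensor links.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sensors, dim, mu = 8, 4, 0.02
w_true = rng.normal(size=dim)

# Ring topology: each sensor exchanges estimates with its two single-hop neighbours.
neighbors = {k: [(k - 1) % n_sensors, (k + 1) % n_sensors] for k in range(n_sensors)}
W = np.zeros((n_sensors, dim))            # local parameter estimates

for t in range(3000):
    # Adaptation: local LMS step from each sensor's own regressor/observation pair.
    for k in range(n_sensors):
        h = rng.normal(size=dim)
        d = h @ w_true + 0.1 * rng.normal()
        W[k] += mu * (d - h @ W[k]) * h
    # Combination (consensus) step: average with single-hop neighbours.
    W = np.array([(W[k] + W[neighbors[k][0]] + W[neighbors[k][1]]) / 3.0
                  for k in range(n_sensors)])

mse = np.mean([np.linalg.norm(W[k] - w_true) ** 2 for k in range(n_sensors)])
print("average steady-state parameter MSE:", mse)
```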

  6. Testing the performance of empirical remote sensing algorithms in the Baltic Sea waters with modelled and in situ reflectance data

    Directory of Open Access Journals (Sweden)

    Martin Ligi

    2017-01-01

    Full Text Available Remote sensing studies published up to now show that the performance of empirical (band-ratio type) algorithms in different parts of the Baltic Sea is highly variable. The best performing algorithms are different in the different regions of the Baltic Sea. Moreover, there is indication that the algorithms have to be seasonal as the optical properties of the phytoplankton assemblages dominating in spring and summer are different. We modelled 15,600 reflectance spectra using the HydroLight radiative transfer model to test 58 previously published empirical algorithms. 7200 of the spectra were modelled using specific inherent optical properties (SIOPs) of the open parts of the Baltic Sea in summer and 8400 with SIOPs of the spring season. The concentration ranges of chlorophyll-a, coloured dissolved organic matter (CDOM) and suspended matter used in the model simulations were based on the actually measured values available in the literature. For each optically active constituent we added one concentration below the actually measured minimum and one concentration above the actually measured maximum value in order to test the performance of the algorithms over a wider range. 77 in situ reflectance spectra from rocky (Sweden) and sandy (Estonia, Latvia) coastal areas were used to evaluate the performance of the algorithms also in coastal waters. Seasonal differences in the algorithm performance were confirmed, but we also found algorithms that can be used in both spring and summer conditions. The algorithms that use bands available on OLCI, launched in February 2016, are highlighted as this sensor will be available for Baltic Sea monitoring for coming decades.

  7. The performance of the backpropagation algorithm with varying slope of the activation function

    International Nuclear Information System (INIS)

    Bai Yanping; Zhang Haixia; Hao Yilong

    2009-01-01

    Some adaptations are proposed to the basic BP algorithm in order to provide an efficient method for non-linear data learning and prediction. In this paper, an adapted BP algorithm with a varying slope of the activation function and different learning rates is put forward. The results of the experiments indicate that this algorithm can achieve very good training performance. We also test the prediction performance of our adapted BP algorithm on 16 instances. We compared the test results to those of the BP algorithm with gradient descent momentum and an adaptive learning rate. The results indicate that the adapted BP algorithm gives the best performance (100%) on the test examples, which leads to the conclusion that the adapted BP algorithm produces a smoothed reconstruction that generalizes better to new prediction function values than the BP algorithm improved with momentum.
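
    The idea of a varying activation-function slope can be sketched with a one-hidden-layer network in which the sigmoid slope parameter enters both the forward pass and the gradient. The slope schedule, architecture and toy regression task are illustrative and not the authors' exact setup.

```python
import numpy as np

def sigmoid(z, slope):
    return 1.0 / (1.0 + np.exp(-slope * z))

rng = np.random.default_rng(8)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X)                       # toy nonlinear regression target

n_hidden, lr = 10, 0.1
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(2000):
    slope = 1.0 + epoch / 1000.0        # assumed schedule: slope grows during training
    # forward pass
    a1 = sigmoid(X @ W1 + b1, slope)
    out = a1 @ W2 + b2                  # linear output unit
    err = out - y
    # backward pass (note the slope factor in the sigmoid derivative)
    d_out = 2 * err / X.shape[0]
    d_a1 = d_out @ W2.T * slope * a1 * (1 - a1)
    W2 -= lr * a1.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_a1;   b1 -= lr * d_a1.sum(axis=0)

pred = sigmoid(X @ W1 + b1, slope) @ W2 + b2
print("final MSE:", float(np.mean((pred - y) ** 2)))
```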

  8. Performance of a rain retrieval algorithm using TRMM data in the Eastern Mediterranean

    Directory of Open Access Journals (Sweden)

    D. Katsanos

    2006-01-01

    Full Text Available This study aims to make a regional characterization of the performance of the rain retrieval algorithm BRAIN. This algorithm estimates the rain rate from brightness temperatures measured by the TRMM Microwave Imager (TMI) onboard the TRMM satellite. In this stage of the study, a comparison between the rain estimated from the Precipitation Radar (PR) onboard TRMM (2A25, version 5) and the rain retrieved by the BRAIN algorithm is presented, for about 30 satellite overpasses over the Central and Eastern Mediterranean during the period October 2003–March 2004, in order to assess the behavior of the algorithm in the Eastern Mediterranean region. BRAIN was built and tested using PR rain estimates distributed randomly over the whole TRMM sampling region. Characterization of the differences between PR and BRAIN over a specific region is thus interesting because it might show some local trend for one or the other of the instruments. The checking of BRAIN results against the PR rain estimate appears to be consistent with former results, i.e. a somewhat marked discrepancy for the highest rain rates. This difference arises from a known problem that affects rain retrieval based on passive microwave radiometer measurements, but some of the higher radar rain rates could also be questioned. As an independent test, a good correlation between the rain retrieved by BRAIN and lightning data (obtained by the UK Met Office long-range detection system) is also emphasized in the paper.

  9. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm in traditional hyperspectral systems and in CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data is up to an order of magnitude less than that in conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster in CS-MUSI data.
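
    A minimal sketch of the spectral matched filter used as the detection statistic in this comparison. The background statistics are estimated from the cube itself, and the synthetic cube, target signature and implant strength are invented for illustration.

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Spectral matched-filter scores for each pixel of an (n_pixels, n_bands) cube."""
    mu = cube.mean(axis=0)
    cov = np.cov(cube, rowvar=False) + 1e-6 * np.eye(cube.shape[1])  # regularized covariance
    cov_inv = np.linalg.inv(cov)
    d = target - mu
    w = cov_inv @ d / (d @ cov_inv @ d)   # normalized matched-filter weights
    return (cube - mu) @ w

rng = np.random.default_rng(9)
n_pix, n_bands = 2000, 40
background = rng.normal(0, 1, (n_pix, n_bands))
target_sig = np.linspace(0.5, 1.5, n_bands)
cube = background.copy()
cube[:10] += 0.8 * target_sig             # implant a weak target in the first 10 pixels

scores = matched_filter_scores(cube, target_sig)
detected = np.argsort(scores)[-10:]
print("top-scoring pixels:", sorted(int(i) for i in detected))
```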

  10. Invited Review Article: Measurement uncertainty of linear phase-stepping algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Erwin [EMPA, Laboratory Electronics/Metrology/Reliability, Ueberlandstrasse 129, CH-8600 Duebendorf (Switzerland); Burke, Jan [Australian Centre for Precision Optics, CSIRO (Commonwealth Scientific and Industrial Research Organisation) Materials Science and Engineering, P.O. Box 218, Lindfield, NSW 2070 (Australia)

    2011-06-15

    Phase retrieval techniques are widely used in optics, imaging and electronics. Originating in signal theory, they were introduced to interferometry around 1970. Over the years, many robust phase-stepping techniques have been developed that minimize specific experimental influence quantities such as phase step errors or higher harmonic components of the signal. However, optimizing a technique for a specific influence quantity can compromise its performance with regard to others. We present a consistent quantitative analysis of phase measurement uncertainty for the generalized linear phase stepping algorithm with nominally equal phase stepping angles thereby reviewing and generalizing several results that have been reported in literature. All influence quantities are treated on equal footing, and correlations between them are described in a consistent way. For the special case of classical N-bucket algorithms, we present analytical formulae that describe the combined variance as a function of the phase angle values. For the general Arctan algorithms, we derive expressions for the measurement uncertainty averaged over the full 2π-range of phase angles. We also give an upper bound for the measurement uncertainty which can be expressed as being proportional to an algorithm specific factor. Tabular compilations help the reader to quickly assess the uncertainties that are involved with his or her technique.
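
    For reference, the classical N-bucket (N-step) phase-retrieval formula that the uncertainty analysis addresses can be written down in a few lines. The fringe model and noise level below are synthetic, and the arctangent sign convention is one of several in use.

```python
import numpy as np

def n_bucket_phase(intensities):
    """Recover phase from N equally spaced phase-stepped intensity frames.

    intensities : (N, ...) array, frame k acquired at phase step delta_k = 2*pi*k/N
    Returns the wrapped phase in (-pi, pi]; the sign convention is one common choice.
    """
    N = intensities.shape[0]
    delta = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(delta), intensities, axes=1)
    den = np.tensordot(np.cos(delta), intensities, axes=1)
    return -np.arctan2(num, den)

# Synthetic 4-bucket measurement of a known phase ramp with additive noise.
rng = np.random.default_rng(10)
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 500)
N = 4
frames = np.array([1.0 + 0.6 * np.cos(phi_true + 2 * np.pi * k / N) for k in range(N)])
frames += 0.01 * rng.normal(size=frames.shape)
phi_hat = n_bucket_phase(frames)
print("rms phase error [rad]:", float(np.sqrt(np.mean((phi_hat - phi_true) ** 2))))
```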

  11. MixSim: An R Package for Simulating Data to Study Performance of Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    Volodymyr Melnykov

    2012-11-01

    Full Text Available The R package MixSim is a new tool that allows simulating mixtures of Gaussian distributions with different levels of overlap between mixture components. Pairwise overlap, defined as a sum of two misclassification probabilities, measures the degree of interaction between components and can be readily employed to control the clustering complexity of datasets simulated from mixtures. These datasets can then be used for systematic performance investigation of clustering and finite mixture modeling algorithms. Among the other capabilities of MixSim are computing the exact overlap for Gaussian mixtures, simulating Gaussian and non-Gaussian data, simulating outliers and noise variables, calculating various measures of agreement between two partitionings, and constructing parallel distribution plots for the graphical display of finite mixture models. All features of the package are illustrated in great detail. The utility of the package is highlighted through a small comparison study of several popular clustering algorithms.

  12. Advanced Receiver Design for Mitigating Multiple RF Impairments in OFDM Systems: Algorithms and RF Measurements

    Directory of Open Access Journals (Sweden)

    Adnan Kiayani

    2012-01-01

    Full Text Available Direct-conversion architecture-based orthogonal frequency division multiplexing (OFDM) systems are troubled by impairments such as in-phase and quadrature-phase (I/Q) imbalance and carrier frequency offset (CFO). These impairments are unavoidable in any practical implementation and severely degrade the obtainable link performance. In this contribution, we study the joint impact of frequency-selective I/Q imbalance at both transmitter and receiver together with channel distortions and CFO error. Two estimation and compensation structures based on different pilot patterns are proposed for coping with such impairments. The first structure is based on a preamble pilot pattern while the second one assumes a sparse pilot pattern. The proposed estimation/compensation structures are able to separate the individual impairments, which are then compensated in the reverse order of their appearance at the receiver. We present time-domain estimation and compensation algorithms for receiver I/Q imbalance and CFO and propose low-complexity algorithms for the compensation of channel distortions and transmitter I/Q imbalance. The performance of the compensation algorithms is investigated with computer simulations as well as with practical radio frequency (RF) measurements. The performance results indicate that the proposed techniques provide close to the ideal performance both in simulations and measurements.

  13. The algorithmic performance of J-Tpeak for drug safety clinical trial.

    Science.gov (United States)

    Chien, Simon C; Gregg, Richard E

    The interval from J-point to T-wave peak (JTp) in the ECG is a new biomarker able to identify drugs that prolong the QT interval but have different ion channel effects. If JTp is not prolonged, the prolonged QT may be associated with multi-ion-channel block that may have low torsade de pointes risk. From the automatic ECG measurement perspective, accurate and repeatable measurement of JTp involves different challenges than QT. We evaluated algorithm performance and JTp challenges using the Philips DXL diagnostic 12/16/18-lead algorithm. Measurement of JTp represents a different use model. The standard use of the corrected QT interval is clinical risk assessment in patients with cardiac disease or suspicion of heart disease. Drug safety trials involve a very different population - young healthy subjects - who commonly have J-waves, notches and slurs. Drug effects include difficult and unusual morphology such as flat T-waves, gentle notches, and multiple T-wave peaks. The JTp initiative study provided ECGs collected from 22 young subjects (11 males and 11 females) in randomized testing of dofetilide, quinidine, ranolazine, verapamil and placebo. We compare the JTp intervals between the DXL algorithm and the FDA-published measurements. Lead-wise, vector-magnitude (VM), root-mean-square (RMS) and principal-component-analysis (PCA) representative beats were used to measure the JTp and QT intervals. We also implemented four different methods for T-peak detection for comparison. We found that JTp measurements were closer to the reference for the combined-lead RMS and PCA beats than for individual leads. Differences in J-point location accounted for part of the JTp measurement difference because of the high prevalence of J-waves, notches and slurs. Larger differences were noted for drug effects causing multiple distinct T-wave peaks (Tp). The automated algorithm chooses the later peak while the reference was the earlier peak. Choosing among different algorithmic strategies in T-peak measurement results in the

  14. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  15. Algorithms for diagnostics of the measuring channels and technological equipment at NPP with WWER-1000

    International Nuclear Information System (INIS)

    Vysotskij, V.G.

    1997-01-01

    An algorithm for diagnosing the state of the measuring channels of an information computer system using analysis of statistical channel characteristics is presented. An algorithm for testing the generalized state of the NPP technological equipment is also proposed.

  16. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  17. Coupling two iterative algorithms for density measurements by computerized tomography

    International Nuclear Information System (INIS)

    Silva, L.E.M.C.; Santos, C.A.C.; Borges, J.C.; Frenkel, A.D.B.; Rocha, G.M.

    1986-01-01

    This work develops a study of the coupling of two iterative algorithms for density measurements by computerized tomography. Tomographies have been obtained with an automated prototype, controlled by a microcomputer, designed and assembled in the Nuclear Instrumentation Laboratory at COPPE/UFRJ. The results show a good performance of the tomographic system and demonstrate the validity of the calculation method adopted. (Author) [pt

  18. A domain specific language for performance portable molecular dynamics algorithms

    Science.gov (United States)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.

  19. Performance measurement for information systems: Industry perspectives

    Science.gov (United States)

    Bishop, Peter C.; Yoes, Cissy; Hamilton, Kay

    1992-01-01

    Performance measurement has become a focal topic for information systems (IS) organizations. Historically, IS performance measures have dealt with the efficiency of the data processing function. Today, the function of most IS organizations goes beyond simple data processing. To understand how IS organizations have developed meaningful performance measures that reflect their objectives and activities, industry perspectives on IS performance measurement were studied. The objectives of the study were to understand the state of the practice in techniques for IS performance measurement; to gather approaches and examples of actual performance measures used in industry; and to report patterns, trends, and lessons learned about performance measurement to NASA/JSC. Examples of how some of the most forward-looking companies are shaping their IS processes through measurement are provided. Thoughts on a life cycle for performance measure development and a suggested taxonomy for performance measurements are included in the appendices.

  20. Empirical measurements on a Sesotho tone labeling algorithm

    CSIR Research Space (South Africa)

    Raborife, M

    2012-05-01

    Full Text Available This paper discusses the empirical assessments employed on two versions of a Sesotho tone labeling algorithm. This algorithm uses linguistically-defined Sesotho tonal rules to predict the tone labels on the syllables of Sesotho words. The two...

  1. Self-karaoke patterns: an interactive audio-visual system for handsfree live algorithm performance

    OpenAIRE

    Eldridge, Alice

    2014-01-01

    Self-karaoke Patterns is an audiovisual study for improvised cello and live algorithms. The work is motivated in part by addressing the practical needs of the performer in ‘handsfree’ live algorithm contexts, and in part by an aesthetic concern with resolving the tension between a conceptual dedication to autonomous algorithms and a musical dedication to coherent performance. The elected approach is inspired by recent work investigating the role of ‘shape’ in musical performance.

  2. 45 CFR 305.2 - Performance measures.

    Science.gov (United States)

    2010-10-01

    45 CFR Part 305 (Program Performance Measures, Standards, Financial Incentives, and Penalties), § 305.2 Performance measures: (a) The child support incentive system measures State performance levels in five program areas...

  3. Empirical study of self-configuring genetic programming algorithm performance and behaviour

    International Nuclear Information System (INIS)

    Semenkin, E; Semenkina, M (Siberian State Aerospace University named after Academician M.F. Reshetnev, 31 KrasnoyarskiyRabochiy prospect, Krasnoyarsk, 660014, Russian Federation)

    2015-01-01

    The behaviour of the self-configuring genetic programming algorithm with a modified uniform crossover operator, which applies selective pressure at the recombination stage, is studied on symbolic programming problems. The interplay of the operator's probabilistic rates is studied, and the effect of operator variants on algorithm performance is investigated. Algorithm modifications based on the results of these investigations are suggested. The performance improvement of the algorithm is demonstrated by a comparative analysis of the suggested algorithms on benchmark and real-world problems

  4. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    Science.gov (United States)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.

  5. Performance Estimation and Fault Diagnosis Based on Levenberg–Marquardt Algorithm for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Junjie Lu

    2018-01-01

    Full Text Available Establishing schemes for accurate and computationally efficient performance estimation and fault diagnosis for turbofan engines has become a new research focus and challenge. It can increase the reliability and stability of the turbofan engine and reduce life cycle costs. Accurate estimation of turbofan engine performance depends on a thorough understanding of component performance, which is described by component characteristic maps; the fault of each component can be regarded as a change in its characteristic map. In this paper, a novel method based on a Levenberg–Marquardt (LM) algorithm is proposed to enhance the fidelity of the performance estimation and the credibility of the fault diagnosis for the turbofan engine. The presented method utilizes the LM algorithm to determine the operating point in the characteristic maps, preparing for performance estimation and fault diagnosis. The accuracy of the proposed method is evaluated for estimating performance parameters in the transient case with Rayleigh process noise and Gaussian measurement noise. A comparison among the extended Kalman filter (EKF) method, the particle filter (PF) method and the proposed method is implemented for the abrupt-fault case and the gradual-degeneration case, and it is shown that the proposed method leads to more accurate results for performance estimation and fault diagnosis of the turbofan engine than the currently popular EKF and PF diagnosis methods.
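
    A hedged sketch of the core idea, using SciPy's Levenberg–Marquardt implementation (scipy.optimize.least_squares with method='lm') to estimate two component health scalers of a toy engine-like model from noisy measurements; the model structure and parameter names are illustrative assumptions, not the paper's turbofan model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy "component map": measured outputs depend nonlinearly on two health
    # scalers (flow and efficiency). This stands in for the engine model.
    def model(theta, u):
        flow_scaler, eff_scaler = theta
        return np.column_stack([
            flow_scaler * u,                       # e.g. corrected mass flow
            eff_scaler * np.sqrt(u),               # e.g. temperature ratio proxy
            flow_scaler * eff_scaler * u ** 1.5,   # e.g. pressure ratio proxy
        ])

    def residuals(theta, u, y_meas):
        return (model(theta, u) - y_meas).ravel()

    rng = np.random.default_rng(1)
    u = np.linspace(0.5, 1.0, 20)                  # operating points
    theta_true = np.array([0.97, 0.97])            # degraded component scalers
    y = model(theta_true, u) + 0.002 * rng.normal(size=(20, 3))  # noisy measurements

    fit = least_squares(residuals, x0=[1.0, 1.0], args=(u, y), method="lm")
    print(fit.x)   # LM estimate of the health scalers, close to [0.97, 0.97]
    ```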

  6. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    Science.gov (United States)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware-in-the-Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions like inertial localisation algorithms (using 3D accelerometers and 3D gyroscopes) or for the introduction of the Global Positioning System (or similar) or the magnetometer. In order to test these algorithms correctly and improve odometry performance, a three-dimensional multibody model of a railway vehicle has been developed, using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful for evaluating the performance of the odometry algorithm and of safety-relevant on-board subsystems.

  7. Computational performance of a projection and rescaling algorithm

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2018-01-01

    This paper documents a computational implementation of a projection and rescaling algorithm for finding most interior solutions to the pair of feasibility problems \[ \text{find } x \in L \cap \mathbb{R}^n_{+} \qquad \text{and} \qquad \text{find } \hat{x} \in L^\perp \cap \mathbb{R}^n_{+}, \] where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a ...

  8. VPN (Virtual Private Network) Performance Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, Calixto; Goncalves, Joao G.M.; Sequeira, Vitor [Joint Research Centre, Ispra (Italy). Inst. for the Protection and Security of the Citizen; Vandaele, Roland; Meylemans, Paul [European Commission, DG-TREN (Luxembourg)

    2003-05-01

    Virtual Private Networks (VPNs) are an important technology allowing for secure communications through insecure transmission media (i.e., the Internet) by adding authentication and encryption to the existing protocols. This paper describes some VPN performance indicators measured over international communication links. An ISDN-based VPN link was established between the Joint Research Centre, Ispra site, Italy, and EURATOM Safeguards in Luxembourg. This link connected two EURATOM Safeguards FAST surveillance stations and used hardware from different vendors (Cisco 1720 router and Nokia CC-500 Gateway). To authenticate and secure this international link, we used several methods at different levels of the seven-layer ISO network protocol stack (i.e., the callback feature and the CHAP - Challenge Handshake Authentication Protocol). The tests made involved the use of different encryption algorithms and the way session secret keys are periodically renewed, since these elements significantly influence the transmission throughput. Future tests will include the use of a wide variety of wireless transmission media and terminal equipment technologies, in particular PDAs (Personal Digital Assistants) and notebook PCs. These tests aim at characterising the functionality of VPNs whenever field inspectors wish to contact headquarters to access information from a central archive database or transmit local measurements or documents. These technologies cover wireless transmission needs at different geographical scales: room level (Bluetooth), floor or building level (Wi-Fi) and region or country level (GPRS).

  9. Energy Demodulation Algorithm for Flow Velocity Measurement of Oil-Gas-Water Three-Phase Flow

    Directory of Open Access Journals (Sweden)

    Yingwei Li

    2014-01-01

    Full Text Available Flow velocity measurement is an important part of oil-gas-water three-phase flow parameter measurement. In order to satisfy the increasing demands on flow detection technology, this paper presents a gas-liquid phase flow velocity measurement method based on an energy demodulation algorithm combined with time delay estimation technology. First, a gas-liquid phase separation method for oil-gas-water three-phase flow based on the energy demodulation algorithm and blind signal separation technology is proposed. The separation of the oil-gas-water three-phase signals sampled by a conductance sensor performed well, so the gas-phase signal and the liquid-phase signal were obtained. Second, time delay estimation technology was used to obtain the delay times of the gas-phase and liquid-phase signals, respectively, from which the gas-phase velocity and the liquid-phase velocity were derived. Finally, experiments were performed on an oil-gas-water three-phase flow loop, and the results indicated that the measurement errors met the requirements of velocity measurement. The method therefore provides a feasible approach to gas-liquid phase velocity measurement in oil-gas-water three-phase flow.
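
    A hedged sketch of the time-delay-estimation step only: the transit-time delay between two conductance-like signals is found from the cross-correlation peak and converted to velocity through the sensor spacing; the signals, sampling rate and spacing are assumptions, and the energy-demodulation/blind-separation stage is not reproduced.

    ```python
    import numpy as np

    def delay_by_crosscorr(upstream, downstream, fs):
        """Estimate the transit-time delay (s) of downstream relative to upstream."""
        up = upstream - upstream.mean()
        down = downstream - downstream.mean()
        corr = np.correlate(down, up, mode="full")
        lag = np.argmax(corr) - (len(up) - 1)      # samples by which downstream lags
        return lag / fs

    fs = 2000.0                                    # sampling rate, Hz
    L = 0.10                                       # sensor spacing, m (assumed)
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(2)
    flow_noise = rng.normal(size=t.size)
    true_delay = 0.050                             # s  -> velocity 2.0 m/s
    upstream = flow_noise + 0.1 * rng.normal(size=t.size)
    downstream = np.roll(flow_noise, int(true_delay * fs)) + 0.1 * rng.normal(size=t.size)

    tau = delay_by_crosscorr(upstream, downstream, fs)
    print(tau, L / tau)                            # ~0.05 s, ~2.0 m/s
    ```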

  10. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed. On this basis, simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
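
    A minimal numerical sketch of the measure-then-pseudo-invert idea; an orthonormal DCT matrix stands in here for the paper's Fourier-based cosine light field (the real part of the DFT matrix alone is rank-deficient), and the object and sizes are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dct

    n = 16                                         # object is n*n pixels, flattened
    N = n * n
    obj = np.zeros((n, n)); obj[4:12, 5:11] = 1.0  # simple binary object
    x = obj.ravel()

    # Deterministic cosine measurement matrix (orthonormal DCT-II basis), used
    # as a stand-in for the preset Fourier-based light field in the abstract.
    A = dct(np.eye(N), axis=0, norm="ortho")       # N x N, full rank

    y = A @ x                                      # bucket detector measurements
    x_hat = np.linalg.pinv(A) @ y                  # pseudo-inverse reconstruction
    print(np.max(np.abs(x_hat - x)))               # ~1e-13: exact up to round-off
    ```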

  11. Multi-instance dictionary learning via multivariate performance measure optimization

    KAUST Repository

    Wang, Jim Jing-Yan

    2016-12-29

    The multi-instance dictionary plays a critical role in multi-instance data representation. Meanwhile, different multi-instance learning applications are evaluated by specific multivariate performance measures. For example, multi-instance ranking reports the precision and recall. It is not difficult to see that to obtain different optimal performance measures, different dictionaries are needed. This observation motivates us to learn performance-optimal dictionaries for this problem. In this paper, we propose a novel joint framework for learning the multi-instance dictionary and the classifier to optimize a given multivariate performance measure, such as the F1 score and precision at rank k. We propose to represent the bags as bag-level features via the bag-instance similarity, and learn a classifier in the bag-level feature space to optimize the given performance measure. We propose to minimize the upper bound of a multivariate loss corresponding to the performance measure, the complexity of the classifier, and the complexity of the dictionary, simultaneously, with regard to both the dictionary and the classifier parameters. In this way, the dictionary learning is regularized by the performance optimization, and a performance-optimal dictionary is obtained. We develop an iterative algorithm to solve this minimization problem efficiently using a cutting-plane algorithm and a coordinate descent method. Experiments on multi-instance benchmark data sets show its advantage over both traditional multi-instance learning and performance optimization methods.

  12. Multi-instance dictionary learning via multivariate performance measure optimization

    KAUST Repository

    Wang, Jim Jing-Yan; Tsang, Ivor Wai-Hung; Cui, Xuefeng; Lu, Zhiwu; Gao, Xin

    2016-01-01

    The multi-instance dictionary plays a critical role in multi-instance data representation. Meanwhile, different multi-instance learning applications are evaluated by specific multivariate performance measures. For example, multi-instance ranking reports the precision and recall. It is not difficult to see that to obtain different optimal performance measures, different dictionaries are needed. This observation motivates us to learn performance-optimal dictionaries for this problem. In this paper, we propose a novel joint framework for learning the multi-instance dictionary and the classifier to optimize a given multivariate performance measure, such as the F1 score and precision at rank k. We propose to represent the bags as bag-level features via the bag-instance similarity, and learn a classifier in the bag-level feature space to optimize the given performance measure. We propose to minimize the upper bound of a multivariate loss corresponding to the performance measure, the complexity of the classifier, and the complexity of the dictionary, simultaneously, with regard to both the dictionary and the classifier parameters. In this way, the dictionary learning is regularized by the performance optimization, and a performance-optimal dictionary is obtained. We develop an iterative algorithm to solve this minimization problem efficiently using a cutting-plane algorithm and a coordinate descent method. Experiments on multi-instance benchmark data sets show its advantage over both traditional multi-instance learning and performance optimization methods.
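
    A hedged sketch of the representation step only: bags are mapped to bag-level features via their maximum similarity to each dictionary instance, and an ordinary classifier is fitted; the joint cutting-plane/coordinate-descent optimization of a multivariate performance measure described above is not reproduced, and the toy data and dictionary choice are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def bag_features(bags, dictionary, gamma=1.0):
        """Represent each bag by its max Gaussian similarity to each dictionary instance."""
        feats = []
        for bag in bags:                              # bag: (n_instances, d) array
            d2 = ((bag[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
            feats.append(np.exp(-gamma * d2).max(axis=0))
        return np.vstack(feats)

    rng = np.random.default_rng(4)
    # toy multi-instance data: positive bags contain one instance shifted toward +2
    def make_bag(positive):
        inst = rng.normal(size=(rng.integers(3, 8), 2))
        if positive:
            inst[0] += 2.0
        return inst

    bags = [make_bag(i % 2 == 0) for i in range(60)]
    labels = np.array([1 if i % 2 == 0 else 0 for i in range(60)])

    dictionary = np.vstack(bags)[rng.choice(sum(len(b) for b in bags), 30, replace=False)]
    X = bag_features(bags, dictionary, gamma=0.5)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.score(X, labels))                      # training accuracy of the sketch
    ```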

  13. Comparison of predictive performance of data mining algorithms in predicting body weight in Mengali rams of Pakistan

    Directory of Open Access Journals (Sweden)

    Senol Celik

    Full Text Available ABSTRACT The present study aimed at comparing the predictive performance of some data mining algorithms (CART, CHAID, Exhaustive CHAID, MARS, MLP, and RBF) on biometrical data of Mengali rams. To compare the predictive capability of the algorithms, the biometrical data regarding body measurements (body length, withers height, and heart girth) and testicular measurements (testicular length, scrotal length, and scrotal circumference) of Mengali rams were evaluated, using goodness-of-fit criteria, for their ability to predict live body weight. In addition, age was considered as a continuous independent variable. In this context, the MARS data mining algorithm was used for the first time to predict body weight in two forms, without (MARS_1) and with (MARS_2) interaction terms. The order of predictive accuracy of the algorithms was found to be CART > CHAID ≈ Exhaustive CHAID > MARS_2 > MARS_1 > RBF > MLP. Moreover, all tested algorithms provided strong predictive accuracy for estimating body weight. However, MARS is the only algorithm that generated a prediction equation for body weight. It is therefore hoped that these results might make a valuable contribution to predicting body weight, to describing the relationship between body weight and body and testicular measurements, and to revealing breed standards and conserving indigenous genetic resources for Mengali sheep breeding, enabling more profitable and productive sheep production. Use of data mining algorithms is useful for revealing the relationship between body weight and testicular traits in describing the breed standards of Mengali sheep.

  14. Evaluation of the performance of different firefly algorithms to the ...

    African Journals Online (AJOL)

    To solve the economic load dispatch problem, traditional and intelligent techniques have been applied. Researchers have shown interest in utilizing metaheuristic methods to solve complex optimization problems in real-life applications. In this paper, three variants of the firefly algorithm are applied to solve the nonlinear ELD ...
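
    Since the record is only a snippet, the following is a hedged minimal sketch of the basic firefly algorithm applied to a toy three-unit economic load dispatch with quadratic fuel costs and a power-balance penalty; all coefficients and algorithm parameters are illustrative.

    ```python
    import numpy as np

    # Toy ELD: three units, quadratic fuel cost a + b*P + c*P^2, demand 300 MW.
    a = np.array([100.0, 120.0, 90.0])
    b = np.array([2.0, 1.8, 2.2])
    c = np.array([0.010, 0.012, 0.008])
    pmin, pmax, demand = 50.0, 200.0, 300.0

    def cost(P):
        fuel = np.sum(a + b * P + c * P ** 2)
        return fuel + 1e3 * abs(P.sum() - demand)      # power-balance penalty

    def firefly(n_fireflies=25, iters=300, beta0=1.0, gamma=1e-4, alpha=5.0, seed=5):
        rng = np.random.default_rng(seed)
        X = rng.uniform(pmin, pmax, size=(n_fireflies, 3))
        f = np.array([cost(x) for x in X])
        for _ in range(iters):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:                    # move i toward brighter j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, 3)
                        X[i] = np.clip(X[i], pmin, pmax)
                        f[i] = cost(X[i])
            alpha *= 0.98                              # cool the random step
        best = np.argmin(f)
        return X[best], f[best]

    P_opt, total_cost = firefly()
    print(P_opt, P_opt.sum(), total_cost)              # dispatch summing to ~300 MW
    ```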

  15. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  16. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  17. Brush seal performance measurement system

    OpenAIRE

    Aksoy, Serdar; Akşit, Mahmut Faruk; Duran, Ertuğrul Tolga

    2009-01-01

    Brush seals are rapidly replacing conventional labyrinth seals in turbomachinery applications. Upon pressure application, seal stiffness increases drastically due to frictional bristle interlocking. Operating stiffness is critical to determine seal wear life. Typically, seal stiffness is measured by pressing a curved shoe to the brush bore. The static, unpressurized measurement is extrapolated to pressurized and high-speed operating conditions. This work presents a seal stiffness measurement system...

  18. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    In Part 1, the architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. The performance of algorithms when they are mapped onto one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results. It is observed that the analysis is very accurate. A performance analysis of parallel algorithms when mapped across clusters is presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.

  19. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performance of the four most commonly used MPPT algorithms is compared under real environmental conditions over longer periods. A dual identical experimental setup is designed so that two of the considered MPPT algorithms can be compared simultaneously. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
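
    A hedged sketch of the incremental conductance decision rule that the study ranks best: at the MPP dP/dV = 0, equivalently dI/dV = -I/V, and the reference voltage is stepped according to which side of the MPP the operating point lies; the step size and sample values are illustrative.

    ```python
    def incremental_conductance_step(v, i, v_prev, i_prev, v_ref, step=0.5):
        """One IC update of the PV reference voltage (illustrative step size, volts)."""
        dv, di = v - v_prev, i - i_prev
        if dv == 0:                       # voltage unchanged: use current change only
            if di > 0:
                v_ref += step
            elif di < 0:
                v_ref -= step
        else:
            if di / dv > -i / v:          # left of MPP (dP/dV > 0): raise voltage
                v_ref += step
            elif di / dv < -i / v:        # right of MPP (dP/dV < 0): lower voltage
                v_ref -= step
            # di/dv == -i/v: at the MPP, hold v_ref
        return v_ref

    # usage sketch: called every sampling period with fresh PV voltage/current readings
    v_ref = 30.0
    v_prev = i_prev = None
    for v, i in [(30.0, 5.0), (30.5, 4.98), (31.0, 4.95)]:   # hypothetical samples
        if v_prev is not None:
            v_ref = incremental_conductance_step(v, i, v_prev, i_prev, v_ref)
        v_prev, i_prev = v, i
    print(v_ref)
    ```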

  20. Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.

    Science.gov (United States)

    Gao, Jian; Moran, Eileen; Almenoff, Peter L

    2018-06-01

    Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. The objective was to develop a case-mix algorithm that hospitals and payers can use to measure and compare the cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from the Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R² reached 0.72 and 0.52 for cost on the transformed and raw scales, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purposes.

  1. A Multi-objective PMU Placement Method Considering Observability and Measurement Redundancy using ABC Algorithm

    Directory of Open Access Journals (Sweden)

    KULANTHAISAMY, A.

    2014-05-01

    Full Text Available This paper presents a Multi-objective Optimal Placement of Phasor Measurement Units (MOPP) method for large electric transmission systems. It is proposed for minimizing the number of Phasor Measurement Units (PMUs) required for complete system observability while simultaneously maximizing the measurement redundancy of the system. Measurement redundancy means the number of times a bus is monitored by the PMU set. A higher level of measurement redundancy can maximize the total system observability, which is desirable for reliable power system state estimation. Therefore, simultaneous optimization of the two conflicting objectives is performed using a binary-coded Artificial Bee Colony (ABC) algorithm. The complete-observability model of the power system is first prepared, and then the single-line-loss contingency condition is added to the main model. The efficiency of the proposed method is validated on the IEEE 14, 30, 57 and 118 bus test systems. The value of the ABC algorithm in finding the optimal number of PMUs and their locations is demonstrated by comparing its performance with earlier works.

  2. Mobility and reliability performance measurement.

    Science.gov (United States)

    2013-06-01

    This project grew out of the fact that mobility was identified early on as one of the key performance focus areas of NCDOT's strategic transformation effort. The Transformation Management Team (TMT) established a TMT Mobility Workstream Team in...

  3. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the F1 measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for

  4. A high performance hardware implementation image encryption with AES algorithm

    Science.gov (United States)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed, high-throughput encryption algorithm for encrypting images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline technique, a control unit based on logic gates, an optimal design of the multiplier blocks in the MixColumns phase, and simultaneous generation of keys and rounds. This procedure makes AES suitable for fast image encryption. An implementation of 128-bit AES on an Altera FPGA has been carried out, with the following results: a throughput of 6 Gbps at 471 MHz. The encryption time for a 32x32 test image is 1.15 ms.
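
    The paper describes an FPGA pipeline; as a hedged software analogue, the sketch below encrypts the raw bytes of a 32x32 test image with AES-128 in CTR mode using the cryptography package (recent versions accept Cipher without an explicit backend); key, nonce and image content are illustrative.

    ```python
    import os
    import numpy as np
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_ctr_encrypt(data: bytes, key: bytes, nonce: bytes) -> bytes:
        encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return encryptor.update(data) + encryptor.finalize()

    # 32x32 8-bit grayscale "image", mirroring the paper's test-image size
    image = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
    key, nonce = os.urandom(16), os.urandom(16)      # AES-128 key, 128-bit CTR nonce

    ciphertext = aes_ctr_encrypt(image.tobytes(), key, nonce)
    # applying the same keystream again decrypts: CTR mode is its own inverse
    recovered = np.frombuffer(aes_ctr_encrypt(ciphertext, key, nonce),
                              dtype=np.uint8).reshape(32, 32)
    assert np.array_equal(recovered, image)
    ```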

  5. Comparison of the inversion algorithms applied to the ozone vertical profile retrieval from SCIAMACHY limb measurements

    Directory of Open Access Journals (Sweden)

    A. Rozanov

    2007-09-01

    Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.

  6. Evaluation of the performance of existing non-laboratory based cardiovascular risk assessment algorithms

    Science.gov (United States)

    2013-01-01

    Background The high burden and rising incidence of cardiovascular disease (CVD) in resource-constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory-based cardiovascular risk assessment algorithms enables absolute risk assessment in resource-constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory-based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory-based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory-based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated, where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory-based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202

  7. A time reversal algorithm in acoustic media with Dirac measure approximations

    Science.gov (United States)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function in time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from measurements of the solution recorded by sensors during a time T along the boundary of a connected bounded domain. It is based both on the introduction of an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, and on the use of a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.

  8. Traffic Management Systems Performance Measurement: Final Report

    OpenAIRE

    Banks, James H.; Kelly, Gregory

    1997-01-01

    This report documents a study of performance measurement for Transportation Management Centers (TMCs). Performance measurement requirements were analyzed, data collection and management techniques were investigated, and case study traffic data system improvement plans were prepared for two Caltrans districts.

  9. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    Science.gov (United States)

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

  10. Performance measurement and insurance liabilities

    NARCIS (Netherlands)

    Plantinga, A; Huijgen, C

    2001-01-01

    In this article, the authors develop an attribution framework for evaluating the investment performance of institutional investors such as insurance companies. The model is useful in identifying the investment skills of insurance companies. This is accomplished by developing a dual benchmark for the

  11. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela

    2006-01-01

    Full Text Available This paper extends the view of image processing performance measures by presenting the use of such a measure as an actual value in a feedback structure. The underlying idea is that the control loop built in this way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented using the example of optical character recognition in an industrial application. Metrics for quantifying performance at different image processing levels are discussed. The issues that those metrics should address from both the image processing and the control point of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.

  12. Optimum Performance-Based Seismic Design Using a Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    S. Talatahari

    2014-01-01

    Full Text Available A hybrid optimization method is presented for the optimum seismic design of steel frames considering four performance levels. These performance levels are considered in determining the optimum design of structures so as to reduce the structural cost. A pushover analysis of steel building frameworks subject to equivalent-static earthquake loading is utilized. The algorithm is based on the concepts of the charged system search, in which each agent is affected by the local and global best positions stored in the charged memory, considering the governing laws of electrical physics. Comparison of the results of the hybrid algorithm with those of other metaheuristic algorithms shows the efficiency of the hybrid algorithm.

  13. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
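
    A hedged sketch of the core statistical-optimization step, an inverse-covariance-weighted combination of a background profile and an observed profile; the toy covariance models below are simple stand-ins, not the geographically varying estimates of the new algorithm.

    ```python
    import numpy as np

    def statistically_optimized_profile(x_obs, x_bg, R, B):
        """Optimal linear combination x = x_bg + B (B + R)^{-1} (x_obs - x_bg),
        i.e. the inverse-covariance-weighted mean of the two profiles."""
        K = B @ np.linalg.inv(B + R)          # gain: weight given to the observation
        return x_bg + K @ (x_obs - x_bg)

    # toy bending-angle-like profile on 60 altitude levels
    z = np.linspace(40, 80, 60)                              # km
    truth = np.exp(-z / 7.0)
    rng = np.random.default_rng(6)

    # observation noise grows with altitude; background error is small and correlated
    obs_sd = 0.02 * truth * (1 + (z - 40) / 10)
    R = np.diag(obs_sd ** 2)
    corr = np.exp(-np.abs(z[:, None] - z[None, :]) / 5.0)     # 5 km correlation length
    bg_sd = 0.05 * truth
    B = corr * np.outer(bg_sd, bg_sd)

    x_obs = truth + rng.normal(scale=obs_sd)
    x_bg = truth + rng.multivariate_normal(np.zeros(60), B)
    x_opt = statistically_optimized_profile(x_obs, x_bg, R, B)
    print(np.linalg.norm(x_opt - truth) <= np.linalg.norm(x_obs - truth))  # usually True
    ```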

  14. Eva versus Other Performance Measures

    OpenAIRE

    Hechmi Soumaya

    2013-01-01

    Creating value is not intended only to satisfy shareholders. It is also the way for a company to ensure its sustainability and to finance its growth. The company will not attract new capital if it destroys value. "The concept of value creation is none other than the intersection of strategy (creating value) and technique (evaluating the company)" (Powilewicz, 2002). The basic idea behind the different measures of value creation is that a company creates value for its ...

  15. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifications, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.
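
    A hedged sketch of the two-phase pipeline (feature selection, then SVM classification) on synthetic data shaped like the colon set; a univariate F-test filter stands in for the paper's PSO-based selection, which is not reproduced.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # synthetic stand-in for gene-expression data: many features, few informative
    X, y = make_classification(n_samples=62, n_features=2000, n_informative=29,
                               n_redundant=0, random_state=0)

    # Phase 1: select 29 features (univariate F-test here; the paper used PSO)
    # Phase 2: classify the selected features with an SVM
    pipeline = make_pipeline(SelectKBest(f_classif, k=29), SVC(kernel="linear", C=1.0))

    scores = cross_val_score(pipeline, X, y, cv=5)
    print(scores.mean())        # cross-validated accuracy of the sketch pipeline
    ```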

  16. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. The performance of Multilayer Feed Forward Artificial Neural Networks in image compression using different learning algorithms is examined in this paper. Based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm has better performance as compared to the other two algorithms.

  17. Internet Performance and Reliability Measurements

    International Nuclear Information System (INIS)

    Cottrell, Les

    2003-01-01

    Collaborative HEP research is dependent on good Internet connectivity. Although most local- and wide-area networks are carefully watched, there is little monitoring of connections that cross many networks. This paper describes work in progress at several sites to monitor Internet end-to-end performance between hundreds of HEP sites worldwide. At each collection site, ICMP ping packets are automatically sent periodically to sites of interest. The data is recorded and made available to analysis nodes, which collect the data from multiple collection sites and provide analysis and graphing. Future work includes improving the efficiency and accuracy of ping data collection

  18. Performance analysis of algorithms for retrieval of magnetic resonance images for interactive teleradiology

    Science.gov (United States)

    Atkins, M. Stella; Hwang, Robert; Tang, Simon

    2001-05-01

    We have implemented a prototype system consisting of a Java- based image viewer and a web server extension component for transmitting Magnetic Resonance Images (MRI) to an image viewer, to test the performance of different image retrieval techniques. We used full-resolution images, and images compressed/decompressed using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm. We examined the SPIHT decompression algorithm using both non- progressive and progressive transmission, focusing on the running times of the algorithm, client memory usage and garbage collection. We also compared the Java implementation with a native C++ implementation of the non- progressive SPIHT decompression variant. Our performance measurements showed that for uncompressed image retrieval using a 10Mbps Ethernet, a film of 16 MR images can be retrieved and displayed almost within interactive times. The native C++ code implementation of the client-side decoder is twice as fast as the Java decoder. If the network bandwidth is low, the high communication time for retrieving uncompressed images may be reduced by use of SPIHT-compressed images, although the image quality is then degraded. To provide diagnostic quality images, we also investigated the retrieval of up to 3 images on a MR film at full-resolution, using progressive SPIHT decompression. The Java-based implementation of progressive decompression performed badly, mainly due to the memory requirements for maintaining the image states, and the high cost of execution of the Java garbage collector. Hence, in systems where the bandwidth is high, such as found in a hospital intranet, SPIHT image compression does not provide advantages for image retrieval performance.

  19. Measurement methods and interpretation algorithms for the determination of the remaining lifetime of the electrical insulation

    Directory of Open Access Journals (Sweden)

    Engster F.

    2005-12-01

    Full Text Available The paper presents a set of on-line and off-line measuring methods for the dielectric parameters of electrical insulation, as well as a method for interpreting the results aimed at determining the occurrence of damage and establishing its speed of evolution. These results finally lead to the determination of the lifetime under certain imposed safety conditions. The interpretation of the measurement results is based on analytical algorithms that also allow the calculation of the index of correlation between the real results and the mathematical interpolation. A comparative analysis between different measuring and interpretation methods is performed. Certain events that occurred during the measurements, including their causes, are considered. The analytical methods have been refined during dielectric measurements performed over about 25 years at some 140 turbo and hydro power plants. Finally, a measurement program is proposed whose application will allow the correlation of the on-line and off-line dielectric measurements, thus obtaining a reliable, high-accuracy technology for the estimation of the available lifetime of electrical insulation.

  20. ALGORITHMS FOR OPTIMIZATION OF SYSTEM PERFORMANCE IN LAYERED DETECTION SYSTEMS UNDER DETECTOR CORRELATION

    International Nuclear Information System (INIS)

    Wood, Thomas W.; Heasler, Patrick G.; Daly, Don S.

    2010-01-01

    Almost all of the 'architectures' for radiation detection systems in Department of Energy (DOE) and other USG programs rely on some version of layered detector deployment. Efficacy analyses of layered (or more generally extended) detection systems in many contexts often assume statistical independence among detection events and thus predict monotonically increasing system performance with the addition of detection layers. We show this to be a false conclusion for the ROC curves typical of most current technology gamma detectors, and more generally show that statistical independence is often an unwarranted assumption for systems in which there is ambiguity about the objects to be detected. In such systems, a model of correlation among detection events allows optimization of system algorithms for interpretation of detector signals. These algorithms are framed as optimal discriminant functions in joint signal space, and may be applied to gross counting or spectroscopic detector systems. We have shown how system algorithms derived from this model dramatically improve detection probabilities compared to the standard serial detection operating paradigm for these systems. These results would not surprise anyone who has confronted the problem of correlated errors (or failure rates) in analogous contexts, but the issue seems to be largely underappreciated among those analyzing the radiation detection problem: independence is widely assumed and experimental studies typically fail to measure correlation. This situation, if not rectified, will lead to several unfortunate results, including overconfidence in system efficacy, overinvestment in layers of similar technology, and underinvestment in diversity among detection assets.
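    The effect of correlation on a two-layer system can be illustrated with a small Monte Carlo sketch; the Gaussian signal model, correlation value and thresholds below are assumptions chosen for illustration and do not reproduce the report's detector ROC curves.

        # Minimal sketch (illustrative, not the report's model): two-layer detection with
        # correlated detector signals, comparing the serial "both layers alarm" rule with
        # a joint discriminant in the two-dimensional signal space at the same false-alarm rate.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        rho = 0.8                                   # assumed correlation between layers
        cov = np.array([[1.0, rho], [rho, 1.0]])

        bkg = rng.multivariate_normal([0.0, 0.0], cov, n)   # background-only signals
        src = rng.multivariate_normal([1.5, 1.5], cov, n)   # source-present signals

        # Serial paradigm: alarm only if *both* layers exceed their individual thresholds.
        thr = 1.0
        serial_fa = np.mean((bkg > thr).all(axis=1))
        serial_pd = np.mean((src > thr).all(axis=1))

        # Joint discriminant: linear (Fisher-style) score in the joint signal space.
        w = np.linalg.solve(cov, np.array([1.5, 1.5]))
        score_bkg, score_src = bkg @ w, src @ w
        cut = np.quantile(score_bkg, 1.0 - serial_fa)       # match the false-alarm rate
        joint_pd = np.mean(score_src > cut)

        print(f"false-alarm rate      : {serial_fa:.4f}")
        print(f"serial detection prob : {serial_pd:.3f}")
        print(f"joint detection prob  : {joint_pd:.3f}")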

  1. Performance Evaluation of Spectral Clustering Algorithm using Various Clustering Validity Indices

    OpenAIRE

    M. T. Somashekara; D. Manjunatha

    2014-01-01

    In spite of the popularity of the spectral clustering algorithm, its evaluation procedures are still in a developmental stage. In this article, we have taken the benchmark IRIS dataset for a comparative study of twelve indices for evaluating the spectral clustering algorithm. The results of the spectral clustering technique were also compared with the k-means algorithm. The validity of the indices was also verified with accuracy and Normalized Mutual Information (NMI) scores. Spectral clustering algo...
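    A minimal sketch of this kind of comparison, assuming scikit-learn is available, is shown below; the preprocessing and affinity settings are illustrative choices, and only NMI is computed rather than the twelve indices studied in the article.

        # Spectral clustering vs. k-means on the Iris benchmark, scored with NMI.
        from sklearn.cluster import KMeans, SpectralClustering
        from sklearn.datasets import load_iris
        from sklearn.metrics import normalized_mutual_info_score
        from sklearn.preprocessing import StandardScaler

        X, y = load_iris(return_X_y=True)
        X = StandardScaler().fit_transform(X)

        spectral = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                                      n_neighbors=10, random_state=0).fit_predict(X)
        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        print("NMI spectral :", round(normalized_mutual_info_score(y, spectral), 3))
        print("NMI k-means  :", round(normalized_mutual_info_score(y, kmeans), 3))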

  2. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    Science.gov (United States)

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better with near-optimal parameter settings than with default ones; the best-scoring sorting algorithm received the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms, but in terms of accuracy it performed significantly less well. None of the algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Performance Evaluation of New Joint EDF-RM Scheduling Algorithm for Real Time Distributed System

    Directory of Open Access Journals (Sweden)

    Rashmi Sharma

    2014-01-01

    Full Text Available In real-time systems, meeting deadlines is the main target of every scheduling algorithm. Earliest Deadline First (EDF), Rate Monotonic (RM), and Least Laxity First are renowned algorithms that work well in their own contexts. A very common problem in EDF is the domino effect, which arises under overload conditions (EDF does not work well when the system is overloaded). Similarly, the performance of RM degrades under underload conditions. We can say that both algorithms complement each other. Deadline misses in both cases happen because of their utilization-bound strategies. Therefore, in this paper we propose a new scheduling algorithm that overcomes the drawbacks of both existing algorithms. The Joint EDF-RM scheduling algorithm is implemented in a global scheduler that permits task migration between processors in the system. To check the improved behavior of the proposed algorithm we perform simulations. Results are evaluated in terms of Success Ratio (SR), Average CPU Utilization (ECU), Failure Ratio (FR), and Maximum Tardiness parameters. In the end, the results are compared with the existing EDF, RM, and D_R_EDF algorithms. It is shown that the proposed algorithm performs better under overload as well as underload conditions.
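    The overload behaviour discussed above can be illustrated with a toy uniprocessor simulation; the task set, time resolution and miss-counting rule below are assumptions for illustration, and this is not the paper's Joint EDF-RM algorithm.

        # Toy uniprocessor simulation (illustrative only, not the paper's Joint EDF-RM):
        # count deadline misses under EDF and RM for an overloaded periodic task set.
        from dataclasses import dataclass

        @dataclass
        class Job:
            task: int        # index into the task set
            deadline: int    # absolute deadline (= release + period here)
            remaining: int   # execution time still needed

        def simulate(policy, tasks, horizon):
            """tasks: list of (period, wcet) with deadline == period; returns misses."""
            jobs, missed = [], 0
            for t in range(horizon):
                for i, (period, wcet) in enumerate(tasks):
                    if t % period == 0:
                        jobs.append(Job(i, t + period, wcet))
                # jobs past their deadline with work left are counted, then abandoned
                for j in jobs:
                    if t >= j.deadline and j.remaining > 0:
                        missed += 1
                jobs = [j for j in jobs if j.remaining > 0 and t < j.deadline]
                if jobs:
                    if policy == "EDF":
                        run = min(jobs, key=lambda j: j.deadline)
                    else:  # RM: smaller period -> higher priority
                        run = min(jobs, key=lambda j: tasks[j.task][0])
                    run.remaining -= 1
            return missed

        taskset = [(4, 2), (5, 2), (10, 3)]   # utilization 0.5 + 0.4 + 0.3 = 1.2 (overload)
        for policy in ("EDF", "RM"):
            print(policy, "missed deadlines:", simulate(policy, taskset, horizon=200))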

  4. PRINCIPLES OF THE SUPPLY CHAIN PERFORMANCE MEASUREMENT

    OpenAIRE

    BEATA ŒLUSARCZYK; SEBASTIAN KOT

    2012-01-01

    Measurement of performance is a crucial activity in every business, allowing for increased effectiveness. The lack of suitable performance measurement is especially noticeable in complex systems such as supply chains. Responsible persons cannot manage effectively without a suitable set of measures that serve as a basis for comparison with previous data or with the performance of other supply chains. The analysis shows that it is very hard to find a balanced set of supply chain performance measures that sh...

  5. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W. [Univ. of California, Berkeley, CA (United States)

    2017-09-14

    This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (eg A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a
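    The reproducibility issue described above stems from the nonassociativity of floating-point addition; the short sketch below only demonstrates that motivation with made-up numbers and is not the reproducible summation algorithm developed in the project.

        # Floating-point addition is not associative: the same numbers summed in
        # different orders generally give results that differ in the last bits.
        import random

        random.seed(0)
        xs = [random.uniform(-1e12, 1e12) for _ in range(100_000)] + [1e-3] * 1000

        s_forward = sum(xs)
        s_reverse = sum(reversed(xs))
        random.shuffle(xs)
        s_shuffled = sum(xs)

        print(f"forward : {s_forward!r}")
        print(f"reverse : {s_reverse!r}")
        print(f"shuffled: {s_shuffled!r}")
        print("bitwise identical:", s_forward == s_reverse == s_shuffled)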

  6. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Martin-Haugh, Stewart

    2014-01-01

    A description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for electrons is presented. The ATLAS trigger software will be restructured from two software levels into a single stage which poses a big challenge for the trigger algorithms in terms of execution time and maintaining the physics performance. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are discussed, utilising the planned merging of the current two stages of the ATLAS trigger.

  7. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    The fast decoding algorithm for one point algebraic-geometry codes of Sakata, Elbrond Jensen, and Hoholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns. It turns out...

  8. Performance tests of the Kramers equation and boson algorithms for simulations of QCD

    International Nuclear Information System (INIS)

    Jansen, K.; Liu Chuan; Jegerlehner, B.

    1995-12-01

    We present a performance comparison of the Kramers equation and the boson algorithms for simulations of QCD with two flavors of dynamical Wilson fermions and gauge group SU(2). Results are obtained on 6³×12, 8³×12 and 16⁴ lattices. In both algorithms a number of optimizations are installed. (orig.)

  9. Performance Measurement in Global Product Development

    DEFF Research Database (Denmark)

    Taylor, Thomas Paul; Ahmed-Kristensen, Saeema

    2013-01-01

    there is a requirement for the process to be monitored and measured relative to the business strategy of an organisation. It was found that performance measurement is a process that helps achieve sustainable business success, encouraging a learning culture within organisations. To this day, much of the research into how...... performance is measured has focussed on the process of product development. However, exploration of performance measurement related to global product development is relatively unexplored and a need for further research is evident. This paper contributes towards understanding how performance is measured...

  10. Performance Evaluation of Bidding-Based Multi-Agent Scheduling Algorithms for Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Antonio Gordillo

    2014-10-01

    Full Text Available Artificial Intelligence techniques have been applied to many problems in manufacturing systems in recent years. In the specific field of manufacturing scheduling many studies have been published trying to cope with the complexity of the manufacturing environment. One of the most utilized approaches is (multi-)agent-based scheduling. Nevertheless, despite the large list of studies reported in this field, there is no resource or scientific study on measuring the performance of this type of approach under very common and critical execution situations. This paper focuses on multi-agent systems (MAS) based algorithms for task allocation, particularly in manufacturing applications. The goal is to provide a mechanism to measure the performance of agent-based scheduling approaches for manufacturing systems under key critical situations such as a dynamic environment, rescheduling, and priority changes. With this mechanism it is possible to simulate critical situations and to stress the system in order to measure the performance of a given agent-based scheduling method. The proposed mechanism is a pioneering approach for performance evaluation of bidding-based MAS approaches for manufacturing scheduling. The proposed method and evaluation methodology can be used to run tests on different manufacturing floors since they are independent of the workshop configuration. Moreover, the evaluation results presented in this paper show the key factors and scenarios that most affect market-like MAS approaches for manufacturing scheduling.
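    As a rough illustration of bidding-based task allocation (a generic contract-net-style auction, not the evaluation mechanism proposed in the paper), the sketch below lets each machine bid its expected completion time and awards the task to the lowest bidder; the machine names, task durations and bid rule are assumptions.

        # Generic bidding-based task allocation sketch (illustrative only).
        import random

        random.seed(42)
        machines = {f"M{i}": 0.0 for i in range(3)}              # committed load per machine
        tasks = [("job%02d" % k, random.uniform(1.0, 5.0)) for k in range(8)]

        for name, duration in tasks:
            bids = {m: load + duration for m, load in machines.items()}  # completion-time bid
            winner = min(bids, key=bids.get)
            machines[winner] += duration
            print(f"{name}: awarded to {winner} (bid {bids[winner]:.2f})")

        print("final machine loads:", {m: round(l, 2) for m, l in machines.items()})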

  11. Health Plan Performance Measurement within Medicare Subvention.

    Science.gov (United States)

    1998-06-01

    ...the causes of poor performance (Siren & Laffel, 1996). Although outcome measures such as nosocomial infection rates, admission rates for select... defined. Traditional outcome measures include infection rates, morbidity, and mortality. The problem with these traditional measures is... Indicator categories listed include Maternal/Child Care Indicators, Nursing Staffing Indicators, Outcome Indicators, Technical Outcomes, Plan Performance, and Stability of Health Plan.

  12. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    Directory of Open Access Journals (Sweden)

    Chun-Wei Tsai

    2014-01-01

    Full Text Available This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
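    The paper's key observation, that genes shared by the whole population tend to survive the evolution, can be sketched for TSP tours as a search for edges common to every individual; the tiny population below is illustrative, and the freezing and caching machinery of the full algorithm is omitted.

        # Identify edges common to every TSP tour in a GA population (illustrative sketch).
        def tour_edges(tour):
            """Undirected edge set of a TSP tour given as a list of city indices."""
            return {frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))}

        def common_edges(population):
            """Edges present in every individual of the population."""
            return set.intersection(*(tour_edges(t) for t in population))

        population = [
            [0, 1, 2, 3, 4, 5],
            [0, 1, 2, 4, 3, 5],
            [0, 1, 2, 3, 5, 4],
        ]
        # Edges (0,1) and (1,2) appear in every tour and could be frozen in later generations.
        print(sorted(tuple(sorted(e)) for e in common_edges(population)))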

  13. Comparison Performance of Genetic Algorithm and Ant Colony Optimization in Course Scheduling Optimizing

    Directory of Open Access Journals (Sweden)

    Imam Ahmad Ashari

    2016-11-01

    Full Text Available Scheduling problems at a university are a complex type of scheduling problem. The scheduling process has to be carried out at every turn of the semester. The core of the course scheduling problem at a university is the number of components that must be considered in building the schedule; these components include students, lecturers, time slots and rooms, together with limits and conditions that must be respected so that the schedule contains no collisions such as double-booked rooms or lecturers. The most appropriate technique for resolving such a scheduling problem is optimization, which can give the best desired results. Metaheuristic algorithms offer many ways to push solutions towards the optimum. In this paper, we use a genetic algorithm and an ant colony optimization algorithm, both metaheuristics, to solve the course scheduling problem. The two algorithms are tested and compared to determine which performs best. The algorithms were tested using course scheduling data from a university in Semarang. From the experimental results we conclude that the genetic algorithm has better performance than the ant colony optimization algorithm in solving this case of course scheduling.

  14. Telephony Over IP: A QoS Measurement-Based End to End Control Algorithm

    Directory of Open Access Journals (Sweden)

    Luigi Alcuri

    2004-12-01

    Full Text Available This paper presents a method for admitting voice calls in Telephony over IP (ToIP) scenarios. This method, called QoS-Weighted CAC, aims to guarantee Quality of Service to telephony applications. We use a measurement-based call admission control algorithm, which detects congested network links through feedback on overall link utilization. This feedback is based on measures of packet delivery latencies related to voice over IP connections at the edges of the transport network. In this way we introduce a closed-loop control method, which is able to auto-adapt the quality margin on the basis of network load and specific service level requirements. Moreover, we evaluate the difference in performance achieved by different queue management configurations to guarantee Quality of Service to telephony applications; here the goal is to evaluate the weight of the edge-router queue configuration in a complex, realistic Telephony over IP scenario. We compare many well-known queue scheduling algorithms, such as SFQ, WRR, RR, WIRR, and Priority. This comparison aims to locate queue schedulers in a more general control scheme context where different elements such as DiffServ marking and admission control algorithms contribute to the overall Quality of Service required by real-time voice conversations. By means of software simulations we compare this solution with other call admission methods already described in the scientific literature, in particular Measured Sum, Bandwidth Equivalent with Hoeffding Bounds, and Simple Measure CAC, on the planes of complexity, stability, management, tunability to service level requirements, and compatibility with actual network implementations. On the basis of the results we try to highlight the possible advantages of this QoS-Weighted solution in comparison with the other CAC solutions.

  15. Performance Assessment of Hybrid Data Fusion and Tracking Algorithms

    DEFF Research Database (Denmark)

    Sand, Stephan; Mensing, Christian; Laaraiedh, Mohamed

    2009-01-01

    accuracy, whereas the received signal strength measurements of Wi-Fi hotspots give more accurate results if coverage is available. Finally, for large scale outdoor scenarios, cellular TDOA measurements can support global navigation satellite systems (GNSSs) especially in critical scenarios, where only...

  16. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Full Text Available Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method for eliminating infeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity). Possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy and cost efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those with a black-box performance measure.
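    The grid-based discretization plus dynamic programming idea can be sketched as a shortest-path computation over a small layered graph; the layer and node counts and the analytic stand-in for the edge cost below are assumptions, since in the paper each edge cost would come from a trajectory simulation.

        # Dynamic programming over a layered graph (illustrative values only).
        LAYERS, NODES = 6, 5                      # gates along the course x lateral positions

        def edge_cost(layer, i, j):
            """Stand-in for a simulated traversal time between node i and node j."""
            return 1.0 + 0.2 * abs(i - j) + 0.05 * ((j - NODES // 2) ** 2)

        best = [0.0] * NODES                      # cost to reach each node of the first layer
        pred = []
        for layer in range(1, LAYERS):
            new_best, new_pred = [], []
            for j in range(NODES):
                costs = [best[i] + edge_cost(layer, i, j) for i in range(NODES)]
                i_star = min(range(NODES), key=lambda i: costs[i])
                new_best.append(costs[i_star])
                new_pred.append(i_star)
            best, pred = new_best, pred + [new_pred]

        # Backtrack the approximate minimum-time piecewise-linear line.
        j = min(range(NODES), key=lambda i: best[i])
        path = [j]
        for layer_pred in reversed(pred):
            j = layer_pred[j]
            path.append(j)
        print("node index per layer (finish -> start):", path, "cost:", round(best[path[0]], 3))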

  17. Performance assessment of electric power generations using an adaptive neural network algorithm

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Anvari, M.; Saberi, M.

    2007-01-01

    Efficiency frontier analysis has been an important approach to evaluating firms' performance in the private and public sectors, and many efficiency frontier analysis methods have been reported in the literature. However, the assumptions made for each of these methods are restrictive; each has its strengths as well as major limitations. This study proposes a non-parametric efficiency frontier analysis method based on the adaptive neural network technique for measuring efficiency, as a complementary tool to the techniques commonly used in previous efficiency studies. The proposed computational method is able to find a stochastic frontier based on a set of input-output observational data and does not require explicit assumptions about the functional form of the stochastic frontier. In this algorithm, an approach similar to econometric methods is used to calculate the efficiency scores. Moreover, the effect of the return to scale of decision-making units (DMUs) on their efficiency is included, and the unit used for the correction is selected with regard to its scale (under the constant return to scale assumption). An example using real data is presented for illustrative purposes. In the application to the power generation sector of Iran, we find that the neural network provides more robust results and identifies more efficient units than the conventional methods, since better performance patterns are explored. Moreover, principal component analysis (PCA) is used to verify the findings of the proposed algorithm

  18. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  19. Jets/MET Performance with the combination of Particle flow algorithm and SoftKiller

    CERN Document Server

    Yamamoto, Kohei

    2017-01-01

    The main purpose of my work is to study the performance of the combination of the Particle Flow algorithm (PFlow) and SoftKiller (SK), "PF+SK". The ATLAS experiment currently employs topological clusters (Topo) for jet reconstruction, but we want to replace them with a more effective approach, PFlow. PFlow provides another method to reconstruct jets[1]. With this algorithm, we combine the energy deposits in the calorimeters with the measurements in the ID tracker. This strategy enables us to claim that these consistent measurements in the detector come from the same particles, and it avoids double counting. SK is a simple and effective way of suppressing pile-up[2]. In this method, the rapidity-azimuth plane is divided into square patches and particles below a transverse-momentum threshold are eliminated; the threshold is derived for each event so that the median of the patch pT density becomes zero. Practically, this is equivalent to gradually increasing the threshold until exactly half of the patches become empty. Because there is no official calibration on PF+SK so far, we have t...
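    A toy sketch of the SoftKiller-style selection described above is given below; the particle spectrum, grid size and the use of only non-empty patches are simplifying assumptions, and this is not the ATLAS implementation.

        # Toy SoftKiller-style pile-up suppression: grid the (y, phi) plane into patches,
        # take the median of each patch's hardest-particle pT as the cut, then drop
        # everything below it (roughly half of the patches become empty afterwards).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        y = rng.uniform(-4, 4, n)                 # rapidity
        phi = rng.uniform(0, 2 * np.pi, n)        # azimuth
        pt = rng.exponential(1.0, n)              # soft, pileup-like toy spectrum (GeV)

        patch = 0.6
        iy = ((y + 4) // patch).astype(int)
        iphi = (phi // patch).astype(int)
        keys = iy * 100 + iphi                    # one integer key per patch

        pt_max = {}
        for k, p in zip(keys, pt):
            pt_max[k] = max(pt_max.get(k, 0.0), p)
        pt_cut = np.median(list(pt_max.values()))

        kept = pt > pt_cut
        emptied = sum(1 for v in pt_max.values() if v <= pt_cut)
        print(f"pT cut = {pt_cut:.2f} GeV, particles kept: {kept.sum()} of {n}")
        print(f"patches emptied by the cut: {emptied} of {len(pt_max)}")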

  20. A semi-active suspension control algorithm for vehicle comprehensive vertical dynamics performance

    Science.gov (United States)

    Nie, Shida; Zhuang, Ye; Liu, Weiping; Chen, Fan

    2017-08-01

    Comprehensive performance of the vehicle, including ride quality and road-holding, is of great value in practice. Many up-to-date semi-active control algorithms improve vehicle dynamics performance effectively. However, it is hard to improve comprehensive performance because of the conflict between ride quality and road-holding around the second-order resonance. Hence, a new control algorithm is proposed to achieve a good trade-off between ride quality and road-holding. In this paper, the properties of the invariant points are analysed, which gives an insight into the performance conflict around the second-order resonance. Based on this, a new control algorithm is proposed. The algorithm employs a novel frequency selector to balance suspension ride and handling performance by adopting medium damping around the second-order resonance. The results of this study show that the proposed control algorithm can improve ride quality and suspension working space by up to 18.3% and 8.2%, respectively, with little loss of road-holding compared to the passive suspension. Consequently, the comprehensive performance can be improved by 6.6%. The proposed algorithm therefore has great potential to be implemented in practice.

  1. Synthesis of work-zone performance measures.

    Science.gov (United States)

    2013-09-01

    The main objective of this synthesis was to identify and summarize how agencies collect, analyze, and report different work-zone : traffic-performance measures, which include exposure, mobility, and safety measures. The researchers also examined comm...

  2. Measuring the performance of business incubators

    OpenAIRE

    VANDERSTRAETEN, Johanna; MATTHYSSENS, Paul; VAN WITTELOOSTUIJN, Arjen

    2012-01-01

    This paper focuses on incubator performance measurement. First, we report the findings of an extensive literature review. Both existing individual measures and more comprehensive measurement systems are discussed. This literature review shows that most incubator researchers and practitioners only use one or a few indicators for performance evaluation, and that existing measurement systems do not recognize the importance of short, medium and long-term results, do not always include an incubato...

  3. A simple biota removal algorithm for 35 GHz cloud radar measurements

    Science.gov (United States)

    Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas

    2018-03-01

    Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting the intended meteorological cloud content from the measurement often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using the radar Doppler spectral moment profile measurements. Point and volume target-based theoretical radar sensitivity curves are used for removing the receiver noise floor, and the identified radar echoes are scrutinized according to the signal decorrelation period. Here, it is hypothesized that cloud echoes are temporally more coherent and homogeneous and have a longer correlation period than biota. This can be checked statistically using an approximately 4 s sliding mean and standard deviation of the reflectivity profiles. This step helps to screen out clouds by filtering out the biota. The final important step strives for the retrieval of cloud height. The proposed algorithm identifies cloud height solely through the systematic characterization of Z variability, using knowledge of the local atmospheric vertical structure in addition to theoretical, statistical and echo-tracing tools. Thus, the characterization of high-resolution cloud radar reflectivity profile measurements has been performed with theoretical echo sensitivity curves and observed echo statistics for true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota, whereas TEST combined with linear depolarization ratio and spectral width performs well in filtering out biota within the highly turbulent shallow cumulus clouds in the convective boundary layer (CBL). This TEST technique is
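    The coherence test described above can be illustrated with a one-gate toy time series and a sliding standard-deviation filter; the window length, threshold and synthetic reflectivity values are assumptions for illustration and this is not the full TEST algorithm.

        # Keep temporally coherent (low-variability) reflectivity samples, reject the rest.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 600                                          # time samples at one range gate
        z = np.full(n, np.nan)
        z[200:400] = -20.0 + rng.normal(0.0, 0.8, 200)   # coherent cloud segment (dBZ)
        biota = rng.choice(n, 60, replace=False)
        z[biota] = rng.uniform(-35.0, -5.0, 60)          # incoherent biota-like spikes

        window = 9                                       # ~4 s at an assumed sampling rate
        pad = window // 2
        keep = np.zeros(n, dtype=bool)
        for t in range(pad, n - pad):
            seg = z[t - pad:t + pad + 1]
            if np.isnan(seg).any():
                continue
            keep[t] = seg.std() < 2.0                    # coherence (low variability) test

        idx = np.flatnonzero(keep)
        print("samples kept as cloud:", keep.sum(), "of", n)
        print("all kept samples lie inside the true cloud segment:",
              bool(np.all((idx >= 200) & (idx < 400))))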

  4. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    Science.gov (United States)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

    The contact concentration measurement data assimilation problem is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of solving the adjoint problem. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data is assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solution can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate used is the upper one acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate

  5. A string matching based algorithm for performance evaluation of ...

    Indian Academy of Sciences (India)

    Zanibbi et al (2011) have proposed performance metrics based on bipartite graphs at the stroke level. ... bipartite graphs on which metrics based on Hamming distances are defined. ... Cited works include Document Image Analysis for Libraries 320–331 and Lee H J and Wang J S 1997, Design of a mathematical expression understanding system.

  6. A string matching based algorithm for performance evaluation of ...

    Indian Academy of Sciences (India)

    In this paper, we have addressed the problem of automated performance evaluation of Mathematical Expression (ME) recognition. Automated evaluation requires that recognition output and ground truth in some editable format like LaTeX, MathML, etc. have to be matched. But standard forms can have extraneous symbols ...

  7. General Video Game Evaluation Using Relative Algorithm Performance Profiles

    DEFF Research Database (Denmark)

    Nielsen, Thorbjørn; Barros, Gabriella; Togelius, Julian

    2015-01-01

    In order to generate complete games through evolution we need generic and reliable evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea throug...

  8. Algebraic and algorithmic frameworks for optimized quantum measurements

    DEFF Research Database (Denmark)

    Laghaout, Amine; Andersen, Ulrik Lund

    2015-01-01

    von Neumann projections are the main operations by which information can be extracted from the quantum to the classical realm. They are, however, static processes that do not adapt to the states they measure. Advances in the field of adaptive measurement have shown that this limitation can...... be overcome by "wrapping" the von Neumann projectors in a higher-dimensional circuit which exploits the interplay between measurement outcomes and measurement settings. Unfortunately, the design of adaptive measurement has often been ad hoc and setup specific. We shall here develop a unified framework...

  9. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2014-01-01

    Full Text Available Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  10. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    Science.gov (United States)

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  11. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Science.gov (United States)

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  12. The service of public services performance measurement

    DEFF Research Database (Denmark)

    Lystbæk, Christian Tang

    2014-01-01

    that performance measurement serves as “rituals of verification” which promotes the interests of political masters and their mistresses rather than public service. Another area of concern is the cost of performance measurement. Hood & Peters (2004:278) note that performance measurement is likely to “distract...... measurement suggests a range of contested and contradictory propositions. Its alleged benefits include public assurance, better functioning of supply markets for public services, and direct improvements of public services. But the literature also demonstrates the existence of significant concern about...... the actual impact, the costs and unintended consequences associated with performance measurement. This paper identifies the main rationales and rationalities in the scholarly discourse on public services performance measurement. It concludes with some suggestions on how to deal with the many rationales...

  13. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    Science.gov (United States)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation where pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace, and on some trials for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.

  14. COMPANY PERFORMANCE MEASUREMENT AND REPORTING METHODS

    Directory of Open Access Journals (Sweden)

    Nicu Ioana Elena

    2012-12-01

    Full Text Available One of the priorities of economic research has been, and remains, the re-evaluation of the notion of performance and especially the exploration of indicators that would reflect as accurately as possible the subtleties of the economic entity. The main purpose of this paper is to highlight the main company performance measurement and reporting methods. Performance is a concept that raises many questions concerning the most accurate or the best method of reporting performance at the company level. The research methodology has aimed at studying the Romanian and foreign specialized literature dealing with the analyzed field, and studying journals specialized in company performance measurement. While financial performance indicators are considered to offer an accurate image of the company's situation, the modern approach through non-financial indicators offers a new perspective on performance measurement, one based on simplicity. In conclusion, after the theoretical study, I have noticed that the methods of performance measurement, reporting and interpretation are varied, the opinions regarding the best performance measurement methods are contradictory, and companies prefer resorting to financial indicators, which still play a more important role in the consolidation of company performance measurement than non-financial indicators do.

  15. Performance of multiobjective computational intelligence algorithms for the routing and wavelength assignment problem

    Directory of Open Access Journals (Sweden)

    Jorge Patiño

    2016-01-01

    Full Text Available This paper presents a performance evaluation of computational intelligence algorithms based on multiobjective theory for the solution of the Routing and Wavelength Assignment (RWA) problem in optical networks. The study evaluates the Firefly Algorithm, the Differential Evolution Algorithm, the Simulated Annealing Algorithm and two versions of the Particle Swarm Optimization algorithm. The paper provides a description of the multiobjective algorithms; then, an evaluation of the performance of the multiobjective algorithms versus mono-objective approaches is presented, for different traffic loads, different numbers of wavelengths and the wavelength conversion process over the NSFNet topology. Simulation results show that mono-objective algorithms properly solve the RWA problem for low data traffic and a low number of wavelengths. However, the multiobjective approaches adapt better to online traffic when the number of wavelengths available in the network increases, as well as when wavelength conversion is implemented in the nodes.

  16. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    Science.gov (United States)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  17. Performance evaluation of image segmentation algorithms on microscopic image data

    Czech Academy of Sciences Publication Activity Database

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    Roč. 275, č. 1 (2015), s. 65-85 ISSN 0022-2720 R&D Projects: GA ČR GAP103/12/2211 Institutional support: RVO:67985556 Keywords : image segmentation * performance evaluation * microscopic images Subject RIV: JC - Computer Hardware ; Software Impact factor: 2.136, year: 2015 http://library.utia.cas.cz/separaty/2014/ZOI/zitova-0434809-DOI.pdf

  18. Mapping the Conjugate Gradient Algorithm onto High Performance Heterogeneous Computers

    Science.gov (United States)

    2014-05-01

    Only fragments of this record's abstract were indexed: reference entries (Solution of sparse indefinite systems of linear equations, Society for Industrial and Applied Mathematics, 12(4), 617–629; Parker, M., 2009, Taking advantage...) and part of the list of symbols and abbreviations (API: Application Programming Interface; ASIC: Application Specific Integrated Circuit). The surviving text notes that, for the FPGA designer, final implementations were nearly always performed using fixed-point or integer arithmetic (Parker 2009). With the recent
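    Since the abstract survives only as fragments, a minimal reference sketch of the conjugate gradient iteration itself may be useful; this is an illustrative dense NumPy version, not the report's FPGA or heterogeneous implementation.

        # Plain conjugate gradient for a symmetric positive-definite system A x = b.
        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x                       # residual
            p = r.copy()                        # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        rng = np.random.default_rng(0)
        M = rng.random((50, 50))
        A = M @ M.T + 50 * np.eye(50)           # symmetric positive-definite test matrix
        b = rng.random(50)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(A @ x - b))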

  19. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    International Nuclear Information System (INIS)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-01-01

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose 4 and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose 4 at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility

  20. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin, E-mail: khj.snuh@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Park, Chang Min, E-mail: cmpark@radiol.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Song, Yong Sub, E-mail: terasong@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Lee, Sang Min, E-mail: sangmin.lee.md@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Goo, Jin Mo, E-mail: jmgoo@plaza.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of)

    2014-05-15

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose4 and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose4 at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  1. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12mm; +100, -630 and -800HU) at 120kVp with tube current-time settings of 10, 20, 50, and 100mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. 2-Phase NSGA II: An Optimized Reward and Risk Measurements Algorithm in Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    Seyedeh Elham Eftekharian

    2017-11-01

    Full Text Available Portfolio optimization is a serious challenge for financial engineering and has drawn special attention among investors. It has two objectives: to maximize the reward, calculated by expected return, and to minimize the risk. Variance has been considered as the risk measure. There are many real-world constraints that ultimately lead to a non-convex search space, such as the cardinality constraint. As a consequence, parametric quadratic programming cannot be applied and it becomes essential to apply a multi-objective evolutionary algorithm (MOEA). In this paper, a new efficient multi-objective portfolio optimization algorithm called the 2-phase NSGA II algorithm is developed and its results are compared with the NSGA II algorithm. It was found that 2-phase NSGA II significantly outperformed the NSGA II algorithm.

  3. A Relative-Localization Algorithm Using Incomplete Pairwise Distance Measurements for Underwater Applications

    Directory of Open Access Journals (Sweden)

    Kae Y. Foo

    2010-01-01

    Full Text Available The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pair-wise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field-trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.
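    A much simpler stand-in for the paper's weighted-majorization MDS is a nonlinear least-squares fit of node coordinates to the observed distances, with missing pairs simply excluded; the node count, noise level and fraction of missing measurements below are illustrative assumptions.

        # Relative 2-D localization from an incomplete, noisy set of pairwise distances.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        true = rng.uniform(0, 100, (6, 2))                    # hypothetical node layout (m)
        D = np.linalg.norm(true[:, None] - true[None], axis=-1)
        iu = np.triu_indices(6, k=1)                          # the 15 unordered pairs
        observed = rng.random(len(iu[0])) > 0.3               # ~30% of pairs unobserved
        meas = D[iu] + rng.normal(0, 0.5, len(iu[0]))         # noisy range measurements

        def residuals(x):
            X = x.reshape(6, 2)
            d = np.linalg.norm(X[:, None] - X[None], axis=-1)[iu]
            return (d - meas)[observed]                       # missing pairs get zero weight

        sol = least_squares(residuals, rng.uniform(0, 100, 12))
        X_hat = sol.x.reshape(6, 2)                           # relative coordinates only
        err = np.abs(np.linalg.norm(X_hat[:, None] - X_hat[None], axis=-1)[iu] - D[iu])
        print("mean distance error, observed pairs (m):", round(err[observed].mean(), 3))
        print("mean distance error, missing pairs (m) :", round(err[~observed].mean(), 3))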

  4. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
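
    The following is a minimal sketch of the Monte Carlo step described above, assuming a hypothetical inside_silhouette(view, point) test that projects a 3-D point into a view and checks whether it falls inside that view's silhouette; it is not the authors' implementation.

      import random

      def monte_carlo_volume(inside_silhouette, views, bbox, n_samples=100000):
          """Approximate object volume by sampling random points in a bounding box.

          A point counts as inside the object only if its projection lies inside
          the silhouette of every view (a visual-hull style approximation).
          """
          (xmin, xmax), (ymin, ymax), (zmin, zmax) = bbox
          box_volume = (xmax - xmin) * (ymax - ymin) * (zmax - zmin)
          hits = 0
          for _ in range(n_samples):
              p = (random.uniform(xmin, xmax),
                   random.uniform(ymin, ymax),
                   random.uniform(zmin, zmax))
              if all(inside_silhouette(v, p) for v in views):
                  hits += 1
          return box_volume * hits / n_samples

    Accuracy improves with the number of samples at the usual Monte Carlo rate of $1/\sqrt{n}$.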

  5. Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Calyam, Prasad

    2014-09-15

    The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.

  6. Joint Analysis of Multiple Algorithms and Performance Measures

    NARCIS (Netherlands)

    de Campos, Cassio P.; Benavoli, Alessio

    There has been an increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the problem is then how to compare it with the state of the art.

  7. Performance and development for the Inner Detector Trigger Algorithms at ATLAS

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for Run 2 starting in spring 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  8. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Penc, Ondrej

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. (paper)

  9. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  10. Algorithm to determine electrical submersible pump performance considering temperature changes for viscous crude oils

    Energy Technology Data Exchange (ETDEWEB)

    Valderrama, A. [Petroleos de Venezuela, S.A., Distrito Socialista Tecnologico (Venezuela); Valencia, F. [Petroleos de Venezuela, S.A., Instituto de Tecnologia Venezolana para el Petroleo (Venezuela)

    2011-07-01

    In the heavy oil industry, electrical submersible pumps (ESPs) are used to transfer energy to fluids through stages made up of one impeller and one diffuser. Since liquid temperature increases through the different stages, viscosity might change between the inlet and outlet of the pump, thus affecting performance. The aim of this research was to create an algorithm to determine ESPs' performance curves considering temperature changes through the stages. A computational algorithm was developed and then compared with data collected in a laboratory with a CG2900 ESP. Results confirmed that when the fluid's viscosity is affected by the temperature changes, the stages of multistage pump systems do not have the same performance. Thus the developed algorithm could help production engineers to take viscosity changes into account and optimize the ESP design. This study developed an algorithm to take into account the fluid viscosity changes through pump stages.

  11. Performance of the ATLAS Inner Detector Trigger algorithms in pp collisions at 7TeV

    CERN Document Server

    Masik, Jiri; The ATLAS collaboration

    2011-01-01

    The ATLAS trigger performs online event selection in three stages. The Inner Detector information is used in the second (Level 2) and third (Event Filter) stages. Track reconstruction in the silicon detectors and transition radiation tracker contributes significantly to the rejection of uninteresting events while retaining a high signal efficiency. To achieve an overall trigger execution time of 40 ms per event, Level 2 tracking uses fast custom algorithms. The Event Filter tracking uses modified offline algorithms, with an overall execution time of 4s per event. Performance of the trigger tracking algorithms with data collected by ATLAS in 2011 is shown. The high efficiency and track quality of the trigger tracking algorithms for identification of physics signatures is presented. We also discuss the robustness of the reconstruction software with respect to the presence of multiple interactions per bunch crossing, an increasingly important feature for optimal performance moving towards the design luminosities...

  12. Performance of the ATLAS Inner Detector Trigger algorithms in pp collisions at 7TeV

    CERN Document Server

    Masik, Jiri; The ATLAS collaboration

    2011-01-01

    The ATLAS trigger performs online event selection in three stages. The Inner Detector information is used in the second (Level 2) and third (Event Filter) stages. Track reconstruction in the silicon detectors and transition radiation tracker contributes significantly to the rejection of uninteresting events while retaining a high signal efficiency. To achieve an overall trigger execution time of 40 ms per event, Level 2 tracking uses fast custom algorithms. The Event Filter tracking uses modified offline algorithms, with an overall execution time of 4s per event. Performance of the trigger tracking algorithms with data collected by ATLAS in 2011 is shown. The high efficiency and track quality of the trigger tracking algorithms for identification of physics signatures is presented. We also discuss the robustness of the reconstruction software with respect to the presence of multiple interactions per bunch crossing, an increasingly important feature for optimal performance moving towards the design luminosities...

  13. Probing optimal measurement configuration for optical scatterometry by the multi-objective genetic algorithm

    Science.gov (United States)

    Chen, Xiuguo; Gu, Honggang; Jiang, Hao; Zhang, Chuanwei; Liu, Shiyuan

    2018-04-01

    Measurement configuration optimization (MCO) is a ubiquitous and important issue in optical scatterometry, whose aim is to probe the optimal combination of measurement conditions, such as wavelength, incidence angle, azimuthal angle, and/or polarization directions, to achieve a higher measurement precision for a given measuring instrument. In this paper, the MCO problem is investigated and formulated as a multi-objective optimization problem, which is then solved by the multi-objective genetic algorithm (MOGA). The case study on the Mueller matrix scatterometry for the measurement of a Si grating verifies the feasibility of the MOGA in handling the MCO problem in optical scatterometry by making a comparison with the Monte Carlo simulations. Experiments performed at the achieved optimal measurement configuration also show good agreement between the measured and calculated best-fit Mueller matrix spectra. The proposed MCO method based on MOGA is expected to provide a more general and practical means to solve the MCO problem in the state-of-the-art optical scatterometry.

  14. A Critique of Health System Performance Measurement.

    Science.gov (United States)

    Lynch, Thomas

    2015-01-01

    Health system performance measurement is a ubiquitous phenomenon. Many authors have identified multiple methodological and substantive problems with performance measurement practices. Despite the validity of these criticisms and their cross-national character, the practice of health system performance measurement persists. Theodore Marmor suggests that performance measurement invokes an "incantatory response" wrapped within "linguistic muddle." In this article, I expand upon Marmor's insights using Pierre Bourdieu's theoretical framework to suggest that, far from an aberration, the "linguistic muddle" identified by Marmor is an indicator of a broad struggle about the representation and classification of public health services as a public good. I present a case study of performance measurement from Alberta, Canada, examining how this representational struggle occurs and what the stakes are. © The Author(s) 2015.

  15. Ridge Distance Estimation in Fingerprint Images: Algorithm and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Tian Jie

    2004-01-01

    Full Text Available It is important to accurately estimate the ridge distance, an intrinsic texture property of a fingerprint image. Up to now, only a few articles have touched directly upon ridge distance estimation, and little has been published providing a detailed evaluation of methods for ridge distance estimation, in particular the traditional spectral analysis method applied in the frequency domain. In this paper, a novel method operating on non-overlapping blocks, called the statistical method, is presented to estimate the ridge distance. Direct estimation ratio (DER) and estimation accuracy (EA) are defined and used as parameters, along with time consumption (TC), to evaluate the performance of these two methods for ridge distance estimation. Based on a comparison of the performance of these two methods, a third hybrid method is developed to combine the merits of both. Experimental results indicate that DER is 44.7%, 63.8%, and 80.6%; EA is 84%, 93%, and 91%; and TC is , , and seconds, with the spectral analysis method, statistical method, and hybrid method, respectively.

  16. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The

  17. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and

  18. Development and performance analysis of a lossless data reduction algorithm for voip

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, standard compression algorithms are applied in VoIP systems. However, at high compression ratios these algorithms degrade voice quality. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in the digitized voice frames (VFs) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain voice quality along with improving VoIP data transfer rates. (author)

  19. Muon Identification performance: hadron mis-Id measurements and RPC Muon selections

    CERN Document Server

    CMS Collaboration

    2014-01-01

    Pion, kaon, proton mis-identification probabilities as muons have been measured for different Muon ID algorithms. Results from two independent analyses are presented. The performance of a new muon ID algorithm based on matching of inner tracks with hits in muon RPC chambers is also presented.

  20. Does hospital financial performance measure up?

    Science.gov (United States)

    Cleverley, W O; Harvey, R K

    1992-05-01

    Comparisons are continuously being made between the financial performance, products and services of the healthcare industry and those of non-healthcare industries. Several useful measures of financial performance (profitability, liquidity, financial risk, asset management and replacement, and debt capacity) are used by the authors to compare the financial performance of the hospital industry with that of the industrial, transportation and utility sectors. Hospitals exhibit weaknesses in several areas. Goals are suggested for each measure to bring hospitals closer to competitive levels.

  1. Success rate and entanglement measure in Grover's search algorithm for certain kinds of four qubit states

    International Nuclear Information System (INIS)

    Chamoli, Arti; Bhandari, C.M.

    2005-01-01

    Entanglement plays a crucial role in the efficacy of quantum algorithms. Whereas the role of entanglement is quite obvious and conspicuous in teleportation and superdense coding, it is not so distinct in other situations such as the search algorithm. The starting state in Grover's search algorithm is ordinarily a uniform superposition state (not entangled) with a success probability around unity. An operational entanglement measure has been defined and investigated analytically for two-qubit states [O. Biham, M.A. Nielsen, T. Osborne, Phys. Rev. A 65 (2002) 062312; Y. Shimoni, D. Shapira, O. Biham, Phys. Rev. A 69 (2004) 062303], seeking a relationship with the success rate of the search algorithm. This Letter examines the success rate of the search algorithm for various four-qubit states. Analytic expressions have been worked out which provide the success rate and entanglement measure for certain kinds of four-qubit input states.
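
    For reference (a textbook relation, not a result of this record), starting from the uniform superposition over $N$ items with a single marked state, the success probability after $k$ Grover iterations is

      $$P_{\mathrm{succ}}(k)=\sin^{2}\bigl((2k+1)\theta\bigr),\qquad \sin\theta=\frac{1}{\sqrt{N}},$$

    which approaches unity after roughly $(\pi/4)\sqrt{N}$ iterations; deviations of the four-qubit input state from this uniform, unentangled superposition change the achievable success rate, which is the dependence studied here.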

  2. Financial gains and risks in pay-for-performance bonus algorithms.

    Science.gov (United States)

    Cromwell, Jerry; Drozd, Edward M; Smith, Kevin; Trisolini, Michael

    2007-01-01

    Considerable attention has been given to evidence-based process indicators associated with quality of care, while much less attention has been given to the structure and key parameters of the various pay-for-performance (P4P) bonus and penalty arrangements using such measures. In this article we develop a general model of quality payment arrangements and discuss the advantages and disadvantages of the key parameters. We then conduct simulation analyses of four general P4P payment algorithms by varying seven parameters, including indicator weights, indicator intercorrelation, degree of uncertainty regarding intervention effectiveness, and initial baseline rates. Bonuses averaged over several indicators appear insensitive to weighting, correlation, and the number of indicators. The bonuses are sensitive to disease manager perceptions of intervention effectiveness, facing challenging targets, and the use of actual-to-target quality levels versus rates of improvement over baseline.

  3. On music performance, theories, measurement and diversity

    NARCIS (Netherlands)

    Timmers, R.; Honing, H.J.

    2002-01-01

    Measurement of musical performances is of interest to studies in musicology, music psychology and music performance practice, but in general it has not been considered the main issue: when analyzing Western classical music, these disciplines usually focus on the score rather than the performance.

  4. Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data

    Science.gov (United States)

    Chierici, F.; Embriaco, D.; Morucci, S.

    2017-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures that can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider realistic working scenarios in the test. We also present an application of the algorithm to the tsunami generated by the Tohoku earthquake on March 11th, 2011, using data recorded by several tide gauges scattered over the Pacific area.
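
    As a rough generic sketch of the band-pass step mentioned above (the frequency band, filter order and sampling rate are placeholders, not values from the record), a tsunami-band filter applied to a sea-level or bottom-pressure series could look like this:

      import numpy as np
      from scipy.signal import butter, sosfilt

      def tsunami_bandpass(series, fs, low=1/7200.0, high=1/300.0, order=4):
          """Band-pass a sea-level series between roughly 5-minute and 2-hour periods."""
          sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
          return sosfilt(sos, series)  # causal, so usable sample-by-sample in real time

      # Example with synthetic data: 15-s sampling, semi-diurnal tide plus a small pulse
      fs = 1.0 / 15.0
      t = np.arange(0.0, 86400.0, 15.0)
      tide = 1.0 * np.sin(2.0 * np.pi * t / 44712.0)
      pulse = 0.05 * np.exp(-((t - 43200.0) / 600.0) ** 2)
      filtered = tsunami_bandpass(tide + pulse, fs)

    The tide-removal and detection-threshold logic of the TDA sit on top of such a filtered series and are not reproduced here.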

  5. Development of a Behavioral Performance Measure

    Directory of Open Access Journals (Sweden)

    Marcelo Cabus Klotzle

    2012-09-01

    Full Text Available Since the fifties, several measures have been developed to quantify the performance of investments or choices involving uncertain outcomes. Most of these measures are based on Expected Utility Theory, but since the nineties a number of measures have been proposed based on Non-Expected Utility Theory. Among the non-expected utility theories, Prospect Theory, the foundation of Behavioral Finance, stands out. Based on this theory, this study proposes a new performance measure that embeds loss aversion together with the distortion of probabilities in the choice of alternatives. A hypothetical example is presented in which various performance measures, including the new measure, are compared. The results showed that the ordering of the assets varied depending on the performance measure adopted. As expected, the new performance measure clearly captured the probability distortion and loss aversion of the decision maker, i.e., the assets with the greatest negative deviations from the target were those with the worst performance.
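
    For context, the Tversky-Kahneman forms usually used to embed loss aversion and probability distortion (given here as background, not as the exact specification of the proposed measure) are

      $$v(x)=\begin{cases}x^{\alpha}, & x\ge 0\\ -\lambda(-x)^{\beta}, & x<0\end{cases} \qquad\text{and}\qquad w(p)=\frac{p^{\gamma}}{\bigl(p^{\gamma}+(1-p)^{\gamma}\bigr)^{1/\gamma}},$$

    where $x$ is the deviation of an outcome from the target (the reference point), $\lambda>1$ captures loss aversion and $\gamma$ controls the distortion of probabilities; a prospect-theoretic performance measure then scores an asset by a weighted sum of the form $\sum_{i}w(p_{i})\,v(x_{i})$ over its outcomes.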

  6. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgindy, Tarek [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dobbs, Alex [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-03

    A framework for assessing the performance of short-term solar forecasting is presented in conjunction with a range of numerical results using global horizontal irradiation (GHI) from the open-source Surface Radiation Budget (SURFRAD) data network. A suite of popular machine learning algorithms is compared according to a set of statistically distinct metrics and benchmarked against the persistence-of-cloudiness forecast and a cloud motion forecast. Results show significant improvement compared to the benchmarks with trade-offs among the machine learning algorithms depending on the desired error metric. Training inputs include time series observations of GHI for a history of years, historical weather and atmospheric measurements, and corresponding date and time stamps such that training sensitivities might be inferred. Prediction outputs are GHI forecasts for 1, 2, 3, and 4 hours ahead of the issue time, and they are made for every month of the year for 7 locations. Photovoltaic power and energy outputs can then be made using the solar forecasts to better understand power system impacts.

  7. ATLAS High-Level Trigger Performance for Calorimeter-Based Algorithms in LHC Run-I

    CERN Document Server

    Mann, A; The ATLAS collaboration

    2013-01-01

    The ATLAS detector operated during the three years of Run-I of the Large Hadron Collider, collecting information on a large number of proton-proton events. One of the most important results obtained so far is the discovery of a Higgs boson. More precise measurements of this particle must be performed, and there are other very important physics topics still to be explored. One of the key components of the ATLAS detector is its trigger system. It is composed of three levels: one (called Level 1 - L1) built on custom hardware and two others based on software algorithms - called Level 2 (L2) and Event Filter (EF) – altogether referred to as the ATLAS High Level Trigger. The ATLAS trigger is responsible for reducing the almost 20 million collisions per second produced by the accelerator to less than 1000. The L2 operates only in the regions tagged by the first hardware level as containing possibly interesting physics, while the EF operates on the full detector, normally using offline-like algorithms to...

  8. Performance of b tagging algorithms in proton-proton collisions at 13 TeV with Phase 1 CMS detector

    CERN Document Server

    CMS Collaboration

    2018-01-01

    Many measurements as well as searches for new physics beyond the standard model at the LHC rely on the efficient identification of heavy flavour jets, i.e. jets containing b or c hadrons. In this Detector Performance Summary, the performance of these algorithms is presented, based on proton-proton collision data recorded by the CMS experiment at 13 TeV. Expected performance of the heavy flavour identification algorithms with the upgraded tracker detector are presented. Correction factors for a different performance in data and simulation are evaluated in 41.9 fb-1 of collision data collected in 2017. Finally, the reconstruction of observables relevant for heavy flavour identification in 2018 data is studied.

  9. Performance measures for a dialysis setting.

    Science.gov (United States)

    Gu, Xiuzhu; Itoh, Kenji

    2018-03-01

    This study from Japan extracted performance measures for dialysis unit management and investigated their characteristics from professional views. Two surveys were conducted using self-administered questionnaires, in which dialysis managers/staff were asked to rate the usefulness of 44 performance indicators. A total of 255 managers and 2,097 staff responded. Eight performance measures were elicited from dialysis manager and staff responses: these were safety, operational efficiency, quality of working life, financial effectiveness, employee development, mortality, patient/employee satisfaction and patient-centred health care. These performance measures were almost compatible with those extracted in overall healthcare settings in a previous study. Internal reliability, content and construct validity of the performance measures for the dialysis setting were ensured to some extent. As a general trend, both dialysis managers and staff perceived performance measures as highly useful, especially for safety, mortality, operational efficiency and patient/employee satisfaction, but showed relatively low concerns for patient-centred health care and employee development. However, dialysis managers' usefulness perceptions were significantly higher than staff. Important guidelines for designing a holistic hospital/clinic management system were yielded. Performance measures must be balanced for outcomes and performance shaping factors (PSF); a common set of performance measures could be applied to all the healthcare settings, although performance indicators of each measure should be composed based on the application field and setting; in addition, sound causal relationships between PSF and outcome measures/indicators should be explored for further improvement. © 2017 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  10. Performance evaluation of recommendation algorithms on Internet of Things services

    Science.gov (United States)

    Mashal, Ibrahim; Alsaryrah, Osama; Chung, Tein-Yaw

    2016-06-01

    The Internet of Things (IoT) is the next wave of the industry revolution that will initiate many services, such as personal health care and green energy monitoring, which people may subscribe to for their convenience. Recommending IoT services to users based on the objects they own will become crucial for the success of IoT. In this work, we introduce the concept of service recommender systems in IoT through a formal model. As a first attempt in this direction, we have proposed a hyper-graph model for the IoT recommender system in which each hyper-edge connects users, objects, and services. Next, we studied the usefulness of traditional recommendation schemes and their hybrid approaches for IoT service recommendation (IoTSRS) based on existing well-known metrics. The preliminary results show that existing approaches perform reasonably well but require further extension for IoTSRS. Several challenges are discussed to point out the direction of future development in IoTSRS.

  11. Application of Machine Learning Algorithms for the Query Performance Prediction

    Directory of Open Access Journals (Sweden)

    MILICEVIC, M.

    2015-08-01

    Full Text Available This paper analyzes the relationship between the system load/throughput and the query response time in a real Online transaction processing (OLTP system environment. Although OLTP systems are characterized by short transactions, which normally entail high availability and consistent short response times, the need for operational reporting may jeopardize these objectives. We suggest a new approach to performance prediction for concurrent database workloads, based on the system state vector which consists of 36 attributes. There is no bias to the importance of certain attributes, but the machine learning methods are used to determine which attributes better describe the behavior of the particular database server and how to model that system. During the learning phase, the system's profile is created using multiple reference queries, which are selected to represent frequent business processes. The possibility of the accurate response time prediction may be a foundation for automated decision-making for database (DB query scheduling. Possible applications of the proposed method include adaptive resource allocation, quality of service (QoS management or real-time dynamic query scheduling (e.g. estimation of the optimal moment for a complex query execution.
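
    A minimal sketch of the learning phase described above, assuming the 36-attribute system state vectors and the measured response times of a reference query are already available as arrays (the file names and the choice of a random-forest regressor are illustrative, not from the paper):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      X = np.load("state_vectors.npy")    # shape (n_samples, 36): one system state per execution
      y = np.load("response_times.npy")   # shape (n_samples,): reference-query response time [s]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X_tr, y_tr)

      print("R^2 on held-out load conditions:", model.score(X_te, y_te))
      # Feature importances hint at which state attributes best describe this server
      top = sorted(zip(model.feature_importances_, range(X.shape[1])), reverse=True)[:5]
      print(top)

    The trained model can then feed a scheduler that defers a complex query when its predicted response time exceeds a quality-of-service bound.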

  12. The ATLAS Trigger Algorithms Upgrade and Performance in Run-2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken improvements resulted in more pile-up robust selection efficiencies and event ra...

  13. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    Science.gov (United States)

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
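
    As background (a standard empirical-Bayes shrinkage form, not the exact method used in the article), a reliability-adjusted physician score blends the observed rate with the group average:

      $$\hat{\theta}_{i}=R_{i}\,\bar{y}_{i}+(1-R_{i})\,\bar{y}_{\mathrm{group}},\qquad R_{i}=\frac{\sigma^{2}_{\mathrm{between}}}{\sigma^{2}_{\mathrm{between}}+\sigma^{2}_{\mathrm{within}}/n_{i}},$$

    where $n_{i}$ is the physician's case volume; small samples yield low reliability $R_{i}$ and are pulled strongly toward the group mean, which is what suppresses spurious signals driven by random error.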

  14. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    Science.gov (United States)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capability of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
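
    For reference, the dichotomous scores quoted above follow from the usual 2x2 contingency table of hits ($H$), false alarms ($F$), misses ($M$) and correct negatives ($Z$):

      $$\mathrm{POD}=\frac{H}{H+M},\qquad \mathrm{FAR}=\frac{F}{H+F},\qquad \mathrm{HK}=\frac{H}{H+M}-\frac{F}{F+Z}.$$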

  15. Towards integrating environmental performance in divisional performance measurement

    Directory of Open Access Journals (Sweden)

    Collins C Ngwakwe

    2014-08-01

    Full Text Available This paper suggests an integration of environmental performance measurement (EPM into conventional divisional financial performance measures as a catalyst to enhance managers’ drive toward cleaner production and sustainable development. The approach is conceptual and normative; and using a hypothetical firm, it suggests a model to integrate environmental performance measure as an ancillary to conventional divisional financial performance measures. Vroom’s motivation theory and other literature evidence indicate that corporate goals are achievable in an environment where managers’ efforts are recognised and thus rewarded. Consequently the paper suggests that environmentally motivated managers are important to propel corporate sustainability strategy toward desired corporate environmental governance and sustainable economic development. Thus this suggested approach modestly adds to existing environmental management accounting (EMA theory and literature. It is hoped that this paper may provide an agenda for further research toward a practical application of the suggested method in a firm.

  16. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    Science.gov (United States)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The Firefly Algorithm (FA) mimics the firefly attraction process to solve optimization problems. In FA, the fireflies are ranked using a sorting algorithm; the original FA uses bubble sort for this ranking. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the unconstrained benchmark functions from CEC 2005 [22]. FA with bubble sort and FA with quick sort are compared with respect to best, worst and mean values, standard deviation, number of comparisons and execution time. The experimental results show that FA with quick sort requires fewer comparisons but more execution time. An increased number of fireflies helps convergence to the optimal solution, and when the dimension is varied the algorithm performs better at lower dimensions than at higher dimensions.
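
    A small generic sketch of the ranking step being compared (not the authors' code): in FA the fireflies are ordered by brightness each generation, and the study only changes the sorting routine used for that ordering. The quick-sort variant is shown here through Python's built-in O(n log n) sort as a stand-in.

      def bubble_sort_rank(brightness):
          """O(n^2) ranking used in the original FA; returns indices and comparison count."""
          order = list(range(len(brightness)))
          comparisons = 0
          for i in range(len(order)):
              for j in range(len(order) - 1 - i):
                  comparisons += 1
                  if brightness[order[j]] < brightness[order[j + 1]]:
                      order[j], order[j + 1] = order[j + 1], order[j]
          return order, comparisons

      def fast_sort_rank(brightness):
          """O(n log n) ranking (built-in sort standing in for quick sort)."""
          return sorted(range(len(brightness)), key=lambda k: brightness[k], reverse=True)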

  17. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
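
    As an illustration of one of the listed heuristics (a generic Min-min sketch under the usual expected-completion-time model, not code from either paper):

      def min_min_schedule(etc):
          """Min-min scheduling: etc[t][m] is the expected time of task t on machine m.

          Returns a task-to-machine assignment and the resulting makespan.
          """
          n_tasks, n_machines = len(etc), len(etc[0])
          ready = [0.0] * n_machines          # machine available times
          unscheduled = set(range(n_tasks))
          assignment = {}
          while unscheduled:
              # The pair with the globally smallest completion time is exactly the task
              # whose minimum completion time is smallest, placed on its best machine.
              t, m, ct = min(
                  ((t, m, ready[m] + etc[t][m])
                   for t in unscheduled for m in range(n_machines)),
                  key=lambda x: x[2],
              )
              assignment[t] = m
              ready[m] = ct
              unscheduled.remove(t)
          return assignment, max(ready)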

  18. Performance of direct and iterative algorithms on an optical systolic processor

    Science.gov (United States)

    Ghosh, A. K.; Casasent, D.; Neuman, C. P.

    1985-11-01

    The frequency-multiplexed optical linear algebra processor (OLAP) is treated in detail with attention to its performance in the solution of systems of linear algebraic equations (LAEs). General guidelines suitable for most OLAPs, including digital-optical processors, are advanced concerning system and component error source models, guidelines for appropriate use of direct and iterative algorithms, the dominant error sources, and the effect of multiple simultaneous error sources. Specific results are advanced on the quantitative performance of both direct and iterative algorithms in the solution of systems of LAEs and in the solution of nonlinear matrix equations. Acoustic attenuation is found to dominate iterative algorithms and detector noise to dominate direct algorithms. The effect of multiple spatial errors is found to be additive. A theoretical expression for the amount of acoustic attenuation allowed is advanced and verified. Simulations and experimental data are included.

  19. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.

  20. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    Energy Technology Data Exchange (ETDEWEB)

    2016-08-22

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for $W$ and $H$. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low-rank factors $W$ and $H$ within the alternating iterations.
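
    A single-node sketch of the alternating idea (using simple multiplicative updates in NumPy as a stand-in for the exact non-negative least-squares solves; the distributed-memory MPI layer and the communication-avoiding analysis are the paper's contribution and are not reproduced here):

      import numpy as np

      def nmf(A, rank, iters=200, eps=1e-9):
          """Factor a non-negative matrix A (m x n) as W (m x rank) times H (rank x n)."""
          rng = np.random.default_rng(0)
          W = rng.random((A.shape[0], rank))
          H = rng.random((rank, A.shape[1]))
          for _ in range(iters):
              # Multiplicative updates keep both factors non-negative
              H *= (W.T @ A) / (W.T @ W @ H + eps)
              W *= (A @ H.T) / (W @ H @ H.T + eps)
          return W, H

      A = np.abs(np.random.default_rng(1).standard_normal((100, 80)))
      W, H = nmf(A, rank=10)
      print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))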

  1. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

    In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm to two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering
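
    For context, the EM iteration usually meant by such Poisson-noise image restoration is the Richardson-Lucy update (standard form, not transcribed from the paper)

      $$\hat{f}^{(k+1)}_{j}=\frac{\hat{f}^{(k)}_{j}}{\sum_{i}h_{ij}}\sum_{i}h_{ij}\,\frac{g_{i}}{\bigl(H\hat{f}^{(k)}\bigr)_{i}},$$

    where $g$ is the observed radiograph, $H=(h_{ij})$ the imaging system's blur (point-spread function) and $\hat{f}^{(k)}$ the current image estimate; the relative SNR then compares signal detectability in the restored $\hat{f}$ against the unprocessed $g$.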

  2. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    Science.gov (United States)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job Shop Scheduling, viewed as a state-space search problem, belongs to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to compare and evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.

  3. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high-performance processing algorithms for single sonar images or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high-fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper the main components of the SSIE are described and examples of different... processing steps are given...

  4. Introduction to control system performance measurements

    CERN Document Server

    Garner, K C

    1968-01-01

    Introduction to Control System Performance Measurements presents the methods of dynamic measurements, specifically as they apply to control system and component testing. This book provides an introduction to the concepts of statistical measurement methods.Organized into nine chapters, this book begins with an overview of the applications of automatic control systems that pervade almost every area of activity ranging from servomechanisms to electrical power distribution networks. This text then discusses the common measurement transducer functions. Other chapters consider the basic wave

  5. Internal Performance Measurement Systems: Problems and Solutions

    DEFF Research Database (Denmark)

    Jakobsen, Morten; Mitchell, Falconer; Nørreklit, Hanne

    2010-01-01

    This article pursues two aims: to identify problems and dangers related to the operational use of internal performance measurement systems of the Balanced Scorecard (BSC) type, and to provide some guidance on how performance measurement systems may be designed to overcome these problems. ... The analysis uses and extends Nørreklit's (2000) critique of the BSC by applying the concepts developed therein to contemporary research on the BSC and to the development of practice in performance measurement. The analysis is of relevance for many companies in the Asia-Pacific area as an increasing number

  6. Experimental analysis of the performance of machine learning algorithms in the classification of navigation accident records

    Directory of Open Access Journals (Sweden)

    REIS, M V. S. de A.

    2017-06-01

    Full Text Available This paper aims to evaluate the use of machine learning techniques in a database of marine accidents. We analyzed and evaluated the main causes and types of marine accidents in the Northern Fluminense region. For this, machine learning techniques were used. The study showed that the modeling can be done in a satisfactory manner using different configurations of classification algorithms, varying the activation functions and training parameters. The SMO (Sequential Minimal Optimization) algorithm showed the best performance result.

  7. Measurement uncertainty analysis techniques applied to PV performance measurements

    International Nuclear Information System (INIS)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results

  8. Telerobotic system performance measurement - Motivation and methods

    Science.gov (United States)

    Kondraske, George V.; Khoury, George J.

    1992-01-01

    A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy described is presented. Consideration is given to the general systems performance theory (GSPT) to tackle human performance problems as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed. Insight into the design of test protocols which elicit application-independent data is described.

  9. Performance measurement in transport sector analysis

    Directory of Open Access Journals (Sweden)

    M. Išoraitė

    2004-06-01

    Full Text Available The article analyses the following issues: 1. Performance measurement in the literature. Performance measurement has an important role to play in the efficient and effective management of organizations. Kaplan and Johnson highlighted the failure of financial measures to reflect changes in the competitive circumstances and strategies of modern organizations. Many authors have focused attention on how organizations can design more appropriate measurement systems. Based on literature, consultancy experience and action research, numerous processes have been developed that organizations can follow in order to design and implement systems. Many frameworks have been proposed that support these processes. The objective of such frameworks is to help organizations define a set of measures that reflect their objectives and assess their performance appropriately. 2. Measuring transport sector performance and its impacts. The purpose of transport measurement is to identify opportunities for enhancing transport performance. Successful transport sector management requires a system to analyze its efficiency and effectiveness as well as plan interventions if transport sector performance needs improvement. Transport impacts must be measurable and monitorable so that the person responsible for the project intervention can decide when and how to influence them. Performance indicators provide a means to measure and monitor impacts. These indicators essentially reflect quantitative and qualitative aspects of impacts at given times and places. 3. Transport sector output and input. Transport sector inputs are the resources required to deliver transport sector outputs. Transport sector inputs are typically: human resources, particularly skilled resources (including specialist consulting inputs); technology processes such as equipment and work; and finance, both public and private. 4. Transport sector policy and institutional framework; 5. Cause – effect linkages; 6

  10. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality.

    Science.gov (United States)

    Braithwaite, Scott R; Giraud-Carrier, Christophe; West, Josh; Barnes, Michael D; Hanson, Carl Lee

    2016-05-16

    One of the leading causes of death in the United States (US) is suicide and new methods of assessment are needed to track its risk in real time. Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identify the clinically significant suicidal rate in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Machine learning algorithms are efficient in differentiating people who are at a suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data.
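
    The diagnostic figures quoted above (sensitivity, specificity, positive and negative predictive value) all follow from a 2x2 confusion matrix. The sketch below shows how such figures are computed from hypothetical counts; the numbers and the function name are illustrative assumptions, not taken from the study.

    ```python
    # Minimal sketch: screening metrics from a 2x2 confusion matrix.
    # The counts used below are hypothetical, not the study's data.

    def screening_metrics(tp, fp, tn, fn):
        """Return sensitivity, specificity, PPV, NPV and accuracy."""
        sensitivity = tp / (tp + fn)          # true-positive rate (recall)
        specificity = tn / (tn + fp)          # true-negative rate
        ppv = tp / (tp + fp)                  # positive predictive value
        npv = tn / (tn + fn)                  # negative predictive value
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        return sensitivity, specificity, ppv, npv, accuracy

    if __name__ == "__main__":
        # Hypothetical counts for a 135-participant sample (illustrative only).
        labels = ("sensitivity", "specificity", "PPV", "NPV", "accuracy")
        for name, value in zip(labels, screening_metrics(tp=16, fp=5, tn=104, fn=10)):
            print(f"{name}: {value:.2f}")
    ```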

  11. Activity measurement algorithm in solid radioactive waste clearance procedure

    International Nuclear Information System (INIS)

    Hinca, R.; Skala, L.

    2014-01-01

    Most European metrology experts prefer, in the radioactive waste characterization process, to use the scaling factor method in combination with high definition gamma spectrometry rather than the radionuclide vector method in combination with gross gamma activity measurement. The second method is currently used in Slovak nuclear facilities. The international (IAEA) and Slovak national (UVZ) authorities recognize both approaches, although high definition gamma spectrometry characterizes the radionuclides more properly. (authors)

  12. The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    He Yan

    2016-01-01

    Full Text Available The particle swarm optimization (PSO is an optimization algorithm based on intelligent optimization. Parameters selection of PSO will play an important role in performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed when the control parameters vary, including particle number, accelerate constant, inertia weight and maximum limited velocity. And then PSO with dynamic parameters has been applied on the neural network training for gearbox fault diagnosis, the results with different parameters of PSO are compared and analyzed. At last some suggestions for parameters selection are proposed to improve the performance of PSO.
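
    As a concrete illustration of the control parameters discussed above (particle number, acceleration constants, inertia weight and maximum limited velocity), the following sketch implements a plain global-best PSO on a toy objective. It is a generic textbook PSO with assumed parameter values, not the authors' fault-diagnosis configuration or their neural-network training setup.

    ```python
    import numpy as np

    # Generic global-best PSO sketch showing the control parameters the abstract
    # discusses: swarm size, acceleration constants c1/c2, inertia weight w and
    # the maximum limited velocity. Toy objective and assumed parameter values.

    def pso(objective, dim=2, n_particles=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, v_max=0.5, bounds=(-5.0, 5.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
        v = np.zeros_like(x)                                  # velocities
        pbest = x.copy()
        pbest_val = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            # velocity update: inertia + cognitive + social terms
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -v_max, v_max)                     # velocity clamping
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(objective, 1, x)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    if __name__ == "__main__":
        sphere = lambda p: float(np.sum(p ** 2))              # toy objective
        best, best_val = pso(sphere)
        print("best position:", best, "best value:", best_val)
    ```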

  13. Investigating the performance of neural network backpropagation algorithms for TEC estimations using South African GPS data

    Science.gov (United States)

    Habarulema, J. B.; McKinnell, L.-A.

    2012-05-01

    In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimations using identical datasets in models development and verification processes. Investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS) and the main objective was to find out the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MatLab based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide comparable results statistically, but differ significantly in terms of time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide to neural network users for choosing appropriate algorithms based on the availability of computation capabilities used for research.
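
    The SNNS and MATLAB training algorithms compared in the abstract are not reproduced here; as a loose analogue, the sketch below compares the training algorithms (solvers) available in scikit-learn's MLPRegressor on a synthetic smooth signal standing in for hourly TEC data. All data and settings are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Analogous (not identical) experiment: compare MLP training algorithms on a
    # synthetic diurnal signal standing in for hourly TEC data.

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 24, size=(2000, 1))                 # hour of day (toy input)
    y = (10 + 5 * np.sin(2 * np.pi * X[:, 0] / 24)
         + rng.normal(0, 0.5, size=2000))                  # synthetic "TEC-like" target

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for solver in ("lbfgs", "sgd", "adam"):
        model = MLPRegressor(hidden_layer_sizes=(20,), solver=solver,
                             max_iter=5000, random_state=0)
        model.fit(X_tr, y_tr)
        rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
        print(f"{solver:5s}  test RMSE = {rmse:.3f}")
    ```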

  14. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...

  15. FPGA-based real-time phase measuring profilometry algorithm design and implementation

    Science.gov (United States)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng

    2016-11-01

    Phase measuring profilometry (PMP) has been widely used in many fields, such as Computer Aided Verification (CAV) and Flexible Manufacturing Systems (FMS). High frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in the computer caused by numerous repetitive operations greatly limits the efficiency of data processing. An FPGA has the advantages of a pipelined architecture and parallel execution, which make it well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. The experiment verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.
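
    For reference, the arithmetic at the heart of PMP phase calculation is the standard N-step phase-shifting formula. The NumPy sketch below implements that formula in software; it is only an illustration of the computation, not the FPGA pipeline described in the paper.

    ```python
    import numpy as np

    # Software reference for the standard N-step phase-shifting formula that
    # underlies PMP phase calculation (not the FPGA pipeline from the paper).

    def wrapped_phase(images):
        """images: N fringe images I_n = A + B*cos(phi + 2*pi*n/N), shape (N, H, W).

        Returns the wrapped phase phi in (-pi, pi]."""
        I = np.asarray(images, dtype=float)
        N = I.shape[0]
        deltas = 2 * np.pi * np.arange(N) / N
        num = np.tensordot(np.sin(deltas), I, axes=1)   # sum_n I_n sin(delta_n)
        den = np.tensordot(np.cos(deltas), I, axes=1)   # sum_n I_n cos(delta_n)
        # minus sign follows from sum_n cos(phi + d_n) sin(d_n) = -(N/2) sin(phi)
        return np.arctan2(-num, den)

    if __name__ == "__main__":
        # Synthetic 4-step example: recover a known phase ramp.
        H, W, N = 64, 64, 4
        phi_true = np.linspace(-np.pi, np.pi, W)[None, :].repeat(H, axis=0)
        imgs = [128 + 100 * np.cos(phi_true + 2 * np.pi * n / N) for n in range(N)]
        print("max abs error:", np.abs(wrapped_phase(imgs) - phi_true).max())
    ```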

  16. Road weather management performance measures : 2012 update.

    Science.gov (United States)

    2013-08-01

    In 2007, the Road Weather Management Program (RWMP) conducted a study with stakeholders from the transportation and meteorological communities to define eleven performance measures that would enable the Federal Highway Administration (FHWA) to determ...

  17. Performance measures for metropolitan planning organizations.

    Science.gov (United States)

    2012-04-01

    Performance measurement is a topic of increasing importance to transportation agencies, as issues with funding shortfalls and concerns about transportation system efficiency lead to a shift in how transportation decision making is carried out. In...

  18. Smart city performance measurement framework. CITYkeys

    NARCIS (Netherlands)

    Airaksinen, M.; Seppa, I.P.; Huovilla, A.; Neumann, H.M.; Iglar, B.; Bosch, P.R.

    2017-01-01

    This paper presents a holistic performance measurement framework for harmonized and transparent monitoring and comparability of the European cities activities during the implementation of Smart City solutions. The work methodology was based on extensive collaboration and communication with European

  19. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    Science.gov (United States)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within  ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and  >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within  ±3% and distal fall-off to within 2

  20. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    Science.gov (United States)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within  ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and  >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within  ±3% and distal fall-off to within 2

  1. Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency

    KAUST Repository

    Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack

    2011-01-01

    This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine

  2. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed; computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  3. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.

  4. A new algorithm for reducing the workload of experts in performing systematic reviews.

    Science.gov (United States)

    Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter

    2010-01-01

    To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
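
    The main performance measure quoted above, work saved over sampling at 95% recall, is commonly computed as WSS@95 = (TN + FN)/N - 0.05, evaluated at the score threshold that still retrieves at least 95% of the relevant articles. The sketch below evaluates that metric on synthetic classifier scores; the data and function are illustrative assumptions, not the FCNB/WE implementation.

    ```python
    import numpy as np

    # Sketch of "work saved over sampling" at a recall target, on synthetic scores.

    def wss_at_recall(y_true, scores, recall_target=0.95):
        y_true = np.asarray(y_true, dtype=bool)
        order = np.argsort(scores)[::-1]                 # screen highest scores first
        y_sorted = y_true[order]
        cum_pos = np.cumsum(y_sorted)
        n_pos, n = y_true.sum(), len(y_true)
        # smallest number of top-ranked articles that reaches the recall target
        k = int(np.searchsorted(cum_pos, np.ceil(recall_target * n_pos)) + 1)
        tn = np.sum(~y_sorted[k:])                       # irrelevant articles skipped
        fn = np.sum(y_sorted[k:])                        # relevant articles missed
        return (tn + fn) / n - (1.0 - recall_target)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        y = rng.random(1000) < 0.1                       # ~10% relevant articles
        scores = y * rng.normal(1.0, 1.0, 1000) + (~y) * rng.normal(0.0, 1.0, 1000)
        print(f"WSS@95%: {wss_at_recall(y, scores):.3f}")
    ```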

  5. Work zone performance measures pilot test.

    Science.gov (United States)

    2011-04-01

    Currently, a well-defined and validated set of metrics to use in monitoring work zone performance does not exist. This pilot test was conducted to assist state DOTs in identifying what work zone performance measures can and should be targeted, what...

  6. ASUPT Automated Objective Performance Measurement System.

    Science.gov (United States)

    Waag, Wayne L.; And Others

    To realize its full research potential, a need exists for the development of an automated objective pilot performance evaluation system for use in the Advanced Simulation in Undergraduate Pilot Training (ASUPT) facility. The present report documents the approach taken for the development of performance measures and also presents data collected…

  7. Environmental Measurements Laboratory 2002 Unit Performance Plan

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-10-01

    This EML Unit Performance Plan provides the key goals and performance measures for FY 2002 and continuing to FY 2003. The purpose of the Plan is to inform EML's stakeholders and customers of the Laboratory's products and services, and its accomplishments and future challenges. Also incorporated in the Unit Performance Plan is EML's Communication Plan for FY 2002.

  8. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    Science.gov (United States)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complimentary maneuver patterns when multiple resolutions by the same aircraft were required.

  9. Measuring performance in virtual reality phacoemulsification surgery

    Science.gov (United States)

    Söderberg, Per; Laurell, Carl-Gustaf; Simawi, Wamidh; Skarman, Eva; Nordh, Leif; Nordqvist, Per

    2008-02-01

    We have developed a virtual reality (VR) simulator for phacoemulsification surgery. The current work aimed at developing a relative performance index that characterizes the performance of an individual trainee. We recorded measurements of 28 response variables during three iterated surgical sessions in 9 experienced cataract surgeons, separately for the sculpting phase and the evacuation phase of phacoemulsification surgery and compared their outcome to that of a reference group of naive trainees. We defined an individual overall performance index, an individual class specific performance index and an individual variable specific performance index. We found that on an average the experienced surgeons performed at a lower level than a reference group of naive trainees but that this was particularly attributed to a few surgeons. When their overall performance index was further analyzed as class specific performance index and variable specific performance index it was found that the low level performance was attributed to a behavior that is acceptable for an experienced surgeon but not for a naive trainee. It was concluded that relative performance indices should use a reference group that corresponds to the measured individual since the definition of optimal surgery may vary among trainee groups depending on their level of experience.

  10. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    International Nuclear Information System (INIS)

    Mantini, D; II, K E Hild; Alleva, G; Comani, S

    2006-01-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times
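
    As a toy stand-in for the benchmark described above, the sketch below separates synthetic three-channel mixtures with scikit-learn's FastICA and reports a simple signal-to-interference ratio per source after matching estimated components to sources by correlation. The synthetic sources and the SIR bookkeeping are assumptions for illustration; the paper's six-algorithm fMCG setup is not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Toy ICA separation with a per-source SIR report (illustrative only).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 5000)
    sources = np.c_[np.sin(2 * np.pi * 1.2 * t),          # slow rhythm (toy)
                    np.sign(np.sin(2 * np.pi * 2.1 * t)),  # faster rhythm (toy)
                    rng.normal(size=t.size)]               # noise source
    A = rng.normal(size=(3, 3))                            # random mixing matrix
    X = sources @ A.T                                      # observed mixtures

    S_hat = FastICA(n_components=3, random_state=0).fit_transform(X)

    for i in range(sources.shape[1]):
        # match each true source to its most correlated estimated component
        corr = [abs(np.corrcoef(sources[:, i], S_hat[:, j])[0, 1]) for j in range(3)]
        j = int(np.argmax(corr))
        s = (sources[:, i] - sources[:, i].mean()) / sources[:, i].std()
        e = (S_hat[:, j] - S_hat[:, j].mean()) / S_hat[:, j].std()
        e *= np.sign(np.dot(s, e))                         # fix arbitrary sign
        sir_db = 10 * np.log10(np.sum(s ** 2) / np.sum((s - e) ** 2))
        print(f"source {i}: matched component {j}, SIR = {sir_db:.1f} dB")
    ```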

  11. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    Science.gov (United States)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite in terms of accuracy, convergence time, amount of memory, and computation time. The latter is calculated in two ways, using a personal computer and also using the On-Board Computer 750 (OBC 750) that is used in many SSTL Earth observation missions. This comparative study can serve as a design aid for choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the more logical choice.

  12. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

    In this paper, we analyze the performance of adaptive blind algorithms – i.e. Kaiser Constant Modulus Algorithm (KCMA), Hamming CMA (HAMCMA) – with CMA in a wireless cellular communication system using digital modulation technique. These blind algorithms are used in digital signal processor of adaptive antenna to make it smart and change weights of the antenna array system dynamically. The simulation results revealed that KCMA and HAMCMA provide minimum mean square error (MSE) with 1.247 dB and 1.077 dB antenna gain enhancement, 75% reduction in bit error rate (BER) respectively over that of CMA. Therefore, KCMA and HAMCMA algorithms give a cost effective solution for a communication system
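
    For orientation, the baseline constant modulus algorithm updates the array weights by the stochastic gradient rule y = w^H x, e = y(|y|^2 - R2), w <- w - mu * x * conj(e). The sketch below applies that baseline update to synthetic array snapshots; the Kaiser- and Hamming-windowed variants (KCMA, HAMCMA) evaluated in the paper are not reproduced, and all scenario parameters are assumed.

    ```python
    import numpy as np

    # Baseline constant modulus algorithm (CMA) sketch on synthetic array data.

    def cma_beamformer(X, mu=1e-3, R2=1.0, n_iter=None):
        """X: (n_snapshots, n_elements) complex array snapshots."""
        n_snap, n_elem = X.shape
        w = np.zeros(n_elem, dtype=complex)
        w[0] = 1.0                                        # initial weight vector
        n_iter = n_iter or n_snap
        for k in range(n_iter):
            x = X[k % n_snap]
            y = np.vdot(w, x)                             # y = w^H x
            e = y * (abs(y) ** 2 - R2)                    # constant-modulus error term
            w = w - mu * x * np.conj(e)                   # stochastic gradient step
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_elem, n_snap = 8, 4000
        theta = np.deg2rad(20)                            # assumed signal direction
        a = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(theta))  # steering vector
        s = np.exp(1j * 2 * np.pi * rng.random(n_snap))   # constant-modulus signal
        noise = (rng.normal(size=(n_snap, n_elem)) +
                 1j * rng.normal(size=(n_snap, n_elem))) / np.sqrt(2) * 0.1
        X = np.outer(s, a) + noise
        w = cma_beamformer(X)
        y = X @ np.conj(w)
        print("output modulus spread:", np.std(np.abs(y)))
    ```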

  13. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
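
    The exact GI/M/1 quantities that such approximations are usually checked against follow from the classical result that an arriving customer sees a geometric number of customers with parameter sigma, where sigma is the root in (0,1) of sigma = A*(mu(1 - sigma)) and A* is the Laplace-Stieltjes transform of the interarrival time; the mean queueing delay is then Wq = sigma/(mu(1 - sigma)). The sketch below computes these benchmark values numerically, using Erlang-2 interarrivals purely as an assumed example.

    ```python
    # Exact GI/M/1 benchmark: solve sigma = A*(mu*(1 - sigma)) by fixed-point
    # iteration, then compute the mean queueing delay and waiting probability.
    # Erlang-2 interarrivals below are an assumed example.

    def gim1_measures(lst, mu, tol=1e-12):
        """lst(s): LST of the interarrival time; mu: service rate."""
        sigma = 0.5
        for _ in range(10_000):
            new = lst(mu * (1.0 - sigma))
            if abs(new - sigma) < tol:
                break
            sigma = new
        wq = sigma / (mu * (1.0 - sigma))             # mean wait in queue
        p_wait = sigma                                # P(arriving customer waits)
        return sigma, wq, p_wait

    if __name__ == "__main__":
        lam, mu = 0.8, 1.0
        # Erlang-2 interarrival times with mean 1/lam: LST = (2*lam/(2*lam + s))^2
        erlang2 = lambda s: (2 * lam / (2 * lam + s)) ** 2
        sigma, wq, p_wait = gim1_measures(erlang2, mu)
        print(f"sigma = {sigma:.4f}, Wq = {wq:.4f}, P(wait) = {p_wait:.4f}")
        # Sanity check: exponential interarrivals reduce to M/M/1, sigma = lam/mu.
        expo = lambda s: lam / (lam + s)
        print("M/M/1 check, sigma =", round(gim1_measures(expo, mu)[0], 4))
    ```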

  14. The Optimal Wavelengths for Light Absorption Spectroscopy Measurements Based on Genetic Algorithm-Particle Swarm Optimization

    Science.gov (United States)

    Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng

    2018-03-01

    To select the optimal wavelengths in the light extinction spectroscopy measurement, genetic algorithm-particle swarm optimization (GAPSO) based on genetic algorithm (GA) and particle swarm optimization (PSO) is adopted. The change of the optimal wavelength positions in different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion effects of the particle size distribution are obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. Based on this, the measured data of standard particles are used to verify the algorithm.

  15. A method of measuring and correcting tilt of anti - vibration wind turbines based on screening algorithm

    Science.gov (United States)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing it on the tower of the wind turbine as well as in the nacelle. Next, the Kalman filter algorithm is used to filter effectively by establishing a state-space model for signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost and vibration resistance, and it has a wide range of application and promotion value.
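
    A minimal scalar Kalman filter of the kind described above (random-walk state model for a slowly varying inclination, noisy accelerometer-derived angle as the measurement) is sketched below. The noise variances and simulated data are assumed values, not ADXL203 device parameters or the authors' state-space model.

    ```python
    import numpy as np

    # Minimal scalar Kalman filter for an inclination angle estimated from noisy
    # accelerometer readings. All noise values below are assumptions.

    def kalman_tilt(measurements, q=1e-6, r=1e-2, x0=0.0, p0=1.0):
        """Random-walk state model: x_k = x_{k-1} + w,  z_k = x_k + v."""
        x, p = x0, p0
        estimates = []
        for z in measurements:
            p = p + q                      # predict: state is a slow random walk
            k = p / (p + r)                # Kalman gain
            x = x + k * (z - x)            # update with the new measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_tilt_deg = 2.5                                   # constant tower tilt
        # accelerometer-derived tilt samples corrupted by vibration "noise"
        z = true_tilt_deg + rng.normal(0.0, 0.8, size=500)
        est = kalman_tilt(np.deg2rad(z), q=1e-7, r=np.deg2rad(0.8) ** 2)
        print(f"raw mean = {z.mean():.3f} deg, "
              f"filtered final = {np.rad2deg(est[-1]):.3f} deg")
    ```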

  16. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as measurement based quantum computing (MBQC) algorithm1, 2 via a large cluster state. As means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8 element list (a non-trivial version of Grover's algorithm3 with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized into any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feed forward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  17. Performance Comparison of Different System Identification Algorithms for FACET and ATF2

    CERN Document Server

    Pfingstner, J; Schulte, D

    2013-01-01

    Good system knowledge is an essential ingredient for the operation of modern accelerator facilities. For example, beam-based alignment algorithms and orbit feedbacks rely strongly on a precise measurement of the orbit response matrix. The quality of the measurement of this matrix can be improved over time by statistically combining the effects of small system excitations with the help of system identification algorithms. These small excitations can be applied in a parasitic mode without stopping the accelerator operation (on-line). In this work, different system identification algorithms are used in simulation studies for the response matrix measurement at ATF2. The results for ATF2 are finally compared with the results for FACET, latter originating from an earlier work.
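
    The statistical combination of small excitations described above amounts, in its simplest form, to a least-squares estimate of the orbit response matrix from recorded corrector kicks and BPM readings, R_hat = dY * pinv(dU). The sketch below illustrates that idea on random data; it is a generic toy example, not the specific identification algorithms compared for ATF2 and FACET.

    ```python
    import numpy as np

    # Toy least-squares identification of a response matrix from small excitations.
    rng = np.random.default_rng(0)
    n_bpm, n_corr, n_pulses = 30, 10, 500

    R_true = rng.normal(size=(n_bpm, n_corr))            # "true" orbit response matrix
    dU = 1e-3 * rng.normal(size=(n_corr, n_pulses))      # small parasitic excitations
    noise = 5e-4 * rng.normal(size=(n_bpm, n_pulses))    # BPM reading noise
    dY = R_true @ dU + noise                             # observed orbit changes

    R_hat = dY @ np.linalg.pinv(dU)                      # least-squares estimate

    err = np.linalg.norm(R_hat - R_true) / np.linalg.norm(R_true)
    print(f"relative estimation error: {err:.3%}")
    ```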

  18. Reconsidering the measurement of ancillary service performance.

    Science.gov (United States)

    Griffin, D T; Rauscher, J A

    1987-08-01

    Prospective payment reimbursement systems have forced hospitals to review their costs more carefully. The result of the increased emphasis on costs is that many hospitals use costs, rather than margin, to judge the performance of ancillary services. However, arbitrary selection of performance measures for ancillary services can result in managerial decisions contrary to hospital objectives. Managerial accounting systems provide models which assist in the development of performance measures for ancillary services. Selection of appropriate performance measures provides managers with the incentive to pursue goals congruent with those of the hospital overall. This article reviews the design and implementation of managerial accounting systems, and considers the impact of prospective payment systems and proposed changes in capital reimbursement on this process.

  19. Measurement Of Shariah Stock Performance Using Risk Adjusted Performance

    Directory of Open Access Journals (Sweden)

    Zuhairan Y Yunan

    2015-03-01

    Full Text Available The aim of this research is to analyze shariah stock performance using risk-adjusted performance methods. There are three parameters to measure stock performance: Sharpe, Treynor, and Jensen. These performance measurements calculate the return and risk factors of shariah stocks. The data used in this research are stock data from the Jakarta Islamic Index, the sampling method is purposive sampling, and ten companies are used as the sample. The result shows that, across the three parameters, the stocks with the best performance are AALI, ANTM, ASII, CPIN, INDF, KLBF, LSIP, and UNTR. DOI: 10.15408/aiq.v7i1.1364
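
    The three measures named above have standard definitions: Sharpe = (Rp - Rf)/sigma_p, Treynor = (Rp - Rf)/beta_p, and Jensen's alpha = Rp - (Rf + beta_p (Rm - Rf)). The sketch below computes them from synthetic monthly return series; the data are illustrative assumptions, not the Jakarta Islamic Index sample.

    ```python
    import numpy as np

    # Standard Sharpe, Treynor and Jensen measures on synthetic return series.

    def risk_adjusted_measures(rp, rm, rf):
        """rp: portfolio/stock returns, rm: market returns, rf: risk-free rate."""
        rp, rm = np.asarray(rp), np.asarray(rm)
        cov = np.cov(rp, rm)
        beta = cov[0, 1] / cov[1, 1]                      # market beta of the stock
        sharpe = (rp.mean() - rf) / rp.std(ddof=1)
        treynor = (rp.mean() - rf) / beta
        jensen = rp.mean() - (rf + beta * (rm.mean() - rf))
        return sharpe, treynor, jensen

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        rm = rng.normal(0.010, 0.04, 60)                  # market, 60 months (toy)
        rp = 0.002 + 1.2 * rm + rng.normal(0, 0.02, 60)   # a stock with beta ~ 1.2
        s, t, j = risk_adjusted_measures(rp, rm, rf=0.004)
        print(f"Sharpe = {s:.3f}, Treynor = {t:.4f}, Jensen alpha = {j:.4f}")
    ```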

  20. Conventional QT Variability Measurement vs. Template Matching Techniques: Comparison of Performance Using Simulated and Real ECG

    Science.gov (United States)

    Baumert, Mathias; Starc, Vito; Porta, Alberto

    2012-01-01

    Increased beat-to-beat variability in the QT interval (QTV) of ECG has been associated with increased risk for sudden cardiac death, but its measurement is technically challenging and currently not standardized. The aim of this study was to investigate the performance of commonly used beat-to-beat QT interval measurement algorithms. Three different methods (conventional, template stretching and template time shifting) were subjected to simulated data featuring typical ECG recording issues (broadband noise, baseline wander, amplitude modulation) and real short-term ECG of patients before and after infusion of sotalol, a QT interval prolonging drug. Among the three algorithms, the conventional algorithm was most susceptible to noise whereas the template time shifting algorithm showed superior overall performance on simulated and real ECG. None of the algorithms was able to detect increased beat-to-beat QT interval variability after sotalol infusion despite marked prolongation of the average QT interval. The QTV estimates of all three algorithms were inversely correlated with the amplitude of the T wave. In conclusion, template matching algorithms, in particular the time shifting algorithm, are recommended for beat-to-beat variability measurement of QT interval in body surface ECG. Recording noise, T wave amplitude and the beat-rejection strategy are important factors of QTV measurement and require further investigation. PMID:22860030
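
    The template time-shifting idea can be illustrated as follows: each beat's repolarization segment is slid against a template and the lag that best aligns the two is taken as that beat's QT deviation from the template QT interval. The sketch below applies this to synthetic beats; the waveform model, noise level and search range are assumptions, not the published algorithm implementation.

    ```python
    import numpy as np

    # Generic template time-shifting sketch on synthetic "T waves".

    def best_shift(template, beat, max_lag=25):
        """Return the integer lag (samples) minimizing the squared difference."""
        lags = list(range(-max_lag, max_lag + 1))
        costs = [np.sum((np.roll(beat, -lag) - template) ** 2) for lag in lags]
        return lags[int(np.argmin(costs))]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fs = 500                                           # sampling rate (Hz)
        t = np.arange(0, 0.40, 1 / fs)                     # 400 ms analysis window
        t_wave = lambda shift_ms: np.exp(-((t - 0.25 - shift_ms / 1000) ** 2)
                                         / (2 * 0.03 ** 2))
        template = t_wave(0.0)
        true_shifts_ms = rng.normal(0, 6, size=30)         # beat-to-beat QT variation
        est = []
        for s_ms in true_shifts_ms:
            beat = t_wave(s_ms) + rng.normal(0, 0.02, t.size)   # noisy beat
            est.append(best_shift(template, beat) * 1000 / fs)  # lag in ms
        est = np.array(est)
        print("estimated QT variability (SD, ms): %.2f" % est.std(ddof=1))
        print("true QT variability (SD, ms):      %.2f" % true_shifts_ms.std(ddof=1))
    ```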

  1. Conventional QT variability measurement vs. template matching techniques: comparison of performance using simulated and real ECG.

    Directory of Open Access Journals (Sweden)

    Mathias Baumert

    Full Text Available Increased beat-to-beat variability in the QT interval (QTV of ECG has been associated with increased risk for sudden cardiac death, but its measurement is technically challenging and currently not standardized. The aim of this study was to investigate the performance of commonly used beat-to-beat QT interval measurement algorithms. Three different methods (conventional, template stretching and template time shifting were subjected to simulated data featuring typical ECG recording issues (broadband noise, baseline wander, amplitude modulation and real short-term ECG of patients before and after infusion of sotalol, a QT interval prolonging drug. Among the three algorithms, the conventional algorithm was most susceptible to noise whereas the template time shifting algorithm showed superior overall performance on simulated and real ECG. None of the algorithms was able to detect increased beat-to-beat QT interval variability after sotalol infusion despite marked prolongation of the average QT interval. The QTV estimates of all three algorithms were inversely correlated with the amplitude of the T wave. In conclusion, template matching algorithms, in particular the time shifting algorithm, are recommended for beat-to-beat variability measurement of QT interval in body surface ECG. Recording noise, T wave amplitude and the beat-rejection strategy are important factors of QTV measurement and require further investigation.

  2. Developing Human Performance Measures (PSAM8)

    International Nuclear Information System (INIS)

    Jeffrey C. Joe

    2006-01-01

    Through the reactor oversight process (ROP), the U.S. Nuclear Regulatory Commission (NRC) monitors the performance of utilities licensed to operate nuclear power plants. The process is designed to assure public health and safety by providing reasonable assurance that licensees are meeting the cornerstones of safety and designated crosscutting elements. The reactor inspection program, together with performance indicators (PIs), and enforcement activities form the basis for the NRC's risk-informed, performance based regulatory framework. While human performance is a key component in the safe operation of nuclear power plants and is a designated cross-cutting element of the ROP, there is currently no direct inspection or performance indicator for assessing human performance. Rather, when human performance is identified as a substantive cross cutting element in any 1 of 3 categories (resources, organizational or personnel), it is then evaluated for common themes to determine if follow-up actions are warranted. However, variability in human performance occurs from day to day, across activities that vary in complexity, and workgroups, contributing to the uncertainty in the outcomes of performance. While some variability in human performance may be random, much of the variability may be attributed to factors that are not currently assessed. There is a need to identify and assess aspects of human performance that relate to plant safety and to develop measures that can be used to successfully assure licensee performance and indicate when additional investigation may be required. This paper presents research that establishes a technical basis for developing human performance measures. In particular, we discuss: (1) how historical data already gives some indication of connection between human performance and overall plant performance, (2) how industry led efforts to measure and model human performance and organizational factors could serve as a data source and basis for a

  3. Performance Test of Core Protection and Monitoring Algorithm with DLL for SMART Simulator Implementation

    International Nuclear Information System (INIS)

    Koo, Bonseung; Hwang, Daehyun; Kim, Keungkoo

    2014-01-01

    A multi-purpose best-estimate simulator for SMART is being established, which is intended to be used as a tool to evaluate the impacts of design changes on the safety performance, and to improve and/or optimize the operating procedure of SMART. In keeping with these intentions, a real-time model of the digital core protection and monitoring systems was developed and the real-time performance of the models was verified for various simulation scenarios. In this paper, a performance test of the core protection and monitoring algorithm with a DLL file for the SMART simulator implementation was performed. A DLL file of the simulator application code was made and several real-time evaluation tests were conducted for the steady-state and transient conditions with simulated system variables. A performance test of the core protection and monitoring algorithms for the SMART simulator was performed. A DLL file of the simulator version code was made and several real-time evaluation tests were conducted for various scenarios with a DLL file and simulated system variables. The results of all test cases showed good agreement with the reference results and some features caused by algorithm change were properly reflected to the DLL results. Therefore, it was concluded that the SCOPS S SIM and SCOMS S SIM algorithms and calculational capabilities are appropriate for the core protection and monitoring program in the SMART simulator

  4. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high; however, some problems arise in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region with four base stations. The localization performance analysis then starts from the four-base-station case by calculating the RMSE and GDOP variation. Subsequently, as the location parameters are changed (number of base stations, base station layout and so on), the performance trends of the TSOA localization algorithm are shown, revealing the TSOA localization characteristics and performance. The RMSE and GDOP trends demonstrate the anti-noise performance and robustness of the TSOA localization algorithm, which can be used to reduce the blind zone and the false location rate of MLAT systems.
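
    The GDOP figure used above is typically computed from the linearized measurement geometry as GDOP = sqrt(trace((H^T H)^-1)). The sketch below builds H for a TSOA-style model in which each measurement is the sum of the distances to a station and to a common reference station; this particular linearization and the station layout are assumptions for illustration, not the paper's configuration.

    ```python
    import numpy as np

    # GDOP sketch for a range-sum (TSOA-style) measurement model, with an assumed
    # linearization: each Jacobian row is the sum of the two unit vectors from the
    # station pair toward the target.

    def gdop_tsoa(p, stations, ref):
        """p: target position (2,), stations: (n, 2), ref: reference station (2,)."""
        p, stations, ref = map(np.asarray, (p, stations, ref))
        u_ref = (p - ref) / np.linalg.norm(p - ref)
        H = []
        for s in stations:
            u_i = (p - s) / np.linalg.norm(p - s)
            H.append(u_i + u_ref)            # gradient of ||p - s_i|| + ||p - ref||
        H = np.array(H)
        return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

    if __name__ == "__main__":
        ref = [0.0, 0.0]
        stations = [[10e3, 0.0], [0.0, 10e3], [-10e3, 0.0], [0.0, -10e3]]
        for target in ([2e3, 1e3], [8e3, 8e3], [20e3, 0.5e3]):
            print(f"target {target}: GDOP = {gdop_tsoa(target, stations, ref):.2f}")
    ```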

  5. Measurement of the inclusive jet cross section using the midpoint algorithm in Run II at CDF

    Energy Technology Data Exchange (ETDEWEB)

    Group, Robert Craig [Univ. of Florida, Gainesville, FL (United States)

    2006-01-01

    A measurement is presented of the inclusive jet cross section using the Midpoint jet clustering algorithm in five different rapidity regions. This is the first analysis which measures the inclusive jet cross section using the Midpoint algorithm in the forward region of the detector. The measurement is based on more than 1 fb-1 of integrated luminosity of Run II data taken by the CDF experiment at the Fermi National Accelerator Laboratory. The results are consistent with the predictions of perturbative quantum chromodynamics.

  6. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; defines Valid Data as data having known and documented paths of: Origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results.

  7. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; defines Valid Data as data having known and documented paths of: Origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results.

  8. The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector

    Directory of Open Access Journals (Sweden)

    Wu Qidi

    2014-01-01

    Full Text Available Due to the limited accuracy of conventional image restoration methods, this paper presents a nonlocal sparse reconstruction algorithm with similarity measurement. To improve the quality of the restoration results, we propose two schemes, for dictionary learning and sparse coding respectively. In the dictionary learning part, we measure the similarity between patches from the degraded image by constructing a Shearlet feature vector, classify the patches into different classes according to similarity, and train a cluster dictionary for each class; cascading these cluster dictionaries yields the universal dictionary. In the sparse coding part, we propose a novel optimization objective with a coding residual term, which can suppress the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure present in nonlocal patches and of the coding residual suppression, the proposed method shows an advantage in both visual perception and PSNR compared to conventional methods.

  9. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  10. Performance evaluation of the ORNL multi-elemental XRF analysis algorithms

    Energy Technology Data Exchange (ETDEWEB)

    McElroy, Robert Dennis [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    Hybrid K-Edge Densitometer (HKED) systems integrate both K-Edge Densitometry (KED) and X-Ray Fluorescence (XRF) analyses to provide accurate rapid, assay results of the uranium and plutonium content of dissolver solution samples from nuclear fuel reprocessing facilities. Introduced for international safeguards applications in the late 1980s, the XRF component of the hybrid analyses is limited to quantification of U and Pu over a narrow range of U:Pu concentration ratios in the vicinity of ≈100. The analysis was further limited regarding the presence of minor actinide components where only a single minor actinide (typically Am) is included in the analysis and then only treated as an interference. The evolving nuclear fuel cycle has created the need to assay more complex dissolver solutions where uranium may no longer be the dominant actinide in the solution and the concentrations of the so called minor actinides (e.g., Th, Np, Am, and Cm) are sufficiently high that they can no longer be treated as impurities and ignored. Extension of the traditional HKED Region of Interest (ROI) based analysis to include these additional actinides is not possible due to the increased complexity of the XRF spectra. Oak Ridge National Laboratory (ORNL) has developed a spectral fitting approach to the HKED XRF measurement with an enhanced algorithm set to accommodate these complex XRF spectra. This report provides a summary of the spectral fitting methodology and examines the performance of these algorithms using data obtained from the ORNL HKED system, as well as data provided by the International Atomic Energy Agency (IAEA) on actual dissolver solutions.

  11. Ambulatory care registered nurse performance measurement.

    Science.gov (United States)

    Swan, Beth Ann; Haas, Sheila A; Chow, Marilyn

    2010-01-01

    On March 1-2, 2010, a state-of-the-science invitational conference titled "Ambulatory Care Registered Nurse Performance Measurement" was held to focus on measuring quality at the RN provider level in ambulatory care. The conference was devoted to ambulatory care RN performance measurement and quality of health care. The specific emphasis was on formulating a research agenda and developing a strategy to study the testable components of the RN role related to care coordination and care transitions, improving patient outcomes, decreasing health care costs, and promoting sustainable system change. The objectives were achieved through presentations and discussion among expert inter-professional participants from nursing, public health, managed care, research, practice, and policy. Conference speakers identified priority areas for a unified practice, policy, and research agenda. Crucial elements of the strategic dialogue focused on issues and implications for nursing and inter-professional practice, quality, and pay-for-performance.

  12. A comparison of thermal algorithms of fuel rod performance code systems

    International Nuclear Information System (INIS)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C.

    2003-11-01

    The goal of fuel rod performance analysis is to identify the robustness of a fuel rod with its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The thermal algorithms of the above codes are investigated, including methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance

  13. A comparison of thermal algorithms of fuel rod performance code systems

    Energy Technology Data Exchange (ETDEWEB)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C

    2003-11-01

    The goal of fuel rod performance analysis is to identify the robustness of a fuel rod with its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The thermal algorithms of the above codes are investigated, including methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance.

  14. HIV misdiagnosis in sub-Saharan Africa: performance of diagnostic algorithms at six testing sites

    Science.gov (United States)

    Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng’ang’a, Anne; Andre, Bita; Zahinda, Jean-Paul BN; Fransen, Katrien; Page, Anne-Laure

    2017-01-01

    Abstract Introduction: We evaluated the diagnostic accuracy of HIV testing algorithms at six programmes in five sub-Saharan African countries. Methods: In this prospective multisite diagnostic evaluation study (Conakry, Guinea; Kitgum, Uganda; Arua, Uganda; Homa Bay, Kenya; Doula, Cameroun and Baraka, Democratic Republic of Congo), samples from clients (greater than equal to five years of age) testing for HIV were collected and compared to a state-of-the-art algorithm from the AIDS reference laboratory at the Institute of Tropical Medicine, Belgium. The reference algorithm consisted of an enzyme-linked immuno-sorbent assay, a line-immunoassay, a single antigen-enzyme immunoassay and a DNA polymerase chain reaction test. Results: Between August 2011 and January 2015, over 14,000 clients were tested for HIV at 6 HIV counselling and testing sites. Of those, 2786 (median age: 30; 38.1% males) were included in the study. Sensitivity of the testing algorithms ranged from 89.5% in Arua to 100% in Douala and Conakry, while specificity ranged from 98.3% in Doula to 100% in Conakry. Overall, 24 (0.9%) clients, and as many as 8 per site (1.7%), were misdiagnosed, with 16 false-positive and 8 false-negative results. Six false-negative specimens were retested with the on-site algorithm on the same sample and were found to be positive. Conversely, 13 false-positive specimens were retested: 8 remained false-positive with the on-site algorithm. Conclusions: The performance of algorithms at several sites failed to meet expectations and thresholds set by the World Health Organization, with unacceptably high rates of false results. Alongside the careful selection of rapid diagnostic tests and the validation of algorithms, strictly observing correct procedures can reduce the risk of false results. In the meantime, to identify false-positive diagnoses at initial testing, patients should be retested upon initiating antiretroviral therapy. PMID:28691437

  15. Procedure to Measure Indoor Lighting Energy Performance

    Energy Technology Data Exchange (ETDEWEB)

    Deru, M.; Blair, N.; Torcellini, P.

    2005-10-01

    This document provides standard definitions of performance metrics and methods to determine them for the energy performance of building interior lighting systems. It can be used for existing buildings and for proposed buildings. The primary users for whom these documents are intended are building energy analysts and technicians who design, install, and operate data acquisition systems, and who analyze and report building energy performance data. Typical results from the use of this procedure are the monthly and annual energy used for lighting, energy savings from occupancy or daylighting controls, and the percent of the total building energy use that is used by the lighting system. The document is not specifically intended for retrofit applications. However, it does complement Measurement and Verification protocols that do not provide detailed performance metrics or measurement procedures.

  16. Performance measures for transform data coding.

    Science.gov (United States)

    Pearl, J.; Andrews, H. C.; Pratt, W. K.

    1972-01-01

    This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.

  17. Performance in population models for count data, part II: a new SAEM algorithm

    Science.gov (United States)

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795

  18. Retrieval of Aerosol Microphysical Properties from AERONET Photo-Polarimetric Measurements. 2: A New Research Algorithm and Case Demonstration

    Science.gov (United States)

    Xu, Xiaoguang; Wang, Jun; Zeng, Jing; Spurr, Robert; Liu, Xiong; Dubovik, Oleg; Li, Li; Li, Zhengqiang; Mishchenko, Michael I.; Siniuk, Aliaksandr

    2015-01-01

    A new research algorithm is presented here as the second part of a two-part study to retrieve aerosol microphysical properties from the multispectral and multiangular photopolarimetric measurements taken by the Aerosol Robotic Network's (AERONET's) new-generation Sun photometer. The algorithm uses an advanced UNified and Linearized Vector Radiative Transfer Model and incorporates a statistical optimization approach. While the new algorithm has heritage from the AERONET operational inversion algorithm in constraining a priori and retrieval smoothness, it has two new features. First, the new algorithm retrieves the effective radius, effective variance, and total volume of aerosols associated with a continuous bimodal particle size distribution (PSD) function, while the AERONET operational algorithm retrieves aerosol volume over 22 size bins. Second, our algorithm retrieves complex refractive indices for both fine and coarse modes, while the AERONET operational algorithm assumes a size-independent aerosol refractive index. Mode-resolved refractive indices can improve the estimate of the single-scattering albedo (SSA) for each aerosol mode and thus facilitate the validation of satellite products and chemistry transport models. We applied the algorithm to a suite of real cases over the Beijing_RADI site and found that our retrievals are overall consistent with AERONET operational inversions but can offer mode-resolved refractive index and SSA with acceptable accuracy for aerosol composed of spherical particles. Along with the retrieval using both radiance and polarization, we also performed radiance-only retrieval to demonstrate the improvements gained by adding polarization in the inversion. Contrast analysis indicates that with polarization, retrieval error can be reduced by over 50% in PSD parameters, 10-30% in the refractive index, and 10-40% in SSA, which is consistent with the theoretical analysis presented in the companion paper of this two-part study.

  19. A Study on the Enhanced Best Performance Algorithm for the Just-in-Time Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sivashan Chetty

    2015-01-01

    Full Text Available The Just-In-Time (JIT) scheduling problem is an important subject of study. It essentially constitutes the problem of scheduling critical business resources in an attempt to optimize given business objectives. This problem is NP-Hard in nature, hence requiring efficient solution techniques. To solve the JIT scheduling problem presented in this study, a new local search metaheuristic algorithm, namely, the enhanced Best Performance Algorithm (eBPA), is introduced. This is part of the initial study of the algorithm for scheduling problems. The current problem setting is the allocation of a large number of jobs required to be scheduled on multiple and identical machines which run in parallel. The due date of a job is characterized by a window frame of time, rather than a specific point in time. The performance of the eBPA is compared against Tabu Search (TS) and Simulated Annealing (SA). SA and TS are well-known local search metaheuristic algorithms. The results show the potential of the eBPA as a metaheuristic algorithm.

  20. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and the associated high-performance computing needs, increase and challenge existing computing infrastructures. Purchasing computing power as a commodity using a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data will address best security practices that exist within cloud services, such as AWS.

  1. Performance of humans vs. exploration algorithms on the Tower of London Test.

    Directory of Open Access Journals (Sweden)

    Eric Fimbel

    Full Text Available The Tower of London Test (TOL), used to assess executive functions, was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the tasks and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves) the execution time of young participants did not increase significantly, whereas for exploration algorithms the execution time keeps on increasing exponentially. A pre- and post-test control task showed a 25% improvement of visuo-motor skills, but this was insufficient to explain this result. The findings suggest that naive participants used systematic exploration to solve the problem but, under the effect of practice, developed markedly more efficient strategies using the information acquired during the test.

  2. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and Other Meteorological Measurements

    Science.gov (United States)

    Schultz, Christopher J.; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2012-01-01

    The lightning jump algorithm has a robust history in correlating upward trends in lightning to severe and hazardous weather occurrence. The algorithm uses the correlation between the physical principles that govern an updraft's ability to produce microphysical and kinematic conditions conducive for electrification and its role in the development of severe weather conditions. Recent work has demonstrated that the lightning jump algorithm concept holds significant promise in the operational realm, aiding in the identification of thunderstorms that have potential to produce severe or hazardous weather. However, a large amount of work still needs to be completed in spite of these positive results. The total lightning jump algorithm is not a stand-alone concept that can be used independent of other meteorological measurements, parameters, and techniques. For example, the algorithm is highly dependent upon thunderstorm tracking to build lightning histories on convective cells. Current tracking methods show that thunderstorm cell tracking is most reliable and cell histories are most accurate when radar information is incorporated with lightning data. In the absence of radar data, the cell tracking is a bit less reliable but the value added by the lightning information is much greater. For optimal application, the algorithm should be integrated with other measurements that assess storm scale properties (e.g., satellite, radar). Therefore, the recent focus of this research effort has been assessing the lightning jump's relation to thunderstorm tracking, meteorological parameters, and its potential uses in operational meteorology. Furthermore, the algorithm must be tailored for the optically-based GOES-R Geostationary Lightning Mapper (GLM), as what has been observed using Very High Frequency Lightning Mapping Array (VHF LMA) measurements will not exactly translate to what will be observed by GLM due to resolution and other instrument differences. Herein, we present some of

  3. Algoritmi selektivnog šifrovanja - pregled sa ocenom performansi / Selective encryption algorithms: Overview with performance evaluation

    Directory of Open Access Journals (Sweden)

    Boriša Ž. Jovanović

    2010-10-01

    As its name says, selective encryption consists of encrypting only a subset of the data. The aim of selective encryption is to reduce the amount of data to encrypt while preserving a sufficient level of security. Theoretical foundation of selective encryption: the first theoretical foundation of selective encryption was given indirectly by Claude Elwood Shannon in his work on the communication theory of secrecy systems. It is well known that the statistics of image and video data differ greatly from those of classical text data; indeed, image and video data are strongly correlated and have strong spatial/temporal redundancy. Evaluation criteria for selective encryption algorithm performance evaluation: a set of evaluation criteria is needed to help evaluate and compare selective encryption algorithms - tunability, visual degradation, cryptographic security, encryption ratio, compression friendliness, format compliance and error tolerance. Classification of selective encryption algorithms: one possible classification of selective encryption algorithms is relative to when encryption is performed with respect to compression. This classification is adequate since it has intrinsic consequences for the behavior of selective encryption algorithms. Three classes of algorithms are considered: precompression, incompression and postcompression. Overview of selective encryption algorithms: in accordance with the previously defined classification, selective encryption algorithms were compared, briefly described with their advantages and disadvantages, and their quality was assessed. Applications: selective encryption mechanisms have become more and more important and can be applied in many different areas; some potential application areas are monitoring of encrypted content; PDAs (Personal Digital Assistants), mobile phones and other mobile terminals; multiple encryptions; and transcodability/scalability of encrypted content. Conclusion: as we can see through the foregoing analysis, we can notice

  4. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools.

    Science.gov (United States)

    Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte

    2016-01-01

    The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability as well as the distribution of grades achieved. This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short essay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and - in analogy to impact factors and third party grants - a ranking among faculty. Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds.
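
    The abstract names item difficulty, item discrimination, reliability and the grade distribution as the ingredients of the quality score, but does not reproduce the weighting used in Rostock. The sketch below therefore only illustrates how the standard psychometric ingredients can be computed from a binary response matrix (rows = students, columns = items); the final combination into a single funding score is a hypothetical placeholder, not the published algorithm.

```python
import numpy as np

def exam_quality_metrics(responses: np.ndarray):
    """Compute standard multiple-choice exam quality ingredients.

    responses: binary matrix (n_students x n_items), 1 = item answered correctly.
    Returns item difficulties, item discriminations (corrected item-total
    correlation) and the KR-20 reliability coefficient.
    """
    n_students, n_items = responses.shape
    difficulty = responses.mean(axis=0)          # proportion correct per item
    total = responses.sum(axis=1)                # total score per student

    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]           # exclude the item itself
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]

    item_var = difficulty * (1 - difficulty)
    kr20 = (n_items / (n_items - 1)) * (1 - item_var.sum() / total.var(ddof=0))
    return difficulty, discrimination, kr20

# Hypothetical combination into one score -- NOT the published weighting.
def quality_score(difficulty, discrimination, kr20):
    in_range = np.mean((difficulty >= 0.4) & (difficulty <= 0.85))
    return 0.4 * kr20 + 0.4 * discrimination.mean() + 0.2 * in_range
```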

  5. An algorithm for calculating exam quality as a basis for performance-based allocation of funds at medical schools

    Directory of Open Access Journals (Sweden)

    Kirschstein, Timo

    2016-05-01

    Full Text Available Objective: The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. Methods: In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability as well as the distribution of grades achieved. Results: This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short essay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and - in analogy to impact factors and third party grants - a ranking among faculty. Conclusion: Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds.

  6. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    Science.gov (United States)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability as compared to the OPC.

  7. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. Multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in the applications with multi-band frequency sparse signals...

  8. CoSMOS: Performance of Kurtosis Algorithm for Radio Frequency Interference Detection and Mitigation

    DEFF Research Database (Denmark)

    Misra, Sidharth; Kristensen, Steen Savstrup; Skou, Niels

    2007-01-01

    The performance of a previously developed algorithm for Radio Frequency Interference (RFI) detection and mitigation is experimentally evaluated. Results obtained from CoSMOS, an airborne campaign using a fully polarimetric L-band radiometer, are analyzed for this purpose. Data is collected using two

  9. Drug Safety Monitoring in Children: Performance of Signal Detection Algorithms and Impact of Age Stratification

    NARCIS (Netherlands)

    O.U. Osokogu (Osemeke); C. Dodd (Caitlin); A.C. Pacurariu (Alexandra C.); F. Kaguelidou (Florentia); D.M. Weibel (Daniel); M.C.J.M. Sturkenboom (Miriam)

    2016-01-01

    Introduction: Spontaneous reports of suspected adverse drug reactions (ADRs) can be analyzed to yield additional drug safety evidence for the pediatric population. Signal detection algorithms (SDAs) are required for these analyses; however, the performance of SDAs in the pediatric

  10. Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation

    International Nuclear Information System (INIS)

    Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.

    2013-01-01

    Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established.Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations.Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7
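
    The accuracy metric described above, the magnitude of three-dimensional vectors between true and DIR-predicted fiducial marker locations, can be summarized in a few lines of code. The following is a generic sketch of that evaluation step, not the authors' implementation; marker coordinates are assumed to be given in millimetres, and the example data are synthetic.

```python
import numpy as np

def dir_marker_errors(true_xyz: np.ndarray, predicted_xyz: np.ndarray):
    """Evaluate DIR accuracy from fiducial marker positions.

    true_xyz, predicted_xyz: (n_markers, 3) arrays of marker coordinates
    (e.g. in mm) in the deformed geometry. Returns the per-marker 3D error
    magnitudes plus their mean and maximum.
    """
    errors = np.linalg.norm(predicted_xyz - true_xyz, axis=1)
    return errors, errors.mean(), errors.max()

# Example with 16 randomly perturbed markers (synthetic data).
rng = np.random.default_rng(0)
true_xyz = rng.uniform(0, 50, size=(16, 3))
predicted_xyz = true_xyz + rng.normal(0, 1.5, size=(16, 3))
per_marker, mean_err, max_err = dir_marker_errors(true_xyz, predicted_xyz)
print(f"mean error = {mean_err:.2f} mm, max error = {max_err:.2f} mm")
```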

  11. The Aviation Performance Measuring System (APMS): An Integrated Suite of Tools for Measuring Performance and Safety

    Science.gov (United States)

    Statler, Irving C.; Connor, Mary M. (Technical Monitor)

    1998-01-01

    This is a report of work in progress. In it, I summarize the status of the research and development of the Aviation Performance Measuring System (APMS) for managing, processing, and analyzing digital flight-recorded data. The objectives of the NASA-FAA APMS research project are to establish a sound scientific and technological basis for flight-data analysis, to define an open and flexible architecture for flight-data analysis systems, and to articulate guidelines for a standardized database structure on which to continue to build future flight-data-analysis extensions. APMS offers to the air transport community an open, voluntary standard for flight-data-analysis software; a standard that will help to ensure suitable functionality and data interchangeability among competing software programs. APMS will develop and document the methodologies, algorithms, and procedures for data management and analyses to enable users to easily interpret the implications regarding safety and efficiency of operations. APMS does not entail the implementation of a nationwide flight-data-collection system. It is intended to provide technical tools to ease the large-scale implementation of flight-data analyses at both the air-carrier and the national-airspace levels in support of their Flight Operations and Quality Assurance (FOQA) Programs and Advanced Qualifications Programs (AQP). APMS cannot meet its objectives unless it develops tools that go substantially beyond the capabilities of the current commercially available software and supporting analytic methods that are mainly designed to count special events. These existing capabilities, while of proven value, were created primarily with the needs of aircrews in mind. APMS tools must serve the needs of the government and air carriers, as well as aircrews, to fully support the FOQA and AQP programs. They must be able to derive knowledge not only through the analysis of single flights (special-event detection), but also through

  12. Corporate Social Performance: From Output Measurement to Impact Measurement

    NARCIS (Netherlands)

    K.E.H. Maas (Karen)

    2009-01-01

    All organisations have social, environmental and economic impacts that affect people, their communities and the natural environment. Impacts include intended as well as unintended effects and negative as well as positive effects. Current practice in performance measurement tends to focus

  13. Application of data mining in performance measures

    Science.gov (United States)

    Chan, Michael F. S.; Chung, Walter W.; Wong, Tai Sun

    2001-10-01

    This paper proposes a structured framework for exploiting data mining applications for performance measures. The context is set in an airline company to illustrate the use of such a framework. The framework takes into consideration how a knowledge worker interacts with performance information at the enterprise level to support informed decisions in managing the effectiveness of operations. A case study of applying data mining technology to performance data in an airline company is illustrated. The use of performance measures is specifically applied to assist in the aircraft delay management process. The increasingly dispersed and complex nature of airline operations puts considerable strain on knowledge workers when searching for, acquiring and analyzing information to manage performance. One major problem faced by knowledge workers is the identification of root causes of performance deficiencies. Analyzing root causes can be time consuming because of the large number of factors involved, and the objective of applying data mining technology is to reduce the time and resources needed for this process. Increasing market competition for better performance management in various industries gives rise to the need for the intelligent use of data. Because of this, the framework proposed here is readily generalizable to industries such as manufacturing. It could assist knowledge workers who are constantly looking for ways to improve operational effectiveness through new initiatives, an effort that must be carried out quickly to gain competitive advantage in the marketplace.

  14. Frequency Control Performance Measurement and Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Illian, Howard F.

    2010-12-20

    Frequency control is an essential requirement of reliable electric power system operations. Determination of frequency control depends on frequency measurement and the practices based on these measurements that dictate acceptable frequency management. This report chronicles the evolution of these measurements and practices. As technology progresses from analog to digital for calculation, communication, and control, the technical basis for frequency control measurement and practices to determine acceptable performance continues to improve. Before the introduction of digital computing, practices were determined largely by prior experience. In anticipation of mandatory reliability rules, practices evolved from a focus primarily on commercial and equity issues to an increased focus on reliability. This evolution is expected to continue and place increased requirements for more precise measurements and a stronger scientific basis for future frequency management practices in support of reliability.

  15. Understanding the aerosol information content in multi-spectral reflectance measurements using a synergetic retrieval algorithm

    Directory of Open Access Journals (Sweden)

    D. Martynenko

    2010-11-01

    Full Text Available An information content analysis for the multi-wavelength SYNergetic AErosol Retrieval algorithm SYNAER was performed to quantify the number of independent pieces of information that can be retrieved. In particular, the capability of SYNAER to discern various aerosol types is assessed. This information content depends on the aerosol optical depth, the surface albedo spectrum and the observation geometry. The theoretical analysis is performed for a large number of scenarios with various geometries and surface albedo spectra for ocean, soil and vegetation. When the surface albedo spectrum and its accuracy are known under cloud-free conditions, the reflectance measurements used in SYNAER are able to provide 2–4 degrees of freedom that can be attributed to retrieval parameters: aerosol optical depth, aerosol type and surface albedo.

    The focus of this work is placed on an information content analysis with emphasis on the aerosol type classification. This analysis is applied to synthetic reflectance measurements for 40 predefined aerosol mixtures of different basic components, given by sea salt, mineral dust, biomass burning and diesel aerosols, and water soluble and water insoluble aerosols. The range of aerosol parameters considered through the 40 mixtures covers the natural variability of tropospheric aerosols. After the information content analysis performed in Holzer-Popp et al. (2008), there was a need to compare the derived degrees of freedom with the retrieved aerosol optical depth for different aerosol types, which is the main focus of this paper.

    Principal component analysis was used to determine the correspondence between the degrees of freedom for signal in the retrieval and the derived aerosol types. The main results of the analysis indicate a correspondence between the major groups of aerosol types (water soluble aerosol, soot, mineral dust and sea salt) and the degrees of freedom in the algorithm, and show the ability of the SYNAER to
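
    In optimal-estimation retrievals, the degrees of freedom for signal (DFS) quoted above are usually obtained as the trace of the averaging kernel matrix A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K, where K is the Jacobian, Se the measurement-error covariance and Sa the a-priori covariance. The sketch below illustrates that standard formulation only; it is not the SYNAER code itself, and the matrices used in the example are placeholders.

```python
import numpy as np

def degrees_of_freedom_for_signal(K, Se, Sa):
    """Trace of the averaging kernel in an optimal-estimation retrieval.

    K  : (n_meas, n_state) Jacobian of reflectances w.r.t. state parameters
    Se : (n_meas, n_meas) measurement-error covariance
    Sa : (n_state, n_state) a-priori covariance
    """
    Se_inv = np.linalg.inv(Se)
    Sa_inv = np.linalg.inv(Sa)
    gain_term = K.T @ Se_inv @ K
    A = np.linalg.solve(gain_term + Sa_inv, gain_term)  # averaging kernel
    return np.trace(A)

# Placeholder example: 8 spectral measurements, 3 state parameters
# (aerosol optical depth, aerosol type index, surface albedo scaling).
rng = np.random.default_rng(1)
K = rng.normal(size=(8, 3))
Se = np.eye(8) * 0.01**2
Sa = np.diag([0.5, 1.0, 0.05]) ** 2
print(f"DFS = {degrees_of_freedom_for_signal(K, Se, Sa):.2f}")
```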

  16. A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements

    Science.gov (United States)

    Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.

    2016-01-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.

  17. A modified scout bee for artificial bee colony algorithm and its performance on optimization problems

    Directory of Open Access Journals (Sweden)

    Syahid Anuar

    2016-10-01

    Full Text Available The artificial bee colony (ABC) is one of the swarm intelligence algorithms used to solve optimization problems, inspired by the foraging behaviour of honey bees. In this paper, an artificial bee colony with a rate-of-change technique, which models the behaviour of the scout bee to improve the exploration performance of the standard ABC, is introduced. The technique is called artificial bee colony rate of change (ABC-ROC) because the scout bee process depends on the rate of change on the performance graph, replacing the parameter limit. The performance of ABC-ROC is analysed on a set of benchmark problems and also with respect to the effect of the colony size parameter. Furthermore, the performance of ABC-ROC is compared with state-of-the-art algorithms.

  18. CITYkeys Smart city performance measurement system

    NARCIS (Netherlands)

    Huovila, A.; Airaksinen, M.; Pinto-Seppa, I.; Piira, K.; Bosch, P.R.; Penttinen, T.; Neumann, H.M.; Kontinakis, N.

    2017-01-01

    Cities are tackling their economic, social and environmental challenges through smart city solutions. To demonstrate that these solutions achieve the desired impact, an indicator-based assessment system is needed. This paper presents the process of developing CITYkeys performance measurement system

  19. The Validity of Subjective Performance Measures

    DEFF Research Database (Denmark)

    Meier, Kenneth J.; Winter, Søren C.; O'Toole, Laurence J.

    2015-01-01

    to provide, and are highly policy specific rendering generalization difficult. But are perceptual performance measures valid, and do they generate unbiased findings? We examine these questions in a comparative study of middle managers in schools in Texas and Denmark. The findings are remarkably similar...

  20. Performance Measurement in Helicopter Training and Operations.

    Science.gov (United States)

    Prophet, Wallace W.

    For almost 15 years, HumRRO Division No. 6 has conducted an active research program on techniques for measuring the flight performance of helicopter trainees and pilots. This program addressed both the elemental aspects of flying (i.e., maneuvers) and the mission- or goal-oriented aspects. A variety of approaches has been investigated, with the…

  1. Testing for Distortions in Performance Measures

    DEFF Research Database (Denmark)

    Sloof, Randolph; Van Praag, Mirjam

    2015-01-01

    Distorted performance measures in compensation contracts elicit suboptimal behavioral responses that may even prove to be dysfunctional (gaming). This paper applies the empirical test developed by Courty and Marschke (Review of Economics and Statistics, 90, 428-441) to detect whether the widely...

  2. Testing for Distortions in Performance Measures

    DEFF Research Database (Denmark)

    Sloof, Randolph; Van Praag, Mirjam

    Distorted performance measures in compensation contracts elicit suboptimal behavioral responses that may even prove to be dysfunctional (gaming). This paper applies the empirical test developed by Courty and Marschke (2008) to detect whether the widely used class of Residual Income based performa...

  3. Performance measurement in industrial R&D

    NARCIS (Netherlands)

    Kerssens-van Drongelen, I.C.; Nixon, Bill; Pearson, Alan

    2000-01-01

    Currently, the need for R&D performance measurements that are both practically useful and theoretically sound seems to be generally acknowledged; indeed, the rising cost of R&D, greater emphasis on value management and a trend towards decentralization are escalating the need for ways of evaluating

  4. External Innovation Implementation Determinants and Performance Measurement

    DEFF Research Database (Denmark)

    Coates, Matthew; Bals, Lydia

    2013-01-01

    for innovation implementation based on a case study in the pharmaceutical industry. The results of 25 expert interviews and a survey with 67 respondents led to the resulting framework and a corresponding performance measurement system. The results reveal the importance of supporting systems and show differences...

  5. 20 CFR 638.302 - Performance measurement.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Performance measurement. 638.302 Section 638.302 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR JOB CORPS PROGRAM UNDER TITLE IV-B OF THE JOB TRAINING PARTNERSHIP ACT Funding, Site Selection, and Facilities Management...

  6. Tools for Measuring and Improving Performance.

    Science.gov (United States)

    Jurow, Susan

    1993-01-01

    Explains the need for meaningful performance measures in libraries and the Total Quality Management (TQM) approach to data collection. Five tools representing different stages of a TQM inquiry are covered (i.e., the Shewhart Cycle, flowcharts, cause-and-effect diagrams, Pareto charts, and control charts), and benchmarking is addressed. (Contains…

  7. Performance improvement of VAV air conditioning system through feedforward compensation decoupling and genetic algorithm

    International Nuclear Information System (INIS)

    Wang Jun; Wang Yan

    2008-01-01

    The VAV (variable air volume) control system features multiple control loops. While all the control loops work together, they interfere with and influence each other. This paper designs a decoupling compensation unit for the VAV system using the feedforward compensation method. It also designs the controller parameters of the VAV system by means of inverse deduction and a genetic algorithm. Experimental results demonstrate that the combination of feedforward compensation decoupling and controller optimization by a genetic algorithm can improve the performance of the VAV control system.

  8. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

    Full Text Available This paper proposes an algorithm to track the obstacle position and avoid moving objects for a differential-drive Automatic Guided Vehicle (AGV) system in an industrial environment. This algorithm has several abilities, such as: to detect moving objects, to predict the velocity and direction of moving objects, to predict the collision possibility and to plan the avoidance maneuver. For sensing the local environment and positioning, the laser measurement system LMS-151 and the laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of the obstacle are predicted using a Kalman filter algorithm. The collision possibility, time, and position can be calculated by comparing the AGV movement with the obstacle prediction obtained by the Kalman filter. Finally, the avoidance maneuver using the well-known tangent Bug algorithm is decided based on the calculated data. The effectiveness of the proposed algorithm is verified using simulation and experiment. Several examples of experimental conditions are presented using stationary and moving obstacles. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental conditions. [Keywords— Obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system].
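
    To illustrate the obstacle-velocity prediction step described above, the sketch below implements a constant-velocity Kalman filter on 2D position measurements such as those produced by clustering laser scanner returns. It is a generic illustration under assumed noise parameters, not the authors' implementation; collision checking and the tangent Bug maneuver are omitted.

```python
import numpy as np

class ConstantVelocityKalman:
    """Track an obstacle's 2D position and velocity from noisy position fixes."""

    def __init__(self, dt, meas_std=0.05, accel_std=0.5):
        self.x = np.zeros(4)                      # state: [px, py, vx, vy]
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4)                        # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                 # only position is measured
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * (accel_std * dt) ** 2
        self.R = np.eye(2) * meas_std ** 2

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measured obstacle position z = [px, py]
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x                             # estimated position and velocity

# Example: obstacle moving at 0.5 m/s along x, observed every 0.1 s.
kf = ConstantVelocityKalman(dt=0.1)
for k in range(20):
    z = [0.5 * 0.1 * k + np.random.normal(0, 0.05), 2.0]
    est = kf.step(z)
print("estimated velocity:", est[2:])
```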

  9. Performance measures for world class maintenance

    International Nuclear Information System (INIS)

    Labib, A.W.

    1998-01-01

    A main problem in maintenance in general, and in power plants and related equipment in particular, is the lack of a practical, consistent, and adaptive performance measure that provides a focused feedback and integrates preventive and corrective modes of maintenance. The presentation defines concepts of world class and benchmarking. Desirable features in an appropriate performance measure are identified. It then demonstrates current practices in maintenance and criticises their shortcomings. An alternative model is presented through a case study. The model monitors performance from a general view, and then offers a focused analysis. The main conclusion is that the proposed model offers an adaptive and a dynamic framework, and hence production and maintenance are integrated in a 'real time' environment. The system is also flexible in working with any other criteria whether they are of a quantitative or a qualitative nature. (orig.) 16 refs

  10. Performance measures for world class maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Labib, A.W. [Department of Mechanical Engineering, University of Manchester, Institute of Science and Technology, Manchester (United Kingdom)

    1998-12-31

    A main problem in maintenance in general, and in power plants and related equipment in particular, is the lack of a practical, consistent, and adaptive performance measure that provides a focused feedback and integrates preventive and corrective modes of maintenance. The presentation defines concepts of world class and benchmarking. Desirable features in an appropriate performance measure are identified. It then demonstrates current practices in maintenance and criticises their shortcomings. An alternative model is presented through a case study. The model monitors performance from a general view, and then offers a focused analysis. The main conclusion is that the proposed model offers an adaptive and a dynamic framework, and hence production and maintenance are integrated in a `real time` environment. The system is also flexible in working with any other criteria whether they are of a quantitative or a qualitative nature. (orig.) 16 refs.

  11. Performance measures for world class maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Labib, A W [Department of Mechanical Engineering, University of Manchester, Institute of Science and Technology, Manchester (United Kingdom)

    1999-12-31

    A main problem in maintenance in general, and in power plants and related equipment in particular, is the lack of a practical, consistent, and adaptive performance measure that provides a focused feedback and integrates preventive and corrective modes of maintenance. The presentation defines concepts of world class and benchmarking. Desirable features in an appropriate performance measure are identified. It then demonstrates current practices in maintenance and criticises their shortcomings. An alternative model is presented through a case study. The model monitors performance from a general view, and then offers a focused analysis. The main conclusion is that the proposed model offers an adaptive and a dynamic framework, and hence production and maintenance are integrated in a `real time` environment. The system is also flexible in working with any other criteria whether they are of a quantitative or a qualitative nature. (orig.) 16 refs.

  12. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    International Nuclear Information System (INIS)

    Siegel, A.R.; Smith, K.; Romano, P.K.; Forget, B.; Felker, K.

    2013-01-01

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes
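
    A hedged illustration of the kind of upper-bound penalty such a model captures: with synchronous stages, a perfectly balanced run takes time proportional to the mean particle count per domain, whereas an imbalanced run is paced by the fullest domain. The sketch below computes that simple max/mean ratio; it is only an illustrative toy model, not the expressions derived in the paper.

```python
import numpy as np

def load_imbalance_penalty(particles_per_domain):
    """Illustrative upper-bound slowdown of a domain-decomposed MC stage.

    particles_per_domain: iterable of particle counts, one per spatial domain.
    With synchronous stages the slowest (fullest) domain sets the pace, so the
    penalty relative to a perfectly balanced run is max/mean.
    """
    counts = np.asarray(particles_per_domain, dtype=float)
    return counts.max() / counts.mean()

# Example: 4 domains with a mild imbalance in the initial source distribution.
print(load_imbalance_penalty([120_000, 95_000, 80_000, 105_000]))  # ~1.2x
```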

  13. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    Science.gov (United States)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space to fairly distribute selection among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method and NSGA-II enhanced by controlled elitism.
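
    A minimal sketch of the δ-similar elimination idea as described: individuals whose objective vectors lie within δ of an already kept individual are removed before selection, spreading selection pressure across the instantaneous Pareto front. The Euclidean distance measure and the scanning order are assumptions made here for illustration; the four variants investigated in the paper are not reproduced.

```python
import numpy as np

def delta_similar_elimination(objectives, delta):
    """Keep only individuals whose objective vectors are mutually more than
    `delta` apart (Euclidean distance in objective space).

    objectives: (n_individuals, n_objectives) array.
    Returns the indices of the retained individuals.
    """
    kept = []
    for i, f in enumerate(objectives):
        if all(np.linalg.norm(f - objectives[j]) > delta for j in kept):
            kept.append(i)
    return kept

# Example: a crowded two-objective population.
pop = np.array([[1.0, 5.0], [1.02, 4.98], [2.0, 3.0], [2.01, 3.02], [4.0, 1.0]])
print(delta_similar_elimination(pop, delta=0.1))  # -> [0, 2, 4]
```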

  14. Classic (Nonquantic) Algorithm for Observations and Measurements Based on Statistical Strategies of Particles Fields

    OpenAIRE

    Savastru, D.; Dontu, Simona; Savastru, Roxana; Sterian, Andreea Rodica

    2013-01-01

    Our knowledge about our surroundings is obtained through observations and measurements, but both are influenced by errors (noise). Therefore, one of the first tasks is to try to eliminate the noise by constructing instruments with high accuracy. However, any real observed and measured system is characterized by natural limits due to the deterministic nature of the measured information. The present work is dedicated to the identification of these limits. We have analyzed some algorithms for selection and ...

  15. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    Science.gov (United States)

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
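
    For context, the centroid method referred to above estimates the Bragg wavelength as the intensity-weighted mean of the spectrum samples above a threshold. The sketch below is a generic illustration of that classical algorithm, not the benchmark code; the synthetic Gaussian-like reflection peak and the threshold ratio are assumptions.

```python
import numpy as np

def centroid_peak(wavelengths, intensities, threshold_ratio=0.5):
    """Estimate the Bragg wavelength as the centroid (center of mass) of the
    spectrum samples above a fraction of the peak intensity."""
    intensities = np.asarray(intensities, dtype=float)
    mask = intensities >= threshold_ratio * intensities.max()
    w = intensities[mask]
    return np.sum(wavelengths[mask] * w) / np.sum(w)

# Synthetic FBG reflection spectrum centred at 1550.10 nm with small noise.
wl = np.linspace(1549.5, 1550.7, 241)
spectrum = np.exp(-((wl - 1550.10) / 0.08) ** 2) + np.random.normal(0, 0.01, wl.size)
print(f"estimated Bragg wavelength: {centroid_peak(wl, spectrum):.4f} nm")
```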

  16. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we developed a custom-tailored benchmark suite. We analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.
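
    The algorithmic core of a GPU reduction is the pairwise (tree) pattern, which halves the number of active elements at each step. The sketch below illustrates only that idea in plain Python; the actual CUDA optimizations discussed in the paper (shared memory, avoiding divergent branches and bank conflicts, loop unrolling, processing multiple elements per thread) are not represented here.

```python
import numpy as np

def tree_reduce(values, op=np.add):
    """Pairwise reduction: each pass combines element i with element
    i + stride, mimicking what a GPU thread block does per iteration."""
    data = np.array(values, dtype=float)
    n = len(data)
    stride = 1
    while stride < n:
        # On a GPU, all pairs at this stride are combined in parallel.
        for i in range(0, n - stride, 2 * stride):
            data[i] = op(data[i], data[i + stride])
        stride *= 2
    return data[0]

print(tree_reduce(range(1, 9)))   # 36.0, same as sum(1..8)
```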

  17. CVFEM for Multiphase Flow with Disperse and Interface Tracking, and Algorithms Performances

    Directory of Open Access Journals (Sweden)

    M. Milanez

    2015-12-01

    Full Text Available A Control-Volume Finite-Element Method (CVFEM) is newly formulated within Eulerian and spatial averaging frameworks for effective simulation of disperse transport, deposit distribution and interface tracking. Their algorithms are implemented alongside an existing continuous phase algorithm. Flow terms are newly implemented for a control volume (CV) fixed in space, and the CVs' equations are assembled based on a finite element method (FEM). Upon impacting stationary and moving boundaries, the disperse phase changes its phase and the solver triggers identification of CVs with excess deposit and of their neighboring CVs for its accommodation in front of an interface. The solver then updates boundary conditions on the moving interface as well as domain conditions on the accumulating deposit. Corroboration of the algorithms' performances is conducted on illustrative simulations with novel and existing Eulerian and Lagrangian solutions, such as (i) other, i.e. external, methods with analytical and physical experimental formulations, and (ii) characteristics internal to the CVFEM.

  18. Trustworthiness Measurement Algorithm for TWfMS Based on Software Behaviour Entropy

    Directory of Open Access Journals (Sweden)

    Qiang Han

    2018-03-01

    Full Text Available As the virtual mirror of complex real-time business processes of organisations' underlying information systems, the workflow management system (WfMS) has emerged in recent decades as a new self-autonomous paradigm in the open, dynamic, distributed computing environment. In order to construct a trustworthy workflow management system (TWfMS), the design of a software behaviour trustworthiness measurement algorithm is an urgent task for researchers. Accompanying the trustworthiness mechanism, the measurement algorithm, with uncertain software behaviour trustworthiness information of the WfMS, should be resolved as an infrastructure. Based on the framework presented in our research prior to this paper, we firstly introduce a formal model for the WfMS trustworthiness measurement, with the main property reasoning based on calculus operators. Secondly, this paper proposes a novel measurement algorithm from the software behaviour entropy of calculus operators through the principle of maximum entropy (POME) and the data mining method. Thirdly, the trustworthiness measurement algorithm for incomplete software behaviour tests and runtime information is discussed and compared by means of a detailed explanation. Finally, we provide conclusions and discuss certain future research areas of the TWfMS.
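
    As a loosely related illustration of the notion of behaviour entropy, the sketch below computes the Shannon entropy of an observed distribution of software behaviour events (for example, operator invocations logged by a WfMS). It is only a generic entropy calculation over assumed event categories, not the POME-based trustworthiness measure proposed in the paper.

```python
import math
from collections import Counter

def behaviour_entropy(events):
    """Shannon entropy (bits) of the empirical distribution of behaviour events."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical trace of workflow operator invocations.
trace = ["sequence", "branch", "sequence", "loop", "sequence", "branch"]
print(f"behaviour entropy = {behaviour_entropy(trace):.3f} bits")
```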

  19. A method exploiting direct communication between phasor measurement units for power system wide-area protection and control algorithms.

    Science.gov (United States)

    Almas, Muhammad Shoaib; Vanfretti, Luigi

    2017-01-01

    Synchrophasor measurements from Phasor Measurement Units (PMUs) are the primary sensors used to deploy Wide-Area Monitoring, Protection and Control (WAMPAC) systems. PMUs stream out synchrophasor measurements through the IEEE C37.118.2 protocol using TCP/IP or UDP/IP. The proposed method establishes direct communication between two PMUs, thus eliminating the requirement for an intermediate phasor data concentrator, data mediator and/or protocol parser and thereby ensuring minimum communication latency without considering communication link delays. This method allows synchrophasor measurements to be utilized internally in a PMU to deploy custom protection and control algorithms. These algorithms are deployed using protection logic equations, which are supported by all PMU vendors. Moreover, this method reduces overall equipment cost, as the algorithms execute internally in a PMU and therefore no additional controller is required for their deployment. The proposed method can be utilized for fast prototyping of wide-area measurement-based protection and control applications. The proposed method is tested by coupling commercial PMUs as Hardware-in-the-Loop (HIL) with Opal-RT's eMEGAsim Real-Time Simulator (RTS). As an illustrative example, an anti-islanding protection application is deployed using the proposed method and its performance is assessed. The essential points in the method are: • Bypassing intermediate phasor data concentrators or protocol parsers, as the synchrophasors are communicated directly between the PMUs (minimizes communication delays). • The wide-area protection and control algorithm is deployed using logic equations in the client PMU, therefore eliminating the requirement for an external hardware controller (cost curtailment). • An effortless means to exploit PMU measurements in an environment familiar to protection engineers.

  20. Improvement an enterprises marketing performance measurement system

    Directory of Open Access Journals (Sweden)

    Stanković Ljiljana

    2013-01-01

    Full Text Available The business conditions in which modern enterprises operate are increasingly complex. The complexity of the business environment is caused by the activity of external and internal factors, which imposes the need for a shift in management focus. One of the key shifts relates to the need to adapt and develop new business performance evaluation systems. Evaluating the contribution of marketing to business performance is very important, but it is also a complex task. Marketing theory and practice indicate the need for developing adequate standards and systems for evaluating the efficiency of marketing decisions. A better understanding of the marketing standards and methods that managers use is an important factor that affects the efficiency of strategic decision-making. The paper presents the results of research into the way in which managers perceive and apply marketing performance measures. The data obtained through the field research sample enabled consideration of managers' attitudes towards practical ways of implementing marketing performance measurement and identification of the measures that managers report as most commonly used in business practice.

  1. Total performance measurement and management: TPM2

    Energy Technology Data Exchange (ETDEWEB)

    Sheather, G. [University of Technology, Sydney, NSW (Australia)

    1996-10-01

    As the rate of incremental improvement activities and business process re-engineering programs increases, product development times reduce, collaborative endeavours between OEMs and out-sourcing suppliers increase, and agile manufacturing responds to the demands of a global marketplace, the `virtual` organisation is becoming a reality. In this context, customers, partners, suppliers and manufacturers are increasingly separated by geography, time zone, and availability, but linked by distributed information systems. Measuring and monitoring business performance in this environment requires an entirely different framework and set of key performance indicators (KPIs) from those usually associated with traditional financial accounting approaches. These approaches are critiqued, and the paper then introduces a new concept, `Total Performance Measurement Management` (TPM2), to distinguish it from the conventional TPM (Total Productive Management). A model for combining both financial and non-financial KPIs relevant to real-time performance measures, stretching across strategic, business unit and operational levels, is presented. The results of the model confirm the hypothesis that it is feasible to develop a TPM2 framework for achieving enterprise-wide strategic objectives. (author). 6 tabs., 18 figs., refs.

  2. Performance expectations of measurement control programs

    International Nuclear Information System (INIS)

    Hammond, G.A.

    1985-01-01

    The principal index for designing and assessing the effectiveness of safeguards is the sensitivity and reliability of gauging the true status of material balances involving material flows, transfers, inventories, and process holdup. The measurement system must not only be capable of characterizing the material for gradation or intensity of protection, but also be responsive to needs for detection and localization of losses, provide confirmation that no diversion has occurred, and help meet requirements for process control, health and safety. Consequently, the judicious application of a measurement control and quality assurance program is vital to a complete understanding of the capabilities and limitations of the measurement system including systematic and random components of error for weight, volume, sampling, chemical, isotopic, and nondestructive determinations of material quantities in each material balance area. This paper describes performance expectations or criteria for a measurement control program in terms of ''what'' is desired and ''why'', relative to safeguards and security objectives

  3. Analysis of a new phase and height algorithm in phase measurement profilometry

    Science.gov (United States)

    Bian, Xintian; Zuo, Fen; Cheng, Ju

    2018-04-01

    Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, and the data are then stored in a computer in the form of a data table for later use. This study improves the distribution of the projected fringes and deduces the phase-height mapping algorithm for the case in which the two pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie on different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary. Thus, the measurement process is simplified, and the construction of an experimental system is made easier. Computer simulation and experimental results confirm the effectiveness of the method.
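
    As a point of reference for the record above, the sketch below shows the classical parallel-axis phase-to-height mapping often quoted for phase measurement profilometry; it is not the unequal-pupil-height, non-coplanar-axis mapping derived in the paper, and the geometry values L, d and p0 are assumed placeholders.

```python
import numpy as np

def height_from_phase(delta_phi, L=1000.0, d=250.0, p0=10.0):
    """Classical parallel-axis PMP phase-to-height mapping (a simplified
    reference model, not the unequal-pupil-height mapping derived in the
    paper). delta_phi is the unwrapped phase difference between object and
    reference plane; L, d, p0 are assumed geometry values in mm: camera-to-
    reference distance, projector-camera baseline, fringe period."""
    return L * delta_phi / (delta_phi + 2.0 * np.pi * d / p0)

# Example: a synthetic 2 rad phase offset maps to a height in mm.
phase_map = np.full((4, 4), 2.0)
print(height_from_phase(phase_map)[0, 0])
```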

  4. Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency

    KAUST Repository

    Ltaief, Hatem

    2011-08-31

    This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine-grained task parallelism that recasts the computation to operate on submatrices called tiles. In this way tile algorithms are formed. We show results from the power profiling of the most common routines, which permits us to clearly identify the different phases of the computations. This allows us to isolate the bottlenecks in terms of energy efficiency. Our results show that PLASMA surpasses LAPACK not only in terms of performance but also in terms of energy efficiency. © 2011 Springer-Verlag.

  5. Coming up short on nonfinancial performance measurement.

    Science.gov (United States)

    Ittner, Christopher D; Larcker, David F

    2003-11-01

    Companies in increasing numbers are measuring customer loyalty, employee satisfaction, and other nonfinancial areas of performance that they believe affect profitability. But they've failed to relate these measures to their strategic goals or establish a connection between activities undertaken and financial outcomes achieved. Failure to make such connections has led many companies to misdirect their investments and reward ineffective managers. Extensive field research now shows that businesses make some common mistakes when choosing, analyzing, and acting on their nonfinancial measures. Among these mistakes: They set the wrong performance targets because they focus too much on short-term financial results, and they use metrics that lack strong statistical validity and reliability. As a result, the companies can't demonstrate that improvements in nonfinancial measures actually affect their financial results. The authors lay out a series of steps that will allow companies to realize the genuine promise of nonfinancial performance measures. First, develop a model that proposes a causal relationship between the chosen nonfinancial drivers of strategic success and specific outcomes. Next, take careful inventory of all the data within your company. Then use established statistical methods for validating the assumed relationships and continue to test the model as market conditions evolve. Finally, base action plans on analysis of your findings, and determine whether those plans and their investments actually produce the desired results. Nonfinancial measures will offer little guidance unless you use a process for choosing and analyzing them that relies on sophisticated quantitative and qualitative inquiries into the factors actually contributing to economic results.

  6. Performance evaluation of 2D image registration algorithms with the numeric image registration and comparison platform

    International Nuclear Information System (INIS)

    Gerganov, G.; Kuvandjiev, V.; Dimitrova, I.; Mitev, K.; Kawrakow, I.

    2012-01-01

    The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web-accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques, such as Elastic Thin-Plate Spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions such as mutual information, correlation coefficient, and sum of squared differences. The emphasis is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)
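
    The cost functions named in the record (sum of squared differences, correlation coefficient, mutual information) can be stated compactly; the sketch below is a generic numpy rendering under the assumption of two equally sized grayscale images, not the platform's own implementation, and the histogram bin count for mutual information is an arbitrary choice.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: lower is a better match."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def correlation_coefficient(a, b):
    """Pearson correlation of the flattened intensities: higher is better."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (bin count is an assumption)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```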

  7. New approach for measuring 3D space by using Advanced SURF Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Youm, Minkyo; Min, Byungil; Suh, Kyungsuk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Backgeun [Sungkyunkwan Univ., Suwon (Korea, Republic of)

    2013-05-15

    Nuclear disasters, compared to natural disasters, create more extreme conditions for analysis and evaluation. In this paper, 3D space measurement and modelling from simple pictures was studied for the case of a small sand dune. The suggested method can be used for the acquisition of spatial information by a robot at a disaster site; such data help identify the damaged parts, the degree of damage and the sequence of recovery operations. In this study we improve a computer vision algorithm for 3-D geospatial information measurement and confirm it by test. First, a noticeable improvement in the 3-D geospatial information result is obtained by combining the SURF algorithm with photogrammetric surveying. Second, epipolar line filtering not only decreases the algorithm running time but also increases the number of matching points. In the study, a 3-D model is extracted with open-source algorithms and mismatched points are deleted by the filtering method. However, owing to the characteristics of the SURF algorithm, match points cannot be found when the structure lacks strong features, so further work on feature detection for such structures is needed.
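
    The epipolar-line filtering step mentioned above can be illustrated with a first-order (Sampson) epipolar error; the sketch assumes a fundamental matrix F and matched pixel coordinates are already available from the feature-matching stage, and the pixel threshold is an arbitrary placeholder rather than a value from the study.

```python
import numpy as np

def sampson_distance(F, pts1, pts2):
    """First-order (Sampson) approximation of the epipolar error for
    homogeneous point pairs; pts1, pts2 are (N, 2) pixel coordinates."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    Fx1 = x1 @ F.T      # epipolar lines in image 2
    Ftx2 = x2 @ F       # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den

def filter_matches(F, pts1, pts2, thresh_px2=1.0):
    """Keep only matches whose epipolar error is below an assumed threshold."""
    keep = sampson_distance(F, pts1, pts2) < thresh_px2
    return pts1[keep], pts2[keep]
```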

  8. Approaches towards airport economic performance measurement

    Directory of Open Access Journals (Sweden)

    Ivana STRYČEKOVÁ

    2011-01-01

    Full Text Available The paper aims to assess how economic benchmarking is used by airports as a means of performance measurement and comparison among major international airports. The study focuses on current benchmarking practices and methods, taking into account the different factors against which it is efficient to benchmark airport performance. The methods considered are mainly data envelopment analysis and stochastic frontier analysis; other approaches that airports use for economic benchmarking are also discussed. The main objective of the article is to evaluate the efficiency of airports and to answer some open questions concerning their economic benchmarking.

  9. MODERN INSTRUMENTS FOR MEASURING ORGANIZATIONAL PERFORMANCE

    Directory of Open Access Journals (Sweden)

    RADU CATALINA

    2010-12-01

    Full Text Available Any significant management action can be assessed both in terms of the success of its immediate goals and as an effect of the organization's ability to embrace change. Market competition intensifies with the development of Romanian society and its needs. Companies that offer different products and services need to establish certain advantages and to increase their performance. The paper presents modern tools for measuring and evaluating organizational performance, namely the Balanced Scorecard, the Deming model and the Baldrige model. We also present a Balanced Scorecard example for an organization belonging to the cosmetics industry.

  10. Strategic Performance Measurement of Research and Development

    DEFF Research Database (Denmark)

    Parisi, Cristiana; Rossi, Paola

    2015-01-01

    The paper used an in-depth case study to investigate how firms can integrate the strategic performance measurement of R&D with the Balanced Scorecard. Moreover, the paper investigated the crucial role of the controller in the decision-making process of this integration. The literature review of R......-financial ratio as the R&D measures to introduce in the Balanced Scorecard. In choosing our case study, we selected the pharmaceutical industry because of its substantial R&D investment. Within the sector we chose the Italian affiliate of a traditional industry leader, Eli Lilly Italia, that was characterized

  11. The performance of the ATLAS Inner Detector Trigger algorithms in pp collisions at the LHC

    International Nuclear Information System (INIS)

    Sutton, Mark

    2011-01-01

    The ATLAS [The ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3:S08003, 2008 (2008)] Inner Detector trigger algorithms have been running online during data taking with proton-proton collisions at the Large Hadron Collider (LHC) since December 2009. Preliminary results on the performance of the algorithms in collisions at centre-of-mass energies of 900 GeV and 7 TeV are discussed. The ATLAS trigger performs the online event selection in three stages. The Inner Detector information is used in the second and third triggering stages, referred to as the Level-2 trigger (L2) and Event Filter (EF) respectively, or collectively as the High Level Trigger (HLT). The HLT runs software algorithms on large farms of commercial CPUs and is designed to reject collision events in real time, keeping the most interesting few events in every thousand. The average execution times per event at L2 and the EF are around 40 ms and 4 s respectively, and the Inner Detector trigger algorithms can use only a fraction of these times. Within these times, data from interesting regions of the Inner Detector have to be read out through the network, unpacked, clustered and converted to the ATLAS global coordinates. The pattern recognition follows to identify the trajectories of charged particles (tracks), which are then used in combination with information from the other subdetectors to accept or reject events depending on whether they satisfy certain trigger signatures.

  12. Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students

    Science.gov (United States)

    Zoller, Uri

    2002-02-01

    The performance of freshman biology and physics-mathematics majors and chemistry majors, as well as pre- and in-service chemistry teachers, in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions was studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS to a decision-making, problem-solving and critical system thinking approach dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlation between the performance means within and across universities by question category. This was followed by a combined student interview and 'speaking aloud' problem-solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order ALG > LOCS > HOCS; their 'ideological' preference was HOCS > algorithmic/LOCS (referred to as 'computational questions'), but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS questions does not imply success on HOCS questions; algorithmic questions constitute a category of their own as far as students' success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.

  13. A Thermographic Measurement Approach to Assess Supercapacitor Electrical Performances

    Directory of Open Access Journals (Sweden)

    Stanislaw Galla

    2017-12-01

    Full Text Available This paper describes a proposal for the qualitative assessment of the condition of supercapacitors based on thermographic measurements. The measurement stand is presented together with the proposed test methodology. The conditions needed to minimize the influence of disturbing factors on the thermal imaging measurements are also indicated; these factors result both from hardware limitations and from the necessity to prepare the samples. The algorithm used to determine the basic parameters for the assessment is presented. The article suggests additional factors that may facilitate the analysis of the obtained results. The usefulness of the proposed methodology was tested on commercial supercapacitor samples. All of the tests were carried out in conjunction with the classical methods based on capacitance (C) and equivalent series resistance (ESR) measurements, which are also presented in the paper. Selected results showing the observed changes in both the basic parameters of the supercapacitors and the accompanying variations of the thermal fields are presented, along with their analysis. The observed limitations of the proposed assessment method and suggestions for its development are also described.
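
    The classical capacitance and ESR characterisation referred to in the record is commonly done from a constant-current discharge; the sketch below follows that common approach with assumed example values and is not taken from the paper itself.

```python
def esr_and_capacitance(i_discharge_a, v_drop_instant_v, v1_v, v2_v, dt_s):
    """Constant-current discharge estimates (a common approach, assumed here
    rather than taken from the paper): ESR from the instantaneous voltage
    step when the load is applied, C from the slope between two voltage
    levels V1 and V2 reached dt seconds apart."""
    esr_ohm = v_drop_instant_v / i_discharge_a
    c_farad = i_discharge_a * dt_s / (v1_v - v2_v)
    return esr_ohm, c_farad

# Example with assumed values: 1 A discharge, 30 mV step, 2.7 V -> 1.35 V in 270 s.
print(esr_and_capacitance(1.0, 0.030, 2.7, 1.35, 270.0))   # ~0.03 ohm, ~200 F
```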

  14. Key indicators for organizational performance measurement

    Directory of Open Access Journals (Sweden)

    Firoozeh Haddadi

    2014-09-01

    Full Text Available To assess the utility and desirability of its activities, especially in complex and dynamic environments, each organization needs to determine and rank its vital performance indicators. Indicators provide essential links among strategy, execution and ultimate value creation. The aim of this paper is to develop a framework that identifies and prioritizes the Key Performance Indicators (KPIs) a company should focus on to define and measure progress towards organizational objectives. For this purpose, applied research was conducted in 2013 in an Iranian telecommunication company. We first determined the objectives of the company with respect to the four perspectives of the BSC (Balanced Scorecard) framework. Next, performance indicators were listed and pairwise comparisons were carried out by the company's high-ranking employees through standard Analytic Hierarchy Process (AHP) questionnaires. This enabled us to establish the weight of each indicator and to rank them accordingly.
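
    The AHP weighting step described above reduces to extracting the principal eigenvector of the pairwise comparison matrix; the sketch below uses an assumed 3x3 matrix for three hypothetical KPIs and Saaty's random index for n = 3, and only illustrates the standard procedure, not the study's data.

```python
import numpy as np

def ahp_weights(A):
    """Principal right eigenvector of a pairwise comparison matrix,
    normalised to sum to one (the usual AHP priority vector)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

# Assumed 3x3 comparison matrix for three hypothetical KPIs.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
w, lam_max = ahp_weights(A)
n = A.shape[0]
CI = (lam_max - n) / (n - 1)   # consistency index
CR = CI / 0.58                 # Saaty's random index for n = 3
print(w, round(CR, 3))         # CR < 0.1 is usually deemed consistent
```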

  15. New algorithm using only one variable measurement applied to a maximum power point tracker

    Energy Technology Data Exchange (ETDEWEB)

    Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology

    2005-05-01

    A novel algorithm for seeking the maximum power point of a photovoltaic (PV) array at any temperature and solar irradiation level, needing only the PV current value, is proposed. Satisfactory theoretical and experimental results are presented, obtained when the algorithm was implemented on a 100 W 24 V PV buck converter prototype using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracking (MPPT) scheme, when compared with others, is that it only uses the measurement of the photovoltaic current, I{sub PV}. (author)
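
    The abstract does not give the control law itself, so the sketch below is only a hedged illustration of one way a single current measurement can drive a perturb-and-observe loop on a buck converter with a battery load: with a roughly constant output voltage, climbing the current climbs the power. This is not necessarily the paper's method, and the callbacks read_current and set_duty, as well as all numeric values, are assumptions.

```python
def mppt_current_only(read_current, set_duty, d0=0.5, step=0.01,
                      d_min=0.05, d_max=0.95, iterations=200):
    """Hedged perturb-and-observe sketch driven by a single current
    measurement. read_current and set_duty are assumed callbacks to the
    converter hardware or to a simulation model."""
    d, direction = d0, +1
    set_duty(d)
    i_prev = read_current()
    for _ in range(iterations):
        d = min(max(d + direction * step, d_min), d_max)
        set_duty(d)
        i_now = read_current()
        if i_now < i_prev:      # the last perturbation hurt: reverse it
            direction = -direction
        i_prev = i_now
    return d
```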

  16. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A tomography algorithm to maximize the entropy of the image using the Lagrangian multiplier technique and the conjugate gradient method has been designed for the measurement of the 2D spatial distribution of intense neutral beams of the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation has been implemented to test the reconstruction quality of given beam profiles. This algorithm has good applicability for sparse projection data and thus can be used for neutral beam tomography. 8 refs., 3 figs. (Author)

  17. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A tomography algorithm to maximize the entropy of the image using the Lagrangian multiplier technique and the conjugate gradient method has been designed for the measurement of the 2D spatial distribution of intense neutral beams of the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation has been implemented to test the reconstruction quality of given beam profiles. This algorithm has good applicability for sparse projection data and thus can be used for neutral beam tomography. 8 refs., 3 figs. (Author)

  18. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
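
    One ingredient of any MCP combination is a per-sector regression between concurrent reference and target wind speeds, applied afterwards to the long-term reference record; the sketch below shows that single ingredient only, with the sector count and the linear model as assumptions, not the paper's hybrid weighting of multiple algorithms and stations.

```python
import numpy as np

def mcp_linear(ref_speed, ref_dir, tgt_speed, ref_speed_lt, ref_dir_lt,
               n_sectors=8):
    """Sector-wise linear MCP: fit target = a*ref + b per direction sector
    on the concurrent period, then predict the long-term target speeds."""
    ref_speed, ref_dir = np.asarray(ref_speed, float), np.asarray(ref_dir, float)
    tgt_speed = np.asarray(tgt_speed, float)
    ref_speed_lt = np.asarray(ref_speed_lt, float)
    ref_dir_lt = np.asarray(ref_dir_lt, float)
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    pred = np.full(ref_speed_lt.shape, np.nan)
    for lo, hi in zip(edges[:-1], edges[1:]):
        fit_mask = (ref_dir >= lo) & (ref_dir < hi)
        if fit_mask.sum() < 2:
            continue                     # not enough concurrent data in sector
        a, b = np.polyfit(ref_speed[fit_mask], tgt_speed[fit_mask], 1)
        lt_mask = (ref_dir_lt >= lo) & (ref_dir_lt < hi)
        pred[lt_mask] = a * ref_speed_lt[lt_mask] + b
    return pred
```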

  19. Properties of the center of gravity as an algorithm for position measurements: Two-dimensional geometry

    CERN Document Server

    Landi, Gregorio

    2003-01-01

    The center of gravity as an algorithm for position measurements is analyzed for a two-dimensional geometry. Several mathematical consequences of discretization for various types of detector arrays are extracted. Arrays with rectangular, hexagonal, and triangular detectors are analytically studied, and tools are given to simulate their discretization properties. Special signal distributions free of discretization error are isolated. It is proved that some crosstalk spreads are able to eliminate the center of gravity discretization error for any signal distribution. Simulations, adapted to the CMS em-calorimeter and to a triangular detector array, are provided for energy and position reconstruction algorithms with a finite number of detectors.
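
    The estimator analysed in the record is, in one dimension, just a signal-weighted mean of the strip or pad centre positions; the sketch below shows that baseline estimator (pitch and signal values are assumed), whose discretization error is the subject of the paper.

```python
import numpy as np

def center_of_gravity(signals, pitch=1.0):
    """Signal-weighted mean of the strip/pad centre positions."""
    signals = np.asarray(signals, dtype=float)
    positions = pitch * np.arange(len(signals))
    return float(np.sum(positions * signals) / np.sum(signals))

# A narrow charge cloud shared by three neighbouring pads (assumed values):
print(center_of_gravity([0.1, 0.7, 0.2]))   # estimate lies near pad 1
```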

  20. Evaluation of Cutting Performance of Diamond Saw Machine Using Artificial Bee Colony (ABC) Algorithm

    Directory of Open Access Journals (Sweden)

    Masoud Akhyani

    2017-12-01

    Full Text Available Artificial Intelligence (AI) techniques are used for solving intractable engineering problems. This study examines the application of the artificial bee colony algorithm for predicting the performance of a circular diamond saw in sawing hard rocks. For this purpose, fourteen types of hard rock were cut in the laboratory using a cutting rig at 5 mm depth of cut, 40 cm/min feed rate and 3000 rpm peripheral speed. Four major mechanical and physical properties of the studied rocks, namely uniaxial compressive strength (UCS), Schimazek abrasivity factor (SF-a), Mohs hardness (Mh), and Young's modulus (Ym), were determined in the rock mechanics laboratory. The artificial bee colony (ABC) algorithm was used to classify the performance of the circular diamond saw based on these mechanical properties. Ampere consumption and wear rate of the diamond saw were selected as criteria to evaluate the results of the ABC algorithm. Ampere consumption was determined during the cutting process, and the average wear rate of the diamond saw was calculated from width, length and height loss. The comparison between the ABC results and the measured cutting performance (ampere consumption and wear rate of the diamond saw) indicated the ability of a metaheuristic algorithm such as ABC to evaluate cutting performance.

  1. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  2. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract Background Serologic testing algorithms for recent HIV seroconversion (STARHS provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we have set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident ( Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and

  3. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon ({approx}3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
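
    The grade term described above can be sketched as follows; the vehicle mass, rolling-resistance and drag parameters are placeholder assumptions rather than the thesis's values, and the point is only to show where the mass·g·grade contribution enters a driving-intensity (Wh/mile) estimate.

```python
import numpy as np

G = 9.81  # m/s^2

def driving_intensity_wh_per_mile(speed_mps, elev_m, dist_m, mass_kg=1700.0,
                                  crr=0.009, cda=0.65, rho=1.2):
    """Rough tractive-energy estimate in Wh/mile from sampled speed,
    elevation and cumulative distance; all vehicle parameters are assumed
    placeholders. The mass*G*grade term is what a slope-ignoring
    algorithm leaves out."""
    speed_mps = np.asarray(speed_mps, float)
    elev_m = np.asarray(elev_m, float)
    dist_m = np.asarray(dist_m, float)
    d = np.maximum(np.diff(dist_m), 1e-3)            # segment lengths (m)
    v = 0.5 * (speed_mps[1:] + speed_mps[:-1])       # mean segment speed
    accel = np.diff(speed_mps) * np.maximum(v, 0.1) / d   # dv/dt = v*dv/ds
    grade = np.diff(elev_m) / d                      # instantaneous road slope
    force = (mass_kg * accel + mass_kg * G * grade +
             mass_kg * G * crr + 0.5 * rho * cda * v ** 2)
    energy_j = np.sum(np.maximum(force, 0.0) * d)    # ignore regeneration
    return energy_j / 3600.0 / (dist_m[-1] / 1609.34)
```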

  4. APMS 3.0 Flight Analyst Guide: Aviation Performance Measuring System

    Science.gov (United States)

    Jay, Griff; Prothero, Gary; Romanowski, Timothy; Lynch, Robert; Lawrence, Robert; Rosenthal, Loren

    2004-01-01

    The Aviation Performance Measuring System (APMS) is a method, embodied in software, that uses mathematical algorithms and related procedures to analyze digital flight data extracted from aircraft flight data recorders. APMS consists of an integrated set of tools used to perform two primary functions: a) Flight Data Importation, b) Flight Data Analysis.

  5. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    International Nuclear Information System (INIS)

    Caillet, V; Colvill, E; O’Brien, R; Keall, P; Poulsen, P; Moore, D; Booth, J; Sawant, A

    2016-01-01

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf fitting approach respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high and low modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Trilogy Varian linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant differences between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf-fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison between the two leaf-fitting algorithms demonstrated no significant differences in tracking errors, neither in a clinically realistic environment nor in silico. The similarities in the two independent algorithms give confidence in the use

  6. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Caillet, V; Colvill, E [School of Medecine, The University of Sydney, Sydney, NSW (Australia); Royal North Shore Hospital, St Leonards, Sydney (Australia); O’Brien, R; Keall, P [School of Medecine, The University of Sydney, Sydney, NSW (Australia); Poulsen, P [Aarhus University Hospital, Aarhus (Denmark); Moore, D [UT Southwestern Medical Center, Dallas, TX (United States); University of Maryland School of Medicine, Baltimore, MD (United States); Booth, J [Royal North Shore Hospital, St Leonards, Sydney (Australia); Sawant, A [University of Maryland School of Medicine, Baltimore, MD (United States)

    2016-06-15

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf fitting approach respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high and low modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Trilogy Varian linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant differences between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf-fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison between the two leaf-fitting algorithms demonstrated no significant differences in tracking errors, neither in a clinically realistic environment nor in silico. The similarities in the two independent algorithms give confidence in the use
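
    The comparison described in the two records above reduces to computing an aperture-area error per tracking algorithm and applying a two-sample KS test; the sketch below uses synthetic error arrays as stand-ins for the measured cumulative error areas.

```python
import numpy as np
from scipy.stats import ks_2samp

def tracking_error_area(ideal_aperture_cm2, fitted_aperture_cm2):
    """Per-control-point absolute difference in aperture area (cm^2)."""
    return np.abs(np.asarray(ideal_aperture_cm2) - np.asarray(fitted_aperture_cm2))

# errs_direct and errs_piecewise stand in for the cumulative error areas of
# the two tracking algorithms over all experiments (assumed synthetic data).
errs_direct = np.random.default_rng(0).normal(66.6, 5.0, 50)
errs_piecewise = np.random.default_rng(1).normal(65.7, 5.0, 50)
stat, p = ks_2samp(errs_direct, errs_piecewise)
print(f"KS D = {stat:.3f}")   # a small D, as in the record, means similar error distributions
```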

  7. Assessment of two aerosol optical thickness retrieval algorithms applied to MODIS Aqua and Terra measurements in Europe

    Directory of Open Access Journals (Sweden)

    P. Glantz

    2012-07-01

    Full Text Available The aim of the present study is to validate AOT (aerosol optical thickness) and Ångström exponent (α), obtained from MODIS (MODerate resolution Imaging Spectroradiometer) Aqua and Terra calibrated level 1 data (1 km horizontal resolution at ground) with the SAER (Satellite AErosol Retrieval) algorithm and with MODIS Collection 5 (c005) standard product retrievals (10 km horizontal resolution), against AERONET (AErosol RObotic NETwork) sun photometer observations over land surfaces in Europe. An inter-comparison of AOT at 0.469 nm obtained with the two algorithms has also been performed. The time periods investigated were chosen to enable a validation of the findings of the two algorithms for a maximal possible variation in sun elevation. The satellite retrievals were also performed with a significant variation in the satellite-viewing geometry, since Aqua and Terra passed the investigation area twice a day for several of the cases analyzed. The validation with AERONET shows that the AOT at 0.469 and 0.555 nm obtained with MODIS c005 is within the expected uncertainty of one standard deviation of the MODIS c005 retrievals (ΔAOT = ± 0.05 ± 0.15 · AOT). The AOT at 0.443 nm retrieved with SAER, but with a much finer spatial resolution, also agreed reasonably well with AERONET measurements. The majority of the SAER AOT values are within the MODIS c005 expected uncertainty range, although somewhat larger average absolute deviation occurs compared to the results obtained with the MODIS c005 algorithm. The discrepancy between AOT from SAER and AERONET is, however, substantially larger for the wavelength 488 nm. This means that the values are, to a larger extent, outside of the expected MODIS uncertainty range. In addition, both satellite retrieval algorithms are unable to estimate α accurately, although the MODIS c005 algorithm performs better. Based on the inter-comparison of the SAER and MODIS c005 algorithms, it was found that SAER on the whole is
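
    The envelope test used in this kind of validation can be written in a few lines; the sketch below checks the fraction of matchups inside ±(0.05 + 0.15·AOT) around the AERONET value, with the matchup arrays as assumed placeholders.

```python
import numpy as np

def within_expected_error(aot_sat, aot_aeronet):
    """Fraction of matchups inside the MODIS over-land envelope
    +/-(0.05 + 0.15*AOT), with AERONET taken as the reference."""
    aot_sat = np.asarray(aot_sat, float)
    aot_ref = np.asarray(aot_aeronet, float)
    envelope = 0.05 + 0.15 * aot_ref
    return float(np.mean(np.abs(aot_sat - aot_ref) <= envelope))

# Assumed matchup values for illustration:
print(within_expected_error([0.21, 0.35, 0.10], [0.20, 0.30, 0.18]))
```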

  8. Performance of operational satellite bio-optical algorithms in different water types in the southeastern Arabian Sea

    Directory of Open Access Journals (Sweden)

    P. Minu

    2016-10-01

    Full Text Available The in situ remote sensing reflectance (Rrs) and optically active substances (OAS) measured using hyperspectral radiometer, were used for optical classification of coastal waters in the southeastern Arabian Sea. The spectral Rrs showed three distinct water types, that were associated with the variability in OAS such as chlorophyll-a (chl-a), chromophoric dissolved organic matter (CDOM) and volume scattering function at 650 nm (β650). The water types were classified as Type-I, Type-II and Type-III respectively for the three Rrs spectra. The Type-I waters showed the peak Rrs in the blue band (470 nm), whereas in the case of Type-II and III waters the peak Rrs was at 560 and 570 nm respectively. The shifting of the peak Rrs at the longer wavelength was due to an increase in concentration of OAS. Further, we evaluated six bio-optical algorithms (OC3C, OC4O, OC4, OC4E, OC3M and OC4O2) used operationally to retrieve chl-a from Coastal Zone Colour Scanner (CZCS), Ocean Colour Temperature Scanner (OCTS), Sea-viewing Wide Field-of-view Sensor (SeaWiFS), MEdium Resolution Imaging Spectrometer (MERIS), Moderate Resolution Imaging Spectroradiometer (MODIS) and Ocean Colour Monitor (OCM2). For chl-a concentration greater than 1.0 mg m−3, algorithms based on the reference band ratios 488/510/520 nm to 547/550/555/560/565 nm have to be considered. The assessment of algorithms showed better performance of OC3M and OC4. All the algorithms exhibited better performance in Type-I waters. However, the performance was poor in Type-II and Type-III waters which could be attributed to the significant co-variance of chl-a with CDOM.
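
    The operational OCx algorithms referenced above share a common form: a fourth-order polynomial in the log of a maximum blue-to-green band ratio. The sketch below shows that form with placeholder coefficients; the operational OC3M/OC4 coefficients are sensor-specific, maintained by the ocean-colour processing groups, and not reproduced here.

```python
import numpy as np

def ocx_chl(rrs_blue_bands, rrs_green, coeffs=(0.3, -2.7, 1.4, 0.4, -0.5)):
    """Generic OCx form: chl = 10**(a0 + a1*R + a2*R^2 + a3*R^3 + a4*R^4),
    with R = log10(max blue Rrs / green Rrs). The coefficients here are
    placeholders, not the operational OC3M/OC4 values."""
    r = np.log10(np.max(rrs_blue_bands, axis=0) / rrs_green)
    a0, a1, a2, a3, a4 = coeffs
    return 10.0 ** (a0 + a1 * r + a2 * r ** 2 + a3 * r ** 3 + a4 * r ** 4)

# Example with assumed Rrs values (sr^-1) at three blue bands over one green band:
print(ocx_chl(np.array([[0.004], [0.005], [0.006]]), np.array([0.004])))
```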

  9. A novel vibration-based fault diagnostic algorithm for gearboxes under speed fluctuations without rotational speed measurement

    Science.gov (United States)

    Hong, Liu; Qu, Yongzhi; Dhupia, Jaspreet Singh; Sheng, Shuangwen; Tan, Yuegang; Zhou, Zude

    2017-09-01

    The localized failures of gears introduce cyclic-transient impulses in the measured gearbox vibration signals. These impulses are usually identified from the sidebands around gear-mesh harmonics through the spectral analysis of cyclo-stationary signals. However, in practice, several high-powered applications of gearboxes like wind turbines are intrinsically characterized by nonstationary processes that blur the measured vibration spectra of a gearbox and deteriorate the efficacy of spectral diagnostic methods. Although order-tracking techniques have been proposed to improve the performance of spectral diagnosis for nonstationary signals measured in such applications, the required hardware for the measurement of rotational speed of these machines is often unavailable in industrial settings. Moreover, existing tacho-less order-tracking approaches are usually limited by the high time-frequency resolution requirement, which is a prerequisite for the precise estimation of the instantaneous frequency. To address such issues, a novel fault-signature enhancement algorithm is proposed that can alleviate the spectral smearing without the need of rotational speed measurement. This proposed tacho-less diagnostic technique resamples the measured acceleration signal of the gearbox based on the optimal warping path evaluated from the fast dynamic time-warping algorithm, which aligns a filtered shaft rotational harmonic signal with respect to a reference signal assuming a constant shaft rotational speed estimated from the approximation of operational speed. The effectiveness of this method is validated using both simulated signals from a fixed-axis gear pair under nonstationary conditions and experimental measurements from a 750-kW planetary wind turbine gearbox on a dynamometer test rig. The results demonstrate that the proposed algorithm can identify fault information from typical gearbox vibration measurements carried out in a resource-constrained industrial environment.

  10. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits in the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β^{*}). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased the average error bars by a factor of three to four. This allowed the calculation of β^{*} values and proved to be fundamental in the understanding of emittance evolution during the energy ramp.

  11. Optics measurement algorithms and error analysis for the proton energy frontier

    Science.gov (United States)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits in the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased the average error bars by a factor of three to four. This allowed the calculation of β* values and proved to be fundamental in the understanding of emittance evolution during the energy ramp.

  12. Toward a High Performance Tile Divide and Conquer Algorithm for the Dense Symmetric Eigenvalue Problem

    KAUST Repository

    Haidar, Azzam

    2012-01-01

    Classical solvers for the dense symmetric eigenvalue problem suffer from the first step, which involves a reduction to tridiagonal form that is dominated by the cost of accessing memory during the panel factorization. The solution is to reduce the matrix to a banded form, which then requires the eigenvalues of the banded matrix to be computed. The standard divide and conquer algorithm can be modified for this purpose. The paper combines this insight with tile algorithms that can be scheduled via a dynamic runtime system to multicore architectures. A detailed analysis of performance and accuracy is included. Performance improvements of 14-fold and 4-fold speedups are reported relative to LAPACK and Intel's Math Kernel Library.

  13. Performance evaluation of Genetic Algorithms on loading pattern optimization of PWRs

    International Nuclear Information System (INIS)

    Tombakoglu, M.; Bekar, K.B.; Erdemli, A.O.

    2001-01-01

    Genetic Algorithm (GA) based systems are used for search and optimization problems. There are several applications of GAs in the literature successfully applied to loading pattern optimization problems. In this study, we have selected the loading pattern optimization problem of a Pressurised Water Reactor (PWR). The main objective of this work is to evaluate the performance of Genetic Algorithm operators such as regional crossover, crossover and mutation, and selection, as well as the construction of the initial population and its size, for PWR loading pattern optimization problems. The performance of GA with antithetic variates is compared to traditional GA. Antithetic variates are used to generate the initial population, and their use with GA operators is also discussed. Finally, the results of multi-cycle optimization problems are discussed for an objective function taking into account cycle burn-up and discharge burn-up. (author)

  14. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    Science.gov (United States)

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Team performance measures for abnormal plant operations

    International Nuclear Information System (INIS)

    Montgomery, J.C.; Seaver, D.A.; Holmes, C.W.; Gaddy, C.D.; Toquam, J.L.

    1990-01-01

    In order to work effectively, control room crews need to possess well-developed team skills. Extensive research supports the notion that improved quality and effectiveness are possible when a group works together, rather than as individuals. The Nuclear Regulatory Commission (NRC) has recognized the role of team performance in plant safety and has attempted to evaluate licensee performance as part of audits, inspections, and reviews. However, reliable and valid criteria for team performance have not yet been adequately developed. The purpose of the present research was to develop such reliable and valid measures of team skills. Seven dimensions of team skill performance were developed on the basis of input from NRC operator licensing examiners and from the results of previous research and experience in the area. These dimensions included two-way communications, resource management, inquiry, advocacy, conflict resolution/decision-making, stress management, and team spirit. Several different types of rating formats were developed for use with these dimensions, including a modified Behaviorally Anchored Rating Scale (BARS) format and a Behavioral Frequency format. Following pilot-testing and revision, observer and control room crew ratings of team performance were obtained using 14 control room crews responding to simulator scenarios at a BWR and a PWR reactor. It is concluded, overall, that the Behavioral Frequency ratings appeared quite promising as a measure of team skills but that additional statistical analyses and other follow-up research are needed to refine several of the team skills dimensions and to make the scales fully functional in an applied setting

  16. Comparison of algorithms for determination of rotation measure and Faraday structure. I. 1100–1400 MHz

    International Nuclear Information System (INIS)

    Sun, X. H.; Akahori, Takuya; Anderson, C. S.; Farnes, J. S.; O’Sullivan, S. P.; Rudnick, L.; O’Brien, T.; Bell, M. R.; Bray, J. D.; Scaife, A. M. M.; Ideguchi, S.; Kumazaki, K.; Stepanov, R.; Stil, J.; Wolleben, M.; Takahashi, K.; Weeren, R. J. van

    2015-01-01

    Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as addressing the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM_wtd, (2) the separation Δϕ of two Faraday components, and (3) the reduced chi-squared χ_r^2. Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM_wtd but with significantly higher errors for Δϕ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δϕ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well
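
    Two of the figures of merit quoted above are simple to compute once the per-channel fit results are in hand; the sketch below shows the polarized-intensity-weighted RM and a reduced chi-squared of a Q/U fit, with the degrees-of-freedom bookkeeping and the input arrays as assumptions of the sketch.

```python
import numpy as np

def rm_weighted(rm, polarized_intensity):
    """Polarized-intensity-weighted mean rotation measure (RM_wtd)."""
    p = np.asarray(polarized_intensity, float)
    return float(np.sum(p * np.asarray(rm, float)) / np.sum(p))

def reduced_chi_squared(q_obs, u_obs, q_mod, u_mod, sigma, n_params):
    """chi^2_r of a Q/U model fit; the degrees-of-freedom counting here is
    an assumption of this sketch."""
    resid = np.concatenate([(np.asarray(q_obs) - np.asarray(q_mod)),
                            (np.asarray(u_obs) - np.asarray(u_mod))]) / sigma
    dof = resid.size - n_params
    return float(np.sum(resid ** 2) / dof)
```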

  17. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments were devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR Sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D-location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility dependent deviation from ideal data fusion correlation. Use of the vision sensor to determine the location of a sensor would also limit the possible locations and it does not allow for room dependence (facility dependent deviation) to generate a detector pseudo-location to be used for data analysis later. Using manually measured source location data, our algorithm predicted the offset detector location with an average calibration-difference of 20 cm from its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average

  18. An Algorithm for Glaucoma Screening in Clinical Settings and Its Preliminary Performance Profile

    Directory of Open Access Journals (Sweden)

    S-Farzad Mohammadi

    2013-01-01

    Full Text Available Purpose: To devise and evaluate a screening algorithm for glaucoma in clinical settings. Methods: Screening included examination of the optic disc for vertical cupping (≥0.4) and asymmetry (≥0.15), Goldmann applanation tonometry (≥21 mmHg, adjusted or unadjusted for central corneal thickness), and automated perimetry. In the diagnostic step, retinal nerve fiber layer imaging was performed using scanning laser polarimetry. Performance of the screening protocol was assessed in an eye hospital-based program in which 124 non-physician personnel aged 40 years or above were examined. A single ophthalmologist carried out the examinations and in equivocal cases, a glaucoma subspecialist's opinion was sought. Results: Glaucoma was diagnosed in six cases (prevalence 4.8%; 95% confidence interval, 0.01-0.09), of whom five were new. The likelihood of making a definite diagnosis of glaucoma for those who were screened positively was 8.5 times higher than the estimated baseline risk for the reference population; the positive predictive value of the screening protocol was 30%. Screening excluded 80% of the initial population. Conclusion: Application of a formal screening protocol (such as our algorithm or its equivalent) in clinical settings can be helpful in detecting new cases of glaucoma. Preliminary performance assessment of the algorithm showed its applicability and effectiveness in detecting glaucoma among subjects without any visual complaint.

  19. Artificial Bee Colony Algorithm for Transient Performance Augmentation of Grid Connected Distributed Generation

    Science.gov (United States)

    Chatterjee, A.; Ghoshal, S. P.; Mukherjee, V.

    In this paper, a conventional thermal power system equipped with an automatic voltage regulator, an IEEE type dual input power system stabilizer (PSS) PSS3B and an integral controlled automatic generation control loop is considered. A distributed generation (DG) system consisting of an aqua electrolyzer, photovoltaic cells, a diesel engine generator, and some other energy storage devices like a flywheel energy storage system and a battery energy storage system is modeled. This hybrid distributed system is connected to the grid. While integrating this DG with the conventional thermal power system, improved transient performance is noticed. Further improvement in the transient performance of this grid connected DG is observed with the usage of a superconducting magnetic energy storage device. The different tunable parameters of the proposed hybrid power system model are optimized by the artificial bee colony (ABC) algorithm. The optimal solutions offered by the ABC algorithm are compared with those offered by the genetic algorithm (GA). It is also revealed that the optimizing performance of ABC is better than that of GA for this specific application.

  20. On the performance of SART and ART algorithms for microwave imaging

    Science.gov (United States)

    Aprilliyani, Ria; Prabowo, Rian Gilang; Basari

    2018-02-01

    The development of advanced technology leads to changes in human lifestyle in today's society. One disadvantage is the rise of degenerative diseases such as cancers and tumors, in addition to common infectious diseases. Every year the number of cancer and tumor victims grows significantly, making these diseases one of the leading causes of death in the world. In its early stage, a cancer/tumor does not have definite symptoms, but it grows as abnormal tissue cells and damages normal tissue. Hence, early cancer detection is required. Common diagnostic modalities such as MRI, CT and PET are difficult to operate at home or in mobile environments such as an ambulance. Those modalities are also costly, unpleasant, complex, less safe and harder to move. Hence, this paper proposes a microwave imaging system due to its portability and low cost. In the current study, we address the performance of the simultaneous algebraic reconstruction technique (SART) algorithm applied to microwave imaging. In addition, the SART algorithm performance is compared with our previous work on the algebraic reconstruction technique (ART), in order to compare performance, especially in terms of reconstructed image quality. The results showed that by applying the SART algorithm to microwave imaging, a suspicious cancer/tumor can be detected with better image quality.
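
    For reference, the two update rules compared in the record can be sketched with a dense system matrix: ART sweeps the rays one at a time, while SART applies a single normalised correction per sweep. The matrix A, sinogram b, relaxation factor and iteration counts below are assumptions for illustration, not the study's settings.

```python
import numpy as np

def art(A, b, iters=10, lam=0.5):
    """Kaczmarz-style additive ART: sweep the rays (rows of A) one at a
    time; A is a dense system matrix here purely for illustration."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for ai, bi in zip(A, b):
            x += lam * (bi - ai @ x) / max(ai @ ai, 1e-12) * ai
    return x

def sart(A, b, iters=10, lam=0.5):
    """SART: one simultaneous correction per sweep, normalised by the row
    and column sums of A (assumed to have no all-zero rows or columns)."""
    x = np.zeros(A.shape[1])
    row_sum = np.maximum(A.sum(axis=1), 1e-12)
    col_sum = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(iters):
        x += lam * (A.T @ ((b - A @ x) / row_sum)) / col_sum
    return x
```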

  1. Performance analysis of multidimensional wavefront algorithms with application to deterministic particle transport

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor

  2. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 and with better efficiency. The performance and timing of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The o...

  3. Optimization of thermal performance of a smooth flat-plate solar air heater using teaching–learning-based optimization algorithm

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2015-12-01

    Full Text Available This paper presents the performance of teaching–learning-based optimization (TLBO algorithm to obtain the optimum set of design and operating parameters for a smooth flat plate solar air heater (SFPSAH. The TLBO algorithm is a recently proposed population-based algorithm, which simulates the teaching–learning process of the classroom. Maximization of thermal efficiency is considered as an objective function for the thermal performance of SFPSAH. The number of glass plates, irradiance, and the Reynolds number are considered as the design parameters and wind velocity, tilt angle, ambient temperature, and emissivity of the plate are considered as the operating parameters to obtain the thermal performance of the SFPSAH using the TLBO algorithm. The computational results have shown that the TLBO algorithm is better or competitive to other optimization algorithms recently reported in the literature for the considered problem.
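
    As a rough illustration of the optimizer named in the abstract, the following is a generic teaching-learning-based optimization loop for maximizing a black-box objective; the objective function, bounds, and population settings are placeholders, and the thermal-efficiency model of the solar air heater is not reproduced.

```python
# Generic TLBO loop (teacher phase + learner phase) for maximizing a
# black-box objective; all problem-specific details are placeholders.
import numpy as np

def tlbo_maximize(objective, lower, upper, pop_size=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(pop_size, lower.size))
    fit = np.array([objective(x) for x in pop])

    for _ in range(iters):
        # Teacher phase: move learners toward the best solution
        teacher = pop[fit.argmax()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        cand = np.clip(pop + rng.random(pop.shape) * (teacher - tf * mean), lower, upper)
        cand_fit = np.array([objective(x) for x in cand])
        improved = cand_fit > fit
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]

        # Learner phase: learn pairwise from a random peer
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = pop[i] - pop[j] if fit[i] > fit[j] else pop[j] - pop[i]
            trial = np.clip(pop[i] + rng.random(lower.size) * direction, lower, upper)
            f_trial = objective(trial)
            if f_trial > fit[i]:
                pop[i], fit[i] = trial, f_trial

    best = fit.argmax()
    return pop[best], fit[best]

# Example: maximize a simple concave objective over [0, 1]^3
best_x, best_f = tlbo_maximize(lambda x: -np.sum((x - 0.3) ** 2),
                               lower=[0, 0, 0], upper=[1, 1, 1])
print(best_x, best_f)
```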

  4. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.

  5. Performance measurement with fuzzy data envelopment analysis

    CERN Document Server

    Tavana, Madjid

    2014-01-01

    The intensity of global competition and ever-increasing economic uncertainties has led organizations to search for more efficient and effective ways to manage their business operations.  Data envelopment analysis (DEA) has been widely used as a conceptually simple yet powerful tool for evaluating organizational productivity and performance. Fuzzy DEA (FDEA) is a promising extension of the conventional DEA proposed for dealing with imprecise and ambiguous data in performance measurement problems. This book is the first volume in the literature to present the state-of-the-art developments and applications of FDEA. It is designed for students, educators, researchers, consultants and practicing managers in business, industry, and government with a basic understanding of the DEA and fuzzy logic concepts.

  6. IASI instrument: technical description and measured performances

    Science.gov (United States)

    Hébert, Ph.; Blumstein, D.; Buil, C.; Carlier, T.; Chalon, G.; Astruc, P.; Clauss, A.; Siméoni, D.; Tournier, B.

    2017-11-01

    IASI is an infrared atmospheric sounder. It will provide meteorologists and the scientific community with atmospheric spectra. The IASI system includes three instruments that will be mounted on the Metop satellite series, data processing software integrated in the EPS (EUMETSAT Polar System) ground segment, and a technical expertise centre implemented at CNES Toulouse. The instrument is composed of a Fourier transform spectrometer and an associated infrared imager. The optical configuration is based on a Michelson interferometer, and the interferograms are processed by an on-board digital processing subsystem, which performs the inverse Fourier transforms and the radiometric calibration. The infrared imager co-registers the IASI soundings with the AVHRR imager (AVHRR is another instrument on the Metop satellite). The presentation focuses on the architecture of the instrument, the description of the implemented technologies and the measured performance of the first flight model. CNES is leading the IASI program in association with EUMETSAT. The instrument prime contractor is ALCATEL SPACE.

  7. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem

    2013-04-01

    This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the reduction of the first stage. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix size against the state-of-the-art open source and commercial numerical software packages, namely LAPACK, compiled with optimized and multithreaded BLAS from MKL, as well as Intel MKL version 10.2. © 2013 ACM.

  8. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows different behaviour of the estimators' efficiency for the two parameters of the mixture, depending mainly upon their location in the admissible parametric space. Some operative suggestions conclude the paper.

  9. Prolonged exercise in type 1 diabetes: performance of a customizable algorithm to estimate the carbohydrate supplements to minimize glycemic imbalances.

    Science.gov (United States)

    Francescato, Maria Pia; Stel, Giuliana; Stenner, Elisabetta; Geat, Mario

    2015-01-01

    Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates well enough the supplemental carbohydrates for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). Carbohydrates oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1), respectively; p < 0.001), being estimated well enough by the algorithm (p = NS). Estimated carbohydrates requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.

  10. Prolonged exercise in type 1 diabetes: performance of a customizable algorithm to estimate the carbohydrate supplements to minimize glycemic imbalances.

    Directory of Open Access Journals (Sweden)

    Maria Pia Francescato

    Full Text Available Physical activity in patients with type 1 diabetes (T1DM) is hindered because of the high risk of glycemic imbalances. A recently proposed algorithm (named Ecres) estimates well enough the supplemental carbohydrates for exercises lasting one hour, but its performance for prolonged exercise requires validation. Nine T1DM patients (5M/4F; 35-65 years; HbA1c 54 ± 13 mmol · mol(-1)) performed, under free-life conditions, a 3-h walk at 30% heart rate reserve while insulin concentrations, whole-body carbohydrate oxidation rates (determined by indirect calorimetry) and supplemental carbohydrates (93% sucrose), together with glycemia, were measured every 30 min. Data were subsequently compared with the corresponding values estimated by the algorithm. No significant difference was found between the estimated insulin concentrations and the laboratory-measured values (p = NS). Carbohydrates oxidation rate decreased significantly with time (from 0.84 ± 0.31 to 0.53 ± 0.24 g · min(-1), respectively; p < 0.001), being estimated well enough by the algorithm (p = NS). Estimated carbohydrates requirements were practically equal to the corresponding measured values (p = NS), the difference between the two quantities amounting to -1.0 ± 6.1 g, independent of the elapsed exercise time (time effect, p = NS). Results confirm that Ecres provides a satisfactory estimate of the carbohydrates required to avoid glycemic imbalances during moderate intensity aerobic physical activity, opening the prospect of an intriguing method that could liberate patients from the fear of exercise-induced hypoglycemia.

  11. Electron Identification Performance and First Measurement of $W \to e + \nu$

    CERN Document Server

    Ueno, Rynichi

    2010-01-01

    The identification of electrons is important for the ATLAS experiment because electrons are present in many interactions of interest produced at the Large Hadron Collider. Deep knowledge of the detector, the electron identification algorithms, and the calibration techniques is crucial in order to accomplish this task. This thesis work presents a Monte Carlo study using electrons from the W → e + ν process to evaluate the performance of the ATLAS electromagnetic calorimeter. A significant number of electrons was produced in the early ATLAS collision runs at centre-of-mass energies of 900 GeV and 7 TeV between November 2009 and April 2010, and their properties are presented. Finally, a first measurement of the W → e + ν process with the ATLAS experiment was successfully accomplished with the first 1.0 nb⁻¹ of data at the 7 TeV collision energy, and the properties of the W candidates are also detailed.

  12. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person’s heart rate. One of them uses a mobile phone’s built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement; it requires only a mobile phone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger and the application on the mobile phone starts capturing and analyzing frames from the camera. Heart rate can then be calculated by analyzing the average red component values of frames, taken by the phone camera, that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm, which is more efficient than the reviewed algorithms.
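
    A small sketch of the core calculation described above: estimate the heart rate from the mean red-channel value of successive frames by band-pass filtering and peak counting. The frame stream is mocked with a synthetic 72 bpm pulse; the frame rate and filter band are assumptions, and this is not the authors' proposed algorithm.

```python
# Heart rate from per-frame mean red values: band-pass, find peaks, count.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fps = 30.0                                  # assumed camera frame rate
t = np.arange(0, 15, 1 / fps)               # 15 s of frames
rng = np.random.default_rng(0)
# Mocked per-frame mean red values: 72 bpm (1.2 Hz) pulse plus noise
red_means = 150 + 3 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.5, t.size)

# Band-pass 0.7-3.5 Hz (42-210 bpm) to isolate the pulsatile component
b, a = butter(3, [0.7 / (fps / 2), 3.5 / (fps / 2)], btype="band")
pulse = filtfilt(b, a, red_means)

peaks, _ = find_peaks(pulse, distance=fps / 3.5)     # at most 210 bpm
bpm = 60.0 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fps)
print(f"Estimated heart rate: {bpm:.0f} bpm")        # close to 72
```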

  13. Measuring the performance of maintenance service outsourcing.

    Science.gov (United States)

    Cruz, Antonio Miguel; Rincon, Adriana Maria Rios; Haugan, Gregory L

    2013-01-01

    The aims of this paper are (1) to identify the characteristics of maintenance service providers that directly impact maintenance service quality, using 18 independent covariables; (2) to quantify the change in risk these covariables present to service quality, measured in terms of equipment turnaround time (TAT). A survey was applied to every maintenance service provider (n = 19) for characterization purposes. The equipment inventory was characterized, and the TAT variable recorded and monitored for every work order of each service provider (N = 1,025). Finally, the research team conducted a statistical analysis to accomplish the research objectives. The results of this study offer strong empirical evidence that the most influential variables affecting the quality of maintenance service performance are the following: type of maintenance, availability of spare parts in the country, user training, technological complexity of the equipment, distance between the company and the hospital, and the number of maintenance visits performed by the company. The strength of the results obtained by the Cox model is supported by the measure R²(p,e) = 0.57, corresponding to R(p,e) = 0.75. Thus, the model explained 57% of the variation in equipment TAT, with a moderately high positive correlation between the dependent variable (TAT) and the independent variables.

  14. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    Science.gov (United States)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented-reality applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  15. Effects of directional microphone and adaptive multichannel noise reduction algorithm on cochlear implant performance.

    Science.gov (United States)

    Chung, King; Zeng, Fan-Gang; Acker, Kyle N

    2006-10-01

    Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.

  16. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to a 100 kHz event rate and with a very small latency, of the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...

  17. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to a 100 kHz event rate and with a very small latency, of the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  18. Performance Evaluation of Machine Learning Algorithms for Urban Pattern Recognition from Multi-spectral Satellite Images

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2014-03-01

    Full Text Available In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium (Landsat ETM, TM and MSS and very high resolution (WorldView-2, Quickbird, Ikonos multi-spectral satellite images is presented. The study aims at exploring the potential of machine learning algorithms in the context of an object-based image analysis and to thoroughly test the algorithm’s performance under varying conditions to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest neighbor, tree-based, function-based, have been selected and implemented on a free and open-source basis. Particular focus is given to assess the generalization ability of machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector and the effect of image segmentation on the classification accuracy is evaluated.
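
    A hedged sketch of this kind of comparison using scikit-learn stand-ins for the four classifier families; the object features and labels are random placeholders, and the OpenCV-based, object-based image analysis pipeline used in the study is not reproduced.

```python
# Cross-validated comparison of four classifier families on placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                      # placeholder object features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # placeholder urban/non-urban labels

classifiers = {
    "Normal Bayes": GaussianNB(),
    "K Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Random Trees": RandomForestClassifier(n_estimators=100, random_state=0),
    "Support Vector Machine": SVC(kernel="rbf", C=1.0, gamma="scale"),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:24s} mean accuracy = {scores.mean():.3f}")
```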

  19. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  20. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 and with better efficiency. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and co...

  1. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    Science.gov (United States)

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions that allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm, 'xSPECT', from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8 : 1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower-count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values, whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be obtained from images reconstructed with the xSPECT algorithm without a CCF. However, the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
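
    The manual SUV conversion mentioned in the last sentence can be sketched as follows, assuming voxel values are calibrated activity concentrations in Bq/mL, body-weight SUV, and a Tc-99m half-life of about 6.01 h; the variable names and decay-correction convention are illustrative assumptions rather than the vendor's implementation.

```python
# Manual body-weight SUV conversion from a calibrated voxel value (Bq/mL).
import math

def suv_bw(voxel_bq_per_ml: float,
           injected_activity_mbq: float,
           patient_weight_kg: float,
           minutes_since_injection: float,
           half_life_min: float = 360.6) -> float:   # Tc-99m, approx. 6.01 h
    """Body-weight SUV = concentration / (decay-corrected activity / weight)."""
    decayed_mbq = injected_activity_mbq * math.exp(
        -math.log(2) * minutes_since_injection / half_life_min)
    # Convert MBq to Bq and kg to g so units cancel to a dimensionless SUV
    # (assuming 1 g of tissue is roughly 1 mL).
    return voxel_bq_per_ml / (decayed_mbq * 1e6 / (patient_weight_kg * 1e3))

print(suv_bw(voxel_bq_per_ml=8000, injected_activity_mbq=740,
             patient_weight_kg=75, minutes_since_injection=120))
```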

  2. Algorithmic Foundation of Spectral Rarefaction for Measuring Satellite Imagery Heterogeneity at Multiple Spatial Scales

    Science.gov (United States)

    Rocchini, Duccio

    2009-01-01

    Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
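
    A minimal sketch of rarefaction applied to pixel Digital Numbers: the expected number of distinct DN classes in a random subsample of n pixels, computed analytically from the DN frequencies. The image is a random placeholder, not the imagery analysed in the paper.

```python
# Rarefaction curve for pixel DN values: expected richness vs. sample size.
import numpy as np
from scipy.special import gammaln

def expected_richness(dn_counts, n):
    """E[S_n]: expected number of distinct DN classes in a sample of n pixels."""
    N = dn_counts.sum()
    # P(class i absent from the sample) = C(N - N_i, n) / C(N, n), in log space
    log_absent = (gammaln(N - dn_counts + 1) - gammaln(N - dn_counts - n + 1)
                  - gammaln(N + 1) + gammaln(N - n + 1))
    p_absent = np.where(N - dn_counts >= n, np.exp(log_absent), 0.0)
    return float(np.sum(1.0 - p_absent))

image = np.random.default_rng(1).integers(0, 64, size=(100, 100))
counts = np.bincount(image.ravel())
counts = counts[counts > 0]
for n in (10, 100, 1000, image.size):
    print(n, round(expected_richness(counts, n), 1))
```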

  3. Frameworks for Performing on Cloud Automated Software Testing Using Swarm Intelligence Algorithm: Brief Survey

    Directory of Open Access Journals (Sweden)

    Mohammad Hossain

    2018-04-01

    Full Text Available This paper surveys cloud-based automated testing software that is able to perform black-box testing, white-box testing, as well as unit and integration testing as a whole. In this paper, we discuss a few of the available automated software testing frameworks on the cloud. These frameworks are found to be more efficient and cost effective because they execute test suites over a distributed cloud infrastructure. The effectiveness of one framework was attributed to having a module that accepts manual test cases from users and prioritizes them accordingly. Software testing, in general, accounts for as much as 50% of the total effort of a software development project. To lessen this effort, one of the frameworks discussed in this paper uses swarm intelligence algorithms: the Ant Colony Algorithm for complete path coverage to minimize time, and Bee Colony Optimization (BCO) for regression testing to ensure backward compatibility.

  4. Performance evaluation of grid-enabled registration algorithms using bronze-standards

    CERN Document Server

    Glatard, T; Montagnat, J

    2006-01-01

    Evaluating registration algorithms is difficult due to the lack of gold standard in most clinical procedures. The bronze standard is a real-data based statistical method providing an alternative registration reference through a computationally intensive image database registration procedure. We propose in this paper an efficient implementation of this method through a grid-interfaced workflow enactor enabling the concurrent processing of hundreds of image registrations in a couple of hours only. The performances of two different grid infrastructures were compared. We computed the accuracy of 4 different rigid registration algorithms on longitudinal MRI images of brain tumors. Results showed an average subvoxel accuracy of 0.4 mm and 0.15 degrees in rotation.

  5. Enhancement of tracking performance in electro-optical system based on servo control algorithm

    Science.gov (United States)

    Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu

    2017-10-01

    Modern electro-optical surveillance and reconnaissance systems require a tracking capability to obtain exact images of a target, or to accurately direct the line of sight to a target that is moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function, so as to minimize overshoot in the tracking motion and not miss the target. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques by creating a system model of a DIRCM, simulating the same environment, and validating the performance on the actual equipment.

  6. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    Science.gov (United States)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartile ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% to total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
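
    A hedged sketch of the quartile/IQR filtering principle described above: within one chamber measurement, concentration jumps outside an IQR-based band around the point-to-point changes are attributed to ebullition and the remainder to diffusion. The thresholding details differ from the authors' adaptive R-script; this Python version is for illustration only.

```python
# Split a high-resolution CH4 concentration trace into diffusion- and
# ebullition-derived contributions using a quartile/IQR threshold.
import numpy as np

def separate_fluxes(conc_ppm, dt_s):
    """Return (diffusion, ebullition) concentration-rise rates in ppm/s."""
    dc = np.diff(conc_ppm)
    q1, q3 = np.percentile(dc, [25, 75])
    iqr = q3 - q1
    diffusion_like = (dc >= q1 - iqr) & (dc <= q3 + iqr)
    duration = dt_s * len(dc)
    return dc[diffusion_like].sum() / duration, dc[~diffusion_like].sum() / duration

# Synthetic trace: slow diffusive rise plus two bubble (ebullition) events
rng = np.random.default_rng(2)
trace = np.cumsum(0.002 + rng.normal(0, 0.0005, 600))
trace[200:] += 0.15
trace[450:] += 0.10
print(separate_fluxes(trace, dt_s=1.0))   # (diffusion ppm/s, ebullition ppm/s)
```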

  7. A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements

    International Nuclear Information System (INIS)

    Yuan, Y B; Piao, W Y; Xu, J B

    2007-01-01

    The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by the cascaded Butterworth filters. The approximation accuracy will be improved with the increase of the number of the cascaded filters. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so that it has considerable computational efficiency and it is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements

  8. A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements

    Science.gov (United States)

    Yuan, Y. B.; Piao, W. Y.; Xu, J. B.

    2007-07-01

    The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by the cascaded Butterworth filters. The approximation accuracy will be improved with the increase of the number of the cascaded filters. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so that it has considerable computational efficiency and it is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements.
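
    A sketch of the approach described in these two records: a zero-phase Butterworth low-pass (applied forward and backward with filtfilt) standing in for the recursive approximation of the 1-D Gaussian filter, applied separably along rows and columns of a surface height map. The cutoff and filter order are illustrative choices, not the cited cascaded design.

```python
# Zero-phase Butterworth low-pass as an approximate separable Gaussian
# mean-line filter for a 3-D surface roughness map.
import numpy as np
from scipy.signal import butter, filtfilt

def gaussian_like_filter(profile, cutoff_wavelength_mm, spacing_mm, order=3):
    """Zero-phase Butterworth low-pass acting on one roughness profile."""
    fs = 1.0 / spacing_mm                       # spatial sampling frequency
    fc = 1.0 / cutoff_wavelength_mm             # spatial cutoff frequency
    b, a = butter(order, fc / (fs / 2.0))
    return filtfilt(b, a, profile)              # forward-backward: no phase distortion

def filter_surface(z, cutoff_mm, spacing_mm):
    """Separable application: filter every row, then every column."""
    rows = np.apply_along_axis(gaussian_like_filter, 1, z, cutoff_mm, spacing_mm)
    return np.apply_along_axis(gaussian_like_filter, 0, rows, cutoff_mm, spacing_mm)

z = np.random.default_rng(3).normal(scale=0.5, size=(256, 256))   # synthetic surface
mean_surface = filter_surface(z, cutoff_mm=0.8, spacing_mm=0.01)
roughness = z - mean_surface                                       # waviness removed
```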

  9. Detection of honeycomb cell walls from measurement data based on Harris corner detection algorithm

    Science.gov (United States)

    Qin, Yan; Dong, Zhigang; Kang, Renke; Yang, Jie; Ayinde, Babajide O.

    2018-06-01

    A honeycomb core is a discontinuous material with a thin-wall structure—a characteristic that makes accurate surface measurement difficult. This paper presents a cell wall detection method based on the Harris corner detection algorithm using laser measurement data. The vertexes of honeycomb cores are recognized with two different methods: one method is the reduction of data density, and the other is the optimization of the threshold of the Harris corner detection algorithm. Each cell wall is then identified in accordance with the neighboring relationships of its vertexes. Experiments were carried out for different types and surface shapes of honeycomb cores, where the proposed method was proved effective in dealing with noise due to burrs and/or deformation of cell walls.
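
    A minimal sketch of the vertex-detection step with the Harris corner detector, here via OpenCV on a synthetic height map; the conversion of laser measurement data to an 8-bit image and the response threshold are assumptions, and the grouping of vertexes into cell walls described in the paper is not reproduced.

```python
# Harris corner detection on a synthetic height map to find vertex candidates.
import cv2
import numpy as np

# Synthetic stand-in for a laser-scanned height map with one raised block
height_map = np.zeros((200, 200), np.float32)
height_map[50:150, 50:150] = 1.0

# Scale to 8-bit, then compute the Harris response (blockSize=2, ksize=3, k=0.04)
img = cv2.normalize(height_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
response = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)

# Keep strong responses as candidate vertex pixels
vertices = np.argwhere(response > 0.01 * response.max())
print(f"{len(vertices)} candidate vertex pixels found")
```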

  10. Genetic Algorithm for Opto-thermal Skin Hydration Depth Profiling Measurements

    Science.gov (United States)

    Cui, Y.; Xiao, Perry; Imhof, R. E.

    2013-09-01

    Stratum corneum is the outermost skin layer, and the water content in stratum corneum plays a key role in skin cosmetic properties as well as skin barrier functions. However, to measure the water content, especially the water concentration depth profile, within stratum corneum is very difficult. Opto-thermal emission radiometry, or OTTER, is a promising technique that can be used for such measurements. In this paper, a study on stratum corneum hydration depth profiling by using a genetic algorithm (GA) is presented. The pros and cons of a GA compared against other inverse algorithms such as neural networks, maximum entropy, conjugate gradient, and singular value decomposition will be discussed first. Then, it will be shown how to use existing knowledge to optimize a GA for analyzing the opto-thermal signals. Finally, these latest GA results on hydration depth profiling of stratum corneum under different conditions, as well as on the penetration profiles of externally applied solvents, will be shown.

  11. Performance Analysis of Different Backoff Algorithms for WBAN-Based Emerging Sensor Networks.

    Science.gov (United States)

    Khan, Pervez; Ullah, Niamat; Ali, Farman; Ullah, Sana; Hong, Youn-Sik; Lee, Ki-Young; Kim, Hoon

    2017-03-02

    The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedure of IEEE 802.15.6 Medium Access Control (MAC) protocols for the Wireless Body Area Network (WBAN) uses an Alternative Binary Exponential Backoff (ABEB) procedure. The backoff algorithm plays an important role in avoiding collisions in wireless networks. The Binary Exponential Backoff (BEB) algorithm used in different standards does not obtain the optimum performance due to enormous Contention Window (CW) gaps induced by packet collisions. Therefore, the IEEE 802.15.6 CSMA/CA has developed the ABEB procedure to avoid the large CW gaps upon each collision. However, the ABEB algorithm may lead to a high collision rate (as the CW size is incremented on every alternative collision) and poor utilization of the channel due to the gap between subsequent CWs. To minimize the gap between subsequent CW sizes, we adopted the Prioritized Fibonacci Backoff (PFB) procedure. This procedure leads to a smooth and gradual increase in the CW size after each collision, which eventually decreases the waiting time, and the contending node can access the channel promptly with little delay; ABEB, by contrast, leads to irregular and fluctuating CW values, which eventually increase collisions and waiting time before a re-transmission attempt. We analytically approach this problem by employing a Markov chain to design the PFB scheme for the CSMA/CA procedure of the IEEE 802.15.6 standard. The performance of the PFB algorithm is compared against the ABEB function of WBAN CSMA/CA. The results show that the PFB procedure adopted for IEEE 802.15.6 CSMA/CA outperforms the ABEB procedure.
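
    A toy comparison of contention-window growth under binary exponential backoff versus a Fibonacci-based schedule, to illustrate why a Fibonacci sequence gives a smoother CW increase after successive collisions; the CWmin/CWmax values are illustrative and are not taken from the IEEE 802.15.6 user-priority tables.

```python
# Contention-window growth after n collisions: exponential vs. Fibonacci.
def beb_cw(collisions, cw_min=8, cw_max=256):
    """CW doubles after each collision, capped at cw_max."""
    return min(cw_min * 2 ** collisions, cw_max)

def fibonacci_cw(collisions, cw_min=8, cw_max=256):
    """CW follows a Fibonacci-scaled schedule: slower, smoother growth."""
    a, b = 1, 1
    for _ in range(collisions):
        a, b = b, a + b
    return min(cw_min * a, cw_max)

for n in range(7):
    print(f"collisions={n}: BEB CW={beb_cw(n):4d}  Fibonacci CW={fibonacci_cw(n):4d}")
```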

  12. Performance Analysis of Different Backoff Algorithms for WBAN-Based Emerging Sensor Networks

    Directory of Open Access Journals (Sweden)

    Pervez Khan

    2017-03-01

    Full Text Available The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedure of IEEE 802.15.6 Medium Access Control (MAC) protocols for the Wireless Body Area Network (WBAN) uses an Alternative Binary Exponential Backoff (ABEB) procedure. The backoff algorithm plays an important role in avoiding collisions in wireless networks. The Binary Exponential Backoff (BEB) algorithm used in different standards does not obtain the optimum performance due to enormous Contention Window (CW) gaps induced by packet collisions. Therefore, the IEEE 802.15.6 CSMA/CA has developed the ABEB procedure to avoid the large CW gaps upon each collision. However, the ABEB algorithm may lead to a high collision rate (as the CW size is incremented on every alternative collision) and poor utilization of the channel due to the gap between subsequent CWs. To minimize the gap between subsequent CW sizes, we adopted the Prioritized Fibonacci Backoff (PFB) procedure. This procedure leads to a smooth and gradual increase in the CW size after each collision, which eventually decreases the waiting time, and the contending node can access the channel promptly with little delay; ABEB, by contrast, leads to irregular and fluctuating CW values, which eventually increase collisions and waiting time before a re-transmission attempt. We analytically approach this problem by employing a Markov chain to design the PFB scheme for the CSMA/CA procedure of the IEEE 802.15.6 standard. The performance of the PFB algorithm is compared against the ABEB function of WBAN CSMA/CA. The results show that the PFB procedure adopted for IEEE 802.15.6 CSMA/CA outperforms the ABEB procedure.

  13. Improvement of the matrix effect compensation in active neutron measurement by simulated annealing algorithm (June 2009)

    International Nuclear Information System (INIS)

    Raoux, A. C.; Loridon, J.; Mariani, A.; Passard, C.

    2009-01-01

    Active neutron measurements, such as the Differential Die-Away (DDA) technique involving a pulsed neutron generator, are widely applied to determine the fissile content of waste packages. Unfortunately, the main drawback of such techniques comes from the lack of knowledge of the waste matrix composition. Thus, the matrix effect correction for the DDA measurement is an essential improvement in the field of fissile material content determination. Different solutions have been developed to compensate for the effect of the matrix on the neutron measurement interpretation. In this context, this paper describes an innovative matrix correction method we have developed with the goal of increasing the accuracy of the matrix effect correction and reducing the measurement time. The implementation of this method is based on the analysis of the raw signal with an optimisation algorithm called the simulated annealing algorithm. This algorithm needs a reference database of Multi-Channel Scaling (MCS) spectra to fit the raw signal. The construction of the MCS library involves a learning phase to define and acquire the DDA signals. This database has been provided by a set of active signals from experimental matrices (mock-up waste drums of 118 litres) recorded in a specific device dedicated to neutron measurement research and development at the Nuclear Measurement Laboratory of CEA-Cadarache, called PROMETHEE 6. The simulated annealing algorithm is applied to make use of the effect of the matrices on the total active signal of the DDA measurement. Furthermore, as this algorithm is directly applied to the raw active signal, it is very useful when active background contributions cannot be easily estimated and removed. Most of the cases tested during this work, which represents the feasibility phase of the method, are within a 4% agreement interval with the expected experimental value. Moreover, one can notice that without any compensation of the matrix effect, the classical DDA prompt
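
    A generic simulated-annealing sketch, included only to illustrate the optimisation step named above: searching for the combination of reference MCS spectra that best fits a measured raw signal. The linear-mixture cost function and cooling schedule are assumptions for illustration; the actual CEA method and its MCS library are not reproduced.

```python
# Generic simulated annealing applied to a toy spectral-mixture fit.
import numpy as np

def simulated_annealing(cost, x0, step=0.05, t0=1.0, cooling=0.995, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, best = x0.copy(), x0.copy()
    c, c_best, t = cost(x0), cost(x0), t0
    for _ in range(iters):
        cand = np.clip(x + rng.normal(0, step, x.size), 0, None)  # keep weights >= 0
        c_cand = cost(cand)
        if c_cand < c or rng.random() < np.exp((c - c_cand) / max(t, 1e-12)):
            x, c = cand, c_cand
            if c < c_best:
                best, c_best = x.copy(), c
        t *= cooling
    return best, c_best

# Toy fit: measured signal as an unknown mixture of three reference spectra
rng = np.random.default_rng(1)
references = rng.random((3, 64))                 # stand-in MCS reference spectra
true_w = np.array([0.2, 0.5, 0.3])
measured = true_w @ references + rng.normal(0, 0.01, 64)
cost = lambda w: np.sum((w @ references - measured) ** 2)
print(simulated_annealing(cost, x0=np.ones(3) / 3))
```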

  14. Improvement of the matrix effect compensation in active neutron measurement by simulated annealing algorithm (June 2009)

    Energy Technology Data Exchange (ETDEWEB)

    Raoux, A. C.; Loridon, J.; Mariani, A.; Passard, C. [French Atomic Energy Commission, DEN, Cadarache, F-3108 Saint-Paul-Lez-Durance (France)

    2009-07-01

    Active neutron measurements such as the Differential Die-Away (DDA) technique involving pulsed neutron generator, are widely applied to determine the fissile content of waste packages. Unfortunately, the main drawback of such techniques is coming from the lack of knowledge of the waste matrix composition. Thus, the matrix effect correction for the DDA measurement is an essential improvement in the field of fissile material content determination. Different solutions have been developed to compensate the effect of the matrix on the neutron measurement interpretation. In this context, this paper describes an innovative matrix correction method we have developed with the goal of increasing the accuracy of the matrix effect correction and reducing the measurement time. The implementation of this method is based on the analysis of the raw signal with an optimisation algorithm called the simulated annealing algorithm. This algorithm needs a reference data base of Multi-Channel Scaling (MCS) spectra, to fit the raw signal. The construction of the MCS library involves a learning phase to define and acquire the DDA signals. This database has been provided by a set of active signals from experimental matrices (mock-up waste drums of 118 litres) recorded in a specific device dedicated to neutron measurement research and development of the Nuclear Measurement Laboratory of CEA-Cadarache, called PROMETHEE 6. The simulated annealing algorithm is applied to make use of the effect of the matrices on the total active signal of DDA measurement. Furthermore, as this algorithm is directly applied to the raw active signal, it is very useful when active background contributions can not be easily estimated and removed. Most of the cases tested during this work which represents the feasibility phase of the method, are within a 4% agreement interval with the expected experimental value. Moreover, one can notice that without any compensation of the matrix effect, the classical DDA prompt

  15. A new algorithm combining geostatistics with the surrogate data approach to increase the accuracy of comparisons of point radiation measurements with cloud measurements

    Science.gov (United States)

    Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.

    2009-04-01

    algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith pointing measurement (which together with the wind produces a line measurement), five zenith pointing measurements, a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well and the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith-pointing measurement.

  16. Clustering for Different Scales of Measurement - the Gap-Ratio Weighted K-means Algorithm

    OpenAIRE

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    This paper describes a method for clustering data that are spread out over large regions and whose dimensions are on different scales of measurement. Such an algorithm was developed to implement a robotics application consisting of sorting and storing objects in an unsupervised way. The toy dataset used to validate this application consists of Lego bricks of different shapes and colors. The uncontrolled lighting conditions together with the use of RGB color features, respectively involve data...

  17. Human performance assessment: methods and measures

    International Nuclear Information System (INIS)

    Andresen, Gisle; Droeivoldsmo, Asgeir

    2000-10-01

    The Human Error Analysis Project (HEAP) was initiated in 1994. The aim of the project was to acquire insights on how and why cognitive errors occur when operators are engaged in problem solving in advanced integrated control rooms. Since human error had not been studied in the HAlden Man-Machine LABoratory (HAMMLAB) before, it was also necessary to carry out research in methodology. In retrospect, it is clear that much of the methodological work is relevant to human-machine research in general, and not only to research on human error. The purpose of this report is, therefore, to give practitioners and researchers an overview of the methodological parts of HEAP. The scope of the report is limited to methods used throughout the data acquisition process, i.e., data-collection methods, data-refinement methods, and measurement methods. The data-collection methods include various types of verbal protocols, simulator logs, questionnaires, and interviews. Data-refinement methods involve different applications of the Eyecon system, a flexible data-refinement tool, and small computer programs used for rearranging, reformatting, and aggregating raw-data. Measurement methods involve assessment of diagnostic behaviour, erroneous actions, complexity, task/system performance, situation awareness, and workload. The report concludes that the data-collection methods are generally both reliable and efficient. The data-refinement methods, however, should be easier to use in order to facilitate explorative analyses. Although the series of experiments provided an opportunity for measurement validation, there are still uncertainties connected to several measures, due to their reliability still being unknown. (Author). 58 refs.,7 tabs

  18. Distributed control software of high-performance control-loop algorithm

    CERN Document Server

    Blanc, D

    1999-01-01

    The majority of industrial cooling and ventilation plants require the control of complex processes. All these processes are highly important for the operation of the machines. The stability and reliability of these processes are leading factors determining the quality of the service provided. The control system architecture and software structure are likewise required to have high dynamic performance and robust behaviour. Intelligent systems based on PID or RST controllers are used for their high level of stability and accuracy. The design and tuning of these complex controllers require the dynamic model of the plant to be known (generally obtained by identification) and the desired performance of the various control loops to be specified in order to achieve good performance. The concept of a distributed control algorithm software provides full automation facilities with well-adapted functionality and good performance, giving methodology, means and tools to master the dynamic process optimization an...
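
    A minimal discrete PID controller sketch, included to illustrate the kind of control-loop algorithm discussed above; the gains, sample time, and first-order plant used for the demonstration are arbitrary illustrative values, not parameters of the cooling and ventilation plants described.

```python
# Discrete PID controller driving a first-order plant toward a setpoint.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Simulate a first-order process: dy/dt = (-y + u) / tau
pid, y, dt, tau = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1), 0.0, 0.1, 5.0
for step in range(300):
    u = pid.update(setpoint=20.0, measurement=y)
    y += dt * (-y + u) / tau
print(f"output after 30 s: {y:.2f}")             # settles near the 20.0 setpoint
```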

  19. Performance measurement of plate fin heat exchanger by exploration: ANN, ANFIS, GA, and SA

    OpenAIRE

    A.K. Gupta; P. Kumar; R.K. Sahoo; A.K. Sahu; S.K. Sarangi

    2017-01-01

    An experimental investigation was conducted on a counter-flow plate-fin compact heat exchanger using offset strip fins under different mass flow rates. The training, testing, and validation sets of data were collected by conducting experiments. Next, an artificial neural network merged with a Genetic Algorithm (GA) was utilized to measure the performance of the plate-fin compact heat exchanger. The main aim of the present research is to measure the performance of the plate-fin compact heat exchanger and to provide full exp...

  20. An administrative data validation study of the accuracy of algorithms for identifying rheumatoid arthritis: the influence of the reference standard on algorithm performance.

    Science.gov (United States)

    Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen

    2014-06-23

    We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
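
    The reported accuracy measures follow directly from the 2×2 table of algorithm-flagged versus physician-reported RA status, as in the sketch below; the counts are illustrative, chosen only to roughly echo the headline figures, and are not the study data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic table.
def diagnostic_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Illustrative 2x2 table for 7500 charts with 69 RA cases (not the study data)
sens, spec, ppv, npv = diagnostic_accuracy(tp=54, fp=15, fn=15, tn=7416)
print(f"sensitivity={sens:.2f} specificity={spec:.3f} PPV={ppv:.2f} NPV={npv:.3f}")
```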

  1. Performance measurements of hybrid PIN diode arrays

    International Nuclear Information System (INIS)

    Jernigan, J.G.; Arens, J.F.; Collins, T.; Herring, J.; Shapiro, S.L.; Wilburn, C.D.

    1990-05-01

    We report on the successful effort to develop hybrid PIN diode arrays and to demonstrate their potential as components of vertex detectors. Hybrid pixel arrays have been fabricated by the Hughes Aircraft Co. by bump bonding readout chips developed by Hughes to an array of PIN diodes manufactured by Micron Semiconductor Inc. These hybrid pixel arrays were constructed in two configurations. One array format having 10 x 64 pixels, each 120 μm square, and the other format having 256 x 256 pixels, each 30 μm square. In both cases, the thickness of the PIN diode layer is 300 μm. Measurements of detector performance show that excellent position resolution can be achieved by interpolation. By determining the centroid of the charge cloud which spreads charge into a number of neighboring pixels, a spatial resolution of a few microns has been attained. The noise has been measured to be about 300 electrons (rms) at room temperature, as expected from KTC and dark current considerations, yielding a signal-to-noise ratio of about 100 for minimum ionizing particles. 4 refs., 13 figs

  2. Comparative performance analysis of the artificial-intelligence-based thermal control algorithms for the double-skin building

    International Nuclear Information System (INIS)

    Moon, Jin Woo

    2015-01-01

    This study aimed at developing artificial-intelligence-(AI)-theory-based optimal control algorithms for improving the indoor temperature conditions and heating energy efficiency of double-skin buildings. For this, one conventional rule-based and four AI-based algorithms were developed, including artificial neural network (ANN), fuzzy logic (FL), and adaptive neuro-fuzzy inference system (ANFIS) algorithms, for operating the surface openings of the double skin and the heating system. A numerical computer simulation method incorporating the matrix laboratory (MATLAB) and the transient systems simulation (TRNSYS) software was used for the comparative performance tests. The analysis results revealed that advanced thermal-environment comfort and stability can be provided by the AI-based algorithms. In particular, the FL and ANFIS algorithms were superior to the ANN algorithm in terms of providing better thermal conditions. The ANN-based algorithm, however, proved its potential to be the most energy-efficient and stable strategy among the four AI-based algorithms. It can be concluded that the optimal algorithm can be determined differently according to the major focus of the strategy. If a comfortable thermal condition is the principal interest, then the FL or ANFIS algorithm could be the proper solution, and if energy saving for space heating and system operation stability are the main concerns, then the ANN-based algorithm may be applicable. - Highlights: • Integrated control algorithms were developed for the heating system and surface openings. • AI theories were applied to the control algorithms. • ANN, FL, and ANFIS were the applied AI theories. • Comparative performance tests were conducted using computer simulation. • The AI algorithms provided a superior indoor temperature environment.

  3. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    Science.gov (United States)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is one of the most basic measurements in machine vision applications such as automatic driving, unmanned aerial vehicles (UAVs), and robots, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 processor and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual camera calibration, image matching and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the soundness of the system's algorithms have been tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth measurement requirements while preserving image resolution.
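
As a concrete illustration of the depth-calculation step mentioned above, the sketch below converts a disparity map to depth via the standard pinhole relation Z = f·B/d; the focal length and baseline are placeholder values, not those of the AM5728 setup.

```python
# Minimal sketch of the depth-calculation step in a binocular system:
# depth Z = f * B / d for focal length f (pixels), baseline B (m), disparity d (pixels).
# The calibration values below are placeholders, not those of the described system.
import numpy as np

def disparity_to_depth(disparity, focal_px=800.0, baseline_m=0.06):
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0            # zero disparity means no match was found
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

print(disparity_to_depth([[32.0, 48.0], [0.0, 64.0]]))  # depths in metres
```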

  4. On e-business strategy planning and performance evaluation: An adaptive algorithmic managerial approach

    Directory of Open Access Journals (Sweden)

    Alexandra Lipitakis

    2017-07-01

    Full Text Available A new e-business strategy planning and performance evaluation scheme based on adaptive algorithmic modelling techniques is presented. The effect of the financial and non-financial performance of organizations on e-business strategy planning is investigated. The relationships between the four strategic planning parameters are examined, the directions of these relationships are given, and six additional basic components are also considered. A new conceptual model has been constructed for e-business strategic planning and performance evaluation, and an adaptive algorithmic modelling approach is presented. The new adaptive algorithmic modelling scheme, which includes eleven dynamic modules, can be optimized and used effectively in the e-business strategic planning and strategic planning evaluation of various e-services in very large organizations and businesses. A synoptic statistical analysis and comparative numerical results for the cases of the UK and Greece are given. The proposed e-business models indicate how e-business strategic planning may affect financial and non-financial performance in businesses and organizations by exploring whether models used for strategy planning can be applied to e-business planning and whether these models would be valid in different environments. Qualitative research methods were used to test a predetermined number of hypotheses. The proposed models have been tested in the UK and Greece, and the conclusions, including numerical results and statistical analyses, indicated existing relationships between the considered dependent and independent variables. The proposed e-business models are expected to contribute to the e-business strategy planning of businesses and organizations, and managers should consider applying these models to their e-business strategy planning to improve their companies’ performances. This research study brings together elements of e

  5. A simplified algorithm for measuring erythrocyte deformability dispersion by laser ektacytometry

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, S Yu; Yurchuk, Yu S [Department of Physics, M.V. Lomonosov Moscow State University (Russian Federation)

    2015-08-31

    The possibility of measuring the dispersion of red blood cell deformability by laser diffractometry in shear flow (ektacytometry) is analysed theoretically. A diffraction pattern parameter is found which is sensitive to the dispersion of erythrocyte deformability and, to a lesser extent, to such parameters as the level of the scattered light intensity, the shape of red blood cells, the concentration of red blood cells in the suspension, the geometric dimensions of the experimental setup, etc. A new algorithm is proposed for measuring erythrocyte deformability dispersion using laser ektacytometry data. (laser applications in medicine)

  6. Near Zero Energy House (NZEH) Design Optimization to Improve Life Cycle Cost Performance Using Genetic Algorithm

    Science.gov (United States)

    Latief, Y.; Berawi, M. A.; Koesalamwardi, A. B.; Supriadi, L. S. R.

    2018-03-01

    A Near Zero Energy House (NZEH) is a housing building that provides energy efficiency by using renewable energy technologies and passive house design. Currently, the costs for an NZEH are quite high due to the expensive equipment and materials for solar panels, insulation, fenestration and other renewable energy technology. Therefore, a study to obtain the optimum design of an NZEH is necessary. The aim of the optimum design is to achieve an economical life cycle cost performance for the NZEH. One of the optimization methods that can be utilized is the Genetic Algorithm, which provides a way to obtain the optimum design from combinations of NZEH design variables. This paper discusses the study to identify the optimum design of an NZEH that provides optimum life cycle cost performance using a Genetic Algorithm. In this study, an experiment through extensive design simulations of a one-level house model was conducted. As a result, the study provides the optimum design from combinations of NZEH design variables, namely building orientation, window-to-wall ratio, and glazing type, that would maximize the energy generated by the photovoltaic panels. Hence, the design would support an optimum life cycle cost performance of the house.
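
The sketch below illustrates the kind of genetic-algorithm search over the three design variables named in the abstract (building orientation, window-to-wall ratio, glazing type); the fitness function is a crude stand-in for the authors' life-cycle-cost simulation, and all values are illustrative.

```python
# Hedged sketch of a genetic algorithm over the three NZEH design variables.
# The fitness function is a placeholder, not the life-cycle-cost model of the study.
import random

ORIENTATIONS = [0, 90, 180, 270]        # degrees from north (illustrative choices)
WWR          = [0.2, 0.3, 0.4, 0.5]     # window-to-wall ratio
GLAZING      = ["single", "double", "low-e"]

def fitness(design):                     # placeholder: higher is better (lower cost)
    o, w, g = design
    return -(abs(o - 180) * 0.01 + w * 10 + {"single": 5, "double": 3, "low-e": 1}[g])

def random_design():
    return (random.choice(ORIENTATIONS), random.choice(WWR), random.choice(GLAZING))

def crossover(a, b):                     # pick each gene from one of the two parents
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(d, rate=0.1):                 # occasionally resample a gene from its pool
    pools = (ORIENTATIONS, WWR, GLAZING)
    return tuple(random.choice(p) if random.random() < rate else x for x, p in zip(d, pools))

population = [random_design() for _ in range(30)]
for _ in range(50):                      # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(20)]
print("best design:", max(population, key=fitness))
```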

  7. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    Science.gov (United States)

    2017-01-05

    Authors: Yu-Ren Chien, Daryush D. Mehta, Jón Guðnason, Matías Zañartu, and Thomas F. Quatieri. Abstract: Glottal inverse filtering aims to

  8. Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence Performance

    Directory of Open Access Journals (Sweden)

    Narinder Singh

    2017-01-01

    Full Text Available A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to combine the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer to draw on both variants' strengths. Some unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.

  9. Performance assessment of electric power generations using an adaptive neural network algorithm and fuzzy DEA

    Energy Technology Data Exchange (ETDEWEB)

    Javaheri, Zahra

    2010-09-15

    Modeling, evaluating and analyzing the performance of Iranian thermal power plants is the main goal of this study, which is based on multivariate analysis methods. These methods include fuzzy DEA and an adaptive neural network algorithm. First, indicators are determined; then data is collected; next, ranking and efficiency values are obtained by fuzzy DEA. The case study is thermal power plants. Given that the investment required to establish a power plant is very high, that power plant maintenance is expensive, and that the use of fossil fuels affects the environment, the optimal output of existing power plants is important.

  10. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves the range accuracy significantly.

  11. Inertial measurement unit–based iterative pose compensation algorithm for low-cost modular manipulator

    Directory of Open Access Journals (Sweden)

    Yunhan Lin

    2016-01-01

    Full Text Available End-effector pose correction and compensation are necessary means of realizing accurate motion control of a manipulator. In this article, we first establish the kinematic model and error model of the modular manipulator (WUST-ARM), and then discuss the measurement methods and precision of the inertial measurement unit sensor. The inertial measurement unit sensor is mounted on the end-effector of the modular manipulator to obtain the real-time pose of the end-effector. Finally, a new inertial measurement unit-based iterative pose compensation algorithm is proposed. Applying this algorithm in a pose compensation experiment on a modular manipulator composed of low-cost rotation joints, the results show that the inertial measurement unit achieves higher precision in the static state; after a brief delay, when the end-effector moves to the target point, it feeds an accurate error compensation angle back to the control system, and after compensation the precision errors of the roll, pitch, and yaw angles reach 0.05°, 0.01°, and 0.27°, respectively. This proves that the low-cost method provides a new solution for improving the end-effector pose of low-cost modular manipulators.

  12. Simulation of 4-turn algorithms for reconstructing lattice optic functions from orbit measurements

    International Nuclear Information System (INIS)

    Koscielniak, S.; Iliev, A.

    1994-06-01

    We describe algorithms for reconstructing tune, closed-orbit, beta-function and phase advance from four individual turns of beam orbit acquisition data, under the assumption of coherent, almost linear and uncoupled betatron oscillations. To estimate the beta-function at, and phase advance between, position monitors, we require at least one anchor location consisting of two monitors separated by a drift. The algorithms were submitted to a Monte Carlo analysis to find the likely measurement accuracy of the optics functions in the KAON Factory Booster ring racetrack lattice, assuming beam position monitors with surveying and reading errors, and assuming an imperfect lattice with gradient and surveying errors. Some of the results of this study are reported. (author)

  13. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel to a capacitor, and a resistor connected in parallel to an inductor. The adequacy of the model is determined by a simple artificial-intelligence function, which is applied to the output of the Levenberg-Marquardt module. Through iterative model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
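
A minimal sketch of the parameter-optimization step only: fitting a hypothetical R0 + (R1 parallel C1) circuit to a measured spectrum with a complex least-squares, Levenberg-Marquardt-type solver; the automatic model-generation and model-modification loop of the algorithm is not shown.

```python
# Sketch of complex least-squares fitting of one candidate equivalent circuit
# (R0 in series with R1 || C1). Circuit and values are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def model(params, omega):
    r0, r1, c1 = params
    return r0 + r1 / (1 + 1j * omega * r1 * c1)

def residuals(params, omega, z_meas):
    diff = model(params, omega) - z_meas
    return np.concatenate([diff.real, diff.imag])   # stack real and imaginary parts

omega = 2 * np.pi * np.logspace(0, 5, 60)
z_true = model([10.0, 100.0, 1e-6], omega)
z_meas = z_true + np.random.normal(scale=0.1, size=omega.size)  # noisy "measurement"

fit = least_squares(residuals, x0=[1.0, 50.0, 1e-7], args=(omega, z_meas), method="lm")
print("estimated R0, R1, C1:", fit.x)
```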

  14. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    Science.gov (United States)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of the centroids in the K-Means algorithm directly affects the quality of the clustering results. Determining centroids with random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster. The GenClust algorithm obtains 50% of its chromosomes through deterministic calculations and 50% from randomly generated numbers. This study modifies the GenClust algorithm so that 100% of the chromosomes are obtained through deterministic calculations. The result of this study is a performance comparison, expressed as Mean Square Error, of centroid determination for the K-Means method using the GenClust method, the modified GenClust method, and classic K-Means.
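
For reference, a sketch of the comparison metric: K-Means started from random centroids versus K-Means started from pre-computed seeds, scored by mean squared error; the deterministic seeding here is a simple stand-in, not the GenClust chromosome construction.

```python
# Sketch: compare random-init K-Means with seeded K-Means via mean squared error.
# The "deterministic" seeding below is a crude placeholder, not the GenClust procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
k = 4

def mse(model, data):
    centers = model.cluster_centers_[model.labels_]
    return np.mean(np.sum((data - centers) ** 2, axis=1))

random_km = KMeans(n_clusters=k, init="random", n_init=1, random_state=1).fit(X)

seed_idx = np.linspace(0, len(X) - 1, k, dtype=int)   # evenly spaced seed points
seeded_km = KMeans(n_clusters=k, init=X[seed_idx], n_init=1).fit(X)

print("MSE, random init:", mse(random_km, X))
print("MSE, seeded init:", mse(seeded_km, X))
```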

  15. A Novel Supervisory Control Algorithm to Improve the Performance of a Real-Time PV Power-Hardware-In-Loop Simulator with Non-RTDS

    Directory of Open Access Journals (Sweden)

    Dae-Jin Kim

    2017-10-01

    Full Text Available A programmable direct current (DC) power supply with Real-time Digital Simulator (RTDS)-based photovoltaic (PV) Power Hardware-In-the-Loop (PHIL) simulators has been used to improve the control algorithms and reliability of PV inverters. This paper proposes a supervisory control algorithm for a PV PHIL simulator with a non-RTDS device, which is an alternative solution to a high-cost PHIL simulator. However, when such a simulator running the conventional algorithm used in an RTDS is connected to a PV inverter, the output remains in a transient state, which makes it impossible to evaluate the performance of the PV inverter. Therefore, the proposed algorithm controls the voltage and current target values according to constant voltage (CV) and constant current (CC) modes to overcome the limitations of the Computing Unit and DC power supply, and it also uses a multi-rate system to account for the characteristics of each component of the simulator. A mathematical model of a PV system, a programmable DC power supply, an isolated DC measurement device, and a Computing Unit are integrated to form a real-time processing simulator. Performance tests are carried out with a commercial PV inverter and prove the superiority of the proposed algorithm over the conventional algorithm.
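
A much-simplified sketch of the CV/CC supervisory idea described above: drive the supply in constant-voltage mode until the emulated PV current limit is reached, then clamp the current; the limits and the decision rule are placeholders, not the authors' controller.

```python
# Hedged sketch of a CV/CC supervisory decision for one control cycle.
# Setpoints and limits are illustrative placeholders, not parameters from the paper.
def supervisory_step(v_meas, i_meas, v_setpoint=350.0, i_limit=8.5):
    """Return (mode, voltage_setpoint, current_limit) for the DC supply."""
    if i_meas < i_limit:
        return "CV", v_setpoint, i_limit   # regulate voltage; current free below the limit
    return "CC", v_meas, i_limit           # clamp current; voltage follows the load

print(supervisory_step(v_meas=340.0, i_meas=6.2))   # -> CV mode
print(supervisory_step(v_meas=310.0, i_meas=8.5))   # -> CC mode
```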

  16. Parallel performance of TORT on the CRAY J90: Model and measurement

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of the code

  17. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as neutron coincidence and multiplicity counters currently used by the International Atomic Energy Agency, have proven to be effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations exist within the algorithms that are used to process and analyze the detected signals from these counters that were initially developed approximately 20 years ago based on the technology and computing power that was available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms that are based on fundamental physics principles that should lead to the development of more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, connecting the two main types of analysis techniques used to quantify the data (Shift register analysis and Feynman variance to mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed

  18. Recursive Pyramid Algorithm-Based Discrete Wavelet Transform for Reactive Power Measurement in Smart Meters

    Directory of Open Access Journals (Sweden)

    Mahin K. Atiq

    2013-09-01

    Full Text Available Measurement of the active, reactive, and apparent power is one of the most fundamental tasks of smart meters in energy systems. Recently, a number of studies have employed the discrete wavelet transform (DWT) for power measurement in smart meters. The most common way to implement the DWT is the pyramid algorithm; however, this is not feasible for practical DWT computation because it requires either log N cascaded filters or O(N) word-size memory storage for an N-point input signal. Both solutions are too expensive for practical smart meter applications. It is proposed that the recursive pyramid algorithm is more suitable for smart meter implementation because it requires word-size storage of only L × Log(N-L), where L is the length of the filter. We also investigated the effect of varying different system parameters, such as the sampling rate, dc offset, phase offset, linearity error in current and voltage sensors, analog-to-digital converter resolution, and number of harmonics in a non-sinusoidal system, on the reactive energy measurement using the DWT. The error analysis is depicted in the form of the absolute difference between the measured and the true value of the reactive energy.
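
For context, the sketch below shows the classic pyramid algorithm the abstract contrasts with the recursive variant: a Haar DWT computed as a cascade of filter-and-decimate stages (the recursive pyramid algorithm would interleave these stages sample by sample to bound memory; that scheduling is not shown here).

```python
# Minimal sketch of the classic pyramid algorithm: Haar DWT as a cascade of
# filter-and-downsample stages. Signal and level count are illustrative.
import numpy as np

def haar_dwt(signal, levels):
    h = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass (scaling) filter
    g = np.array([1.0, -1.0]) / np.sqrt(2)   # high-pass (wavelet) filter
    approx, details = np.asarray(signal, float), []
    for _ in range(levels):
        lo = np.convolve(approx, h)[1::2]    # filter, then downsample by 2
        hi = np.convolve(approx, g)[1::2]
        details.append(hi)
        approx = lo
    return approx, details

t = np.arange(256) / 256
v = np.sin(2 * np.pi * 50 * t)               # stand-in for a sampled voltage waveform
a, d = haar_dwt(v, levels=4)
print([len(x) for x in d], len(a))           # coefficient counts per level
```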

  19. A two-domain real-time algorithm for optimal data reduction: A case study on accelerator magnet measurements

    CERN Document Server

    Arpaia, P; Inglese, V

    2010-01-01

    A real-time data reduction algorithm, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, thereby improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement syste...

  20. A two-domain real-time algorithm for optimal data reduction: a case study on accelerator magnet measurements

    International Nuclear Information System (INIS)

    Arpaia, Pasquale; Buzio, Marco; Inglese, Vitaliano

    2010-01-01

    A real-time algorithm of data reduction, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g. time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, by improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement systems for testing magnets of the Large Hadron Collider at the European Organization for Nuclear Research (CERN) are reported
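
A hedged sketch of the second technique described above: a lossy reduction step that stores a sample only when it departs from the last stored value by more than the estimated random-noise level, so the allowed loss and the noise tend to cancel; the threshold and signal are illustrative, not taken from the LHC measurement chain.

```python
# Sketch of a noise-band (deadband) data reduction step: keep a sample only when it
# differs from the last kept sample by more than the noise level. Values are synthetic.
import numpy as np

def noise_band_reduce(samples, noise_rms):
    kept_idx, kept_val = [0], [samples[0]]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - kept_val[-1]) > noise_rms:
            kept_idx.append(i)
            kept_val.append(x)
    return np.array(kept_idx), np.array(kept_val)

t = np.linspace(0, 1, 2000)
flux = np.sin(2 * np.pi * 1.5 * t) + np.random.normal(scale=0.01, size=t.size)
idx, val = noise_band_reduce(flux, noise_rms=0.03)
print("compression ratio: %.1f" % (len(flux) / len(idx)))
```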

  1. 45 CFR 305.40 - Penalty performance measures and levels.

    Science.gov (United States)

    2010-10-01

    ...HUMAN SERVICES PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.40 Penalty performance measures and levels. (a) There are three performance measures for which States must...

  2. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
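
The decision procedure can be pictured as below: predict execution time for every rectangular arrangement of P processors from a simple compute-plus-communication model and keep the fastest; the cost constants are illustrative, not the calibrated values from the hypercube experiments.

```python
# Hedged sketch of a model-based configuration choice: enumerate pr x pc factorizations
# of P and pick the one with the smallest predicted time. Cost terms are illustrative.
def predicted_time(n, pr, pc, t_flop=1e-8, t_lat=1e-4, t_word=1e-6):
    compute = (n ** 3) * t_flop / (pr * pc)                  # balanced share of the work
    rows, cols = n / pr, n / pc
    comm = (pr + pc) * t_lat + (rows + cols) * n * t_word    # startup + transfer costs
    return compute + comm

def best_configuration(n, p):
    shapes = [(pr, p // pr) for pr in range(1, p + 1) if p % pr == 0]
    return min(shapes, key=lambda s: predicted_time(n, *s))

for n in (100, 200, 400):
    print(n, "->", best_configuration(n, p=512))
```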

  3. Performance Evaluation of Block Acquisition and Tracking Algorithms Using an Open Source GPS Receiver Platform

    Science.gov (United States)

    Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.

    2011-01-01

    Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust, or is even impossible, due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits easy access to algorithmic libraries and the possibility of integrating more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.
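
For orientation, the sketch below shows an FFT-based parallel code-phase search, the usual acquisition baseline that block-correlator schemes are compared against; it is not the specific block-correlator technique of the paper, and the signal is a synthetic ±1 code in noise.

```python
# Illustrative FFT-based code-phase acquisition (circular correlation over all phases).
# Synthetic random code standing in for a C/A-like spreading sequence.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)
true_shift = 317
rx = np.roll(code, true_shift) + rng.normal(scale=1.0, size=code.size)

# One FFT/IFFT pair evaluates the correlation at every code phase at once.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
print("estimated code phase:", int(np.argmax(np.abs(corr))), "(true:", true_shift, ")")
```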

  4. Performance quantification of clustering algorithms for false positive removal in fMRI by ROC curves

    Directory of Open Access Journals (Sweden)

    André Salles Cunha Peres

    Full Text Available Abstract Introduction Functional magnetic resonance imaging (fMRI) is a non-invasive technique that allows the detection of specific cerebral functions in humans based on hemodynamic changes. The contrast changes are about 5%, making visual inspection impossible. Thus, statistical strategies are applied to infer which brain region is engaged in a task. However, traditional methods such as the general linear model and cross-correlation use voxel-wise calculations, introducing many false positives. So, in this work we tested post-processing clustering algorithms to reduce the false positives. Methods In this study, three clustering algorithms (the hierarchical cluster, k-means and self-organizing maps) were tested and compared for false-positive removal in the post-processing of cross-correlation analyses. Results Our results showed that the hierarchical cluster presented the best performance in removing false positives in fMRI, being 2.3 times more accurate than k-means, and 1.9 times more accurate than self-organizing maps. Conclusion The hierarchical cluster presented the best performance in false-positive removal because it uses the inconsistency coefficient threshold, while k-means and self-organizing maps require an a priori cluster number (the numbers of centroids and neurons); thus, the hierarchical cluster avoids clustering scattered voxels, as the inconsistency coefficient threshold allows only voxels that are within a minimum distance of some cluster to be clustered.
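
The property highlighted in the conclusion can be sketched as follows: hierarchical clustering cut by an inconsistency-coefficient threshold (SciPy's 'inconsistent' criterion) needs no preset cluster count, unlike k-means; the data and threshold here are synthetic.

```python
# Sketch: hierarchical clustering with an inconsistency-coefficient cut versus k-means,
# which requires the cluster count in advance. Data and threshold are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
voxels = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 2)) for c in ((0, 0), (3, 3))])

Z = linkage(voxels, method="average")
hier_labels = fcluster(Z, t=1.15, criterion="inconsistent")   # no cluster count needed
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(voxels)

print("hierarchical clusters found:", len(np.unique(hier_labels)))
print("k-means clusters requested :", len(np.unique(km_labels)))
```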

  5. Improvement of Networked Control Systems Performance Using a New Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Seyed Ali Mesbahifard

    2014-07-01

    Full Text Available Networked control systems are control systems in which controllers and plants are connected via a telecommunication network. One of the most important challenges in networked control systems is the problem of network time delay; increasing time delay may severely degrade control system performance. Another important issue in networked control systems is security. Since unknown parties may access the network, especially the Internet, the probability of serious attacks such as deception attacks is greater; therefore, methods which decrease time delay and increase system immunity are desired. In this paper, a symmetric encryption scheme with low data volume is proposed to counter deception attacks. This method offers high security and lower time delay than other encryption algorithms and can improve control system performance under deception attacks.

  6. Measurement of quartic boson couplings at the international linear collider and study of novel particle flow algorithms

    International Nuclear Information System (INIS)

    Krstonosic, P.

    2008-02-01

    In the absence of the Standard Model Higgs boson, the interaction among the gauge bosons becomes strong at high energies (∼1 TeV) and influences the couplings between them. Trilinear and quartic gauge boson vertices are characterized by a set of couplings that are expected to deviate from the Standard Model at energies significantly lower than the energy scale of New Physics. Estimation of the precision with which we can measure quartic couplings at the International Linear Collider (ILC) is one of the two topics covered by this thesis. There are several measurement scenarios for quartic couplings; the one we have chosen is weak boson scattering. Since real data taking is, unfortunately, still far in the future, running options for the machine and their impact on the results were also investigated. The analysis was done in a model-independent way and precision limits were extracted. Interpretation of the results in terms of possible scenarios beyond the Standard Model is then performed by combining the accumulated knowledge about all signal processes. One of the key requirements for achieving the measurement results in the form presented is to reach the detector performance goals. This is possible only with the 'Particle Flow' reconstruction approach. The performance limit of this approach and the various contributions to it are discussed in detail. A novel algorithm for photon reconstruction is developed, and the performance of this concept is compared with more traditional approaches. (orig.)

  7. Measurement of quartic boson couplings at the international linear collider and study of novel particle flow algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Krstonosic, P.

    2008-02-15

    In the absence of the Standard Model Higgs boson, the interaction among the gauge bosons becomes strong at high energies (∼1 TeV) and influences the couplings between them. Trilinear and quartic gauge boson vertices are characterized by a set of couplings that are expected to deviate from the Standard Model at energies significantly lower than the energy scale of New Physics. Estimation of the precision with which we can measure quartic couplings at the International Linear Collider (ILC) is one of the two topics covered by this thesis. There are several measurement scenarios for quartic couplings; the one we have chosen is weak boson scattering. Since real data taking is, unfortunately, still far in the future, running options for the machine and their impact on the results were also investigated. The analysis was done in a model-independent way and precision limits were extracted. Interpretation of the results in terms of possible scenarios beyond the Standard Model is then performed by combining the accumulated knowledge about all signal processes. One of the key requirements for achieving the measurement results in the form presented is to reach the detector performance goals. This is possible only with the 'Particle Flow' reconstruction approach. The performance limit of this approach and the various contributions to it are discussed in detail. A novel algorithm for photon reconstruction is developed, and the performance of this concept is compared with more traditional approaches. (orig.)

  8. Finding the magnetic size distribution of magnetic nanoparticles from magnetization measurements via the iterative Kaczmarz algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Daniel, E-mail: frank.wiekhorst@ptb.de; Eberbeck, Dietmar; Steinhoff, Uwe; Wiekhorst, Frank

    2017-06-01

    The characterization of the size distribution of magnetic nanoparticles is an important step in evaluating their suitability for many different applications such as magnetic hyperthermia, drug targeting or Magnetic Particle Imaging. We present a new method based on the iterative Kaczmarz algorithm that enables the reconstruction of the size distribution from magnetization measurements without a priori knowledge of the distribution form. We show in simulations that the method is capable of very exact reconstructions of a given size distribution and is highly robust to noise contamination. Moreover, we applied the method to the well-characterized FeraSpin™ series and obtained results that were in accordance with the literature and with boundary conditions based on their synthesis via separation of the original suspension FeraSpin R. It is therefore concluded that this method is a powerful and intuitive tool for reconstructing particle size distributions from magnetization measurements. - Highlights: • A new method for fitting the size distribution of magnetic nanoparticles is proposed. • The employed Kaczmarz algorithm does not need a priori input or eigenvalue regularization. • The method is highly robust to noise contamination. • Size distributions are reconstructed from simulated and measured magnetization curves.
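
A minimal sketch of the Kaczmarz iteration applied to this setting: recover a size-distribution weight vector w from magnetization data M ≈ K·w, where each row of K is a Langevin-type response; the kernel, field grid and sizes below are synthetic stand-ins, and the non-negativity projection is an added assumption, not part of the plain Kaczmarz scheme.

```python
# Sketch of Kaczmarz row-action iterations for M ≈ K w. Kernel and data are synthetic.
import numpy as np

def kaczmarz(K, m, n_sweeps=200):
    w = np.zeros(K.shape[1])
    for _ in range(n_sweeps):
        for a_i, m_i in zip(K, m):                 # cycle through the measurement rows
            w += (m_i - a_i @ w) / (a_i @ a_i) * a_i
        w = np.clip(w, 0.0, None)                  # added assumption: keep w non-negative
    return w

sizes = np.linspace(5e-9, 25e-9, 20)               # candidate core diameters (m)
fields = np.linspace(1e3, 5e5, 60)                 # applied fields (A/m), synthetic grid
xi = np.outer(fields, sizes ** 3) * 1e20           # crude argument of the Langevin term
K = 1.0 / np.tanh(xi) - 1.0 / xi                   # Langevin function L(xi)
w_true = np.exp(-((sizes - 1.2e-8) ** 2) / (2 * (3e-9) ** 2))
m = K @ w_true + np.random.normal(scale=1e-3, size=fields.size)

print(np.round(kaczmarz(K, m), 3))
```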

  9. A Network-Based Algorithm for Clustering Multivariate Repeated Measures Data

    Science.gov (United States)

    Koslovsky, Matthew; Arellano, John; Schaefer, Caroline; Feiveson, Alan; Young, Millennia; Lee, Stuart

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Astronaut Corps is a unique occupational cohort for which vast amounts of measures data have been collected repeatedly in research or operational studies pre-, in-, and post-flight, as well as during multiple clinical care visits. In exploratory analyses aimed at generating hypotheses regarding physiological changes associated with spaceflight exposure, such as impaired vision, it is of interest to identify anomalies and trends across these expansive datasets. Multivariate clustering algorithms for repeated measures data may help parse the data to identify homogeneous groups of astronauts that have higher risks for a particular physiological change. However, available clustering methods may not be able to accommodate the complex data structures found in NASA data, since the methods often rely on strict model assumptions, require equally-spaced and balanced assessment times, cannot accommodate missing data or differing time scales across variables, and cannot process continuous and discrete data simultaneously. To fill this gap, we propose a network-based, multivariate clustering algorithm for repeated measures data that can be tailored to fit various research settings. Using simulated data, we demonstrate how our method can be used to identify patterns in complex data structures found in practice.

  10. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    Science.gov (United States)

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion for dissociating neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) on the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform.
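
A sketch of the third step described above: canonical correlation analysis between a subset of separated EEG components and the ECoG channels, with the mean canonical correlation used as the shared-information score; all signals here are random surrogates rather than recorded data.

```python
# Sketch: score shared information between EEG components and ECoG channels via CCA.
# The signals are random surrogates with a common latent source, not recorded data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(5000, 3))                          # shared "neural" sources
eeg_subset = latent @ rng.normal(size=(3, 8)) + 0.5 * rng.normal(size=(5000, 8))
ecog = latent @ rng.normal(size=(3, 32)) + 0.5 * rng.normal(size=(5000, 32))

cca = CCA(n_components=3).fit(eeg_subset, ecog)
u, v = cca.transform(eeg_subset, ecog)
canon_corrs = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(3)]
print("mean canonical correlation:", np.mean(canon_corrs))
```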

  11. The performance of phylogenetic algorithms in estimating haplotype genealogies with migration.

    Science.gov (United States)

    Salzburger, Walter; Ewing, Greg B; Von Haeseler, Arndt

    2011-05-01

    Genealogies estimated from haplotypic genetic data play a prominent role in various biological disciplines in general and in phylogenetics, population genetics and phylogeography in particular. Several software packages have specifically been developed for the purpose of reconstructing genealogies from closely related, and hence, highly similar haplotype sequence data. Here, we use simulated data sets to test the performance of traditional phylogenetic algorithms, neighbour-joining, maximum parsimony and maximum likelihood in estimating genealogies from nonrecombining haplotypic genetic data. We demonstrate that these methods are suitable for constructing genealogies from sets of closely related DNA sequences with or without migration. As genealogies based on phylogenetic reconstructions are fully resolved, but not necessarily bifurcating, and without reticulations, these approaches outperform widespread 'network' constructing methods. In our simulations of coalescent scenarios involving panmictic, symmetric and asymmetric migration, we found that phylogenetic reconstruction methods performed well, while the statistical parsimony approach as implemented in TCS performed poorly. Overall, parsimony as implemented in the PHYLIP package performed slightly better than other methods. We further point out that we are not making the case that widespread 'network' constructing methods are bad, but that traditional phylogenetic tree finding methods are applicable to haplotypic data and exhibit reasonable performance with respect to accuracy and robustness. We also discuss some of the problems of converting a tree to a haplotype genealogy, in particular that it is nonunique.

  12. High Job Performance Through Co-Developing Performance Measures With Employees

    NARCIS (Netherlands)

    Groen, Bianca A.C.; Wilderom, Celeste P.M.; Wouters, Marc

    2017-01-01

    According to various studies, employee participation in the development of performance measures can increase job performance. This study focuses on how this job performance elevation occurs. We hypothesize that when employees have participated in the development of performance measures, they

  13. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Grootjans, Willem [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology; Meeuwis, Antoi P.W.; Gotthardt, Martin; Visser, Eric P. [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Slump, Cornelis H. [Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Geus-Oei, Lioe-Fee de [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology

    2016-07-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back projection (FBP), OSEM and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4
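
For reference, a sketch of the NEMA-style figures used above, computed from region-of-interest statistics; the formulas follow the common definitions and the numbers are illustrative, not measurements from this phantom study.

```python
# Sketch of image-quality metrics from ROI statistics: contrast recovery coefficient (CRC),
# percentage background variability (N%), and CNR = CRC / N%. Values are illustrative.
def image_quality_metrics(mean_hot, mean_bkg, sd_bkg, true_activity_ratio):
    crc = 100.0 * (mean_hot / mean_bkg - 1.0) / (true_activity_ratio - 1.0)  # contrast recovery, %
    n_pct = 100.0 * sd_bkg / mean_bkg                                        # background variability, %
    return crc, n_pct, crc / n_pct

crc, n_pct, cnr = image_quality_metrics(mean_hot=72.0, mean_bkg=10.0,
                                        sd_bkg=0.9, true_activity_ratio=8.0)
print(f"CRC = {crc:.1f}%, N% = {n_pct:.1f}%, CNR = {cnr:.1f}")
```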

  14. CONSIDERATIONS ON MEASURING PERFORMANCE AND MARKET STRUCTURE

    OpenAIRE

    Spiridon Cosmin Alexandru

    2011-01-01

    According to neoclassical theory, the relationship between price and marginal cost, together with market structures and the methods for determining the performance of a firm or an industry, deviate from the model of perfect competition. Assessing performance involves making comparisons against a reference level, which can be a standard value or a statistical value such as a national or regional average, a homogeneous group average, or an average value at the market level. Modern theor...

  15. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    Science.gov (United States)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial-resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
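
A hedged sketch of fully constrained least-squares unmixing for a single pixel: non-negative abundances that sum to one, obtained by augmenting the endmember matrix with a heavily weighted sum-to-one row and solving with non-negative least squares; the endmember spectra below are synthetic stand-ins for the S, V and D endmembers used on WELD data.

```python
# Sketch of fully constrained least-squares (FCLS) unmixing for one pixel.
# Endmember spectra and the pixel are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(endmembers, pixel, delta=1e3):
    """endmembers: (bands, n_classes); pixel: (bands,). Returns abundance fractions."""
    A = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    b = np.append(pixel, delta)                     # heavily weighted sum-to-one constraint
    abundances, _ = nnls(A, b)
    return abundances

E = np.array([[0.30, 0.05, 0.02],                   # columns: substrate, vegetation, dark object
              [0.35, 0.45, 0.03],                   # rows: reflectance in a few bands
              [0.40, 0.30, 0.04],
              [0.45, 0.10, 0.05]])
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
print(np.round(fcls_unmix(E, pixel), 3))            # expected ~ [0.6, 0.3, 0.1]
```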

  16. Measuring individual work performance: Identifying and selecting indicators

    NARCIS (Netherlands)

    Koopmans, L.; Bernaards, C.M.; Hildebrandt, V.H.; de Vet, H.C.W.; van der Beek, A.J.

    2014-01-01

    BACKGROUND: Theoretically, individual work performance (IWP) can be divided into four dimensions: task performance, contextual performance, adaptive performance, and counterproductive work behavior. However, there is no consensus on the indicators used to measure these dimensions.

  17. Measuring individual work performance: identifying and selecting indicators

    NARCIS (Netherlands)

    Koopmans, L.; Bernaards, C.M.; Hildebrandt, V.H.; Vet, H.C de; Beek, A.J. van der

    2014-01-01

    BACKGROUND: Theoretically, individual work performance (IWP) can be divided into four dimensions: task performance, contextual performance, adaptive performance, and counterproductive work behavior. However, there is no consensus on the indicators used to measure these dimensions. OBJECTIVE: This

  18. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents for users of such data before system hardware and software commitments are institutionalized.

  19. Measuring Student Performance in General Organic Chemistry

    Science.gov (United States)

    Austin, Ara C.; Ben-Daat, Hagit; Zhu, Mary; Atkinson, Robert; Barrows, Nathan; Gould, Ian R.

    2015-01-01

    Student performance in general organic chemistry courses is determined by a wide range of factors including cognitive ability, motivation and cultural capital. Previous work on cognitive factors has tended to focus on specific areas rather than exploring performance across all problem types and cognitive skills. In this study, we have categorized…

  20. The application of cat swarm optimisation algorithm in classifying small loan performance

    Science.gov (United States)

    Kencana, Eka N.; Kiswanti, Nyoman; Sari, Kartika

    2017-10-01

    It is common for a banking system to analyse the feasibility of a credit application before its approval. Although this process is done carefully, there is no guarantee that all credits will be repaid smoothly. This study aimed to assess the accuracy of the Cat Swarm Optimisation (CSO) algorithm in classifying the performance of small loans approved by Bank Rakyat Indonesia (BRI), one of several public banks in Indonesia. Data collected from 200 lenders were used in this work. The data matrix consists of 9 independent variables that represent the profile of the credit, and one categorical dependent variable that reflects the credit's performance. Prior to the analyses, the data was divided into two subsets of equal size. An ordinal logistic regression (OLR) procedure applied to the first subset showed that 3 of the 9 independent variables, i.e. the amount of credit, the credit period, and the lender's monthly income, significantly affect credit performance. Using the significant parameter estimates from the OLR procedure as initial values for the observations in the second subset, the CSO procedure was run. This procedure gave a classification accuracy of 76 percent for credit performance, better than the 64 percent resulting from the OLR procedure.