WorldWideScience

Sample records for model matching techniques

  1. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to
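    The matching task described in this record can be illustrated with a deliberately small sketch: pairing the activities of two process models greedily by label similarity (Jaccard over word sets). Real matchers combine far richer signals; the labels and threshold below are invented for the example.

```python
# Toy process model matcher: greedy best-first pairing of activity labels
# by Jaccard similarity of their word sets. Illustrative only.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def match_activities(model1, model2, threshold=0.3):
    """Greedily pair activities whose label similarity clears the threshold."""
    pairs = sorted(((jaccard(a, b), a, b) for a in model1 for b in model2),
                   reverse=True)
    matched, used1, used2 = [], set(), set()
    for score, a, b in pairs:
        if score >= threshold and a not in used1 and b not in used2:
            matched.append((a, b))
            used1.add(a)
            used2.add(b)
    return matched

m1 = ["Receive application", "Check documents", "Approve request"]
m2 = ["Application received", "Verify documents", "Approve the request"]
pairs = match_activities(m1, m2)
```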

  2. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  3. Object matching using a locally affine invariant and linear programming techniques.

    Science.gov (United States)

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
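    The affine-combination constraint at the heart of the abstract can be sketched compactly: in 2-D, a point in the affine span of three non-collinear neighbours has unique weights summing to one, recoverable from a small linear system. A minimal illustration with toy coordinates (not the paper's full solver):

```python
# Represent a 2-D point exactly as an affine combination of three
# non-collinear neighbours (weights sum to 1), via Cramer's rule.

def affine_weights(p, n1, n2, n3):
    """Solve w1*n1 + w2*n2 + w3*n3 = p subject to w1 + w2 + w3 = 1."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, n1, n2, n3
    det = (x1 - x3) * (y2 - y3) - (x2 - x3) * (y1 - y3)
    w1 = ((x - x3) * (y2 - y3) - (x2 - x3) * (y - y3)) / det
    w2 = ((x1 - x3) * (y - y3) - (x - x3) * (y1 - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def reconstruct(ws, n1, n2, n3):
    """Rebuild the point from the weights; the residual is the matching penalty."""
    return tuple(sum(w * q[i] for w, q in zip(ws, (n1, n2, n3))) for i in (0, 1))

ws = affine_weights((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
```

In the paper this reconstruction error, evaluated on candidate matched points, is what the linear program penalizes.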

  4. Techniques Used in String Matching for Network Security

    OpenAIRE

    Jamuna Bhandari

    2014-01-01

    String matching, also known as pattern matching, is one of the primary concepts in network security. In this area, the effectiveness and efficiency of string matching algorithms are important for network security applications such as network intrusion detection, virus detection, signature matching and web content filtering systems. This paper presents a brief review of some of the string matching techniques used for network security.
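    As one concrete instance of the exact string-matching algorithms such reviews cover, here is a sketch of Knuth-Morris-Pratt; this is a representative example, not the paper's own method.

```python
# Knuth-Morris-Pratt exact string matching: O(n + m) via a failure table.

def kmp_search(text, pattern):
    """Return all start indices where pattern occurs in text."""
    if not pattern:
        return []
    # Failure function: length of the longest proper prefix that is a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, never re-reading a text character.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

hits = kmp_search("abxabcabcaby", "abcaby")
```

Intrusion-detection engines typically use multi-pattern variants (e.g. Aho-Corasick) built on the same prefix-function idea.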

  5. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community.  Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website, focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching;presents basic and  advanced template matching techniques, targeting grey-level images, shapes and point sets;discusses recent pattern classification paradigms from a template matching perspective;illustrates the development of a real fac...
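    The baseline the book builds on, grey-level template matching by normalized cross-correlation (NCC), can be sketched in a few lines; the tiny image and template below are invented for illustration.

```python
# Template matching by normalized cross-correlation: slide the template
# over the image and keep the location with the highest NCC score.

def ncc(window, template):
    """Normalized cross-correlation of two equal-size grey-level patches."""
    n = len(template) * len(template[0])
    wm = sum(map(sum, window)) / n
    tm = sum(map(sum, template)) / n
    num = den_w = den_t = 0.0
    for wr, tr in zip(window, template):
        for w, t in zip(wr, tr):
            num += (w - wm) * (t - tm)
            den_w += (w - wm) ** 2
            den_t += (t - tm) ** 2
    return num / ((den_w * den_t) ** 0.5 or 1.0)  # guard flat windows

def best_match(image, template):
    th, tw = len(template), len(template[0])
    scores = {}
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            window = [row[c:c + tw] for row in image[r:r + th]]
            scores[(r, c)] = ncc(window, template)
    return max(scores, key=scores.get)

image = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
loc = best_match(image, template)
```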

  6. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    Full Text Available In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, Template Matching techniques like Dynamic Time Warping (DTW), Statistical Pattern Matching techniques such as Hidden Markov Models (HMM) and Gaussian Mixture Models (GMM), and Machine Learning techniques such as Neural Networks (NN), Support Vector Machines (SVM), and Decision Trees (DT) are the most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made based on the experiments are presented in this paper. For the developed system, the highest word recognition accuracy was achieved with the HMM technique: 100% during training and 97.92% during testing.
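    Of the techniques listed, Dynamic Time Warping is the simplest to sketch. The toy version below aligns 1-D sequences; a real ASR front end would compare MFCC feature vectors, so the inputs here are assumptions for illustration.

```python
# Dynamic Time Warping: minimal-cost alignment of two sequences that may
# be locally stretched or compressed in time.

def dtw_distance(a, b):
    inf = float("inf")
    # cost[i][j]: best alignment cost of a[:i] against b[:j]
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

d_same = dtw_distance([1, 2, 3, 2], [1, 2, 3, 2])
d_warp = dtw_distance([1, 2, 3, 2], [1, 2, 2, 3, 2])  # time-stretched copy
```

An isolated-word recognizer of this style stores one template per word and returns the word whose template has the smallest DTW distance to the utterance.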

  7. Technique to match mantle and para-aortic fields

    International Nuclear Information System (INIS)

    Lutz, W.R.; Larsen, R.D.

    1983-01-01

    A technique is described to match the mantle and para-aortic fields used in treatment of Hodgkin's disease, when the patient is treated alternately in supine and prone position. The approach is based on referencing the field edges to a point close to the vertebral column, where uncontrolled motion is minimal and where accurate matching is particularly important. Fiducial surface points are established in the simulation process to accomplish the objective. Dose distributions have been measured to study the combined effect of divergence differences, changes in body angulation and setup errors. Even with the most careful technique, the use of small cord blocks of 50% transmission is an advisable precaution for the posterior fields

  8. Electron/photon matched field technique for treatment of orbital disease

    International Nuclear Information System (INIS)

    Arthur, Douglas W.; Zwicker, Robert D.; Garmon, Pamela W.; Huang, David T.; Schmidt-Ullrich, Rupert K.

    1997-01-01

    Purpose: A number of approaches have been described in the literature for irradiation of malignant and benign diseases of the orbit. Techniques described to date do not deliver a homogeneous dose to the orbital contents while sparing the cornea and lens of excessive dose. This is a result of the geometry encountered in this region and the fact that the target volume, which includes the periorbital and retroorbital tissues but excludes the cornea, anterior chamber, and lens, cannot be readily accommodated by photon beams alone. To improve the dose distribution for these treatments, we have developed a technique that combines a low-energy electron field carefully matched with modified photon fields to achieve acceptable dose coverage and uniformity. Methods and Materials: An anterior electron field and a lateral photon field setup is used to encompass the target volume. Modification of these fields permits accurate matching as well as conformation of the dose distribution to the orbit. A flat-surfaced wax compensator assures uniform electron penetration across the field, and a sunken lead alloy eye block prevents excessive dose to the central structures of the anterior segment. The anterior edge of the photon field is modified by broadening the penumbra using a form of pseudodynamic collimation. Direct measurements using film and ion chamber dosimetry were used to study the characteristics of the fall-off region of the electron field and the penumbra of the photon fields. From the data collected, the technique for accurate field matching and dose uniformity was generated. Results: The isodose curves produced with this treatment technique demonstrate homogeneous dose coverage of the orbit, including the paralenticular region, and sufficient dose sparing of the anterior segment. The posterior lens accumulates less than 40% of the prescribed dose, and the lateral aspect of the lens receives less than 30%. 
A dose variation in the match region of ±12% is confronted when

  9. An Efficient Metric of Automatic Weight Generation for Properties in Instance Matching Technique

    OpenAIRE

    Seddiqui, Md. Hanif; Nath, Rudra Pratap Deb; Aono, Masaki

    2015-01-01

    The proliferation of heterogeneous data sources of semantic knowledge bases intensifies the need for an automatic instance matching technique. However, the efficiency of instance matching is often influenced by the weight of a property associated to instances. Automatic weight generation is a non-trivial but important task in instance matching. Therefore, identifying an appropriate metric for generating the weight for a property automatically is nevertheless a formidab...

  10. The application of computer color matching techniques to the matching of target colors in a food substrate: a first step in the development of foods with customized appearance.

    Science.gov (United States)

    Kim, Sandra; Golding, Matt; Archer, Richard H

    2012-06-01

    A predictive color matching model based on the colorimetric technique was developed and used to calculate the concentrations of primary food dyes needed in a model food substrate to match a set of standard tile colors. This research is the first stage in the development of novel three-dimensional (3D) foods in which color images or designs can be rapidly reproduced in 3D form. Absorption coefficients were derived for each dye, from a concentration series in the model substrate, a microwave-baked cake. When used in a linear, additive blending model these coefficients were able to predict cake color from selected dye blends to within 3 ΔE*(ab,10) color difference units, or within the limit of a visually acceptable match. Absorption coefficients were converted to pseudo X₁₀, Y₁₀, and Z₁₀ tri-stimulus values (X₁₀(P), Y₁₀(P), Z₁₀(P)) for colorimetric matching. The Allen algorithm was used to calculate dye concentrations to match the X₁₀(P), Y₁₀(P), and Z₁₀(P) values of each tile color. Several recipes for each color were computed with the tile specular component included or excluded, and tested in the cake. Some tile colors proved out-of-gamut, limited by legal dye concentrations; these were scaled to within legal range. Actual differences suggest reasonable visual matches could be achieved for within-gamut tile colors. The Allen algorithm, with appropriate adjustments of concentration outputs, could provide a sufficiently rapid and accurate calculation tool for 3D color food printing. The predictive color matching approach shows potential for use in a novel embodiment of 3D food printing in which a color image or design could be rendered within a food matrix through the selective blending of primary dyes to reproduce each color element. The on-demand nature of this food application requires rapid color outputs which could be provided by the color matching technique, currently used in nonfood industries, rather than by empirical food
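    The linear additive blending step can be sketched as an ordinary least-squares fit of dye concentrations to a target absorption spectrum. This illustrates only the linear model, not the Allen algorithm or the paper's calibration; the dye coefficients below are invented.

```python
# Two-dye additive blending: absorption at each band is modelled as
# c1*k1 + c2*k2, and concentrations are recovered by least squares
# (2x2 normal equations, solved in closed form).

def solve_two_dye_blend(k1, k2, target):
    """Least-squares c1, c2 with c1*k1[b] + c2*k2[b] ~= target[b]."""
    a11 = sum(x * x for x in k1)
    a12 = sum(x * y for x, y in zip(k1, k2))
    a22 = sum(y * y for y in k2)
    b1 = sum(x * t for x, t in zip(k1, target))
    b2 = sum(y * t for y, t in zip(k2, target))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

k_red = [0.9, 0.2, 0.1]      # absorption per band (made-up spectra)
k_yellow = [0.1, 0.3, 0.8]
target = [0.5 * a + 0.25 * b for a, b in zip(k_red, k_yellow)]
c1, c2 = solve_two_dye_blend(k_red, k_yellow, target)
```

The gamut limit noted in the abstract corresponds to solutions whose concentrations fall outside the legal range and must be scaled back.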

  11. Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2015-01-01

    Full Text Available This paper is concerned with the topic of gravity matching aided inertial navigation technology using Kalman filter. The dynamic state space model for Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation while the local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways as follows. The marginalization technique is employed to explore the conditionally linear substructure to reduce the computational load; specifically, the number of the needed sigma points is reduced from 15 to 5 after this technique is used. A robust technique based on Chi-square test is employed to make the filter insensitive to the uncertainties in the above constructed observation model. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.
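    The robustness step can be illustrated in isolation: a chi-square test on the normalized innovation decides whether a gravity-map measurement is consistent with the filter's prediction. The threshold and toy numbers below are assumptions, not the paper's values.

```python
# Chi-square innovation gating for a scalar measurement: reject updates
# whose normalized innovation squared (NIS) exceeds a chi-square bound.

def innovation_gate(z, z_pred, s_var, threshold=6.63):
    """Accept z if (z - z_pred)^2 / S <= threshold.
    6.63 ~= 99th percentile of chi-square with 1 degree of freedom."""
    nis = (z - z_pred) ** 2 / s_var
    return nis <= threshold, nis

ok_small, _ = innovation_gate(z=9.812, z_pred=9.810, s_var=1e-4)  # consistent
ok_big, _ = innovation_gate(z=9.900, z_pred=9.810, s_var=1e-4)    # outlier
```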

  12. Role model and prototype matching

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    ’ meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-models meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype...

  13. Enhanced Map-Matching Algorithm with a Hidden Markov Model for Mobile Phone Positioning

    Directory of Open Access Journals (Sweden)

    An Luo

    2017-10-01

    Full Text Available Numerous map-matching techniques have been developed to improve positioning, using Global Positioning System (GPS) data and other sensors. However, most existing map-matching algorithms process GPS data with high sampling rates to achieve a high success rate and broad applicability. This paper introduces a novel map-matching algorithm based on a hidden Markov model (HMM) for GPS positioning and mobile phone positioning with a low sampling rate. The HMM is a statistical model well known for providing solutions to temporal recognition applications such as text and speech recognition. In this work, a hidden Markov chain model was built to establish the map-matching process, using the geometric data, the topology matrix of road links in the road network, and a refined quad-tree data structure. The HMM-based map-matching exploits the Viterbi algorithm to find the optimal road-link sequence, whose links are the hidden states of the model. The HMM-based map-matching algorithm is validated on a vehicle trajectory using GPS and mobile phone data. The results show a significant improvement in mobile phone positioning, for both high- and low-sampling-rate GPS data.

  14. A dynamic supraclavicular field-matching technique for head-and-neck cancer patients treated with IMRT

    International Nuclear Information System (INIS)

    Duan, Jun; Shen Sui; Spencer, Sharon A.; Ahmed, Raef S.; Popple, Richard A.; Ye, Sung-Joon; Brezovich, Ivan A.

    2004-01-01

    Purpose: The conventional single-isocenter and half-beam (SIHB) technique for matching supraclavicular fields with head-and-neck (HN) intensity-modulated radiotherapy (IMRT) fields is subject to substantial dose inhomogeneities from imperfect accelerator jaw/MLC calibration. It also limits the isocenter location and restricts the useful field size for IMRT. We propose a dynamic field-matching technique to overcome these limitations. Methods and materials: The proposed dynamic field-matching technique makes use of wedge junctions for the abutment of supraclavicular and HN IMRT fields. The supraclavicular field was shaped with a multileaf collimator (MLC), which was orientated such that the leaves traveled along the superoinferior direction. The leaves that defined the superior field border moved continuously during treatment from 1.5 cm below to 1.5 cm above the conventional match line to generate a 3-cm-wide wedge-shaped junction. The HN IMRT fields were optimized by taking into account the dose contribution from the supraclavicular field to the junction area, which generates a complementary wedge to produce a smooth junction in the abutment region. This technique was evaluated on a polystyrene phantom and 10 HN cancer patients. Treatment plans were generated for the phantom and the 10 patients. Dose profiles across the abutment region were measured in the phantom on films. For patient plans, dose profiles that passed through the center of the neck lymph nodes were calculated using the proposed technique and the SIHB technique, and dose uniformity in the abutment region was compared. Field mismatches of ± 1 mm and ± 2 mm because of imperfect jaw/MLC calibration were simulated, and the resulting dose inhomogeneities were studied for the two techniques with film measurements and patient plans. Three-dimensional volumetric doses were analyzed, and equivalent uniform doses (EUD) were computed. The effect of field mismatches on EUD was compared for the two match

  15. An improved perfectly matched layer in the eigenmode expansion technique

    DEFF Research Database (Denmark)

    Gregersen, Niels; Mørk, Jesper

    2008-01-01

    When employing the eigenmode expansion technique (EET), parasitic reflections at the boundary of the computational domain can be suppressed by introducing a perfectly matched layer (PML). However, the traditional PML suffers from an artificial field divergence limiting its usefulness. We propose...

  16. The Robust Control Mixer Method for Reconfigurable Control Design By Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, Mogens; Verhagen, M.

    2001-01-01

    This paper proposes a robust reconfigurable control synthesis method based on the combination of the control mixer method and robust H∞ control techniques through the model-matching strategy. The control mixer modules are extended from the conventional matrix form into the LTI system form. By regarding the nominal control system as the desired model, an augmented control system is constructed through the model-matching formulation, such that current robust control techniques can be used to synthesize these dynamical modules. An extension of this method with respect to performance recovery, besides functionality recovery, is also discussed under this framework. Compared with the conventional control mixer method, the proposed method considers the reconfigured system's stability, performance and robustness simultaneously. Finally, the proposed method is illustrated by a case study...

  17. IMPROVED TOPOGRAPHIC MODELS VIA CONCURRENT AIRBORNE LIDAR AND DENSE IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    G. Mandlburger

    2017-09-01

    Full Text Available Modern airborne sensors integrate laser scanners and digital cameras for capturing topographic data at high spatial resolution. The capability of penetrating vegetation through small openings in the foliage and the high ranging precision in the cm range have made airborne LiDAR the prime terrain acquisition technique. In recent years dense image matching has evolved rapidly and now outperforms laser scanning in terms of the achievable spatial resolution of the derived surface models. In our contribution we analyze the inherent properties and review the typical processing chains of both acquisition techniques. In addition, we present potential synergies of jointly processing image and laser data with emphasis on sensor orientation and point cloud fusion for digital surface model derivation. Test data were concurrently acquired with the RIEGL LMS-Q1560 sensor over the city of Melk, Austria, in January 2016 and served as the basis for testing innovative processing strategies. We demonstrate that (i) systematic effects in the resulting scanned and matched 3D point clouds can be minimized based on a hybrid orientation procedure, (ii) systematic differences of the individual point clouds are observable at penetrable, vegetated surfaces due to the different measurement principles, and (iii) improved digital surface models can be derived combining the higher density of the matching point cloud and the higher reliability of LiDAR point clouds, especially in the narrow alleys and courtyards of the study site, a medieval city.

  18. Stability analysis of resistive MHD modes via a new numerical matching technique

    International Nuclear Information System (INIS)

    Furukawa, M.; Tokuda, S.; Zheng, L.-J.

    2009-01-01

    Full text: Asymptotic matching technique is one of the principal methods for calculating linear stability of resistive magnetohydrodynamics (MHD) modes such as tearing modes. In applying the asymptotic method, the plasma region is divided into two regions: a thin inner layer around the mode-resonant surface and ideal MHD regions except for the layer. If we try to solve this asymptotic matching problem numerically, we meet practical difficulties. Firstly, the inertia-less ideal MHD equation or the Newcomb equation has a regular singular point at the mode-resonant surface, leading to the so-called big and small solutions. Since the big solution is not square-integrable, it needs sophisticated treatment. Even if such a treatment is applied, the matching data or the ratio of small solution to the big one, has been revealed to be sensitive to local MHD equilibrium accuracy and grid structure at the mode-resonant surface by numerical experiments. Secondly, one of the independent solutions in the inner layer, which should be matched onto the ideal MHD solution, is not square-integrable. The response formalism has been adopted to resolve this problem. In the present paper, we propose a new method for computing the linear stability of resistive MHD modes via matching technique, where the plasma region is divided into ideal MHD regions and an inner region with finite width. The matching technique using an inner region with finite width was recently developed for ideal MHD modes in cylindrical geometry, and good performance was shown. Our method extends this idea to resistive MHD modes. In the inner region, the low-beta reduced MHD equations are solved, and the solution is matched onto the solution of the Newcomb equation by using boundary conditions such that the parallel electric field vanishes properly as approaching the computational boundaries. If we use the inner region with finite width, the practical difficulties raised above can be avoided from the beginning. 

  19. Modelling relationships between match events and match outcome in elite football.

    Science.gov (United States)

    Liu, Hongyou; Hopkins, Will G; Gómez, Miguel-Angel

    2016-08-01

    Identifying match events that are related to match outcome is an important task in football match analysis. Here we have used generalised mixed linear modelling to determine relationships of 16 football match events and 1 contextual variable (game location: home/away) with the match outcome. Statistics of 320 close matches (goal difference ≤ 2) of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Relationships were evaluated with magnitude-based inferences and were expressed as extra matches won or lost per 10 close matches for an increase of two within-team or between-team standard deviations (SD) of the match event (representing effects of changes in team values from match to match and of differences between average team values, respectively). There was a moderate positive within-team effect from shots on target (3.4 extra wins per 10 matches; 99% confidence limits ±1.0), and a small positive within-team effect from total shots (1.7 extra wins; ±1.0). Effects of most other match events were related to ball possession, which had a small negative within-team effect (1.2 extra losses; ±1.0) but a small positive between-team effect (1.7 extra wins; ±1.4). Game location showed a small positive within-team effect (1.9 extra wins; ±0.9). In analyses of nine combinations of team and opposition end-of-season rank (classified as high, medium, low), almost all between-team effects were unclear, while within-team effects varied depending on the strength of team and opposition. Some of these findings will be useful to coaches and performance analysts when planning training sessions and match tactics.

  20. A review of the Match technique as applied to AASE-2/EASOE and SOLVE/THESEO 2000

    Directory of Open Access Journals (Sweden)

    G. A. Morris

    2005-01-01

    Full Text Available We apply the NASA Goddard Trajectory Model to data from a series of ozonesondes to derive ozone loss rates in the lower stratosphere for the AASE-2/EASOE mission (January-March 1992) and for the SOLVE/THESEO 2000 mission (January-March 2000) in an approach similar to Match. Ozone loss rates are computed by comparing the ozone concentrations provided by ozonesondes launched at the beginning and end of the trajectories connecting the launches. We investigate the sensitivity of the Match results to the various parameters used to reject potential matches in the original Match technique. While these filters effectively eliminate from consideration 80% of the matched sonde pairs and >99% of matched observations in our study, we conclude that only a filter based on potential vorticity changes along the calculated back trajectories seems warranted. Our study also demonstrates that the ozone loss rates estimated in Match can vary by up to a factor of two depending upon the precise trajectory paths calculated for each trajectory. As a result, the statistical uncertainties published with previous Match results might need to be augmented by an additional systematic error. The sensitivity to the trajectory path is particularly pronounced in the month of January, for which the largest ozone loss rate discrepancies between photochemical models and Match are found. For most of the two study periods, our ozone loss rates agree with those previously published. Notable exceptions are found for January 1992 at 475 K and late February/early March 2000 at 450 K, both periods during which we generally find smaller loss rates than the previous Match studies. Integrated ozone loss rates estimated by Match in both of those years compare well with those found in numerous other studies and in a potential vorticity/potential temperature approach shown previously and in this paper. Finally, we suggest an alternate approach to Match using trajectory mapping. This approach uses

  1. Weed identification using an automated active shape matching (AASM) technique

    DEFF Research Database (Denmark)

    C. Swain, Kishore; Nørremark, Michael; Jørgensen, Rasmus Nyholm

    2011-01-01

    Weed identification and control is a challenge for intercultural operations in agriculture. As an alternative to chemical pest control, a smart weed identification technique followed by a mechanical weed control system could be developed. The proposed smart identification technique works on the concept of 'active shape modelling' to identify weed and crop plants based on their morphology. The automated active shape matching (AASM) technique consisted of i) a Pixelink camera, ii) an LTI (Lehrstuhl für Technische Informatik) image processing library, and iii) a laptop PC with the Linux OS. A 2

  2. Matched-Filter Thermography

    Directory of Open Access Journals (Sweden)

    Nima Tabatabaei

    2018-04-01

    Full Text Available Conventional infrared thermography techniques, including pulsed and lock-in thermography, have shown great potential for non-destructive evaluation of a broad spectrum of materials, spanning from metals to polymers to biological tissues. However, performance of these techniques is often limited due to the diffuse nature of thermal wave fields, resulting in an inherent compromise between inspection depth and depth resolution. Recently, matched-filter thermography has been introduced as a means of overcoming this classic limitation, enabling depth-resolved subsurface thermal imaging and improving axial/depth resolution. This paper reviews the basic principles and experimental results of matched-filter thermography: first, mathematical and signal processing concepts related to matched filtering and pulse compression are discussed. Next, theoretical modeling of thermal-wave responses to matched-filter thermography using two categories of pulse compression techniques (linear frequency modulation and binary phase coding) is reviewed. Key experimental results from the literature, demonstrating the maintenance of axial resolution while inspecting deep into opaque and turbid media, are also presented and discussed. Finally, the concept of thermal coherence tomography for deconvolution of thermal responses of axially superposed sources and creation of depth-selective images in a diffusion-wave field is reviewed.
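    The pulse-compression principle reviewed here can be sketched by cross-correlating a known linear-frequency-modulated excitation with the received signal; the correlation peak recovers the delay. The chirp parameters below are arbitrary, and the example is noiseless for clarity.

```python
# Matched filtering / pulse compression: cross-correlate the received
# signal with the known LFM chirp; the peak lag is the round-trip delay.

import math

def lfm_chirp(n, f0=0.02, f1=0.20):
    """Linear frequency-modulated pulse of n samples (normalized frequency)."""
    return [math.sin(2 * math.pi * (f0 + (f1 - f0) * i / (2 * n)) * i)
            for i in range(n)]

def matched_filter_peak(signal, template):
    """Lag of maximum cross-correlation (valid lags only)."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        val = sum(signal[lag + i] * t for i, t in enumerate(template))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

pulse = lfm_chirp(64)
delay = 100
signal = [0.0] * delay + pulse + [0.0] * 50  # echo buried at a known delay
recovered = matched_filter_peak(signal, pulse)
```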

  3. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Park, So Hyun; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or the friction factor in a pipe with a twisted tape, while other studies focused on heat transfer performance enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape was limited, detailed flow field data were not obtainable so far. Thus, researchers mainly relied on numerical simulations to obtain flow field data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access. In addition, a novel refractive-index-matching technique was used to eliminate optical distortion. These two combined techniques enabled measurement of the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). In this study, the flow field in the test section was measured for various flow conditions and finally compared with numerically calculated data. Velocity fields in a pipe with a classic twisted tape were measured using a particle image velocimetry (PIV) system. To obtain undistorted particle images, a novel optical technique, refractive index matching, was used, and it was shown that high-quality images can be obtained with this experimental equipment. The velocity data from the PIV were compared with the CFD simulations.

  4. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique

    International Nuclear Information System (INIS)

    Song, Min Seop; Park, So Hyun; Kim, Eung Soo

    2014-01-01

    Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or the friction factor in a pipe with a twisted tape, while other studies focused on heat transfer performance enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape was limited, detailed flow field data were not obtainable so far. Thus, researchers mainly relied on numerical simulations to obtain flow field data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access. In addition, a novel refractive-index-matching technique was used to eliminate optical distortion. These two combined techniques enabled measurement of the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). In this study, the flow field in the test section was measured for various flow conditions and finally compared with numerically calculated data. Velocity fields in a pipe with a classic twisted tape were measured using a particle image velocimetry (PIV) system. To obtain undistorted particle images, a novel optical technique, refractive index matching, was used, and it was shown that high-quality images can be obtained with this experimental equipment. The velocity data from the PIV were compared with the CFD simulations.

  5. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier; this paper proposes an improved feature matching between successive video frames using a neural network methodology in order to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distances based on the Kinect depth data, which the robot can use to determine its navigation path, along with obstacle detection applications.

  6. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) method is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, MCMC usually requires many flow and transport simulations to evaluate the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs offline computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions, which are used for efficient sampling within the MCMC framework. We propose a two-stage MCMC in which inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter decision is made on the basis of a statistical model developed offline. The proposed method is an extension of approaches considered earlier in which linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and less expensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in history matching. Copyright 2009 by the American Geophysical Union.
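
    The two-stage screening idea above can be sketched in a few lines. This is a minimal illustration under assumed toy posteriors (Gaussian coarse and fine log-densities), not the authors' reservoir code: the cheap coarse density screens proposals, and the expensive fine density is evaluated only for proposals that pass stage one, with a correction factor that keeps the chain targeting the fine posterior exactly.

```python
import math
import random

def two_stage_mcmc(coarse_logpost, fine_logpost, x0, n_iters, step=0.5, seed=0):
    """Two-stage Metropolis-Hastings: cheap coarse screening before fine runs."""
    rng = random.Random(seed)
    x = x0
    lc_x, lf_x = coarse_logpost(x), fine_logpost(x)
    chain, fine_calls = [], 0
    for _ in range(n_iters):
        y = x + rng.gauss(0.0, step)              # symmetric random-walk proposal
        lc_y = coarse_logpost(y)
        # Stage 1: accept/reject using only the cheap coarse posterior.
        if rng.random() < math.exp(min(0.0, lc_y - lc_x)):
            lf_y = fine_logpost(y)                # expensive fine-scale run
            fine_calls += 1
            # Stage 2: correct with the fine/coarse ratio so that the chain
            # still targets the fine-scale posterior exactly.
            if rng.random() < math.exp(min(0.0, (lf_y - lf_x) - (lc_y - lc_x))):
                x, lc_x, lf_x = y, lc_y, lf_y
        chain.append(x)
    return chain, fine_calls
```

Because stage one filters out poor proposals, `fine_calls` is strictly smaller than the number of iterations, which is where the computational saving comes from.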

  7. Generic Energy Matching Model and Figure of Matching Algorithm for Combined Renewable Energy Systems

    Directory of Open Access Journals (Sweden)

    J.C. Brezet

    2009-08-01

    In this paper the Energy Matching Model and Figure of Matching Algorithm, which were originally dedicated only to photovoltaic (PV) systems [1], are extended into a model and algorithm suitable for combined systems that integrate two or more renewable energy sources into one. The systems under investigation range from mobile portable devices up to the large renewable energy system conceivably to be applied at the Afsluitdijk (Closure Dike) in the north of the Netherlands. The Afsluitdijk is the major dam in the Netherlands, damming off the Zuiderzee, a salt-water inlet of the North Sea, and turning it into the fresh-water lake IJsselmeer. The energy chain of power supplies based on a combination of renewable energy sources can be modelled using one generic Energy Matching Model as a starting point.

  8. History Matching: Towards Geologically Reasonable Models

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Cordua, Knud Skou; Mosegaard, Klaus

    This work focuses on the development of a new method for the history matching problem that, through a deterministic search, finds a geologically feasible solution. Complex geology is taken into account by evaluating multiple-point statistics from earth model prototypes - training images. Further, a function...... that measures similarity between the statistics of a training image and the statistics of any smooth model is introduced and its analytical gradient is computed. This allows us to apply any gradient-based method to the history matching problem and guide a solution until it satisfies both production data and complexity......

  9. High-efficiency resonant coupled wireless power transfer via tunable impedance matching

    Science.gov (United States)

    Anowar, Tanbir Ibne; Barman, Surajit Das; Wasif Reza, Ahmed; Kumar, Narendra

    2017-10-01

    For magnetic resonant coupled wireless power transfer (WPT), the axial movement of near-field coupled coils adversely degrades the power transfer efficiency (PTE) of the system and often creates sub-resonance. This paper presents a tunable impedance matching technique based on optimum coupling tuning to enhance the efficiency of a resonant coupled WPT system. The optimum power transfer model is analysed from an equivalent circuit model via the reflected load principle, and adequate matching is achieved through optimum tuning of the coupling coefficients at both the transmitting and receiving ends of the system. Both simulations and experiments are performed to evaluate the theoretical model of the proposed matching technique, resulting in a PTE over 80% at close coil proximity without shifting the original resonant frequency. Compared to the fixed coupled WPT, the extracted efficiency shows 15.1% and 19.9% improvements at centre-to-centre misalignments of 10 and 70 cm, respectively. Applying this technique, the extracted S21 parameter shows more than 10 dB improvement at both strong and weak couplings. Through the developed model, the optimum coupling tuning also significantly outperforms matching techniques using frequency tracking and tunable matching circuits.

  10. Improving Image Matching by Reducing Surface Reflections Using Polarising Filter Techniques

    Science.gov (United States)

    Conen, N.; Hastedt, H.; Kahmen, O.; Luhmann, T.

    2018-05-01

    In dense stereo matching applications surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflections, polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. However, the filter may influence the camera's orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis are conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without a polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with the polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud can be clearly improved when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed and the special reflection properties of metallic surfaces are presented.

  11. IMPROVING IMAGE MATCHING BY REDUCING SURFACE REFLECTIONS USING POLARISING FILTER TECHNIQUES

    Directory of Open Access Journals (Sweden)

    N. Conen

    2018-05-01

    In dense stereo matching applications surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflections, polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. However, the filter may influence the camera's orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis are conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without a polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with the polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud can be clearly improved when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed and the special reflection properties of metallic surfaces are presented.

  12. Fingerprint Matching by Thin-plate Spline Modelling of Elastic Deformations

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2003-01-01

    This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching

  13. Dynamic model reduction: An overview of available techniques with application to power systems

    Directory of Open Access Journals (Sweden)

    Đukić Savo D.

    2012-01-01

    This paper summarises the model reduction techniques used for the reduction of large-scale linear and nonlinear dynamic models, described by the differential and algebraic equations that are commonly used in control theory. The groups of methods discussed for the reduction of linear dynamic models are based on singular perturbation analysis, modal analysis, singular value decomposition, moment matching, and combinations of singular value decomposition and moment matching. Among the nonlinear dynamic model reduction methods, proper orthogonal decomposition, the trajectory piecewise-linear method, balancing-based methods, reduction by optimising system matrices, and projection from a linearised model are described. Part of the paper is devoted to the techniques commonly used for the reduction (equivalencing) of large-scale power systems, which are based on coherency, synchrony, singular perturbation analysis, modal analysis and identification. Two of the described techniques (the most interesting ones) are applied to the reduction of the commonly used New England 10-generator, 39-bus test power system.

  14. A new registration method with voxel-matching technique for temporal subtraction images

    Science.gov (United States)

    Itai, Yoshinori; Kim, Hyoungseop; Ishikawa, Seiji; Katsuragawa, Shigehiko; Doi, Kunio

    2008-03-01

    A temporal subtraction image, which is obtained by subtracting a previous image from a current one, can be used for enhancing interval changes on medical images by removing most normal structures. One of the important problems in temporal subtraction is that subtraction images commonly include artifacts created by slight differences in the size, shape, and/or location of anatomical structures. In this paper, we developed a new registration method with a voxel-matching technique for substantially removing the subtraction artifacts on the temporal subtraction image obtained from multiple-detector computed tomography (MDCT). With this technique, the voxel value in a warped (or non-warped) previous image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, that is closest (identical or nearly equal) to the voxel value at the corresponding location in the current image. Our new method was examined on 16 clinical cases with MDCT images. Preliminary results indicated that interval changes on the subtraction images were enhanced considerably, with a substantial reduction of misregistration artifacts. The temporal subtraction images obtained by use of the voxel-matching technique would be very useful for radiologists in the detection of interval changes on MDCT images.
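
    The voxel-matching step described above is simple to state in code. The sketch below works on 2D slices stored as nested lists and is a toy illustration under our own assumptions (kernel half-width `k`, no warping stage), not the authors' implementation:

```python
def voxel_match_subtract(current, previous, k=1):
    """Temporal subtraction with voxel matching on 2D slices.

    For each pixel, the previous-image value is replaced by the value inside a
    (2k+1) x (2k+1) kernel that is closest to the current-image value, which
    suppresses misregistration artifacts in the difference image.
    """
    rows, cols = len(current), len(current[0])
    diff = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            best = previous[i][j]
            for di in range(-k, k + 1):          # search the small kernel
                for dj in range(-k, k + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        v = previous[ii][jj]
                        if abs(v - current[i][j]) < abs(best - current[i][j]):
                            best = v
                    # replace previous voxel by the closest value found
            diff[i][j] = current[i][j] - best
    return diff
```

For a structure shifted by one pixel between exams, plain subtraction produces a pair of opposite-sign artifacts, while the voxel-matched difference is zero everywhere.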

  15. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...... to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way....

  16. A technique to obtain a multiparameter radar rainfall algorithm using the probability matching procedure

    International Nuclear Information System (INIS)

    Gorgucci, E.; Scarchilli, G.

    1997-01-01

    The natural cumulative distributions of rainfall observed by a network of rain gauges and a multiparameter radar are matched to derive multiparameter radar algorithms for rainfall estimation. The use of multiparameter radar measurements in a statistical framework to estimate rainfall is presented in this paper. The techniques developed here are applied to the radar and rain gauge measurements of rainfall observed in central Florida and central Italy. Conventional pointwise estimates of rainfall are also compared. The probability matching procedure, when applied to the radar and surface measurements, shows that multiparameter radar algorithms can match the probability distribution function better than reflectivity-based algorithms. It is also shown that the multiparameter radar algorithm derived by matching the cumulative distribution function of rainfall provides more accurate estimates of rainfall on the ground in comparison to any conventional reflectivity-based algorithm.
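
    The core of the probability matching procedure, pairing radar and gauge values that share the same cumulative probability, can be sketched as follows. The quantile levels and the linear-interpolation sample quantile are illustrative assumptions, not the authors' exact procedure:

```python
def probability_match(radar_vals, gauge_vals, probs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Pair radar and gauge values that share the same cumulative probability."""
    def quantile(sorted_xs, p):
        # linear-interpolation sample quantile
        idx = p * (len(sorted_xs) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(sorted_xs) - 1)
        frac = idx - lo
        return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac
    r, g = sorted(radar_vals), sorted(gauge_vals)
    return [(quantile(r, p), quantile(g, p)) for p in probs]
```

The resulting (radar value, rain rate) pairs can then be fitted with any desired functional form to obtain a rainfall algorithm; the fitting step is omitted here.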

  17. Anticipated growth and business cycles in matching models

    NARCIS (Netherlands)

    den Haan, W.J.; Kaltenbrunner, G.

    2009-01-01

    In a business cycle model that incorporates a standard matching framework, employment increases in response to news shocks, even though the wealth effect associated with the increase in expected productivity reduces labor force participation. The reason is that the matching friction induces

  18. Using crosswell data to enhance history matching

    KAUST Repository

    Ravanelli, Fabio M.

    2014-01-01

    One of the most challenging tasks in the oil industry is the production of reliable reservoir forecast models. Due to different sources of uncertainty in the numerical models and inputs, reservoir simulations are often only crude approximations of reality. This problem is mitigated by conditioning the model with data through data assimilation, a process known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use of time-lapse data and advanced data assimilation techniques. One of the most promising data assimilation techniques employed in the industry is the ensemble Kalman filter (EnKF) because of its ability to deal with non-linear models at reasonable computational cost. In this paper we study the use of crosswell seismic data as an alternative to 4D seismic surveys in areas where it is not possible to re-shoot seismic. A synthetic reservoir model is used in a history matching study designed to better estimate porosity and permeability distributions and to improve the quality of the model for predicting future field performance. This study is divided into three parts: first, the use of production data only is evaluated (the baseline for benchmarking); second, the benefits of using production and 4D seismic data are assessed; finally, a new conceptual idea is proposed for obtaining time-lapse information for history matching. The use of crosswell time-lapse seismic tomography to map velocities in the interwell region is demonstrated as a potential tool to ensure survey reproducibility and low acquisition cost when compared with full-scale surface surveys. Our numerical simulations show that the proposed method provides promising history matching results, leading to similar estimation-error reductions when compared with conventionally history-matched surface seismic data.

  19. Frequency doubling in poled polymers using anomalous dispersion phase-matching

    Energy Technology Data Exchange (ETDEWEB)

    Kowalczyk, T.C.; Singer, K.D. [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Physics; Cahill, P.A. [Sandia National Labs., Albuquerque, NM (United States)

    1995-10-01

    The authors report on second harmonic generation in a poled polymer waveguide using anomalous dispersion phase-matching. Blue light (λ = 407 nm) was produced by phase-matching the lowest-order fundamental and harmonic modes over a distance of 32 μm. The experimental conversion efficiency was η = 1.2 × 10⁻⁴, in agreement with theory. Additionally, they discuss a method of enhancing the conversion efficiency for second harmonic generation by using anomalous dispersion phase-matching to optimize Cerenkov second harmonic generation. The modeling shows that a combination of phase-matching techniques yields larger conversion efficiencies and relaxes the critical fabrication requirements of the individual phase-matching techniques.

  20. eMatchSite: sequence order-independent structure alignments of ligand binding pockets in protein models.

    Directory of Open Access Journals (Sweden)

    Michal Brylinski

    2014-09-01

    Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques; however, to achieve high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome-scale applications, where only structure models of varying quality are available for the majority of gene products. To improve the state of the art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4-9% lower than that obtained using crystal structures. This represents a significant improvement over other algorithms; e.g., the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes, with prospective systems-level applications in polypharmacology and rational drug repositioning.
eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.

  1. Combining machine learning and matching techniques to improve causal inference in program evaluation.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    Program evaluations often utilize various matching approaches to emulate the randomization process for group assignment in experimental studies. Typically, the matching strategy is implemented, and then covariate balance is assessed before estimating treatment effects. This paper introduces a novel analytic framework utilizing a machine learning algorithm called optimal discriminant analysis (ODA) for assessing covariate balance and estimating treatment effects, once the matching strategy has been implemented. This framework holds several key advantages over the conventional approach: application to any variable metric and number of groups; insensitivity to skewed data or outliers; and use of accuracy measures applicable to all prognostic analyses. Moreover, ODA accepts analytic weights, thereby extending the methodology to any study design where weights are used for covariate adjustment or more precise (differential) outcome measurement. One-to-one matching on the propensity score was used as the matching strategy. Covariate balance was assessed using standardized difference in means (conventional approach) and measures of classification accuracy (ODA). Treatment effects were estimated using ordinary least squares regression and ODA. Using empirical data, ODA produced results highly consistent with those obtained via the conventional methodology for assessing covariate balance and estimating treatment effects. When ODA is combined with matching techniques within a treatment effects framework, the results are consistent with conventional approaches. However, given that it provides additional dimensions and robustness to the analysis versus what can currently be achieved using conventional approaches, ODA offers an appealing alternative. © 2016 John Wiley & Sons, Ltd.
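
    The matching strategy used above can be sketched in a few lines. The sketch below shows greedy one-to-one nearest-neighbour matching on the propensity score plus the conventional standardized-difference balance check; the ODA machinery itself is not reproduced here, and the function names are our own:

```python
import math

def match_one_to_one(treated, controls):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.

    treated:  dict mapping treated-unit id -> propensity score
    controls: list of control propensity scores (the index is the control id)
    """
    pool = dict(enumerate(controls))
    pairs = []
    for t_id, t_ps in treated.items():
        c_id = min(pool, key=lambda c: abs(pool[c] - t_ps))
        pairs.append((t_id, c_id))
        del pool[c_id]                       # matching without replacement
    return pairs

def std_diff(xs, ys):
    """Standardized difference in means: the conventional balance diagnostic."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return (mx - my) / math.sqrt((vx + vy) / 2)
```

After matching, `std_diff` would be computed for each covariate across the matched groups; values near zero indicate good balance before treatment effects are estimated.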

  2. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Energy Technology Data Exchange (ETDEWEB)

    Stubberud, Peter A., E-mail: stubber@ee.unlv.edu [Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, Las Vegas, NV 89154 (United States); Stubberud, Stephen C., E-mail: scstubberud@ieee.org [Oakridge Technology, San Diego, CA 92121 (United States); Stubberud, Allen R., E-mail: stubberud@att.net [Department of Electrical Engineering and Computer Science, University of California, Irvine, Irvine, CA 92697 (United States)

    2014-12-10

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design

  3. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    International Nuclear Information System (INIS)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-01-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design

  4. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Science.gov (United States)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-12-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT
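
    The random switching at the heart of the DSMT, followed by low-pass filtering, can be sketched as follows. This is a toy illustration with constant-rate gyro signals and a moving average standing in for the pass-band filter; it is not the authors' design:

```python
import random

def dsmt_output(gyro_signals, seed=0):
    """Randomly switch between nominally identical gyro outputs, sample by
    sample, spreading the manufacturing-error 'noise' across the spectrum."""
    rng = random.Random(seed)
    n = len(gyro_signals[0])
    m = len(gyro_signals)
    return [gyro_signals[rng.randrange(m)][k] for k in range(n)]

def moving_average(x, w):
    """Crude low-pass filter: removes the out-of-band part of the spread noise."""
    return [sum(x[k:k + w]) / w for k in range(len(x) - w + 1)]
```

With four gyros whose scale errors bias a true rate of 1.0 to 1.05, 0.95, 1.02 and 0.98, the switched-and-filtered output sits much closer to 1.0 than any single gyro does.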

  5. Marriage and Divorce in a Model of Matching

    OpenAIRE

    Mumcu, Ayse; Saglam, Ismail

    2006-01-01

    We study the problem of marriage formation and marital distribution in a two-period model of matching, extending the matching-with-bargaining framework of Crawford and Rochford (1986). We run simulations to find the effects of the alimony rate, the legal cost of divorce, initial endowments, and couple and single productivity parameters on the payoffs and marital status in the society.

  6. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix form of the conventional control mixer concept into an LTI dynamic system form. The H_inf control technique is employed for these dynamic module designs after an augmented control...... system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when some conditions are satisfied. To illustrate the effectiveness of the proposed method, a robot system subjected to failures is used to demonstrate......

  7. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is to update the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, together with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The variables for the updating can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be achieved between the experimental and numerical models.
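
    A minimal firefly-style search for the updating step above might look as follows. This is a one-parameter sketch (updating a stiffness value so a computed response matches a measured one) with assumed parameter names and settings, not the authors' MATLAB implementation:

```python
import math
import random

def firefly_update(objective, bounds, n_fireflies=15, n_iters=60,
                   beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    """Minimal 1-D firefly search: dimmer fireflies move toward brighter ones
    (lower objective = brighter), with an annealed random step."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_fireflies)]
    for _ in range(n_iters):
        costs = [objective(x) for x in xs]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if costs[j] < costs[i]:                   # firefly j is brighter
                    r = abs(xs[i] - xs[j])
                    beta = beta0 * math.exp(-gamma * r * r)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(hi, max(lo, xs[i]))       # keep inside bounds
                    costs[i] = objective(xs[i])
        alpha *= 0.97                                     # anneal the random step
    return min(xs, key=objective)
```

As a hypothetical updating example, the objective could be the mismatch between a computed cantilever tip deflection, P·L³/(3·EI), and a measured value, with the bending stiffness EI as the variable being updated.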

  8. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  9. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  10. Radiographic evaluation of the quality of root canal obturation of single-matched cone Gutta-percha root canal filling versus hot lateral technique

    Directory of Open Access Journals (Sweden)

    Randa Suleiman Obeidat

    2014-01-01

    Full Text Available Aim: The aim of this study is to evaluate radiographically the quality of root canal filling in mesiodistal and buccolingual view when comparing matched cone condensation and warm lateral Gutta-percha condensation using system B heating instrument in a low-heat warm lateral condensation technique in0 vitro. Materials and Methods: A total of 40 mandibular premolars with straight single canals were divided into two groups with 20 each. The root canals were shaped by hand file and Revo-S rotary files to size (25, 0.06 at the end point, then they filled by Gutta-percha cone and meta-seal sealer. In group A, a single matched cone technique was used to fill the root canals. In group B, a hot lateral condensation using system B instrument at 101°C was performed. Result: The result of this study showed no significant difference in density of Gutta-percha fill in apical and coronal two-third when comparing matched cone root canal filling and hot lateral technique (P > 0.05. The only significant difference (P < 0.05 was in matched cone between buccolingual and mesiodistal view in the coronal two-third. Conclusion: Within the limitation of this study, single matched cone technique has a good density in the apical one-third as that of the hot lateral technique so it may be used for filling narrow canals. In the coronal two-third of the root canal, single matched cone technique showed inferior density of root canal filling which can be improved by using accessory cones Gutta-percha in wide canal.

  11. Anesthesia Technique and Mortality after Total Hip or Knee Arthroplasty: A Retrospective, Propensity Score-matched Cohort Study.

    Science.gov (United States)

    Perlas, Anahi; Chan, Vincent W S; Beattie, Scott

    2016-10-01

    This propensity score-matched cohort study evaluates the effect of anesthetic technique on 30-day mortality after total hip or knee arthroplasty. All patients who had hip or knee arthroplasty between January 1, 2003, and December 31, 2014, were evaluated. The principal exposure was spinal versus general anesthesia. The primary outcome was 30-day mortality. Secondary outcomes were (1) perioperative myocardial infarction; (2) a composite of major adverse cardiac events that includes cardiac arrest, myocardial infarction, or newly diagnosed arrhythmia; (3) pulmonary embolism; (4) major blood loss; (5) hospital length of stay; and (6) operating room procedure time. A propensity score-matched-pair analysis was performed using a nonparsimonious logistic regression model of regional anesthetic use. We identified 10,868 patients, of whom 8,553 had spinal anesthesia and 2,315 had general anesthesia. Ninety-two percent (n = 2,135) of the patients who had general anesthesia were matched to similar patients who did not have general anesthesia. In the matched cohort, the 30-day mortality rate was 0.19% (n = 4) in the spinal anesthesia group and 0.8% (n = 17) in the general anesthesia group (risk ratio, 0.42; 95% CI, 0.21 to 0.83; P = 0.0045). Spinal anesthesia was also associated with a shorter hospital length of stay (5.7 vs. 6.6 days). These findings suggest an association between spinal anesthesia and lower 30-day mortality, as well as a shorter hospital length of stay, after elective joint replacement surgery.
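
As a rough illustration of the matching step, the sketch below performs greedy 1:1 nearest-neighbor matching on a propensity score with a caliper. The single-covariate logistic score, cohort sizes, and caliper value are invented for illustration; the study itself fitted a nonparsimonious logistic regression over many covariates.

```python
import math
import random

random.seed(1)

def propensity(age):
    """Assumed (not fitted) single-covariate logistic score, for illustration."""
    return 1.0 / (1.0 + math.exp(-0.05 * (age - 65.0)))

# Toy cohort: (id, age); treated = general anesthesia, controls = spinal
treated = [(i, random.gauss(70, 8)) for i in range(50)]
controls = [(i, random.gauss(64, 8)) for i in range(200)]

# Greedy 1:1 nearest-neighbor matching on the propensity score with a caliper
caliper = 0.05
used, pairs = set(), []
for tid, t_age in treated:
    ps_t = propensity(t_age)
    best_cid, best_gap = None, caliper
    for cid, c_age in controls:
        if cid in used:
            continue
        gap = abs(propensity(c_age) - ps_t)
        if gap < best_gap:
            best_cid, best_gap = cid, gap
    if best_cid is not None:
        used.add(best_cid)
        pairs.append((tid, best_cid))

print(len(pairs))    # number of matched pairs retained for analysis
```

Treated patients with no control inside the caliper are simply dropped, which is why only 92% of the general-anesthesia patients were matched in the study.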

  12. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.; Datta-Gupta, A.; Ma, X.; Mallick, B.

    2009-01-01

    the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.

  13. Matching Index-of-Refraction for 3D Printing Model Using Mixture of Herb Essential Oil and Light Mineral Oil

    International Nuclear Information System (INIS)

    Song, Min Seop; Choi, Hae Yoon; Kim, Eung Soo

    2013-01-01

    This study has extensively investigated emerging 3-D printing technologies for use with MIR-based flow field visualization methods such as PIV and LDV. As a result, a mixture of herb essential oil and light mineral oil has been evaluated to be a great working fluid owing to its adequate properties. Using this combination, refractive indices between 1.45 and 1.55 can be accurately matched, a range that covers most transparent materials. Conclusively, the proposed MIR method is expected to provide large flexibility in model materials and geometries for laser-based optical measurements. Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV) are the two major optical technologies used for flow field visualization in the latest fundamental thermal-hydraulics research. These techniques require minimizing optical distortions to enable high-quality data. Therefore, matching the index of refraction (MIR) between model materials and working fluids is an essential part of minimizing measurement uncertainty. This paper proposes to use 3-D printing technology for manufacturing models for MIR-based optical measurements. Because of the large flexibility in geometries and materials of 3-D printing, its application is expected to provide tremendous advantages over traditional MIR-based optical measurements. This study focuses on 3-D printing models and investigates their optical properties, transparent printing techniques, and index-matching fluids
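
A first-order way to hit a target refractive index with a two-component oil blend is the volume-weighted (Arago-Biot) mixing rule. The component indices below are assumed round numbers within the 1.45-1.55 range mentioned above, not measured values from the study.

```python
# Arago-Biot (volume-weighted) mixing rule: n_mix ≈ f*n1 + (1-f)*n2
n_herb, n_mineral = 1.55, 1.46   # assumed component indices (round numbers)
n_target = 1.51                  # e.g. index of an acrylic or SLA-printed model

f = (n_target - n_mineral) / (n_herb - n_mineral)   # volume fraction of herb oil
n_mix = f * n_herb + (1.0 - f) * n_mineral

print(round(f, 3), round(n_mix, 3))   # ≈ 0.556 herb oil fraction hits n = 1.51
```

In practice the blend is trimmed experimentally with a refractometer, since real mixtures deviate slightly from the linear rule.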

  14. Matching Index-of-Refraction for 3D Printing Model Using Mixture of Herb Essential Oil and Light Mineral Oil

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2013-10-15

    This study has extensively investigated emerging 3-D printing technologies for use with MIR-based flow field visualization methods such as PIV and LDV. As a result, a mixture of herb essential oil and light mineral oil has been evaluated to be a great working fluid owing to its adequate properties. Using this combination, refractive indices between 1.45 and 1.55 can be accurately matched, a range that covers most transparent materials. Conclusively, the proposed MIR method is expected to provide large flexibility in model materials and geometries for laser-based optical measurements. Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV) are the two major optical technologies used for flow field visualization in the latest fundamental thermal-hydraulics research. These techniques require minimizing optical distortions to enable high-quality data. Therefore, matching the index of refraction (MIR) between model materials and working fluids is an essential part of minimizing measurement uncertainty. This paper proposes to use 3-D printing technology for manufacturing models for MIR-based optical measurements. Because of the large flexibility in geometries and materials of 3-D printing, its application is expected to provide tremendous advantages over traditional MIR-based optical measurements. This study focuses on 3-D printing models and investigates their optical properties, transparent printing techniques, and index-matching fluids.

  15. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    International Nuclear Information System (INIS)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo

    2015-01-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of optical measurements such as PIV and LDV have been successfully minimized using various combinations of model materials and working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods, including selective laser sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on a mixture of herb essential oils, which exhibits high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range from 1.453 to 1.555 and from 2.37 to 6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that MIR can be successfully achieved at a refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation in nuclear thermal-hydraulic research

  16. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo, E-mail: kes7741@snu.ac.kr

    2015-04-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of optical measurements such as PIV and LDV have been successfully minimized using various combinations of model materials and working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods, including selective laser sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on a mixture of herb essential oils, which exhibits high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range from 1.453 to 1.555 and from 2.37 to 6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that MIR can be successfully achieved at a refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation in nuclear thermal-hydraulic research.

  17. Fast group matching for MR fingerprinting reconstruction.

    Science.gov (United States)

    Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L

    2015-08-01

    MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1, T2, proton density, and B0, the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group-specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1000 MRF samples, and an image matrix of 128 × 128, GRM was able to map MR parameters within 2 s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources. © 2014 Wiley Periodicals, Inc.
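
A minimal sketch of the two-stage idea (prune candidate groups by a group signature, then match directly within the survivors) is given below on a synthetic clustered dictionary. The grouping scheme, noise level, and dictionary sizes are invented for illustration; real MRF dictionaries come from Bloch simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered dictionary: atoms are small perturbations of group prototypes,
# mimicking the strong intra-group correlation of real MRF dictionaries.
n_groups, per_group, n_samples = 8, 50, 64
centers = rng.standard_normal((n_groups, n_samples))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
D = (np.repeat(centers, per_group, axis=0)
     + 0.05 * rng.standard_normal((n_groups * per_group, n_samples)))
D /= np.linalg.norm(D, axis=1, keepdims=True)
labels = np.repeat(np.arange(n_groups), per_group)
signatures = np.stack([D[labels == g].mean(axis=0) for g in range(n_groups)])

def group_match(y, keep=2):
    """Two-stage match: prune to 'keep' best groups, then match inside them."""
    y = y / np.linalg.norm(y)
    top_groups = np.argsort(signatures @ y)[-keep:]
    cand = np.flatnonzero(np.isin(labels, top_groups))
    return cand[np.argmax(D[cand] @ y)]

truth = 123                                           # atom index to recover
y = D[truth] + 0.02 * rng.standard_normal(n_samples)  # noisy measured signal
match = group_match(y)
print(match)
```

With `keep=2` of 8 groups, only a quarter of the dictionary is ever compared against the measured signal, which is the source of the speedup described above.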

  18. Scientist Role Models in the Classroom: How Important Is Gender Matching?

    Science.gov (United States)

    Conner, Laura D. Carsten; Danielson, Jennifer

    2016-01-01

    Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used…

  19. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, and supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.
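
The invariant at the core of AIPS, that the weights of an affine combination of feature points are preserved under any affine map, can be verified directly. The feature points and transform below are arbitrary illustrative values.

```python
import numpy as np

# Feature points in the template image (rows are points, columns x, y)
P = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0], [3.0, 2.0]])
q = np.array([2.0, 1.5])                   # point to encode

# Affine-combination weights: w @ P = q with sum(w) = 1
A = np.vstack([P.T, np.ones(len(P))])      # 3 x 4 linear system
b = np.append(q, 1.0)
w, *_ = np.linalg.lstsq(A, b, rcond=None)

# Apply an arbitrary affine map x -> M @ x + t to all points
M = np.array([[1.2, -0.3], [0.4, 0.9]])
t = np.array([5.0, -2.0])
P2 = P @ M.T + t
q2 = q @ M.T + t

# The same weights reproduce the transformed point: the affine invariant
print(np.allclose(w @ P2, q2))             # True
```

Because the weights are unchanged by the transformation, matching can be performed in the weight (parameter) space without estimating the transformation first.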

  20. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimation of exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  1. On a special case of model matching

    Czech Academy of Sciences Publication Activity Database

    Zagalak, Petr

    2004-01-01

    Roč. 77, č. 2 (2004), s. 164-172 ISSN 0020-7179 R&D Projects: GA ČR GA102/01/0608 Institutional research plan: CEZ:AV0Z1075907 Keywords : linear systems * state feedback * model matching Subject RIV: BC - Control Systems Theory Impact factor: 0.702, year: 2004

  2. Fractured reservoir history matching improved based on artificial intelligent

    Directory of Open Access Journals (Sweden)

    Sayyed Hadi Riazi

    2016-12-01

    Full Text Available In this paper, a new robust approach based on the Least Squares Support Vector Machine (LSSVM) as a proxy model is used for automatic fractured reservoir history matching. The proxy model is built to model the history-match objective function (mismatch values) based on the history data of the field. This model is then used to minimize the objective function through Particle Swarm Optimization (PSO) and the Imperialist Competitive Algorithm (ICA). In automatic history matching, sensitivity analysis is often performed on the full simulation model. In this work, to get a new range of the uncertain (matching) parameters in which the objective function has a minimum value, sensitivity analysis is also performed on the proxy model. By applying the modified ranges to the optimization methods, optimization of the objective function is faster and the outputs of the optimization methods (matching parameters) are produced in less time and with high precision. This procedure leads to matching the history of the field with a set of reservoir parameters. The final sets of parameters are then applied to the full simulation model to validate the technique. The obtained results show that the present procedure is effective for the history-matching process due to its robust dependability and fast convergence speed. Because of its high speed and need for small data sets, LSSVM is the best tool to build a proxy model. Also, the comparison of PSO and ICA shows that PSO is less time-consuming and more effective.
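
To make the optimization step concrete, the sketch below runs a minimal particle swarm over a stand-in proxy objective with two matching parameters. The quadratic mismatch surface and all parameter names are placeholders for the LSSVM proxy described above, not part of the paper's workflow.

```python
import random

random.seed(3)

# Stand-in proxy for the history-match mismatch over two matching parameters
# (e.g. fracture permeability and matrix porosity, rescaled to [0, 1])
def proxy_mismatch(x, y):
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

# Minimal particle swarm optimization run on the cheap proxy model
n, iters, w, c1, c2 = 20, 80, 0.6, 1.5, 1.5
pos = [[random.random(), random.random()] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: proxy_mismatch(*p))[:]

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if proxy_mismatch(*pos[i]) < proxy_mismatch(*pbest[i]):
            pbest[i] = pos[i][:]
            if proxy_mismatch(*pbest[i]) < proxy_mismatch(*gbest):
                gbest = pbest[i][:]

print([round(v, 2) for v in gbest])   # converges near the optimum (0.3, 0.7)
```

The point of the proxy is visible in the structure: every function evaluation inside the loop is cheap, whereas each evaluation against a full reservoir simulator would take minutes to hours.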

  3. Automated image-matching technique for comparative diagnosis of the liver on CT examination

    International Nuclear Information System (INIS)

    Okumura, Eiichiro; Sanada, Shigeru; Suzuki, Masayuki; Tsushima, Yoshito; Matsui, Osamu

    2005-01-01

    When interpreting enhanced computed tomography (CT) images of the upper abdomen, radiologists visually select a set of images of the same anatomical positions from two or more CT image series (i.e., non-enhanced and contrast-enhanced CT images at the arterial and delayed phases) to depict and characterize any abnormalities. The same process is also necessary to create subtraction images by computer. We have developed an automated image selection system using a template-matching technique that allows the recognition of image sets at the same anatomical position from two CT image series. Using the template-matching technique, we compared several anatomical structures in each CT image at the same anatomical position. As the position of the liver may shift according to respiratory movement, not only the shape of the liver but also the gallbladder and other prominent structures included in the CT images were compared to allow appropriate selection of a set of CT images. This novel technique was applied in 11 upper abdominal CT examinations. In CT images with a slice thickness of 7.0 or 7.5 mm, the percentage of image sets selected correctly by the automated procedure was 86.6±15.3% per case. In CT images with a slice thickness of 1.25 mm, the percentages of correct selection of image sets by the automated procedure were 79.4±12.4% (non-enhanced and arterial-phase CT images) and 86.4±10.1% (arterial- and delayed-phase CT images). This automated method is useful for assisting in interpreting CT images and in creating digital subtraction images. (author)
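
A common way to implement such template matching between two image series is normalized cross-correlation between candidate slices. The sketch below uses random images with a known slice offset in place of real CT data; the array sizes and noise level are invented.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size image slices."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
series_a = rng.random((20, 32, 32))      # e.g. non-enhanced phase (20 slices)
shift = 3                                # respiratory-like slice offset
series_b = series_a[shift:] + 0.05 * rng.standard_normal((17, 32, 32))

# For each slice of series B, pick the best-correlated slice of series A
matches = [max(range(20), key=lambda i: ncc(series_a[i], b)) for b in series_b]
print(matches[:5])                       # recovers the offset: [3, 4, 5, 6, 7]
```

Real contrast-enhanced data also change intensity between phases, which is why the system above compares several anatomical structures rather than relying on raw correlation alone.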

  4. Ontology-based composition and matching for dynamic service coordination

    OpenAIRE

    Pahl, Claus; Gacitua-Decar, Veronica; Wang, MingXue; Yapa Bandara, Kosala

    2011-01-01

    Service engineering needs to address integration problems allowing services to collaborate and coordinate. The need to handle dynamic automated changes - caused by on-demand environments and changing requirements - can be addressed through service coordination based on ontology-based composition and matching techniques. Our solution to composition and matching utilises a service coordination space that acts as a passive infrastructure for collaboration. We discuss the information models an...

  5. An investigation of matched index of refraction technique and its application in optical measurements of fluid flow

    Science.gov (United States)

    Amini, Noushin; Hassan, Yassin A.

    2012-12-01

    Optical distortions caused by non-uniformities of the refractive index within the measurement volume are a major impediment for all laser diagnostic imaging techniques applied in experimental fluid dynamics studies. Matching the refractive indices of the working fluid and the test section walls and interfaces provides an effective solution to this problem. The experimental set-ups designed to be used along with laser imaging techniques are typically constructed of transparent solid materials. In this investigation, different types of aqueous salt solutions and various organic fluids are studied for refractive index matching with acrylic and fused quartz, which are commonly used in the construction of test sections. One aqueous CaCl2·2H2O solution (63% by weight) and two organic fluids, dibutyl phthalate and p-cymene, are suggested for refractive index matching with fused quartz and acrylic, respectively. Moreover, the temperature dependence of the refractive indices of these fluids is investigated, and the thermo-optic constant is calculated for each fluid. Finally, the fluid viscosity for different shear rates is measured as a function of temperature and is applied to characterize the physical behavior of the proposed fluids.
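
Once the thermo-optic constant dn/dT of a fluid is known, the working temperature needed to fine-tune the index match follows from the linear model n(T) ≈ n(T0) + (dn/dT)(T - T0). The numbers below are typical illustrative values, not the measured constants from this study.

```python
# Linear thermo-optic model: n(T) ≈ n(T0) + (dn/dT) * (T - T0)
n0, T0 = 1.4585, 20.0    # fluid index at reference temperature [degC] (assumed)
dn_dT = -4.0e-4          # assumed thermo-optic constant [1/degC]
n_solid = 1.4570         # target: typical fused-quartz index

T_match = T0 + (n_solid - n0) / dn_dT   # temperature at which indices match
print(round(T_match, 2))                # 23.75 degC
```

Because dn/dT for liquids is orders of magnitude larger than for solids, temperature control of the fluid alone is usually enough to trim the match.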

  6. A coupled piezoelectric–electromagnetic energy harvesting technique for achieving increased power output through damping matching

    International Nuclear Information System (INIS)

    Challa, Vinod R; Prasad, M G; Fisher, Frank T

    2009-01-01

    Vibration energy harvesting is being pursued as a means to power wireless sensors and ultra-low power autonomous devices. From a design standpoint, matching the electrical damping induced by the energy harvesting mechanism to the mechanical damping in the system is necessary for maximum efficiency. In this work, two independent energy harvesting techniques are coupled to provide higher electrical damping within the system. Here the coupled energy harvesting device consists of a primary piezoelectric energy harvesting device to which an electromagnetic component is added to better match the total electrical damping to the mechanical damping in the system. The first coupled device has a resonance frequency of 21.6 Hz and generates a peak power output of ∼332 µW, compared to 257 and 244 µW obtained from the optimized, stand-alone piezoelectric and electromagnetic energy harvesting devices, respectively, resulting in a 30% increase in power output. A theoretical model has been developed which closely agrees with the experimental results. A second coupled device, which utilizes the d33 piezoelectric mode, shows a 65% increase in power output in comparison to the corresponding stand-alone, single harvesting mode devices. This work illustrates the design considerations and limitations that one must consider to enhance device performance through the coupling of multiple harvesting mechanisms within a single energy harvesting device.
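
The damping-matching argument can be checked numerically with the standard single-degree-of-freedom result for power delivered to the electrical domain at resonance. Apart from the 21.6 Hz resonance frequency reported for the first coupled device, all parameter values below are assumptions for illustration.

```python
import math

# Power into the electrical domain at resonance for a base-excited SDOF
# harvester: P = m * ze * a^2 / (4 * w * (zm + ze)^2)
m, a = 0.01, 9.81                 # assumed mass [kg] and base acceleration [m/s^2]
w = 2 * math.pi * 21.6            # resonance frequency of the first device [rad/s]
zm = 0.02                         # assumed mechanical damping ratio

def power(ze):
    return m * ze * a**2 / (4 * w * (zm + ze) ** 2)

# Scan the electrical damping ratio: the peak sits exactly at ze == zm
zes = [i * 0.001 for i in range(1, 101)]
ze_best = max(zes, key=power)
print(ze_best)                    # 0.02: matched damping maximizes power
```

Setting dP/dze = 0 gives ze = zm analytically, which is why adding the electromagnetic component to raise the total electrical damping toward the mechanical damping increases the harvested power.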

  7. Analysis of terrain map matching using multisensing techniques for applications to autonomous vehicle navigation

    Science.gov (United States)

    Page, Lance; Shen, C. N.

    1991-01-01

    This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.

  8. A New Model for a Carpool Matching Service.

    Directory of Open Access Journals (Sweden)

    Jizhe Xia

    Full Text Available Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.
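
For tiny instances, an optimal carpool assignment can be found by exhaustive search, which also makes the combinatorial blow-up the paper refers to easy to see: there are n! possible assignments for n teams. The coordinates and grid-distance cost below are invented stand-ins for street-network travel costs.

```python
from itertools import permutations

# Toy instance: pickup points of 4 drivers and 4 riders; Manhattan distance
# stands in for street-network travel cost (cf. the Guangzhou case study)
drivers = [(0, 0), (5, 2), (1, 7), (8, 8)]
riders = [(1, 1), (6, 1), (2, 6), (7, 9)]

def detour(d, r):
    return abs(d[0] - r[0]) + abs(d[1] - r[1])

# Exhaustive search over all n! assignments: optimal for tiny instances, and a
# direct illustration of why larger instances need heuristics
best_cost, best_assign = min(
    (sum(detour(d, r) for d, r in zip(drivers, perm)), perm)
    for perm in permutations(riders)
)
print(best_cost, best_assign)
```

At city scale, polynomial-time assignment algorithms or heuristics replace this enumeration, which is exactly the trade-off the paper evaluates across problem instances.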

  9. PLANE MATCHING WITH OBJECT-SPACE SEARCHING USING INDEPENDENTLY RECTIFIED IMAGES

    Directory of Open Access Journals (Sweden)

    H. Takeda

    2012-07-01

    Full Text Available In recent years, the social situation in cities has changed significantly, for example through redevelopment after massive earthquakes and large-scale urban development. Numerical simulations can be used to study these phenomena, and such simulations require the construction of high-definition three-dimensional city models that accurately reflect the real world. Progress in sensor technology allows us to easily obtain multi-view images; however, the existing multi-image matching techniques are inadequate. In this paper, we propose a new technique for multi-image matching. Since the existing method of feature searching is complicated, we have developed a rectification method that can be applied independently to each image and does not depend on the stereo pair. Our study focuses on the object-space searching method, which can produce mismatches due to the occlusion or distortion of wall textures on images. The proposed technique can also match the building wall surface. The proposed technique has several advantages, and its usefulness is demonstrated through an experiment using actual images.

  10. Semantic Data Matching: Principles and Performance

    Science.gov (United States)

    Deaton, Russell; Doan, Thao; Schweiger, Tom

    Automated and real-time management of customer relationships requires robust and intelligent data matching across widespread and diverse data sources. Simple string matching algorithms, such as dynamic programming, can handle typographical errors in the data, but are less able to match records that require contextual and experiential knowledge. Latent Semantic Indexing (LSI) (Berry et al.; Deerwester et al.) is a machine intelligence technique that can match data based upon higher-order structure, and is able to handle difficult problems such as words that share a spelling but have different meanings, synonyms, and words with multiple meanings. Essentially, the technique matches records based upon context, mathematically quantifying when terms occur in the same record.
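
LSI proper factorizes a term-document matrix with an SVD and compares records in the resulting low-rank space; that machinery is beyond a short sketch. As a minimal stand-in, the following compares records by cosine similarity of raw term-count vectors, the starting representation that LSI's low-rank projection refines in order to capture synonymy. The records and query are invented for illustration.

```python
import math
from collections import Counter

def term_vector(record):
    """Bag-of-words term counts for one record."""
    return Counter(record.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

records = [
    "John Smith 42 Oak Street Springfield",
    "J. Smith 42 Oak St Springfield",
    "Jane Doe 7 Elm Avenue Portland",
]
query = term_vector("john smith oak street")
best = max(records, key=lambda r: cosine(query, term_vector(r)))
print(best)  # the first Springfield record scores highest
```

Note that "Street" vs. "St" only matches after the low-rank projection LSI adds; the raw-count sketch still misses it, which is exactly the limitation the abstract attributes to plain string matching.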

  11. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  12. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    Science.gov (United States)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar system especially in harsh environment. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique has been proposed based on independent component analysis (ICA). Since normally few changes happen in the shape of laser pulse itself, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to achieve the self-adaptability of algorithm, local Mean Square Error (MSE) has been defined as an appropriate criterion for investigating the iteration results. The obtained experimental results demonstrated that the self-adaptive pulse-matching ICA (PM-ICA) method could effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. Especially, the proposed method achieves 4 dB more improvement of signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  13. Wages, Training, and Job Turnover in a Search-Matching Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Nielsen, Michael Svarer

    1999-01-01

    In this paper we extend a job search-matching model with firm-specific investments in training developed by Mortensen (1998) to allow for different offer arrival rates in employment and unemployment. The model by Mortensen changes the original wage posting model (Burdett and Mortensen, 1998) in two...

  14. Uniform stable conformal convolutional perfectly matched layer for enlarged cell technique conformal finite-difference time-domain method

    International Nuclear Information System (INIS)

    Wang Yue; Wang Jian-Guo; Chen Zai-Gao

    2015-01-01

    Based on conformal construction of physical model in a three-dimensional Cartesian grid, an integral-based conformal convolutional perfectly matched layer (CPML) is given for solving the truncation problem of the open port when the enlarged cell technique conformal finite-difference time-domain (ECT-CFDTD) method is used to simulate the wave propagation inside a perfect electric conductor (PEC) waveguide. The algorithm has the same numerical stability as the ECT-CFDTD method. For the long-time propagation problems of an evanescent wave in a waveguide, several numerical simulations are performed to analyze the reflection error by sweeping the constitutive parameters of the integral-based conformal CPML. Our numerical results show that the integral-based conformal CPML can be used to efficiently truncate the open port of the waveguide. (paper)

  15. Matching by Monotonic Tone Mapping.

    Science.gov (United States)

    Kovacs, Gyorgy

    2018-06-01

    In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative to conventional measures in problems where possible tone mappings are close to monotonic.
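
The piecewise tone-mapping optimization behind MMTM is not detailed in this record. As a hedged illustration of the underlying idea, that a dissimilarity measure can be made invariant to monotonic intensity transformations, the sketch below compares images through pixel ranks, which any strictly increasing tone mapping leaves unchanged. The rank-based measure is a simplification, not the paper's algorithm.

```python
def ranks(values):
    """Rank of each element (ties broken by position); ranks are invariant
    under any strictly increasing mapping of the values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def rank_dissimilarity(img_a, img_b):
    """Sum of absolute rank differences between two equally sized (flattened) images."""
    ra, rb = ranks(img_a), ranks(img_b)
    return sum(abs(x - y) for x, y in zip(ra, rb))

template = [10, 40, 20, 90, 60]
gamma    = [v ** 2 for v in template]   # a strictly monotonic tone mapping
shuffled = [60, 10, 90, 20, 40]         # same values, different layout

print(rank_dissimilarity(template, gamma))     # 0: the tone mapping preserved all ranks
print(rank_dissimilarity(template, shuffled))  # > 0: genuinely different image
```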

  16. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on co-linearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable with the proposed registration approach, offering an alternative to the labour-intensive manual registration process.

  17. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDMs) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and helps guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.
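
The full maximum topology matching construction is not given in this record; a one-dimensional sketch conveys the flavor: extract the local maxima of two sampled scalar functions and greedily pair each maximum with the nearest unused maximum of the other function within a locality window. The window size and the greedy strategy are illustrative assumptions, not the paper's algorithm.

```python
def local_maxima(samples):
    """Indices of strict interior local maxima of a 1-D sampled function."""
    return [i for i in range(1, len(samples) - 1)
            if samples[i - 1] < samples[i] > samples[i + 1]]

def match_extrema(f, g, max_shift=2):
    """Greedily pair each maximum of f with the closest unused maximum of g
    within max_shift samples; unmatched extrema are simply dropped."""
    mf, mg = local_maxima(f), local_maxima(g)
    unused = set(mg)
    pairs = []
    for i in mf:
        candidates = [j for j in unused if abs(i - j) <= max_shift]
        if candidates:
            j = min(candidates, key=lambda c: abs(i - c))
            pairs.append((i, j))
            unused.discard(j)
    return pairs

f = [0, 3, 1, 0, 4, 0, 1, 0]
g = [0, 1, 3, 0, 5, 1, 0, 0]
print(match_extrema(f, g))  # [(1, 2), (4, 4)]
```

The number of matched pairs relative to the total number of extrema could then serve as a crude similarity score between the two functions.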

  18. Analytical modelling of waveguide mode launchers for matched feed reflector systems

    DEFF Research Database (Denmark)

    Palvig, Michael Forum; Breinbjerg, Olav; Meincke, Peter

    2016-01-01

    Matched feed horns aim to cancel cross polarization generated in offset reflector systems. An analytical method for predicting the mode spectrum generated by inclusions in such horns, e.g. stubs and pins, is presented. The theory is based on the reciprocity theorem with the inclusions represented by current sources. The model is supported by Method of Moments calculations in GRASP, and very good agreement is seen. The model gives rise to many interesting observations and ideas for new or improved mode launchers for matched feeds.

  19. Adiabatic perturbations in pre-big bang models: Matching conditions and scale invariance

    International Nuclear Information System (INIS)

    Durrer, Ruth; Vernizzi, Filippo

    2002-01-01

    At low energy, the four-dimensional effective action of the ekpyrotic model of the universe is equivalent to a slightly modified version of the pre-big bang model. We discuss cosmological perturbations in these models. In particular we address the issue of matching the perturbations from a collapsing to an expanding phase. We show that, under certain physically motivated and quite generic assumptions on the high energy corrections, one obtains n=0 for the spectrum of scalar perturbations in the original pre-big bang model (with a vanishing potential). With the same assumptions, when an exponential potential for the dilaton is included, a scale invariant spectrum (n=1) of adiabatic scalar perturbations is produced under very generic matching conditions, both in a modified pre-big bang and ekpyrotic scenario. We also derive the resulting spectrum for arbitrary power law scale factors matched to a radiation-dominated era

  20. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims at enriching the contents of spatial entities and attaching spatial location information to text chunks. In the field of data fusion, matching spatial entities with their corresponding describing text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textural attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible and, through supervised learning, pushes mismatching pairs as far apart as possible. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
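
The trained Siamese embeddings are not available here, but the decision rule the abstract describes, cosine distance between learned vectors with matches close and mismatches far, can be sketched with hypothetical embedding vectors. The vectors and threshold below are invented for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def is_match(text_vec, entity_vec, threshold=0.8):
    """Decision rule on learned embeddings: close in cosine space -> match."""
    return cosine_similarity(text_vec, entity_vec) >= threshold

# Hypothetical embeddings such as a trained Siamese network might produce:
chunk       = [0.9, 0.1, 0.4]   # embedding of a text chunk
same_entity = [0.8, 0.2, 0.5]   # embedding of the matching spatial entity
other       = [0.1, 0.9, 0.0]   # embedding of an unrelated entity

print(is_match(chunk, same_entity))  # True
print(is_match(chunk, other))        # False
```

Training the network amounts to choosing the embedding function so that this simple rule separates matching from mismatching pairs.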

  1. An algebraic method to develop well-posed PML models Absorbing layers, perfectly matched layers, linearized Euler equations

    International Nuclear Information System (INIS)

    Rahmouni, Adib N.

    2004-01-01

    In 1994, Berenger [Journal of Computational Physics 114 (1994) 185] proposed a new layer method for electromagnetism: the perfectly matched layer, PML. This new method is based on the truncation of the computational domain by a layer which absorbs waves regardless of their frequency and angle of incidence. Unfortunately, the technique proposed by Berenger (loc. cit.) leads to a system which has lost the most important properties of the original one: strong hyperbolicity and symmetry. We present in this paper an algebraic technique leading to the well-known PML model [IEEE Transactions on Antennas and Propagation 44 (1996) 1630] for the linearized Euler equations, strongly well-posed, preserving the advantages of the initial method, and retaining symmetry. The technique proposed in this paper can be extended to various hyperbolic problems.

  2. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  3. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-models meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of role models...

  4. Cross-species genomics matches driver mutations and cell compartments to model ependymoma

    Science.gov (United States)

    Johnson, Robert A.; Wright, Karen D.; Poppleton, Helen; Mohankumar, Kumarasamypet M.; Finkelstein, David; Pounds, Stanley B.; Rand, Vikki; Leary, Sarah E.S.; White, Elsie; Eden, Christopher; Hogg, Twala; Northcott, Paul; Mack, Stephen; Neale, Geoffrey; Wang, Yong-Dong; Coyle, Beth; Atkinson, Jennifer; DeWire, Mariko; Kranenburg, Tanya A.; Gillespie, Yancey; Allen, Jeffrey C.; Merchant, Thomas; Boop, Fredrick A.; Sanford, Robert. A.; Gajjar, Amar; Ellison, David W.; Taylor, Michael D.; Grundy, Richard G.; Gilbertson, Richard J.

    2010-01-01

    Understanding the biology that underlies histologically similar but molecularly distinct subgroups of cancer has proven difficult since their defining genetic alterations are often numerous, and the cellular origins of most cancers remain unknown [1-3]. We sought to decipher this heterogeneity by integrating matched genetic alterations and candidate cells of origin to generate accurate disease models. First, we identified subgroups of human ependymoma, a form of neural tumor that arises throughout the central nervous system (CNS). Subgroup-specific alterations included amplifications and homozygous deletions of genes not yet implicated in ependymoma. To select cellular compartments most likely to give rise to subgroups of ependymoma, we matched the transcriptomes of human tumors to those of mouse neural stem cells (NSCs), isolated from different regions of the CNS at different developmental stages, with an intact or deleted Ink4a/Arf locus. The transcriptome of human cerebral ependymomas with amplified EPHB2 and deleted INK4A/ARF matched only that of embryonic cerebral Ink4a/Arf−/− NSCs. Remarkably, activation of Ephb2 signaling in these, but not other NSCs, generated the first mouse model of ependymoma, which is highly penetrant and accurately models the histology and transcriptome of one subgroup of human cerebral tumor. Further comparative analysis of matched mouse and human tumors revealed selective deregulation in the expression and copy number of genes that control synaptogenesis, pinpointing disruption of this pathway as a critical event in the production of this ependymoma subgroup. Our data demonstrate the power of cross-species genomics to meticulously match subgroup-specific driver mutations with cellular compartments to model and interrogate cancer subgroups. PMID:20639864

  5. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    Science.gov (United States)

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    Ontology is the process of developing and elucidating the concepts of an information domain shared by a group of users. Incorporating ontology into information retrieval is a standard method for improving the retrieval of the relevant information users require. Matching keywords against a historical or domain-specific information base is significant in recent systems for finding the best match for specific input queries. This research presents a better querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic form, which is used for routing the query to the appropriate servers. Matching algorithms represent an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to study the semantic model and query under conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching extracted instances from the queries and the information field. The queries and information domain are focused on semantic matching, to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web retrieves documents more effectively than a standard ontology.
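
The paper's hybrid mechanism is more involved than a few lines can show, but the core idea of augmenting keyword matching with ontology knowledge can be sketched with a hand-written synonym table standing in for the ontology. The table, documents, and scoring rule below are illustrative assumptions.

```python
# A tiny hand-written synonym table stands in for the ontology here.
ONTOLOGY_SYNONYMS = {
    "car": {"car", "automobile", "vehicle"},
    "doctor": {"doctor", "physician"},
}

def expand(term):
    """Expand a query term to its ontology-equivalent keyword set."""
    for group in ONTOLOGY_SYNONYMS.values():
        if term in group:
            return group
    return {term}

def semantic_match(query, document):
    """Score = fraction of query terms matched literally or via the ontology."""
    doc_terms = set(document.lower().split())
    q_terms = query.lower().split()
    hits = sum(1 for t in q_terms if expand(t) & doc_terms)
    return hits / len(q_terms)

doc = "the physician drove an automobile to the clinic"
print(semantic_match("doctor car", doc))   # 1.0 despite no literal keyword overlap
print(semantic_match("doctor boat", doc))  # 0.5
```

Plain keyword matching would score 0.0 on the first query; the ontology expansion is what recovers the match.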

  6. A match-mismatch test of a stage model of behaviour change in tobacco smoking

    NARCIS (Netherlands)

    Dijkstra, A; Conijn, B; De Vries, H

    Aims An innovation offered by stage models of behaviour change is that of stage-matched interventions. Match-mismatch studies are the primary test of this idea but also the primary test of the validity of stage models. This study aimed at conducting such a test among tobacco smokers using the Social

  7. Multicollinearity in associations between multiple environmental features and body weight and abdominal fat: using matching techniques to assess whether the associations are separable.

    Science.gov (United States)

    Leal, Cinira; Bean, Kathy; Thomas, Frédérique; Chaix, Basile

    2012-06-01

    Because of the strong correlations among neighborhoods' characteristics, it is not clear whether the associations of specific environmental exposures (e.g., densities of physical features and services) with obesity can be disentangled. Using data from the RECORD (Residential Environment and Coronary Heart Disease) Cohort Study (Paris, France, 2007-2008), the authors investigated whether neighborhood characteristics related to the sociodemographic, physical, service-related, and social-interactional environments were associated with body mass index and waist circumference. The authors developed an original neighborhood characteristic-matching technique (analyses within pairs of participants similarly exposed to an environmental variable) to assess whether or not these associations could be disentangled. After adjustment for individual/neighborhood socioeconomic variables, body mass index/waist circumference was negatively associated with characteristics of the physical/service environments reflecting higher densities (e.g., proportion of built surface, densities of shops selling fruits/vegetables, and restaurants). Multiple adjustment models and the neighborhood characteristic-matching technique were unable to identify which of these neighborhood variables were driving the associations because of high correlations between the environmental variables. Overall, beyond the socioeconomic environment, the physical and service environments may be associated with weight status, but it is difficult to disentangle the effects of strongly correlated environmental dimensions, even if they imply different causal mechanisms and interventions.

  8. Chaotic Planning Solutions in the Textbook Model of Labor Market Search and Matching

    NARCIS (Netherlands)

    Bhattacharya, J.; Bunzel, H.

    2003-01-01

    This paper demonstrates that cyclical and chaotic planning solutions are possible in the standard textbook model of search and matching in labor markets. More specifically, it takes a discretetime adaptation of the continuous-time matching economy described in Pissarides (1990, 2001), and computes

  9. A comparative study between matched and mis-matched projection/back projection pairs used with ASIRT reconstruction method

    International Nuclear Information System (INIS)

    Guedouar, R.; Zarrad, B.

    2010-01-01

    For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been proved in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes when used in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared errors (RMSE), which are calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The forward projection operator performance seems independent of the choice of the back projection operator and vice versa.
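
A minimal numerical sketch of the iteration under study, with an additive SIRT-style update x ← x + λB(b − Ax): using the transpose as backprojector gives a matched pair, while a hypothetical perturbed operator gives a mismatched one. On consistent noiseless data both variants converge here; the toy operators and step size are illustrative assumptions, not the paper's projectors.

```python
def matvec(M, x):
    """Dense matrix-vector product."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def sirt(A, B, b, steps=200, lam=0.2):
    """Additive SIRT-style iteration  x <- x + lam * B(b - A x)
    with forward projector A and (possibly mismatched) backprojector B."""
    x = [0.0] * len(A[0])
    for _ in range(steps):
        r = [bi - pi for bi, pi in zip(b, matvec(A, x))]  # sinogram residual
        c = matvec(B, r)                                  # backprojected correction
        x = [xi + lam * ci for xi, ci in zip(x, c)]
    return x

A = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # toy 3-ray, 2-pixel forward projector
x_true = [2.0, 1.0]
b = matvec(A, x_true)                      # noiseless projections

B_matched = transpose(A)                   # matched pair: B = A^T
A_perturbed = [[1.0, 0.0], [0.9, 1.1], [0.0, 1.0]]
B_mismatched = transpose(A_perturbed)      # hypothetical mismatched pair

for B in (B_matched, B_mismatched):
    x = sirt(A, B, b)
    print([round(v, 3) for v in x])        # both recover [2.0, 1.0] on this data
```

With noisy or inconsistent data the two fixed points differ, which is where the matched/mismatched comparison studied in the paper becomes interesting.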

  10. A comparative study between matched and mis-matched projection/back projection pairs used with ASIRT reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Guedouar, R., E-mail: raja_guedouar@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia); Zarrad, B., E-mail: boubakerzarrad@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia)

    2010-07-21

    For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been proved in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes when used in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared errors (RMSE), which are calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The forward projection operator performance seems independent of the choice of the back projection operator and vice versa.

  11. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Full Text Available Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences; intrusion detection over networks; big data analytics; etc. Various solutions have been provided by researchers in which the pattern is matched directly over uncompressed text. Such solutions require a lot of space and consume a lot of time when handling big data. Various researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
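
None of the reviewed codes is reproduced here; as a simpler, hedged illustration of matching directly in the compressed domain, the sketch below searches run-length-encoded text without decompressing it. Interior runs of the pattern must match text runs exactly, while the first and last pattern runs may be satisfied by longer text runs.

```python
def rle(s):
    """Run-length encode a string into [char, count] pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def search_rle(text_runs, pattern):
    """All start offsets of `pattern` in the text, found by scanning the
    run-length encoding directly."""
    p_runs = rle(pattern)
    starts, pos = [], 0
    for ch, n in text_runs:          # absolute start offset of each text run
        starts.append(pos)
        pos += n
    matches = []
    for i, (ch, n) in enumerate(text_runs):
        if ch != p_runs[0][0] or n < p_runs[0][1]:
            continue
        if len(p_runs) == 1:         # single-run pattern: every offset inside the run
            matches.extend(starts[i] + off for off in range(n - p_runs[0][1] + 1))
            continue
        # Multi-run pattern: its first run must end exactly at this run's boundary.
        j, ok = i + 1, True
        for k in range(1, len(p_runs) - 1):   # interior runs match exactly
            if j >= len(text_runs) or text_runs[j] != p_runs[k]:
                ok = False
                break
            j += 1
        if ok and j < len(text_runs):
            ch_l, n_l = text_runs[j]          # last run may be longer in the text
            if ch_l == p_runs[-1][0] and n_l >= p_runs[-1][1]:
                matches.append(starts[i] + n - p_runs[0][1])
    return matches

text = "aaabbbaabbb"
print(search_rle(rle(text), "abbb"))  # [2, 7]
```

The scan touches one entry per run rather than one per character, which is the space/time advantage CPM techniques generalize to stronger codes.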

  12. Is There a Purchase Limit on Regional Growth? A Quasi-experimental Evaluation of Investment Grants Using Matching Techniques

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel; Paloyo, Alfredo R.; Alecke, Björn

    2015-01-01

    In this article, we apply recent advances in quasi-experimental estimation methods to analyze the effectiveness of Germany’s large-scale regional policy instrument, the joint Federal Government/State Programme “Gemeinschaftsaufgabe Verbesserung der regionalen Wirtschaftsstruktur” (GRW), which is a means to foster labor-productivity growth in lagging regions. In particular, adopting binary and generalized propensity-score matching methods, our results indicate that the GRW can be generally considered effective. However, we find evidence for a nonlinear relationship between GRW funding and regional growth, which raises questions about the use of matching techniques in regional data settings. Overall, however, the matching approach can still be considered of great value for regional policy analysis and should be the subject of future research efforts in the field of empirical regional science.

  13. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images precisely because they are incapable of dealing with the distortion. In order to solve this problem, a new voting algorithm is proposed based on the spherical model and the D-Nets algorithm. Because the spherical-based keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, and comparison is made among the proposed algorithm, SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  14. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    Energy Technology Data Exchange (ETDEWEB)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.; Czekala, Ian; Bailey, Vanessa P.; Follette, Katherine B. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA, 94305 (United States); Wang, Jason J.; Rosa, Robert J. De; Duchêne, Gaspard [Astronomy Department, University of California, Berkeley CA, 94720 (United States); Pueyo, Laurent [Space Telescope Science Institute, Baltimore, MD, 21218 (United States); Marley, Mark S. [NASA Ames Research Center, Mountain View, CA, 94035 (United States); Arriaga, Pauline; Fitzgerald, Michael P. [Department of Physics and Astronomy, University of California, Los Angeles, CA, 90095 (United States); Barman, Travis [Lunar and Planetary Laboratory, University of Arizona, Tucson AZ, 85721 (United States); Bulger, Joanna [Subaru Telescope, NAOJ, 650 North A’ohoku Place, Hilo, HI 96720 (United States); Chilcote, Jeffrey [Dunlap Institute for Astronomy and Astrophysics, University of Toronto, Toronto, ON, M5S 3H4 (Canada); Cotten, Tara [Department of Physics and Astronomy, University of Georgia, Athens, GA, 30602 (United States); Doyon, Rene [Institut de Recherche sur les Exoplanètes, Départment de Physique, Université de Montréal, Montréal QC, H3C 3J7 (Canada); Gerard, Benjamin L. [University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 5C2 (Canada); Goodsell, Stephen J., E-mail: jruffio@stanford.edu [Gemini Observatory, 670 N. A’ohoku Place, Hilo, HI, 96720 (United States); and others

    2017-06-10

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen–Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
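
The GPI forward-model machinery cannot be reproduced in a few lines, but the core matched-filter step, correlating the data with the expected source profile and detecting the correlation peak, can be sketched in one dimension. The template, noise level, and injected source below are illustrative assumptions.

```python
import random

random.seed(0)

template = [0.2, 0.6, 1.0, 0.6, 0.2]   # expected source profile (a toy "PSF")
true_pos = 40

# Noisy data with one injected source at true_pos.
signal = [random.gauss(0.0, 0.1) for _ in range(100)]
for k, t in enumerate(template):
    signal[true_pos + k] += t

def matched_filter(signal, template):
    """Slide the template over the data; the correlation score peaks where
    the data best resembles the expected source shape."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

scores = matched_filter(signal, template)
detected = max(range(len(scores)), key=scores.__getitem__)
print(detected)  # peaks at (or immediately adjacent to) true_pos
```

The paper's contribution is, in essence, replacing the fixed template here with a position-dependent forward model of the KLIP-distorted PSF.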

  15. Business models for open innovation: Matching heterogeneous open innovation strategies with business model dimensions

    OpenAIRE

    Saebi, Tina; Foss, Nicolai Juul

    2015-01-01

    This is the author's version of the article: "Business models for open innovation: Matching heterogeneous open innovation strategies with business model dimensions", European Management Journal, Volume 33, Issue 3, June 2015, Pages 201–213. Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies' business models are not attuned to open strategies. Ac...

  16. Translation Techniques

    OpenAIRE

    Marcia Pinheiro

    2015-01-01

    In this paper, we discuss three translation techniques: literal, cultural, and artistic. Literal translation is a well-known technique, which means that it is quite easy to find sources on the topic. Cultural and artistic translation may be new terms. Whilst cultural translation focuses on matching contexts, artistic translation focuses on matching reactions. Because literal translation matches only words, it is not hard to find situations in which we should not use this technique. Because a...

  17. Parikh Matching in the Streaming Model

    DEFF Research Database (Denmark)

    Lee, Lap-Kei; Lewenstein, Moshe; Zhang, Qin

    2012-01-01

    Let S be a string over an alphabet Σ = {σ1, σ2, …}. A Parikh-mapping maps a substring S′ of S to a |Σ|-length vector that contains, in location i of the vector, the count of σi in S′. Parikh matching refers to the problem of finding all substrings of a text T which match to a given input |Σ|-leng...
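For a fixed Parikh vector, every matching substring has the same length m = sum of the counts, so a standard (non-streaming) solution is a sliding window of length m with incremental count updates. The sketch below illustrates the problem itself, not the paper's streaming algorithm, which must work under sublinear-space constraints:

```python
from collections import Counter

def parikh_matches(text, target):
    """Return start indices of all substrings of `text` whose Parikh vector
    (per-symbol counts) equals `target`. All such substrings share length
    m = sum(target.values()), so one window of that length slides over the
    text with O(1) count updates per step."""
    m = sum(target.values())
    if m == 0 or m > len(text):
        return []
    window = Counter(text[:m])

    def ok():
        # Counter[] returns 0 for absent keys, so stale zero entries are harmless.
        return all(window[c] == target.get(c, 0) for c in set(window) | set(target))

    hits = [0] if ok() else []
    for i in range(1, len(text) - m + 1):
        window[text[i - 1]] -= 1          # symbol leaving the window
        window[text[i + m - 1]] += 1      # symbol entering the window
        if ok():
            hits.append(i)
    return hits
```

For example, `parikh_matches("abacab", {"a": 1, "b": 1})` reports every length-2 substring containing exactly one `a` and one `b`, regardless of order.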

  18. Crystallographic study of grain refinement in aluminum alloys using the edge-to-edge matching model

    International Nuclear Information System (INIS)

    Zhang, M.-X.; Kelly, P.M.; Easton, M.A.; Taylor, J.A.

    2005-01-01

    The edge-to-edge matching model, which describes the interfacial crystallographic characteristics between two phases related by reproducible orientation relationships, has been applied to the typical grain refiners in aluminum alloys. Excellent atomic matching between Al3Ti nucleating substrates, known to be effective nucleation sites for primary Al, and the Al matrix, in both close-packed directions and the close-packed planes containing these directions, has been identified. The crystallographic features of the grain refiner and the Al matrix are very consistent with the edge-to-edge matching model. For three other typical grain refiners for Al alloys, TiC (when a = 0.4328 nm), TiB2 and AlB2, the matching occurs only between the close-packed directions of the two phases and between the second close-packed plane of the Al matrix and the second close-packed plane of the refiners. According to the model, Al3Ti is predicted to be a more powerful nucleating substrate for Al alloys than TiC, TiB2 and AlB2, which agrees with previous experimental results. The present work shows that the edge-to-edge matching model has the potential to be a powerful tool in discovering new and more powerful grain refiners for Al alloys.
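The model's screening step reduces to simple percentage misfits between interatomic spacings along candidate matching directions and between d-values of the planes containing them. In the sketch below, the 10% and 6% cut-offs are the commonly quoted edge-to-edge matching criteria, stated here as assumptions rather than values taken from this abstract:

```python
def percent_misfit(a, b):
    """Relative mismatch between two spacings, as a percentage of the second."""
    return abs(a - b) / b * 100.0

def edge_to_edge_candidate(d_row_substrate, d_row_matrix,
                           d_plane_substrate, d_plane_matrix,
                           max_row_misfit=10.0, max_plane_mismatch=6.0):
    """True if a substrate/matrix pairing passes the edge-to-edge screening:
    interatomic-spacing misfit along the matching atom rows AND d-value
    mismatch between the matching planes must both be below threshold."""
    return (percent_misfit(d_row_substrate, d_row_matrix) <= max_row_misfit and
            percent_misfit(d_plane_substrate, d_plane_matrix) <= max_plane_mismatch)
```

A pairing with a 5% row misfit and a 3% plane mismatch would pass; raising the row misfit to 15% rejects it, which is the sense in which the model ranks Al3Ti above the borides.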

  19. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    Science.gov (United States)

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
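History matching proceeds by discarding "implausible" input settings whose emulated output is too far from the observation. A minimal sketch of the implausibility calculation follows; the names and the 3-sigma cutoff are the conventional choices, and the observation-error and discrepancy variances are assumed inputs:

```python
import math

def implausibility(z, emu_mean, emu_var, obs_var, disc_var=0.0):
    """Standardized distance between an observation z and the emulator's
    predicted mean, with emulator, observation-error, and model-discrepancy
    variances summed in quadrature in the denominator."""
    return abs(z - emu_mean) / math.sqrt(emu_var + obs_var + disc_var)

def non_implausible(inputs, z, emulate, obs_var, threshold=3.0):
    """Retain inputs whose implausibility is below the conventional cutoff.
    `emulate` maps an input x to an (emulated mean, emulated variance) pair;
    emulating the variance as well, rather than fixing it, is the stochastic
    extension described above."""
    return [x for x in inputs
            if implausibility(z, *emulate(x), obs_var) < threshold]
```

Each wave of history matching refits the emulator on the surviving (non-implausible) region and repeats the cut.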

  20. Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering

    Science.gov (United States)

    Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.

    2018-04-01

    Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
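In one dimension, the idea reduces to correlating the data with a unit-normalized template and scaling by the noise level, which is the optimal linear detector for a signal of known shape in white noise. The toy sketch below works on a real-valued series; the paper applies the same statistic to visibilities in the native Fourier domain:

```python
import numpy as np

def matched_filter(series, template, noise_sigma):
    """Slide a unit-normalized template along a 1-D data stream and return
    the detection statistic (S/N) at each lag."""
    t = np.asarray(template, dtype=float)
    t = t / np.linalg.norm(t)
    return np.correlate(np.asarray(series, dtype=float), t, mode="valid") / noise_sigma
```

Injecting the template into a noise-free series recovers a peak of height ||template||/sigma at the injection lag; with real noise, the peak height is the quoted detection significance.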

  1. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, building on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimensions grow: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
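Of the two sampling strategies, LHS is the one whose dimension-independence is highlighted above: each parameter's range is split into as many strata as samples, one point is drawn per stratum, and the strata are permuted independently per dimension. A minimal sketch (parameter bounds and counts are hypothetical):

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube Sampling: for each dimension, divide [lo, hi] into
    n_samples equal strata, draw one point per stratum, and shuffle the
    stratum order so every sample occupies a distinct stratum in every
    dimension. Adding dimensions does not require more samples."""
    rng = rng or random.Random()
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                       # decouple dimensions
        width = (hi - lo) / n_samples
        for i, s in enumerate(strata):
            samples[i][d] = lo + (s + rng.random()) * width
    return samples
```

With 5 samples over two parameter ranges, each dimension's projected points land one per stratum, which is exactly the coverage property SGBS enforces by explicit gridding.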

  2. Improvement of temporal and dynamic subtraction images on abdominal CT using 3D global image matching and nonlinear image warping techniques

    International Nuclear Information System (INIS)

    Okumura, E; Sanada, S; Suzuki, M; Takemura, A; Matsui, O

    2007-01-01

    Accurate registration of the corresponding non-enhanced and arterial-phase CT images is necessary to create temporal and dynamic subtraction images for the enhancement of subtle abnormalities. However, respiratory movement causes misregistration at the periphery of the liver. To reduce these misregistration errors, we developed a temporal and dynamic subtraction technique to enhance small HCC by 3D global matching and nonlinear image warping techniques. The study population consisted of 21 patients with HCC. Using the 3D global matching and nonlinear image warping technique, we registered current and previous arterial-phase CT images or current non-enhanced and arterial-phase CT images obtained in the same position. The temporal subtraction image was obtained by subtracting the previous arterial-phase CT image from the warped current arterial-phase CT image. The dynamic subtraction image was obtained by the subtraction of the current non-enhanced CT image from the warped current arterial-phase CT image. The percentage of fair or superior temporal subtraction images increased from 52.4% to 95.2% using the new technique, while on the dynamic subtraction images, the percentage increased from 66.6% to 95.2%. The new subtraction technique may facilitate the diagnosis of subtle HCC based on the superior ability of these subtraction images to show nodular and/or ring enhancement

  3. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
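A 1-D toy version of the inlier/outlier mixture conveys the EM mechanics: residuals of putative matches are modeled as a zero-mean Gaussian (inliers) plus a uniform density (outliers), and EM alternates responsibilities with weight/variance updates, starting from a deliberately large variance as the abstract notes. This stands in for, and greatly simplifies, the paper's RKHS-regularized transformation model:

```python
import math

def em_inlier_outlier(residuals, span, iters=50):
    """Two-component mixture over match residuals: inliers ~ N(0, var),
    outliers ~ Uniform with density 1/span. Returns per-match inlier
    probabilities (the latent labels in the MAP view) and the fitted
    Gaussian variance."""
    var, w = 1.0, 0.5          # start with a large variance and even weight
    resp = [0.5] * len(residuals)
    for _ in range(iters):
        # E-step: responsibility that each residual came from the inlier part.
        resp = []
        for r in residuals:
            g = w * math.exp(-r * r / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
            u = (1.0 - w) / span
            resp.append(g / (g + u))
        # M-step: update mixing weight and inlier variance (floored for safety).
        w = sum(resp) / len(resp)
        var = max(sum(p * r * r for p, r in zip(resp, residuals)) /
                  (sum(resp) + 1e-12), 1e-6)
    return resp, var
```

Small residuals end up with inlier probabilities near 1 and gross residuals near 0, while the variance shrinks from its large initial value to the inlier scale.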

  4. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League

    NARCIS (Netherlands)

    Koopman, S.J.; Lit, R.

    2015-01-01

    Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results.
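Under the bivariate Poisson specification, home and away goals are X = U + W and Y = V + W with independent U ~ Pois(lam1), V ~ Pois(lam2) and a shared W ~ Pois(lam3) that induces the score correlation. A static sketch of the outcome probabilities follows (the paper's intensities evolve stochastically over time; the fixed values here are illustrative):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def match_outcome_probs(lam1, lam2, max_goals=20):
    """Win/draw/loss probabilities under a bivariate Poisson score model.
    Because the shared component W cancels in the goal difference
    X - Y = U - V, the outcome probabilities do not depend on lam3: the
    comparison reduces to a Skellam-type sum over U ~ Pois(lam1) and
    V ~ Pois(lam2), truncated at max_goals."""
    home = draw = away = 0.0
    for u in range(max_goals):
        pu = poisson_pmf(u, lam1)
        for v in range(max_goals):
            p = pu * poisson_pmf(v, lam2)
            if u > v:
                home += p
            elif u == v:
                draw += p
            else:
                away += p
    return home, draw, away
```

With equal intensities the home and away win probabilities coincide; lam3 still matters for exact-score forecasts, just not for the 1X2 outcome.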

  5. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package (Figure), using 1377 terrestrial gravity data, SRTM data with 3 arc second of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was determined by geometric leveling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
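The RCR recipe itself is a three-step bookkeeping exercise over per-station values: remove the global-model and terrain contributions from the observed anomalies, compute a residual geoid from what remains, and restore the removed geoid contributions. A skeleton with a placeholder Stokes integrator (all names and inputs are hypothetical; the real computation involves Stokes integration, terrain corrections, and datum adjustment):

```python
def rcr_geoid(delta_g_obs, delta_g_ggm, delta_g_terrain, n_ggm, n_indirect, stokes):
    """Remove-Compute-Restore skeleton.
    1. Remove: subtract global geopotential model (GGM) and terrain
       contributions from the observed gravity anomalies.
    2. Compute: obtain a residual geoid from the residual anomalies via a
       supplied Stokes integrator (a stand-in callable here).
    3. Restore: add back the long-wavelength (GGM) and terrain (indirect)
       geoid contributions."""
    residual_anomaly = [g - m - t for g, m, t in
                        zip(delta_g_obs, delta_g_ggm, delta_g_terrain)]
    n_residual = stokes(residual_anomaly)
    return [ng + ni + nr for ng, ni, nr in zip(n_ggm, n_indirect, n_residual)]
```

Swapping in a real Stokes integration (and the datum fit mentioned above) turns this skeleton into the pipeline GRAVTool automates.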

  6. Data Matching Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection

    CERN Document Server

    Christen, Peter

    2012-01-01

    Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of da
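The core pipeline the book covers (blocking, pairwise comparison, classification) can be sketched minimally. The field names and the 0.85 threshold below are hypothetical, and `difflib` stands in for the richer comparison functions (Jaro-Winkler, phonetic encodings, learned classifiers) that real record-linkage systems use:

```python
from difflib import SequenceMatcher

def match_records(db_a, db_b, threshold=0.85):
    """Minimal record linkage: block on the first letter of the surname to
    avoid comparing every pair, score candidate pairs with a string
    similarity, and keep pairs above a threshold as matches."""
    # Blocking: index database B by a cheap key.
    blocks = {}
    for j, rec in enumerate(db_b):
        blocks.setdefault(rec["surname"][:1].lower(), []).append(j)
    matches = []
    for i, a in enumerate(db_a):
        for j in blocks.get(a["surname"][:1].lower(), []):
            b = db_b[j]
            score = SequenceMatcher(None,
                                    f'{a["surname"]} {a["given"]}'.lower(),
                                    f'{b["surname"]} {b["given"]}'.lower()).ratio()
            if score >= threshold:
                matches.append((i, j, round(score, 3)))
    return matches
```

Note the deliberate trade-off: blocking on the surname's first letter prunes most comparisons but would miss a "Smith"/"Smyth"-style match only if the first letter itself differs, which is why production systems use multiple blocking passes.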

  7. Equilibrium Price Dispersion in a Matching Model with Divisible Money

    NARCIS (Netherlands)

    Kamiya, K.; Sato, T.

    2002-01-01

    The main purpose of this paper is to show that, for any given parameter values, an equilibrium with dispersed prices (two-price equilibrium) exists in a simple matching model with divisible money presented by Green and Zhou (1998).We also show that our two-price equilibrium is unique in certain

  8. Anomalous dispersion enhanced Cerenkov phase-matching

    Energy Technology Data Exchange (ETDEWEB)

    Kowalczyk, T.C.; Singer, K.D. [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Physics; Cahill, P.A. [Sandia National Labs., Albuquerque, NM (United States)

    1993-11-01

    The authors report on a scheme for phase-matching second harmonic generation in polymer waveguides based on the use of anomalous dispersion to optimize Cerenkov phase matching. They have used the theoretical results of Hashizume et al. and Onda and Ito to design an optimum structure for phase-matched conversion. They have found that the use of anomalous dispersion in the design results in a 100-fold enhancement in the calculated conversion efficiency. This technique also overcomes the limitation of anomalous dispersion phase-matching which results from absorption at the second harmonic. Experiments are in progress to demonstrate these results.

  9. An accelerated image matching technique for UAV orthoimage registration

    Science.gov (United States)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the feature-point matching stage, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
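At the core of BRISK-style matching is the Hamming distance between binary descriptors, which is why it is so much faster than SIFT's floating-point comparisons. A brute-force baseline, the exhaustive scan that early-rejection schemes such as a sorting ring aim to prune, might look like this (descriptor values and the 64-bit budget are illustrative):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed into ints."""
    return bin(a ^ b).count("1")

def brute_force_match(desc_a, desc_b, max_dist=64):
    """Nearest-neighbour matching of binary descriptors by Hamming distance:
    for each query descriptor, scan every candidate and keep the closest one
    within `max_dist` differing bits. Returns (query index, match index,
    distance) triples."""
    matches = []
    for i, d1 in enumerate(desc_a):
        best_j, best = None, max_dist + 1
        for j, d2 in enumerate(desc_b):
            dist = hamming(d1, d2)
            if dist < best:
                best_j, best = j, dist
        if best_j is not None:
            matches.append((i, best_j, best))
    return matches
```

The XOR-and-popcount comparison is a handful of machine instructions per descriptor pair, which is the speed advantage the timing figures above reflect.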

  10. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce a keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
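The adaptive update described above can be sketched as a score-driven learning rate: freeze the correlation model when keypoint matching fails (likely occlusion), update fully when matching is confident, and interpolate in between. The thresholds and base rate below are illustrative assumptions, not values from the paper:

```python
def adaptive_learning_rate(match_score, base_lr=0.02, s_low=0.3, s_high=0.7):
    """Map a keypoint-matching score in [0, 1] to a correlation-filter model
    learning rate. Below s_low the update is frozen to avoid contaminating
    the model with occluder appearance; above s_high the full rate applies;
    between the two the rate ramps linearly."""
    if match_score <= s_low:
        return 0.0                      # likely occluded: do not update
    if match_score >= s_high:
        return base_lr                  # confident match: full model update
    return base_lr * (match_score - s_low) / (s_high - s_low)
```

The model at frame t is then blended as `model = (1 - lr) * model + lr * new_estimate`, so a zero rate leaves the pre-occlusion model intact.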

  11. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complexity of video content and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model's application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  12. Real-time eSports Match Result Prediction

    OpenAIRE

    Yang, Yifan; Qin, Tian; Lei, Yu-Heng

    2016-01-01

    In this paper, we try to predict the winning team of a match in the multiplayer eSports game Dota 2. To address the weaknesses of previous work, we consider more aspects of prior (pre-match) features from individual players' match history, as well as real-time (during-match) features at each minute as the match progresses. We use logistic regression, the proposed Attribute Sequence Model, and their combinations as the prediction models. In a dataset of 78362 matches where 20631 matches contai...

  13. Image Relaxation Matching Based on Feature Points for DSM Generation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shunyi; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

    In photogrammetry and remote sensing, image matching is a basic and crucial process for automatic DEM generation. In this paper we present an image relaxation matching method based on feature points. This method can be considered an extension of regular grid-point-based matching, and it avoids that approach's shortcomings: for example, it can avoid areas of low or even no texture, where errors frequently appear in cross-correlation matching. Meanwhile, it makes full use of mature techniques, such as probability relaxation and image pyramids, which have already been used successfully in grid-point matching. Application of the technique to DEM generation in different regions has proved that it is more reasonable and reliable.
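The cross-correlation measure underlying such matching, and its failure mode on texture-free areas, can be seen in a few lines. In this pure-Python sketch, a flat (zero-variance) patch is given a correlation of 0, which is precisely why matching only at textured feature points is more reliable than matching on a regular grid:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches,
    in [-1, 1]. Returns 0.0 for a flat patch, where the correlation is
    undefined: the low/no-texture failure case noted above."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    if da == 0 or db == 0:
        return 0.0          # no texture: no usable correlation signal
    return num / (da * db)
```

Relaxation matching then iteratively re-weights candidate matches so that neighbouring feature points favour mutually consistent disparities, rather than trusting each NCC peak in isolation.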

  14. Tuning the climate sensitivity of a global model to match 20th Century warming

    Science.gov (United States)

    Mauritsen, T.; Roeckner, E.

    2015-12-01

    A climate model's ability to reproduce observed historical warming is sometimes viewed as a measure of quality. Yet, for practical reasons, historical warming cannot be considered a purely empirical result of the modelling efforts, because the desired result is known in advance and so is a potential target of tuning. Here we explain how the latest edition of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1.2) atmospheric model (ECHAM6.3), the MPI model to be used during CMIP6, had its climate sensitivity systematically tuned to about 3 K. This was deliberately done in order to improve the match to observed 20th Century warming over the previous model generation (MPI-ESM, ECHAM6.1), which warmed too much and had a sensitivity of 3.5 K. In the process we identified several controls on model cloud feedback that confirm recently proposed hypotheses concerning trade-wind cumulus and high-latitude mixed-phase clouds. We then evaluate the model fidelity with centennial global warming and discuss the relative importance of climate sensitivity, forcing and ocean heat uptake efficiency in determining the response, as well as possible systematic biases. The activity of targeting historical warming during model development is polarizing the modeling community, with 35 percent of modelers stating that 20th Century warming was rated very important to decisive, whereas 30 percent would not consider it at all. Likewise, opinions diverge as to which measures are legitimate means for improving the model match to observed warming. These results are from a survey conducted in conjunction with the first WCRP Workshop on Model Tuning in fall 2014, answered by 23 modelers. We argue that tuning or constructing models to match observed warming to some extent is practically unavoidable, and as such, in many cases might as well be done explicitly. For modeling groups that have the capability to tune both their aerosol forcing and climate sensitivity there is now a unique

  15. Research on vehicles and cargos matching model based on virtual logistics platform

    Science.gov (United States)

    Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan

    2018-04-01

    The highway less-than-truckload (LTL) vehicle and cargo matching problem is a joint optimization of vehicle routing and loading, and an active topic in operations research. Based on the requirements of a virtual logistics platform, this article formulates a matching model between idle vehicles and transportation orders for highway LTL transportation, and designs a corresponding genetic algorithm, implemented in Java. The simulation results show that the solution is satisfactory.
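A toy version of such a genetic algorithm for matching orders to vehicles might look like the following. The chromosome encoding (one vehicle index, or -1 for unassigned, per order), the penalty-based fitness, and the operators are illustrative assumptions, not the paper's design:

```python
import random

def ga_match(orders, capacities, pop=40, gens=120, seed=1):
    """Toy GA for vehicle-order matching: maximize shipped weight while
    penalizing vehicle-capacity violations. `orders` are order weights,
    `capacities` are per-vehicle capacities."""
    rng = random.Random(seed)
    n, m = len(orders), len(capacities)

    def fitness(chrom):
        loads = [0.0] * m
        for weight, v in zip(orders, chrom):
            if v >= 0:
                loads[v] += weight
        shipped = sum(min(l, c) for l, c in zip(loads, capacities))
        overload = sum(max(0.0, l - c) for l, c in zip(loads, capacities))
        return shipped - 2.0 * overload     # penalize capacity violations

    population = [[rng.randrange(-1, m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]   # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:           # mutation: reassign one order
                child[rng.randrange(n)] = rng.randrange(-1, m)
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return best, fitness(best)
```

On a tiny instance (orders of weight 4, 3, 2, 2 and two vehicles of capacity 5) the best achievable fitness is 9: ship 4 on one vehicle and 3+2 on the other, leaving one order of 2 unassigned.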

  16. Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.

    Science.gov (United States)

    Salama, Mhd Suhyb; Su, Zhongbo

    2010-01-01

    A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.

  17. Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors

    Directory of Open Access Journals (Sweden)

    Mhd. Suhyb Salama

    2010-08-01

    Full Text Available A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.

  18. [Application of an improved model of a job-matching platform for nurses].

    Science.gov (United States)

    Huang, Way-Ren; Lin, Chiou-Fen

    2015-04-01

    The three-month attrition rate for new nurses in Taiwan remains high. Many hospitals rely on traditional recruitment methods to find new nurses, yet the efficacy of these methods appears to be less than ideal. A nursing resource platform is therefore worth developing to help address this manpower shortage. This study aimed to use a quality-improvement model to establish communication between hospitals and nursing students and to create a customized employee-employer information-matching platform to help nursing students enter the workforce. The study was structured around a quality-improvement model and used current-situation analysis, literature review, focus-group discussions, and process re-engineering to formulate the necessary content for a nursing job-matching platform. The concept of an academia-industry strategic alliance helped connect supply and demand within the same supply chain. The nurse job-matching platform created in this study provides job flexibility as well as job-suitability assessments, and continues follow-up and services for nurses after they enter the workforce, allowing more accurate matching of employers and employees. The academia-industry strategic alliance, job-suitability assessment, and long-term follow-up designed in this study are all new features in Taiwan's human resource service systems. The proposed human resource process re-engineering provides nursing students facing graduation with a professionally managed human resources platform. Allowing students to find an appropriate job prior to graduation will improve willingness to work and employee retention.

  19. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturbing permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been performed. The Hessian matrix summarizes the sensitivity of the objective function to the model parameters at a given step of the history matching. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled, and appropriate grid-block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which can be computationally costly and restricts the current approach to

  20. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  1. Pattern recognition and string matching

    CERN Document Server

    Cheng, Xiuzhen

    2002-01-01

    The research and development of pattern recognition have proven to be of importance in science, technology, and human activity. Many useful concepts and tools from different disciplines have been employed in pattern recognition. Among them is string matching, which receives much theoretical and practical attention. String matching is also an important topic in combinatorial optimization. This book is devoted to recent advances in pattern recognition and string matching. It consists of twenty-eight chapters written by different authors, addressing a broad range of topics such as classification, matching, mining, feature selection, and applications. Each chapter is self-contained, and presents either novel methodological approaches or applications of existing theories and techniques. The aim, intent, and motivation for publishing this book is to provide a reference tool for the increasing number of readers who depend upon pattern recognition or string matching in some way. This includes student...

  2. Within-Cluster and Across-Cluster Matching with Observational Multilevel Data

    Science.gov (United States)

    Kim, Jee-Seon; Steiner, Peter M.; Hall, Courtney; Thoemmes, Felix

    2013-01-01

    When randomized experiments cannot be conducted in practice, propensity score (PS) techniques for matching treated and control units are frequently used for estimating causal treatment effects from observational data. Despite the popularity of PS techniques, they are not yet well studied for matching multilevel data where selection into treatment…

  3. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  4. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of the performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  5. Posterior Probability Matching and Human Perceptual Decision Making.

    Directory of Open Access Journals (Sweden)

    Richard F Murray

    2015-06-01

    Probability matching is a classic theory of decision making that was first developed in models of cognition. Posterior probability matching, a variant in which observers match their response probabilities to the posterior probability of each response being correct, is being used increasingly often in models of perception. However, little is known about whether posterior probability matching is consistent with the vast literature on vision and hearing that has developed within signal detection theory. Here we test posterior probability matching models using two tools from detection theory. First, we examine the models' performance in a two-pass experiment, where each block of trials is presented twice, and we measure the proportion of times that the model gives the same response twice to repeated stimuli. We show that at low performance levels, posterior probability matching models give highly inconsistent responses across repeated presentations of identical trials. We find that practised human observers are more consistent across repeated trials than these models predict, and we find some evidence that less practised observers are more consistent as well. Second, we compare the performance of posterior probability matching models on a discrimination task to the performance of a theoretical ideal observer that achieves the best possible performance. We find that posterior probability matching is very inefficient at low-to-moderate performance levels, and that human observers can be more efficient than is ever possible according to posterior probability matching models. These findings support classic signal detection models, and rule out a broad class of posterior probability matching models for expert performance on perceptual tasks that range in complexity from contrast discrimination to symmetry detection.
However, our findings leave open the possibility that inexperienced observers may show posterior probability matching behaviour, and our methods
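The two-pass consistency argument above can be reproduced in a small simulation. This is a sketch under simplifying assumptions (scalar posteriors handed directly to the observer models; the names `probability_matching` and `map_observer` are hypothetical):

```python
import random

def two_pass_consistency(posteriors, observer, seed=0):
    """Fraction of repeated trials on which an observer model gives the
    same response both times the identical trial is presented."""
    rng = random.Random(seed)
    same = 0
    for p in posteriors:   # p = posterior probability that response "A" is correct
        same += observer(p, rng) == observer(p, rng)
    return same / len(posteriors)

def probability_matching(p, rng):
    # respond "A" with probability equal to the posterior itself
    return rng.random() < p

def map_observer(p, rng):
    # ideal maximum a posteriori observer: always pick the likelier response
    return p >= 0.5

# low-performance regime: posteriors barely above chance
trials = [0.55] * 10000
```

With posteriors near chance (p = 0.55), the matching observer repeats its own response on only about half of the repeated trials (p² + (1-p)² ≈ 0.505), while the MAP observer is perfectly consistent — the diagnostic the abstract exploits.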

  6. Cauchy-perturbative matching reexamined: Tests in spherical symmetry

    International Nuclear Information System (INIS)

    Zink, Burkhard; Pazos, Enrique; Diener, Peter; Tiglio, Manuel

    2006-01-01

    During the last few years progress has been made on several fronts, making it possible to revisit Cauchy-perturbative matching (CPM) in numerical relativity in a more robust and accurate way. This paper is the first in a series where we plan to analyze CPM in the light of these new results. One of the new developments is an understanding of how to impose constraint-preserving boundary conditions (CPBC); though most of the related research has been driven by outer boundaries, one can use them for matching interface boundaries as well. Another front is related to numerically stable evolutions using multiple patches, which in the context of CPM allows the matching to be performed on a spherical surface, thus avoiding interpolations between Cartesian and spherical grids. One way of achieving stability for such schemes of arbitrarily high order is through the use of penalty techniques and discrete derivatives satisfying summation by parts (SBP). Recently, new, very efficient and high-order accurate derivatives satisfying SBP and associated dissipation operators have been constructed. Here we start by testing all these techniques applied to CPM in a setting that is simple enough to study all the ingredients in great detail: Einstein's equations in spherical symmetry, describing a black hole coupled to a massless scalar field. We show that with the techniques described above, the errors introduced by Cauchy-perturbative matching are very small, and that very long-term and accurate CPM evolutions can be achieved. Our tests include the accretion and ring-down phase of a Schwarzschild black hole with CPM, where we find that the discrete evolution introduces, with a low spatial resolution of Δr=M/10, an error of 0.3% after an evolution time of 1,000,000M. For a black hole of solar mass, this corresponds to approximately 5 s, and is therefore at the lower end of timescales discussed e.g. in the collapsar model of gamma-ray burst engines.

  7. Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases.

    Science.gov (United States)

    Kelley, Brendan P; Klochko, Chad; Halabi, Safwan; Siegal, Daniel

    2016-06-01

    Retrospective data mining has tremendous potential in research but is time and labor intensive. Current data mining software contains many advanced search features but is limited in its ability to identify patients who meet multiple complex independent search criteria. Simple keyword and Boolean search techniques are ineffective when more complex searches are required, or when a search for multiple mutually inclusive variables becomes important. This is particularly true when trying to identify patients with a set of specific radiologic findings or proximity in time across multiple different imaging modalities. Another challenge that arises in retrospective data mining is that much variation still exists in how image findings are described in radiology reports. We present an algorithmic approach to solve this problem and describe a specific use case scenario in which we applied our technique to a real-world data set in order to identify patients who matched several independent variables in our institution's picture archiving and communication systems (PACS) database.

  8. MODELING CONTROLLED ASYNCHRONOUS ELECTRIC DRIVES WITH MATCHING REDUCERS AND TRANSFORMERS

    Directory of Open Access Journals (Sweden)

    V. S. Petrushin

    2015-04-01

    Purpose. To develop mathematical models of speed-controlled induction electric drives that jointly account for transformers, motors and loads, as well as matching reducers and transformers, in both static and dynamic regimes, for the analysis of their operating characteristics. Methodology. The mathematical modelling considers the functional, mass, dimensional and cost indexes of the reducers and transformers, which allows the engineering and economic aspects of speed-controlled induction electric drives to be observed. The mathematical models used for examination of the transient electromagnetic and electromechanical processes are based on systems of nonlinear differential equations with nonlinear coefficients (the equivalent-circuit parameters of the motors), which vary at each operating point, in part owing to saturation of the magnetic system and current displacement in the rotor winding of the induction motor. To raise the level of adequacy of the models, iron losses in the magnetic circuit as well as additional and mechanical losses are considered. Results. Several speed-controlled induction electric drives, differing in their components but working on loads equal in character, magnitude and required control range, were modelled. Using families of characteristics, including mechanical ones, at various control parameters, onto which the performance of the load mechanism is superimposed, adjusting characteristics are obtained that represent the dependence of electrical, energy and thermal quantities on the angular speed of the motors. Originality. The proposed complex models of speed-controlled induction electric drives with matching reducers and transformers enable a well-founded selection of drive components. They can also be used as design models in the development of speed-controlled induction motors. Practical value.
Operating characteristics of various speed-controlled induction electric

  9. A high-resolution processing technique for improving the energy of weak signal based on matching pursuit

    Directory of Open Access Journals (Sweden)

    Shuyan Wang

    2016-05-01

    This paper proposes a new method to improve the resolution of the seismic signal and to compensate the energy of weak seismic signals based on matching pursuit. With a dictionary of Morlet wavelets, the matching pursuit algorithm can decompose a seismic trace into a series of wavelets. We abstract complex-trace attributes from analytical expressions to shrink the search ranges of amplitude, frequency and phase. In addition, considering the level of correlation between constituent wavelets and the average wavelet abstracted from well-seismic calibration, we can obtain the search range of scale, an important adaptive parameter that controls the width of the wavelet in time and its bandwidth in frequency. Hence, the efficiency of selecting proper wavelets is improved by first making a preliminary estimate and then refining a local selection range. After removal of noise wavelets, we integrate the useful wavelets, to which an adaptive spectral whitening technique is first applied. This approach can improve the resolution of the seismic signal and enhance the energy of weak wavelets simultaneously. Application to real seismic data shows that this method has good application prospects.
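The core greedy loop of matching pursuit is easy to sketch. Here an orthonormal toy dictionary stands in for the Morlet-wavelet dictionary and the adaptive search ranges described above (illustrative only, not the authors' implementation):

```python
import numpy as np

def matching_pursuit(signal, atoms, n_iter):
    """Greedy matching pursuit: at each step project the residual onto a
    dictionary of unit-norm atoms, keep the best-matching atom, and
    subtract its contribution from the residual."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        coeffs = atoms @ residual              # inner product with every atom
        k = int(np.argmax(np.abs(coeffs)))     # best-matching atom
        picks.append((k, float(coeffs[k])))
        residual = residual - coeffs[k] * atoms[k]
    return picks, residual

# toy dictionary: rows are unit-norm atoms (an orthonormal basis here)
atoms = np.eye(4)
signal = np.array([0.0, 3.0, 0.0, 1.0])
picks, residual = matching_pursuit(signal, atoms, n_iter=2)
```

With an orthonormal dictionary the decomposition is exact: two iterations pick out the two nonzero components and leave a zero residual. Real dictionaries are overcomplete, so the choice of atoms (and the search-range shrinking the abstract describes) matters much more.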

  10. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  11. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
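The SGM cost aggregation mentioned in both records can be sketched along a single scanline (one aggregation path only; the `P1`/`P2` penalty values and the toy cost volume are hypothetical):

```python
import numpy as np

def aggregate_scanline(cost, P1=1.0, P2=4.0):
    """One-path SGM-style cost aggregation along a scanline:
    L(p,d) = C(p,d) + min(L(p-1,d),
                          L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                          min_k L(p-1,k)+P2) - min_k L(p-1,k)."""
    n_pix, n_disp = cost.shape
    L = np.array(cost, dtype=float)            # L[0] = C[0]
    for p in range(1, n_pix):
        prev = L[p - 1]
        best_prev = prev.min()
        for d in range(n_disp):
            cands = [prev[d], best_prev + P2]
            if d > 0:
                cands.append(prev[d - 1] + P1)
            if d < n_disp - 1:
                cands.append(prev[d + 1] + P1)
            L[p, d] = cost[p, d] + min(cands) - best_prev
    return L.argmin(axis=1)                    # winner-take-all disparity

# matching-cost volume (pixels x disparities); pixel 2 is corrupted by noise
cost = np.array([[5, 0, 5],
                 [5, 0, 5],
                 [0.5, 1.0, 5],
                 [5, 0, 5],
                 [5, 0, 5]], dtype=float)
disp = aggregate_scanline(cost)
```

The raw winner-take-all disparity at the corrupted pixel is 0, but after aggregation the P1/P2 smoothness penalties recover the consistent disparity 1 along the whole scanline. The full SGM averages such aggregations over several path directions.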

  12. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
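The iterative voxel-updating idea can be sketched with 2×2 binary patterns and a greedy accept/reject rule. This is a toy stand-in for the FMM's actual statistics and sampling scheme; all function names are hypothetical:

```python
import numpy as np

def pattern_counts(img):
    """Histogram of 2x2 binary patterns: the multi-point statistic to match."""
    h, w = img.shape
    counts = np.zeros(16, dtype=int)
    for i in range(h - 1):
        for j in range(w - 1):
            code = img[i, j] + 2*img[i, j+1] + 4*img[i+1, j] + 8*img[i+1, j+1]
            counts[code] += 1
    return counts

def frequency_distance(img, target_counts):
    return int(np.abs(pattern_counts(img) - target_counts).sum())

def frequency_match(img, training, sweeps=5):
    """Greedy FMM-style update: flip a voxel whenever the flip moves the
    pattern histogram closer to the training image's histogram."""
    target = pattern_counts(training)
    img = img.copy()
    for _ in range(sweeps):
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                before = frequency_distance(img, target)
                img[i, j] ^= 1                      # tentative flip
                if frequency_distance(img, target) >= before:
                    img[i, j] ^= 1                  # no improvement: revert
    return img

# checkerboard training image; start from a copy with one corrupted voxel
training = (np.indices((6, 6)).sum(axis=0) % 2).astype(int)
corrupted = training.copy()
corrupted[0, 0] ^= 1
restored = frequency_match(corrupted, training, sweeps=1)
```

One sweep flips the corrupted voxel back, driving the pattern-frequency mismatch to zero. The published method works with larger templates and a closed-form prior rather than this greedy loop.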

  13. An Implementation of the Frequency Matching Method

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer

    During the last decade multiple-point statistics has become increasingly popular as a tool for incorporating complex prior information when solving inverse problems in geosciences. A variety of methods have been proposed, but often their implementation is not straightforward. One of these methods is the recently proposed Frequency Matching method to compute the maximum a posteriori model of an inverse problem where multiple-point statistics, learned from a training image, is used to formulate a closed-form expression for an a priori probability density function. This paper discusses aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method...

  14. Dynamic Modeling of Starting Aerodynamics and Stage Matching in an Axi-Centrifugal Compressor

    Science.gov (United States)

    Wilkes, Kevin; OBrien, Walter F.; Owen, A. Karl

    1996-01-01

    A DYNamic Turbine Engine Compressor Code (DYNTECC) has been modified to model speed transients from 0-100% of compressor design speed. The impetus for this enhancement was to investigate stage matching and stalling behavior during a start sequence as compared to rotating stall events above ground idle. The model can simulate speed and throttle excursions simultaneously as well as time varying bleed flow schedules. Results of a start simulation are presented and compared to experimental data obtained from an axi-centrifugal turboshaft engine and companion compressor rig. Stage by stage comparisons reveal the front stages to be operating in or near rotating stall through most of the start sequence. The model matches the starting operating line quite well in the forward stages with deviations appearing in the rearward stages near the start bleed. Overall, the performance of the model is very promising and adds significantly to the dynamic simulation capabilities of DYNTECC.

  15. Probabilistic seismic history matching using binary images

    Science.gov (United States)

    Davolio, Alessandra; Schiozer, Denis Jose

    2018-02-01

    Currently, the goal of history-matching procedures is not only to provide a model matching the observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included into history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude or pressure, and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative that avoids these procedures is using binary images in SHM, as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history matchings were conducted using: only well data, well and 4DS data, and only 4DS. The methodology is very flexible and successfully utilized the addition of binary images for seismic objective functions. Results showed good convergence of the method in a few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed on 4DS data. The use of binary images in SHM is relatively new with few examples in the literature.
This work enriches this discussion by presenting a new

  16. Applications of the soft computing in the automated history matching

    Energy Technology Data Exchange (ETDEWEB)

    Silva, P.C.; Maschio, C.; Schiozer, D.J. [Unicamp (Brazil)

    2006-07-01

    Reservoir management is a research field in petroleum engineering that optimizes reservoir performance based on environmental, political, economic and technological criteria. Reservoir simulation is based on geological models that simulate fluid flow. Models must be constantly corrected to yield the observed production behaviour. The process of history matching is controlled by the comparison of production data, well test data and measured data from simulations. Parametrization, objective function analysis, sensitivity analysis and uncertainty analysis are important steps in history matching. One of the main challenges facing automated history matching is to develop algorithms that find the optimal solution in multidimensional search spaces. Optimization algorithms can be either global optimizers, which can handle noisy multi-modal functions, or local optimizers, which cannot. The problem with global optimizers is the very large number of function calls, which is an inconvenience due to the long reservoir simulation time. For that reason, techniques such as least squares, thin plate splines, kriging and artificial neural networks (ANN) have been used as substitutes for reservoir simulators. This paper described the use of optimization algorithms to find the optimal solution in automated history matching. Several ANNs were used, including the generalized regression neural network, fuzzy system with subtractive clustering and radial basis network. The UNIPAR soft computing method was used along with a modified Hooke-Jeeves optimization method. Two case studies with synthetic and real reservoirs are examined. It was concluded that the combination of global and local optimization has the potential to improve the history matching process and that the use of substitute models can reduce computational efforts. 15 refs., 11 figs.
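The local-search component referred to above can be illustrated with a bare-bones pattern search on a toy misfit surface. This is a simplified, exploratory-moves-only variant of Hooke-Jeeves (no pattern move, no surrogate models); all names and values are hypothetical:

```python
def hooke_jeeves(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Simplified Hooke-Jeeves-style pattern search: probe each axis in
    both directions, keep any improving move, and halve the step when no
    move improves f."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:                 # accept the first improving probe
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink                  # refine the pattern size
    return x, fx

# toy misfit standing in for the history-matching objective function
mismatch = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 2.0) ** 2
xbest, fbest = hooke_jeeves(mismatch, [0.0, 0.0])
```

Derivative-free searches of this kind pair naturally with the ANN surrogates the abstract mentions, since each probe is just one cheap surrogate evaluation instead of a full reservoir simulation.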

  17. Model and Simulation of a Tunable Birefringent Fiber Using Capillaries Filled with Liquid Ethanol for Magnetic Quasiphase Matching In-Fiber Isolator

    Directory of Open Access Journals (Sweden)

    Clint Zeringue

    2010-01-01

    A technique to tune a magnetic quasi-phase matching in-fiber isolator through the application of stress induced by two mutually orthogonal capillary tubes filled with liquid ethanol is investigated numerically. The results show that it is possible to “tune” the birefringence in these fibers over a limited range depending on the temperature at which the ethanol is loaded into the capillaries. Over this tuning range, the thermal sensitivity of the birefringence is an order-of-magnitude lower than conventional fibers, making this technique well suited for magnetic quasi-phase matching.

  18. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article presents research on the differences between business process modeling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  19. A new frequency matching technique for FRF-based model updating

    Science.gov (United States)

    Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng

    2017-05-01

    Frequency Response Function (FRF) residues have been widely used to update finite element models. They are a form of original measurement information and have the advantages of rich data, no modal-extraction errors, etc. However, like other sensitivity-based methods, an FRF-based identification method must also face the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given frequency measurement, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF, which in turn amplifies the effects of measurement errors and damping. Hence, in the solution process, correct selection of the appropriate frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for frequency selection based on the correlation of FRFs is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference in the order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. To be realistic, it is assumed that not all degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required by each approach to correctly update the analytical model is regarded as the identification standard.
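The order-of-magnitude frequency selection described above can be sketched on a single-DoF oscillator. The receptance model, the `wn`/`zeta` values and the candidate grid are hypothetical placeholders for the paper's truss model:

```python
import numpy as np

def frf(w, wn=10.0, zeta=0.02):
    """Receptance FRF of a single-DoF oscillator (a stand-in for the
    analytical model): H(w) = 1 / (wn^2 - w^2 + 2j*zeta*wn*w)."""
    return 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

def select_frequency(h_measured, candidates, **model):
    """Pick the analytical frequency whose FRF magnitude is closest to the
    measured magnitude on a log scale (order-of-magnitude matching)."""
    gap = np.abs(np.log10(np.abs(frf(candidates, **model)))
                 - np.log10(abs(h_measured)))
    return float(candidates[np.argmin(gap)])

# 'measurement' taken at 9.5 rad/s on a slightly mistuned system (wn = 9.8)
h_measured = complex(frf(np.array([9.5]), wn=9.8)[0])
candidates = np.linspace(8.0, 11.0, 301)
w_sel = select_frequency(h_measured, candidates, wn=10.0)
```

Because the analytical model is mistuned, evaluating its FRF at the measured 9.5 rad/s gives a magnitude far from the measurement; the selected frequency shifts toward the analytical resonance, where the magnitudes agree, which is the intuition behind the proposed selection rule.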

  20. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods for effectively constraining the search for correct solutions of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.

  1. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy

    Science.gov (United States)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-01

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.

  2. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy.

    Science.gov (United States)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-06

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.
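The matched-filtering baseline that the network is trained to emulate can be sketched directly. This assumes white noise so that no whitening step is needed; the template, injection time and amplitude are arbitrary choices for the demonstration:

```python
import numpy as np

def matched_filter(data, template):
    """Correlate (assumed white-noise) data with a unit-norm template at
    every offset; the peak of the output estimates signal time and SNR."""
    t = template / np.linalg.norm(template)
    n = len(data) - len(t) + 1
    return np.array([np.dot(data[i:i + len(t)], t) for i in range(n)])

rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * 0.1 * np.arange(64)) * np.hanning(64)
data = rng.normal(0.0, 1.0, 512)                            # unit-variance noise
data[200:264] += 8.0 * template / np.linalg.norm(template)  # inject at SNR ~ 8
out = matched_filter(data, template)
peak = int(np.argmax(np.abs(out)))
```

The filter output peaks at the injection time with a value near the injected SNR, which is the quantity the network's classification performance is compared against.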

  3. 2D and 3D modeling of wave propagation in cold magnetized plasma near the Tore Supra ICRH antenna relying on the perfectly matched layer technique

    International Nuclear Information System (INIS)

    Jacquot, J; Colas, L; Clairet, F; Goniche, M; Hillairet, J; Lombard, G; Heuraux, S; Milanesio, D

    2013-01-01

    A novel method to simulate ion cyclotron wave coupling in the edge of a tokamak plasma with the finite element technique is presented. It is applied in the commercial software COMSOL Multiphysics. Its main features include the perfectly matched layer (PML) technique to emulate radiating boundary conditions beyond a critical cutoff layer for the fast wave (FW), full-wave propagation across the inhomogeneous cold peripheral plasma and a detailed description of the wave launcher geometry. The PML technique, while widely used in numerical simulations of wave propagation, has scarcely been used for magnetized plasmas, due to specificities of this gyrotropic material. A versatile PML formulation, valid for full dielectric tensors, is summarized and interpreted as wave propagation in an artificial medium. The behavior of this technique has been checked for plane waves on homogeneous plasmas. Wave reflection has been quantified and compared to analytical predictions. An incompatibility issue for adapting the PML for forward (FW) and backward (slow wave (SW)) propagating waves simultaneously has been evidenced. In a tokamak plasma, this critical issue is overcome by taking advantage of the inhomogeneous density profile to reflect the SW before it reaches the PML. The simulated coupling properties of a Tore Supra ion cyclotron resonance heating (ICRH) antenna have been compared to experimental values in a situation of good single-pass absorption. The necessary antenna elements to include in the geometry to recover the coupling properties obtained experimentally are also discussed. (paper)

  4. Renewal of Road Networks Using Map-matching Technique of Trajectories

    Directory of Open Access Journals (Sweden)

    WU Tao

    2017-04-01

    Full Text Available The road network with complete and accurate information, as one of the key foundations of the Smart City, bears significance in fields such as urban planning, traffic management and public travel. However, the long production period of road network data based on traditional surveying methods often leaves it inconsistent with the latest situation. Recently, the positioning techniques ubiquitously used in mobile devices have gradually come into focus for domestic and overseas scholars. Currently, most approaches that generate or update road networks from mobile location information compute directly on GPS trajectory data with various algorithms, which leads to expensive consumption of computational resources for mass GPS data covering large-scale areas. For this reason, we propose a spiral update strategy for road network data based on map-matching technology, which follows an “identify→analyze→extract→update” process. The main idea is to detect condemned road segments of the existing road network with the help of an HMM for each trajectory input, and to repair them, on the local scale, by extracting new road information from the trajectory data. The proposed approach avoids computing on the entire trajectory dataset for road segments. Instead, it updates the existing road network data by focusing on the minimum range of potentially condemned segments. We evaluated the performance of our proposals using GPS traces collected from taxis and OpenStreetMap (OSM) road networks covering urban areas of Wuhan City.
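
    The HMM step of such map matching can be sketched with a tiny Viterbi decoder. The road segments, noise model, and penalty weights below are hypothetical stand-ins for illustration, not the paper's actual formulation.

```python
import math

# Toy HMM map-matching sketch: candidate road segments are represented by
# their nearest projection point to each GPS fix; Viterbi picks the segment
# sequence that best explains the whole trace.
def viterbi_match(gps_trace, candidates, sigma=5.0, beta=2.0):
    """gps_trace: list of (x, y) fixes; candidates: per-fix list of
    (segment_id, (x, y)) candidate projections."""
    def emission(fix, proj):            # Gaussian distance log-likelihood
        d = math.dist(fix, proj)
        return -(d * d) / (2 * sigma * sigma)

    def transition(p, q):               # penalize large jumps between fixes
        return -math.dist(p, q) / beta

    # Dynamic program: best log-score ending in each candidate, plus pointers.
    prev = {j: emission(gps_trace[0], proj)
            for j, (_, proj) in enumerate(candidates[0])}
    back = []
    for t in range(1, len(gps_trace)):
        cur, ptr = {}, {}
        for j, (_, proj) in enumerate(candidates[t]):
            best_i, best_s = max(
                ((i, prev[i] + transition(candidates[t - 1][i][1], proj))
                 for i in prev), key=lambda x: x[1])
            cur[j] = best_s + emission(gps_trace[t], proj)
            ptr[j] = best_i
        back.append(ptr)
        prev = cur

    # Trace back the most likely candidate sequence.
    j = max(prev, key=prev.get)
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [candidates[t][j][0] for t, j in enumerate(path)]

# Two parallel roads; the noisy trace runs along road "A".
trace = [(0, 1), (10, -1), (20, 1)]
cands = [[("A", (0, 0)), ("B", (0, 30))],
         [("A", (10, 0)), ("B", (10, 30))],
         [("A", (20, 0)), ("B", (20, 30))]]
print(viterbi_match(trace, cands))  # → ['A', 'A', 'A']
```

A real system would use road-network distances in the transition term rather than straight-line jumps; the structure of the decoder stays the same.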

  5. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    Directory of Open Access Journals (Sweden)

    Eva Lykkegaard

    2016-04-01

    Full Text Available Previous research has found that young people’s prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-month-long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focuses on how the students’ meetings with the role models affected their thoughts concerning STEM students and attending university. In real-life role-model meetings, the regular self-to-prototype matching process was shown to extend to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of the role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models, and that traditional school subjects carried more resistant prototype images than unfamiliar ones did.

  6. An improved perfectly matched layer for the eigenmode expansion technique

    DEFF Research Database (Denmark)

    Gregersen, Niels; Mørk, Jesper

    2008-01-01

    be suppressed by introducing a perfectly matched layer (PML) using e.g. complex coordinate stretching of the cylinder radius. However, the traditional PML suffers from an artificial field divergence limiting its usefulness. We show that the choice of a constant cylinder radius leads to mode profiles...

  7. Toward Practical Secure Stable Matching

    Directory of Open Access Journals (Sweden)

    Riazi M. Sadegh

    2017-01-01

    Full Text Available The Stable Matching (SM) algorithm has been deployed in many real-world scenarios, including the National Residency Matching Program (NRMP) and financial applications such as the matching of suppliers and consumers in capital markets. Since these applications typically involve highly sensitive information such as the underlying preference lists, their current implementations rely on trusted third parties. This paper introduces the first provably secure and scalable implementation of SM, based on Yao’s garbled circuit protocol and Oblivious RAM (ORAM). Our scheme can securely compute a stable match for 8k pairs four orders of magnitude faster than the previously best known method. We achieve this by introducing a compact and efficient sub-linear size circuit. We further decrease the computation cost by three orders of magnitude by proposing a novel technique to avoid unnecessary iterations in the SM algorithm. We evaluate our implementation for several problem sizes and plan to publish it as open source.
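
    For reference, the plaintext algorithm that such secure protocols compute is the classic Gale-Shapley procedure; a minimal (non-secure) sketch with made-up preference lists:

```python
from collections import deque

# Textbook Gale-Shapley stable matching: proposers work down their preference
# lists; reviewers tentatively accept and trade up when a better offer arrives.
def stable_match(proposer_prefs, reviewer_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    engaged_to = {}                      # reviewer -> proposer
    free = deque(proposer_prefs)
    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged_to:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][engaged_to[r]]:
            free.append(engaged_to[r])   # displaced partner becomes free again
            engaged_to[r] = p
        else:
            free.append(p)               # rejected; will propose further down
    return {p: r for r, p in engaged_to.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(stable_match(men, women))  # → {'m2': 'w1', 'm1': 'w2'}
```

The secure version in the record evaluates essentially this loop inside a garbled circuit, with ORAM hiding which list entries are accessed.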

  8. MR angiography with a matched filter

    International Nuclear Information System (INIS)

    De Castro, J.B.; Riederer, S.J.; Lee, J.N.

    1987-01-01

    The technique of matched filtering was applied to a series of cine MR images. The filter was devised to yield a subtraction angiographic image in which direct current components present in the cine series are removed and the signal-to-noise ratio (S/N) of the vascular structures is optimized. The S/N of a matched filter was compared with that of a simple subtraction, in which an image with high flow is subtracted from one with low flow. Experimentally, a range of results from minimal improvement to significant (60%) improvement in S/N was seen in the comparisons of matched filtered subtraction with simple subtraction

  9. Halo control, beam matching, and new dynamical variables for beam distributions

    International Nuclear Information System (INIS)

    Lysenko, W.; Parsa, Z.

    1997-01-01

    We present the status of our work on physics models related to the understanding and control of beam halo, which is a cause of particle loss in high-power ion linear accelerators. We can minimize these particle losses, even in the presence of nonlinearities, by ensuring the beam is matched to high order. Our goal is to determine new dynamical variables that enable us to more directly solve for the evolution of the halo. We considered moments and several new variables, using a Lie-Poisson formulation whenever possible. Using symbolic techniques, we computed high-order matches and mode invariants (analogs of moment invariants) in the new variables. A promising new development is the set of variables we call weighted moments, which allow us to compute high-order nonlinear effects (like halos) while making use of well-developed existing results and computational techniques developed for studying first-order effects. copyright 1997 American Institute of Physics

  10. Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing

    Science.gov (United States)

    Li-Chee-Ming, J.; Armenakis, C.

    2017-05-01

    This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
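
    The flavor of geometric hashing can be conveyed by a minimal 2D sketch: every ordered point pair defines a basis, the remaining points expressed in that basis become quantized hash keys, and scene bases vote for matching model bases. The point sets and quantization step below are invented for illustration, and panoramic-image features are reduced to bare 2D points.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

# Similarity-invariant basis coordinates: translation, rotation, and uniform
# scale cancel out, so the keys survive the scene's unknown pose.
def basis_coords(points, i, j):
    o, e = points[i], points[j]
    u = e - o                               # basis x-axis
    v = np.array([-u[1], u[0]])             # perpendicular basis y-axis
    s = u @ u
    return [((p - o) @ u / s, (p - o) @ v / s) for p in points]

def build_table(model, q=0.25):
    # Offline stage: hash every model point under every ordered basis pair.
    table = defaultdict(set)
    for i, j in permutations(range(len(model)), 2):
        for c in basis_coords(model, i, j):
            key = (round(c[0] / q), round(c[1] / q))
            table[key].add((i, j))
    return table

def vote(table, scene, q=0.25):
    # Online stage: each scene basis accumulates votes for model bases.
    votes = defaultdict(int)
    for i, j in permutations(range(len(scene)), 2):
        for c in basis_coords(scene, i, j):
            key = (round(c[0] / q), round(c[1] / q))
            for b in table.get(key, ()):
                votes[(i, j, b)] += 1
    return max(votes.values()) if votes else 0

model = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
# The scene is the model rotated, scaled, and translated: votes stay maximal.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = model @ R.T * 2.5 + np.array([5.0, -3.0])
print(vote(build_table(model), scene))  # → 4 (every point votes for the right basis)
```

A full system would follow the winning basis with a geometric verification step, analogous to the space resection mentioned in the record.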

  11. The Economics of Match-Fixing

    OpenAIRE

    Caruso, Raul

    2007-01-01

    The phenomenon of match-fixing constitutes a persistent element of sports contests. This paper presents a simple formal model to explain it. The intuition behind it is that an asymmetry in the evaluation of the stake is the key factor leading to match-fixing. In sum, this paper considers a partial equilibrium model of contest in which two asymmetric, rational and risk-neutral opponents evaluate a contested stake differently. Unlike common contest models, agents have the option ...

  12. Image-Based Modeling Techniques for Architectural Heritage 3D Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms and different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities, we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on his or her computer, whereas desktop systems require long processing times and heavyweight computation. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare the 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioners' purposes.

  13. Circuit and Measurement Technique for Radiation Induced Drift in Precision Capacitance Matching

    Science.gov (United States)

    Prasad, Sudheer; Shankar, Krishnamurthy Ganapathy

    2013-04-01

    In the design of radiation-tolerant precision ADCs targeted at the space market, a matched capacitor array is crucial. The drift of capacitance ratios due to radiation should be small enough not to cause linearity errors. Conventional methods for measuring capacitor matching may not achieve the desired level of accuracy due to radiation-induced gain errors in the measurement circuits. In this work, we present a circuit and method for measuring capacitance ratio drift to a very high accuracy (< 1 ppm), unaffected by radiation levels up to 150 krad.

  14. The comparison of Co-60 and 4MV photons matching dosimetry during half-beam technique

    International Nuclear Information System (INIS)

    Cakir, Aydin; Bilge, Hatice; Dadasbilge, Alpar; Kuecuecuek, Halil; Okutan, Murat; Merdan Fayda, Emre

    2005-01-01

    In this phantom study, we compared the matching dosimetry differences between half-beam blocking of Co-60 and asymmetric collimation of 4 MV photons during craniospinal irradiation. The dose distributions are compared and discussed. First, gaps of different sizes were left between the cranial and spinal field borders. Second, the fields were overlapped by the same amounts. We irradiated films located in water-equivalent solid phantoms with Co-60 and 4 MV photon beams. This study indicates that field placement errors within ±1 mm are acceptable for both Co-60 and 4 MV photon energies during craniospinal irradiation with the half-beam block technique. Within these limits the dose variations remain within ±5%. However, setup errors greater than 1 mm are unacceptable for both asymmetric collimation of 4 MV photons and half-beam blocking of Co-60.

  15. Exploiting Best-Match Equations for Efficient Reinforcement Learning

    NARCIS (Netherlands)

    van Seijen, Harm; Whiteson, Shimon; van Hasselt, Hado; Wiering, Marco

    This article presents and evaluates best-match learning, a new approach to reinforcement learning that trades off the sample efficiency of model-based methods with the space efficiency of model-free methods. Best-match learning works by approximating the solution to a set of best-match equations,

  16. An Incentive Theory of Matching

    OpenAIRE

    Brown, Alessio J. G.; Merkl, Christian; Snower, Dennis J.

    2010-01-01

    This paper examines the labour market matching process by distinguishing its two component stages: the contact stage, in which job searchers make contact with employers, and the selection stage, in which they decide whether to match. We construct a theoretical model explaining two-sided selection through microeconomic incentives. Firms face adjustment costs in responding to heterogeneous variations in the characteristics of workers and jobs. Matches and separations are described through firms'...

  17. Impedance matching wireless power transmission system for biomedical devices.

    Science.gov (United States)

    Lum, Kin Yun; Lindén, Maria; Tan, Tian Swee

    2015-01-01

    For medical applications, the efficiency and transmission distance of wireless power transfer (WPT) are always the main concerns. Research has shown that impedance matching is one of the critical factors in dealing with this problem. However, little work has considered both the source and load sides together. Matching on both sides is crucial for achieving optimum overall performance, and the present work proposes a circuit model analysis for design and implementation. The proposed technique was validated against experiment and software simulation. Results showed an improvement in transmission distance of up to 6 times, and efficiency at this transmission distance was improved up to 7 times compared to the impedance-mismatched system. The system demonstrated a near-constant transfer efficiency over an operating range of 2 cm to 12 cm.
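
    As background on the kind of calculation involved, the standard L-section matching recipe (a textbook formula, not the paper's specific circuit) can be sketched as follows; the component values and frequency are illustrative only.

```python
import math

# L-section impedance matching: a series inductor with the load and a shunt
# capacitor across the source transform a small load resistance up to the
# source resistance at one design frequency.
def l_match(r_source, r_load, freq):
    """Return (L in henries, C in farads) matching r_load < r_source at freq (Hz)."""
    if not r_load < r_source:
        raise ValueError("this topology assumes r_load < r_source")
    q = math.sqrt(r_source / r_load - 1.0)   # loaded Q of the network
    x_series = q * r_load                    # required series reactance (inductive)
    x_shunt = r_source / q                   # required shunt reactance (capacitive)
    w = 2 * math.pi * freq
    return x_series / w, 1.0 / (w * x_shunt)

# Example: match a 5-ohm coil to a 50-ohm source at 13.56 MHz (a common ISM band).
L, C = l_match(50.0, 5.0, 13.56e6)
print(f"L = {L * 1e9:.1f} nH, C = {C * 1e12:.1f} pF")  # → L = 176.1 nH, C = 704.2 pF
```

A single L-section matches only at one frequency and one load; a two-sided WPT design, as in the record, applies such networks on both the source and load coils.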

  18. Hybrid-Based Dense Stereo Matching

    Science.gov (United States)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are confirmed by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter P_e is imposed on the energy function of SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting both values derived from the SGM cost aggregation and the U-SURF matching, providing more reliable estimates at disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potential of the hybrid dense stereo matching method.
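
    For contrast with the semi-global approach, the naive local baseline it improves on can be sketched as a sum-of-squared-differences block matcher; the synthetic image pair and window size below are invented for the example.

```python
import numpy as np

# Naive SSD block matching: for each pixel, slide a window along the epipolar
# line and pick the disparity with the lowest sum-of-squared-differences cost.
# SGM improves on exactly this by aggregating costs with smoothness penalties.
def block_match(left, right, max_disp, win=1):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.sum((patch - right[y - win:y + win + 1,
                                           x - d - win:x - d + win + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left shifted 2 pixels leftward,
# i.e. a constant true disparity of 2 everywhere.
rng = np.random.default_rng(1)
left = rng.random((12, 20))
true_d = 2
right = np.roll(left, -true_d, axis=1)
d = block_match(left, right, max_disp=4)
inner = d[1:-1, 1 + 4:-1 - true_d]      # ignore borders and the roll wrap-around
print(np.all(inner == true_d))          # → True
```

On textured synthetic data this succeeds everywhere; on real imagery it fails precisely at the occlusions, discontinuities, and textureless regions that motivate the record's hybrid method.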

  19. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
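
    The linear core that NOMP generalizes can be sketched as plain orthogonal matching pursuit: greedily grow the support with the dictionary column most correlated with the residual, then re-fit by least squares. For a deterministic demonstration the dictionary below is taken orthonormal, which is simpler than the paper's K-SVD dictionary.

```python
import numpy as np

# Plain (linear) orthogonal matching pursuit.
def omp(A, y, n_nonzero):
    support = []
    x = np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(40, 40)))   # orthonormal dictionary columns
x_true = np.zeros(40)
x_true[[5, 17, 29]] = [1.5, -2.0, 0.8]
y = Q @ x_true                                    # noiseless 3-sparse observation
x_hat = omp(Q, y, n_nonzero=3)
print(np.flatnonzero(x_hat))                      # → [ 5 17 29]
```

NOMP replaces the explicit correlations `A.T @ residual` with gradient estimates from the stochastic ensemble method, since the forward model (the reservoir simulator) is nonlinear.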

  20. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    Science.gov (United States)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools and the correlation of the patient's virtual models with the patient himself are some examples, taken from the biomedical field, of a single problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method gives the patient minimum discomfort while keeping the errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.

  1. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...
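
    The template matching that such grid-location methods build on can be illustrated with plain normalized cross-correlation; the image and template below are synthetic stand-ins.

```python
import numpy as np

# Normalized cross-correlation (NCC) template matching: score every window of
# the image against a zero-mean, norm-scaled template; the score peaks at 1.0
# where the window matches the template up to brightness and contrast.
def ncc_match(image, template):
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt(np.sum(t ** 2))
    h, w = image.shape
    scores = np.full((h - th + 1, w - tw + 1), -1.0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            win = image[y:y + th, x:x + tw]
            wz = win - win.mean()
            denom = np.sqrt(np.sum(wz ** 2)) * tnorm
            if denom > 0:
                scores[y, x] = np.sum(wz * t) / denom
    return scores

rng = np.random.default_rng(2)
image = rng.random((20, 20))
template = image[6:11, 9:14].copy()      # the true location is (6, 9)
scores = ncc_match(image, template)
y0, x0 = np.unravel_index(np.argmax(scores), scores.shape)
print((int(y0), int(x0)))  # → (6, 9)
```

Deformable-template approaches such as the record's replace this single rigid template with a grid whose node positions are regularized by the MRF prior.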

  2. Platform pricing in matching markets

    NARCIS (Netherlands)

    Goos, M.; van Cayseele, P.; Willekens, B.

    2011-01-01

    This paper develops a simple model of monopoly platform pricing accounting for two pertinent features of matching markets. 1) The trading process is characterized by search and matching frictions implying limits to positive cross-side network effects and the presence of own-side congestion.

  3. Stroke Lesions in a Large Upper Limb Rehabilitation Trial Cohort Rarely Match Lesions in Common Preclinical Models

    Science.gov (United States)

    Edwardson, Matthew A.; Wang, Ximing; Liu, Brent; Ding, Li; Lane, Christianne J.; Park, Caron; Nelsen, Monica A.; Jones, Theresa A; Wolf, Steven L; Winstein, Carolee J; Dromerick, Alexander W.

    2017-01-01

    Background: Stroke patients with mild-moderate upper extremity (UE) motor impairments and minimal sensory and cognitive deficits provide a useful model to study recovery and improve rehabilitation. Laboratory-based investigators use lesioning techniques for similar goals. Objective: Determine whether stroke lesions in an UE rehabilitation trial cohort match lesions from the preclinical stroke recovery models used to drive translational research. Methods: Clinical neuroimages from 297 participants enrolled in the Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) study were reviewed. Images were characterized based on lesion type (ischemic or hemorrhagic), volume, vascular territory, depth (cortical gray matter, cortical white matter, subcortical), old strokes, and leukoaraiosis. Lesions were compared with those of preclinical stroke models commonly used to study upper limb recovery. Results: Among the ischemic stroke participants, median infarct volume was 1.8 mL, with most lesions confined to subcortical structures (61%), including the anterior choroidal artery territory (30%) and the pons (23%). ICARE participants are not representative of all stroke patients, but they represent a clinically and scientifically important subgroup. Compared to lesions in general stroke populations and widely studied animal models of recovery, ICARE participants had smaller, more subcortically based strokes. Improved preclinical-clinical translational efforts may require better alignment of lesions between preclinical and human stroke recovery models. PMID:28337932

  4. Sequence Matching Analysis for Curriculum Development

    Directory of Open Access Journals (Sweden)

    Liem Yenny Bendatu

    2015-06-01

    Full Text Available Many organizations apply information technologies to support their business processes. Using these information technologies, the actual events are recorded and checked for conformance with a predefined model. Conformance checking is an approach to measure the fitness and appropriateness between a process model and actual events. However, when there are multiple events with the same timestamp, the traditional approach cannot produce such measures. This study attempts to develop a sequence matching analysis. Taking conformance checking as its basis, the proposed approach utilizes current control-flow techniques from the process mining domain. A case study in the field of educational processes has been conducted. This study also proposes a curriculum analysis framework to test the proposed approach. By considering the learning sequences of students, it yields measurements useful for curriculum development. Finally, the results of the proposed approach have been verified by the relevant instructors for further development.
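
    One simple way to score how well an observed sequence matches a modeled one, in the spirit of conformance fitness, is via edit distance; the course names below are invented for the example and are not the study's data.

```python
# Sequence matching via Levenshtein edit distance: fitness is 1 minus the
# normalized cost of aligning the observed trace against the modeled sequence.
def edit_distance(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def fitness(model_seq, trace):
    dist = edit_distance(model_seq, trace)
    return 1.0 - dist / max(len(model_seq), len(trace))

curriculum = ["calc1", "calc2", "linalg", "stats", "ml"]
student = ["calc1", "linalg", "calc2", "stats", "ml"]  # two courses swapped
print(fitness(curriculum, curriculum))          # → 1.0
print(round(fitness(curriculum, student), 2))   # → 0.6
```

Full conformance checking aligns traces against a process model rather than a single reference sequence, but the fitness idea, penalized alignment cost normalized to [0, 1], is the same.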

  5. Perfectly matched layers for radio wave propagation in inhomogeneous magnetized plasmas

    International Nuclear Information System (INIS)

    Gondarenko, Natalia A.; Guzdar, Parvez N.; Ossakow, Sidney L.; Bernhardt, Paul A.

    2004-01-01

    We present 1D and 2D numerical models of the propagation of high-frequency (HF) radio waves in inhomogeneous magnetized plasmas. The simulations allow one to describe the process of linear conversion of HF electromagnetic waves into electrostatic waves. The waves, launched from the lower boundary normally or at a specified angle on a layer of a magnetoactive plasma, can undergo linear conversion of the incident O-mode into a Z-mode at appropriate locations in an inhomogeneous prescribed plasma density. The numerical scheme for solving 2D HF wave propagation equations is described. The model employed the Maxwellian perfectly matched layers (PML) technique for approximating nonreflecting boundary conditions. Our numerical studies demonstrate the effectiveness of the PML technique for transparent boundary conditions for an open-domain problem

  6. A perfectly matched layer for fluid-solid problems: Application to ocean-acoustics simulations with solid ocean bottoms

    DEFF Research Database (Denmark)

    Xie, Zhinan; Matzen, René; Cristini, Paul

    2016-01-01

    A time-domain Legendre spectral-element method is described for full-wave simulation of ocean acoustics models, i.e., coupled fluid-solid problems in unbounded or semi-infinite domains, taking into account shear wave propagation in the ocean bottom. The technique can accommodate range-dependent and depth-dependent wave speed and density, as well as steep ocean floor topography. For truncation of the infinite domain, to efficiently absorb outgoing waves, a fluid-solid complex-frequency-shifted unsplit perfectly matched layer is introduced based on the complex coordinate stretching technique. The complex stretching is rigorously taken into account in the derivation of the fluid-solid matching condition inside the absorbing layer, which has never been done before in the time domain. Two implementations are designed: a convolutional formulation and an auxiliary differential equation formulation...

  7. Graph configuration model based evaluation of the education-occupation match.

    Science.gov (United States)

    Gadar, Laszlo; Abonyi, Janos

    2018-01-01

    To study education-occupation matchings, we developed a bipartite network model of the education-to-work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlight the hierarchical and clustered structure of the career paths based on multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education.

  8. Pattern Matching Framework to Estimate the Urgency of Off-Normal Situations in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Jin Soo; Park, Sang Jun; Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyo Jin; Park, Soon Yeol [Korea Hydro and Nuclear Power, Yeonggwang (Korea, Republic of)

    2010-10-15

    According to power plant operators, skilled operators can recognize off-normal situations quite well from an incipient stage and also anticipate the possibility of upcoming trips, even though it is difficult to clarify the cause of the off-normal situation. From the interviews, we could confirm the feasibility of two assumptions for the diagnosis of off-normal conditions: one is that we can predict whether an accidental shutdown will happen if we observe the early stage in which an off-normal situation starts to grow; the other is that observation at the early stage can provide the remaining time to a trip as well as the cause of the off-normal situation. For this purpose, the development of on-line monitoring systems using various data processing techniques in nuclear power plants (NPPs) has been the subject of increasing attention and has become an important contributor to improved performance and economics. Many studies have suggested diagnostic methodologies. One representative method uses distance-based discrimination as a similarity measure, for example the Euclidean distance. A variety of artificial intelligence techniques, such as neural networks, have been developed as well. In addition, some of these methodologies reduce the data dimensionality to work more effectively. While sharing the same motivation as these previous achievements, this study proposes non-parametric pattern matching techniques to reduce the uncertainty involved in the selection of models and modeling processes. This can be characterized by the following two aspects: first, to overcome the limitation that most studies consider only a few typical scenarios, this study uses the entire set of off-normal situations anticipated in NPPs, created by a full-scope simulator. Second, many of the existing researches adopted the process of forming a diagnosis model, a so-called training technique or a parametric

  9. Pattern Matching Framework to Estimate the Urgency of Off-Normal Situations in NPPs

    International Nuclear Information System (INIS)

    Shin, Jin Soo; Park, Sang Jun; Heo, Gyun Young; Park, Jin Kyun; Kim, Hyo Jin; Park, Soon Yeol

    2010-01-01

    According to power plant operators, skilled operators can recognize off-normal situations quite well from an incipient stage and also anticipate the possibility of upcoming trips, even though it is difficult to clarify the cause of the off-normal situation. From the interviews, we could confirm the feasibility of two assumptions for the diagnosis of off-normal conditions: one is that we can predict whether an accidental shutdown will happen if we observe the early stage in which an off-normal situation starts to grow; the other is that observation at the early stage can provide the remaining time to a trip as well as the cause of the off-normal situation. For this purpose, the development of on-line monitoring systems using various data processing techniques in nuclear power plants (NPPs) has been the subject of increasing attention and has become an important contributor to improved performance and economics. Many studies have suggested diagnostic methodologies. One representative method uses distance-based discrimination as a similarity measure, for example the Euclidean distance. A variety of artificial intelligence techniques, such as neural networks, have been developed as well. In addition, some of these methodologies reduce the data dimensionality to work more effectively. While sharing the same motivation as these previous achievements, this study proposes non-parametric pattern matching techniques to reduce the uncertainty involved in the selection of models and modeling processes. This can be characterized by the following two aspects: first, to overcome the limitation that most studies consider only a few typical scenarios, this study uses the entire set of off-normal situations anticipated in NPPs, created by a full-scope simulator. Second, many of the existing researches adopted the process of forming a diagnosis model, a so-called training technique or a parametric

  10. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms and different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities, we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on his or her computer, whereas desktop systems require long processing times and heavyweight computation. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare the 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioners' purposes.

  11. Physics-based shape matching for intraoperative image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Suwelack, Stefan, E-mail: suwelack@kit.edu; Röhl, Sebastian; Bodenstedt, Sebastian; Reichard, Daniel; Dillmann, Rüdiger; Speidel, Stefanie [Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Adenauerring 2, Karlsruhe 76131 (Germany); Santos, Thiago dos; Maier-Hein, Lena [Computer-assisted Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Wagner, Martin; Wünscher, Josephine; Kenngott, Hannes; Müller, Beat P. [General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, Heidelberg 69120 (Germany)

    2014-11-01

    method is able to accurately match partial surfaces. Finally, a phantom experiment demonstrates how the method can be combined with stereo endoscopic imaging to provide nonrigid registration during laparoscopic interventions. Conclusions: The PBSM approach for surface matching is fast, robust, and accurate. As the technique is based on a preoperative volumetric FE model, it naturally recovers the position of volumetric structures (e.g., tumors and vessels). It can not only be used to recover soft-tissue deformations from intraoperative surface models but can also be combined with landmark data from volumetric imaging. In addition to applications in laparoscopic surgery, the method might prove useful in other areas that require soft-tissue registration from sparse intraoperative sensor data (e.g., radiation therapy).

  12. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    Science.gov (United States)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to make it invariant to rotations and scale changes. A method that lets the Hopfield neural network perform this matching process is discussed. An advantage is that the relaxation matching process can be performed in real time, exploiting the neural network's massively parallel information processing. Experimental results with large simulated images demonstrate the effectiveness and feasibility of both the rotation- and scale-invariant point pattern relaxation matching and its implementation on the Hopfield neural network. In addition, we show that the presented method tolerates small random errors.

  13. Security and matching of partial fingerprint recognition systems

    Science.gov (United States)

    Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.

    2004-08-01

    Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on the FVC2002 DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
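
    The notion of secondary features built only from the relative information of minutiae can be illustrated with a small sketch. The triple below (relative distance, radial angle, orientation difference) is one common construction, not necessarily the authors' exact feature, and the minutiae values are made up; the point is that the triple is unchanged by any global translation or rotation of the print.

```python
import math

def secondary_feature(center, neighbor):
    """Relative (distance, radial-angle, orientation-difference) triple.

    Each minutia is (x, y, theta). Using only relative quantities makes
    the feature invariant to global translation and rotation of the print.
    """
    (x0, y0, t0), (x1, y1, t1) = center, neighbor
    d = math.hypot(x1 - x0, y1 - y0)
    # direction from center to neighbor, measured in the center's frame
    phi = (math.atan2(y1 - y0, x1 - x0) - t0) % (2 * math.pi)
    # difference between the two ridge orientations
    dtheta = (t1 - t0) % (2 * math.pi)
    return (d, phi, dtheta)

# Two hypothetical minutiae; rotating/translating both leaves the feature unchanged.
a = (10.0, 20.0, 0.0)
b = (14.0, 23.0, math.pi / 4)
print(secondary_feature(a, b))
```

    Matching then reduces to comparing such triples between prints, so no core or delta is needed for alignment.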

  14. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    Science.gov (United States)

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-04-10

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol -1 .
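
    Stripped of all quantum machinery, the force-matching step itself is a least-squares fit of model parameters to reference forces. A toy sketch with a single linear parameter and a made-up 1/r² basis (nothing here is DFTB-specific; the "reference" forces stand in for the expensive DFT data):

```python
def force_match(samples, basis):
    """Least-squares fit of a single linear parameter k so that
    k * basis(r) best reproduces reference forces.

    `samples` is a list of (r, reference_force) pairs, e.g. taken from a
    higher-level (here: hypothetical DFT) calculation. For a model linear
    in k the normal equations collapse to a single ratio.
    """
    num = sum(f * basis(r) for r, f in samples)
    den = sum(basis(r) ** 2 for r, _ in samples)
    return num / den

# Hypothetical reference forces generated from k_true = 2.5 with a 1/r^2 basis.
basis = lambda r: 1.0 / r ** 2
k_true = 2.5
samples = [(r / 10.0, k_true * basis(r / 10.0)) for r in range(5, 15)]
k_fit = force_match(samples, basis)
print(round(k_fit, 6))  # -> 2.5 (exact recovery on noise-free data)
```

    Real force matching fits many nonlinear parameters over thousands of atomic configurations, but the objective has this same shape: minimize the squared deviation between model and reference forces.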

  15. Transverse Matching Progress Of The SNS Superconducting Linac

    International Nuclear Information System (INIS)

    Zhang, Yan; Cousineau, Sarah M.; Liu, Yun

    2011-01-01

    Experience using laser-wire beam profile measurement to perform transverse beam matching in the SNS superconducting linac is discussed. As the SNS beam power is ramped up to 1 MW, transverse beam matching becomes a concern to control beam loss and residual activation in the linac. In our experiments, however, beam loss is not very sensitive to the matching condition. In addition, we have encountered difficulties in performing a satisfactory transverse matching with the envelope model currently available in the XAL software framework. Offline data analysis from multi-particle tracking simulation shows that the accuracy of the current online model may not be sufficient for modeling the SC linac.

  16. Tackle technique and tackle-related injuries in high-level South African Rugby Union under-18 players: real-match video analysis.

    Science.gov (United States)

    Burger, Nicholas; Lambert, Michael I; Viljoen, Wayne; Brown, James C; Readhead, Clint; Hendricks, Sharief

    2016-08-01

    The high injury rate associated with rugby union is primarily due to the tackle, and poor contact technique has been identified as a risk factor for injury. We aimed to determine whether the tackle technique proficiency scores were different in injurious tackles versus tackles that did not result in injury using real-match scenarios in high-level youth rugby union. Injury surveillance was conducted at the under-18 Craven Week tournaments (2011-2013). Tackle-related injury information was used to identify injury events in the match video footage and non-injury events were identified for the injured player cohort. Injury and non-injury events were scored for technique proficiency and Cohen's effect sizes were calculated and the Student t test (p<0.05) was performed to compare injury versus non-injury scores. The overall mean score for front-on ball-carrier proficiency was 7.17±1.90 and 9.02±2.15 for injury and non-injury tackle events, respectively (effect size=moderate; p<0.05). The overall mean score for side/behind ball-carrier proficiency was 4.09±2.12 and 7.68±1.72 for injury and non-injury tackle events, respectively (effect size=large; p<0.01). The overall mean score for front-on tackler proficiency was 7.00±1.95 and 9.35±2.56 for injury and non-injury tackle events, respectively (effect size=moderate; p<0.05). The overall mean score for side/behind tackler proficiency was 5.47±1.60 and 8.14±1.75 for injury and non-injury tackle events, respectively (effect size=large; p<0.01). Higher overall mean and criterion-specific tackle-related technique scores were associated with a non-injury outcome. The ability to perform well during tackle events may decrease the risk of injury and may manifest in superior performance.
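
    Cohen's effect sizes like those reported above are conventionally computed with a pooled standard deviation. A minimal sketch with invented proficiency scores (not the study's data):

```python
import math
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

# Hypothetical proficiency scores: non-injury vs. injury tackle events.
non_injury = [9.0, 8.5, 10.0, 9.5, 8.0]
injury = [7.0, 6.5, 8.0, 7.5, 6.0]
print(round(cohens_d(non_injury, injury), 2))  # -> 2.53
```

    By the usual rule of thumb, d around 0.5 is "moderate" and d of 0.8 or more is "large", matching the qualitative labels in the abstract.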

  17. The match-to-match variation of match-running in elite female soccer.

    Science.gov (United States)

    Trewin, Joshua; Meylan, César; Varley, Matthew C; Cronin, John

    2018-02-01

    The purpose of this study was to examine the match-to-match variation of match-running in elite female soccer players utilising GPS, using full-match and rolling period analyses. Longitudinal study. Elite female soccer players (n=45) from the same national team were observed during 55 international fixtures across 5 years (2012-2016). Data were analysed using a custom-built MS Excel spreadsheet as full-matches and using a rolling 5-min analysis period, for all players who played 90-min matches (files=172). Variation was examined using the coefficient of variation and 90% confidence limits, calculated following log transformation. Total distance per minute exhibited the smallest variation when both the full-match and peak 5-min running periods were examined (CV=6.8-7.2%). Sprint efforts were the most variable during a full match (CV=53%), whilst high-speed running per minute exhibited the greatest variation in the post-peak 5-min period (CV=143%). Peak running periods were slightly more variable than full-match analyses, with the post-peak period very highly variable. Variability of accelerations (CV=17%) and Player Load (CV=14%) was lower than that of high-speed actions. Positional differences were also present, with centre backs exhibiting the greatest variation in high-speed movements (CV=41-65%). Practitioners and researchers should account for within-player variability when examining match performances. Identification of peak running periods should be used to assist in modelling worst-case scenarios, and micro-sensor technology should be further examined for viable use within match analyses.
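
    A CV "calculated following log transformation" is typically back-transformed from the standard deviation of the logged values. A small sketch assuming that convention (the distance-per-minute values are invented, not the study's data):

```python
import math
from statistics import stdev

def log_transformed_cv(values):
    """Coefficient of variation (%) computed after log transformation,
    as is common for positively skewed match-running variables.

    The SD of the natural logs is back-transformed with
    CV% = 100 * sqrt(exp(s_ln^2) - 1).
    """
    s_ln = stdev(math.log(v) for v in values)
    return 100.0 * math.sqrt(math.exp(s_ln ** 2) - 1.0)

# Hypothetical total distance per minute (m/min) for one player across matches.
dpm = [105.0, 110.0, 98.0, 102.0, 112.0]
print(round(log_transformed_cv(dpm), 1))
```

    For small spreads this agrees closely with the raw CV, but it is less distorted by the skewed distributions typical of sprint counts and high-speed running.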

  18. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Based on the huge amount of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling which make subdivision surfaces even more useful. Subdivision surfaces regained attention several years ago after their application in full-featured 3D animation movies, such as Toy Story. Since then, and due to their attractive properties, an ever i...

  19. INFORMATION SYSTEMS AUDIT CURRICULA CONTENT MATCHING

    Directory of Open Access Journals (Sweden)

    Vasile-Daniel CARDOȘ

    2014-11-01

    Full Text Available Financial and internal auditors must cope with the challenge of performing their mission in technology-enhanced environments. In this article we match the information technology descriptions found in the International Federation of Accountants (IFAC) and the Institute of Internal Auditors (IIA) curricula against the Model Curriculum issued by the Information Systems Audit and Control Association (ISACA). By reviewing these three curricula, we matched the content of the ISACA Model Curriculum with the IFAC International Education Practice Statement 2 and the IIA's Global Model Internal Audit Curriculum. In the IFAC and IIA curricula there are 16 content elements, out of 19 possible, that match, in their description, the ISACA Model Curriculum's content. We noticed that a candidate who graduates from an IFAC- or IIA-compliant program acquires IS auditing competences similar to the specific content of the ISACA Model Curriculum, but less than the requirements for a professional information systems auditor.

  20. A method for matching the refractive index and kinematic viscosity of a blood analog for flow visualization in hydraulic cardiovascular models.

    Science.gov (United States)

    Nguyen, T T; Biadillah, Y; Mongrain, R; Brunette, J; Tardif, J C; Bertrand, O F

    2004-08-01

    In this work, we propose a simple method to simultaneously match the refractive index and kinematic viscosity of a circulating blood analog in hydraulic models for optical flow measurement techniques (PIV, PMFV, LDA, and LIF). The method is based on determining the volumetric proportions and temperature at which two transparent miscible liquids should be mixed to reproduce the targeted fluid characteristics. The temperature dependence models are a linear relation for the refractive index and an Arrhenius relation for the dynamic viscosity of each liquid. The dynamic viscosity of the mixture is then represented with a Grunberg-Nissan model of type 1. Experimental tests matching the refractive index of acrylic and the viscosity of blood were found to be in very good agreement with the targeted values (measured refractive index of 1.486 and kinematic viscosity of 3.454 mm²/s, against targeted values of 1.47 and 3.300 mm²/s).
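
    The fitting procedure described, a linear refractive-index model, Arrhenius viscosities, and a type 1 Grunberg-Nissan mixing rule, can be sketched as a grid search over mixing fraction and temperature. All liquid coefficients below are invented placeholders, not the paper's fluids, and a volume-weighted refractive index is assumed for the mixture:

```python
import math

def n_liquid(a, b, T):    # refractive index, linear in temperature
    return a + b * T

def eta_liquid(A, B, T):  # dynamic viscosity, Arrhenius relation
    return A * math.exp(B / T)

def mixture(x1, T, liq1, liq2, G12=0.0):
    """Refractive index (volume-weighted) and Grunberg-Nissan type 1
    dynamic viscosity of a binary mixture; x1 is liquid 1's fraction."""
    x2 = 1.0 - x1
    n = x1 * n_liquid(liq1["a"], liq1["b"], T) + x2 * n_liquid(liq2["a"], liq2["b"], T)
    ln_eta = (x1 * math.log(eta_liquid(liq1["A"], liq1["B"], T))
              + x2 * math.log(eta_liquid(liq2["A"], liq2["B"], T))
              + x1 * x2 * G12)
    return n, math.exp(ln_eta)

def best_match(target_n, target_eta, liq1, liq2):
    """Grid-search the (fraction, temperature) pair that best hits both targets."""
    best = None
    for i in range(101):
        x1 = i / 100.0
        for T in range(288, 314):  # roughly 15-40 degrees C, in kelvin
            n, eta = mixture(x1, T, liq1, liq2)
            err = ((n - target_n) / target_n) ** 2 + ((eta - target_eta) / target_eta) ** 2
            if best is None or err < best[0]:
                best = (err, x1, T)
    return best[1:]

# Two hypothetical liquids; recover a known mixture from its properties.
liq1 = {"a": 1.49, "b": -2e-4, "A": 0.01, "B": 2000.0}
liq2 = {"a": 1.33, "b": -1e-4, "A": 0.05, "B": 1500.0}
target_n, target_eta = mixture(0.4, 300, liq1, liq2)
print(best_match(target_n, target_eta, liq1, liq2))  # -> (0.4, 300)
```

    In practice the two targets come from the model material (acrylic) and the working fluid (blood), and the search gives the mixing recipe and operating temperature.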

  1. On the use of INS to improve Feature Matching

    Science.gov (United States)

    Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.

    2014-11-01

    The continuous technological improvement of mobile devices opens the frontiers of Mobile Mapping systems to very compact systems, i.e. a smartphone or a tablet. This motivates the development of efficient 3D reconstruction techniques based on the sensors typically embedded in such devices, i.e. imaging sensors, GPS and Inertial Navigation Systems (INS). Such methods usually exploit photogrammetry techniques (structure from motion) to estimate the geometry of the scene. 3D reconstruction techniques such as structure from motion rely on features properly matched across different images to compute the 3D positions of objects by means of triangulation. Hence, correct feature matching is of fundamental importance to ensure good-quality 3D reconstructions. Matching methods are based on the appearance of features, which can change as a consequence of variations in camera position and orientation, and in environment illumination. For this reason, several methods have been developed in recent years to provide feature descriptors that are robust (ideally invariant) to such variations, e.g. the Scale-Invariant Feature Transform (SIFT), Affine SIFT, Hessian-affine and Harris-affine detectors, and Maximally Stable Extremal Regions (MSER). This work deals with the integration of information provided by the INS into the feature matching procedure: a previously developed navigation algorithm is used to constantly estimate the device position and orientation. This information is then exploited to estimate the transformation of feature regions between two camera views, which allows regions from different images, associated with the same feature, to be compared as if seen from the same point of view, significantly easing the comparison of feature characteristics and, consequently, improving matching. SIFT-like descriptors are used in order to ensure good matching results in the presence of illumination variations and to compensate for the approximations related to the estimation
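
    The INS-based prediction step can be illustrated for the simplest case of a pure camera rotation, where the two views are related by the homography H = K R K⁻¹. The intrinsics and rotation below are made-up values, and this is a sketch of the geometric prediction only, not the authors' full algorithm:

```python
import math

def matmul(A, B):
    """3x3 matrix product on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def K_matrix(f, cx, cy):
    return [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]

def K_inverse(f, cx, cy):
    return [[1.0 / f, 0.0, -cx / f], [0.0, 1.0 / f, -cy / f], [0.0, 0.0, 1.0]]

def rot_z(angle):
    """In-plane (roll) rotation, the simplest INS-reported rotation."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def predict_point(u, v, R, f, cx, cy):
    """Map pixel (u, v) from view 1 into view 2 under the homography
    H = K R K^-1 induced by a pure camera rotation R (e.g. from the INS)."""
    H = matmul(matmul(K_matrix(f, cx, cy), R), K_inverse(f, cx, cy))
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# A 90-degree roll about the principal axis rotates pixels about (cx, cy).
u2, v2 = predict_point(510.0, 400.0, rot_z(math.pi / 2), 800.0, 500.0, 400.0)
print(round(u2, 3), round(v2, 3))  # -> 500.0 410.0
```

    Warping the feature region with the same H before computing descriptors is what lets two views of a feature be compared "as seen by the same point of view".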

  2. Shape-matching soft mechanical metamaterials.

    Science.gov (United States)

    Mirzaali, M J; Janbaz, S; Strano, M; Vergani, L; Zadpoor, A A

    2018-01-17

    Architectured materials with rationally designed geometries could be used to create mechanical metamaterials with unprecedented or rare properties and functionalities. Here, we introduce "shape-matching" metamaterials where the geometry of cellular structures comprising auxetic and conventional unit cells is designed so as to achieve a pre-defined shape upon deformation. We used computational models to forward-map the space of planar shapes to the space of geometrical designs. The validity of the underlying computational models was first demonstrated by comparing their predictions with experimental observations on specimens fabricated with indirect additive manufacturing. The forward-maps were then used to devise the geometry of cellular structures that approximate the arbitrary shapes described by random Fourier series. Finally, we show that the presented metamaterials could match the contours of three real objects including a scapula model, a pumpkin, and a Delft Blue pottery piece. Shape-matching materials have potential applications in soft robotics and wearable (medical) devices.

  3. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension to work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20- member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation and enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
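
    The EnKF analysis step mentioned above can be sketched for a scalar state: each ensemble member is pulled toward a perturbed copy of the observation by a Kalman gain built from the ensemble spread. The numbers below are invented and unrelated to the SRNL releases:

```python
import random

def enkf_update(ensemble, obs, obs_var, rng):
    """One scalar Ensemble Kalman Filter analysis step (perturbed-observation form).

    Each member is nudged toward a perturbed copy of the observation,
    weighted by the Kalman gain built from the ensemble forecast variance.
    """
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    gain = var / (var + obs_var)                            # Kalman gain
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x) for x in ensemble]

rng = random.Random(0)
# Hypothetical 20-member forecast of a tracer concentration; observation near 5.0.
forecast = [rng.gauss(3.0, 1.0) for _ in range(20)]
analysis = enkf_update(forecast, 5.0, 0.25, rng)
```

    After the update, the analysis ensemble mean sits between the forecast mean and the observation, closer to whichever is more certain; the real filter does this jointly over the full model state.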

  4. ECMOR 4. 4th European conference on the mathematics of oil recovery. Topic E: History match and recovery optimization. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1994-01-01

    This report collects the proceedings of a conference on the mathematics of oil recovery, with a focus on history matching and recovery optimization. The topics of the proceedings are as follows: calculating optimal parameters for history matching; a new technique to improve the efficiency of history matching of full-field models; flow-constrained reservoir characterization using Bayesian inversion; analysis of multi-well pressure transient data; a new approach combining neural networks and simulated annealing for solving petroleum inverse problems; automatic history matching by use of response surfaces and experimental design; and determining the optimum location of a production well in oil reservoirs. Seven papers are included. 108 refs., 45 figs., 12 tabs.

  5. MATCHING IN INFORMAL FINANCIAL INSTITUTIONS.

    Science.gov (United States)

    Eeckhout, Jan; Munshi, Kaivan

    2010-09-01

    This paper analyzes an informal financial institution that brings heterogeneous agents together in groups. We analyze decentralized matching into these groups, and the equilibrium composition of participants that consequently arises. We find that participants sort remarkably well across the competing groups, and that they re-sort immediately following an unexpected exogenous regulatory change. These findings suggest that the competitive matching model might have applicability and bite in other settings where matching is an important equilibrium phenomenon. (JEL: O12, O17, G20, D40).

  6. Novel method for the production of spin-aligned RI beams in projectile fragmentation reaction with the dispersion matching technique

    Energy Technology Data Exchange (ETDEWEB)

    Ichikawa, Y., E-mail: yuichikawa@phys.titech.ac.jp [Tokyo Institute of Technology, Department of Physics (Japan); Ueno, H. [RIKEN Nishina Center (Japan); Ishii, Y. [Tokyo Institute of Technology, Department of Physics (Japan); Furukawa, T. [Tokyo Metropolitan University, Department of Physics (Japan); Yoshimi, A. [Okayama University, Research Core for Extreme Quantum World (Japan); Kameda, D.; Watanabe, H.; Aoi, N. [RIKEN Nishina Center (Japan); Asahi, K. [Tokyo Institute of Technology, Department of Physics (Japan); Balabanski, D. L. [Bulgarian Academy of Sciences, Institute for Nuclear Research and Nuclear Energy (Bulgaria); Chevrier, R.; Daugas, J. M. [CEA, DAM, DIF (France); Fukuda, N. [RIKEN Nishina Center (Japan); Georgiev, G. [CSNSM, IN2P3-CNRS, Universite Paris-sud (France); Hayashi, H.; Iijima, H. [Tokyo Institute of Technology, Department of Physics (Japan); Inabe, N. [RIKEN Nishina Center (Japan); Inoue, T. [Tokyo Institute of Technology, Department of Physics (Japan); Ishihara, M.; Kubo, T. [RIKEN Nishina Center (Japan); and others

    2013-05-15

    A novel method to produce spin-aligned rare-isotope (RI) beams has been developed, namely the two-step projectile fragmentation method combined with a dispersion matching technique. The method was verified in an experiment at the RIKEN RIBF, where an RI beam of {sup 32}Al with a spin alignment of 8(1)% was successfully produced from a primary beam of {sup 48}Ca, with {sup 33}Al as an intermediate nucleus. The figure of merit of the present method was found to be improved by a factor larger than 50 compared with a conventional method employing single-step projectile fragmentation.

  7. The match-mismatch model of emotion processing styles and emotion regulation strategies in fibromyalgia.

    NARCIS (Netherlands)

    Geenen, R.; Ooijen-van der Linden, L. van; Lumley, M.A.; Bijlsma, J.W.J.; Middendorp, H. van

    2012-01-01

    OBJECTIVE: Individuals differ in their style of processing emotions (e.g., experiencing affects intensely or being alexithymic) and their strategy of regulating emotions (e.g., expressing or reappraising). A match-mismatch model of emotion processing styles and emotion regulation strategies is

  8. DEVELOPMENT OF PICTURE CARD GAME MEDIA WITH THE MAKE A MATCH TECHNIQUE FOR GRADE 1 OF PRIMARY SCHOOL

    Directory of Open Access Journals (Sweden)

    Asih Mardati

    2015-07-01

    Abstract The aims of this research were to develop picture card game media with the make a match technique suitable for students, and to determine the effectiveness of the media in thematic-integrative learning for grade 1 primary school students. This is research and development work, carried out in nine stages following the Dick and Carey model. The trial subjects were grade 1 students of Primary School Percobaan 3 Pakem in the 2013/2014 school year: 9 students from class 1B in a small-group test and 29 students from class 1A in a large-group test. Expert assessment showed that the picture card game media with the make a match technique were fit for use in thematic-integrative learning, with an "excellent" rating. The small-group trial received positive responses ("good" criteria). The large-group trial tested the effectiveness and practicality of the product. In terms of student learning results, the mean score rose from 81.41 to 85.12; the increase was significant (p=0.001 at a significance level of 0.005). Regarding practicality of use, teacher observations yielded 95% and student observations 88.75%, both meeting the "highly practical" criterion. Keywords: picture card game media, make a match, thematic-integrative learning.

  9. INFORMATION SYSTEMS AUDIT CURRICULA CONTENT MATCHING

    OpenAIRE

    Vasile-Daniel CARDOȘ; Ildikó Réka CARDOȘ

    2014-01-01

    Financial and internal auditors must cope with the challenge of performing their mission in technology enhanced environment. In this article we match the information technology description found in the International Federation of Accountants (IFAC) and the Institute of Internal Auditors (IIA) curricula against the Model Curriculum issued by the Information Systems Audit and Control Association (ISACA). By reviewing these three curricula, we matched the content in the ISACA Model Curriculum wi...

  10. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  11. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  12. Quantification of intervertebral displacement with a novel MRI-based modeling technique: Assessing measurement bias and reliability with a porcine spine model.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Goubeaux, Craig; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian C

    2017-05-01

    The purpose of this study was to develop a novel magnetic resonance imaging (MRI)-based modeling technique for measuring intervertebral displacements. Here, we present the measurement bias and reliability of the developmental work using a porcine spine model. Porcine lumbar vertebral segments were fitted in a custom-built apparatus placed within an externally calibrated imaging volume of an open-MRI scanner. The apparatus allowed movement of the vertebrae through pre-assigned magnitudes of sagittal and coronal translation and rotation. The induced displacements were imaged with static (T1) and fast dynamic (2D HYCE S) pulse sequences. These images were imported into animation software, in which they formed a background 'scene'. Three-dimensional models of vertebrae were created using static axial scans from the specimen and then transferred into the animation environment, where the user manually moved the models (rotoscoping) to perform model-to-'scene' matching, fitting the models to their image silhouettes, and assigned anatomical joint axes to the motion segments. The animation protocol quantified the experimental translation and rotation displacements between the vertebral models. Accuracy of the technique was calculated as 'bias' using a linear mixed effects model, average percentage error and root mean square errors. Between-session reliability was examined by computing intra-class correlation coefficients (ICC) and coefficients of variation (CV). For translation trials, a constant bias (β0) of 0.35 (±0.11) mm was detected for the 2D HYCE S sequence (p=0.01). The model did not demonstrate significant additional bias with each mm increase in experimental translation (β1 = 0.01 mm; p=0.69). Using the T1 sequence for the same assessments did not significantly change the bias (p>0.05). ICC values for the T1 and 2D HYCE S pulse sequences were 0.98 and 0.97, respectively. For rotation trials, a constant bias (

  13. A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions

    International Nuclear Information System (INIS)

    Griffond, J; Soulard, O; Souffland, D

    2010-01-01

    To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.

  14. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    Science.gov (United States)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied in large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term based on GCPs is formulated by a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to bring out satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and behaves excellently on aerial images.
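A GCP data term of the kind described above might be sketched as follows; the function name, the separable spatial/disparity Gaussian form, and the parameter values are illustrative assumptions, not the paper's exact mixture formulation:

```python
import math

def gcp_cost(pixel, disparity, gcps, sigma_s=25.0, sigma_d=2.0):
    """Hypothetical GCP data term: each ground control point (gx, gy, gd)
    pulls nearby pixels toward its disparity gd, with a Gaussian falloff in
    both image distance and disparity difference. Candidate disparities that
    agree with nearby GCPs incur low cost."""
    cost = 0.0
    for (gx, gy, gd) in gcps:
        spatial = math.exp(-((pixel[0] - gx) ** 2 + (pixel[1] - gy) ** 2)
                           / (2 * sigma_s ** 2))
        cost += spatial * (1.0 - math.exp(-(disparity - gd) ** 2
                                          / (2 * sigma_d ** 2)))
    return cost
```

Adding such a term to the per-pixel SGM cost volume biases the disparity assignment toward values consistent with nearby control points.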

  15. An Improved Ant Colony Matching by Using Discrete Curve Evolution

    OpenAIRE

    Saadi, Younes; Sari, Eka; Herawan, Tutut

    2014-01-01

    Part 1: Information & Communication Technology-EurAsia Conference 2014, ICT-EurAsia 2014; International audience; In this paper we present an improved Ant Colony Optimization (ACO) for contour matching, which can be used to match 2D shapes. Discrete Curve Evolution (DCE) technique is used to simplify the extracted contour. In order to find the best correspondence between shapes, the match process is formulated as a Quadratic Assignment Problem (QAP) and resolved by using Ant Colony Optimizati...

  16. Rapid matching of stereo vision based on fringe projection profilometry

    Science.gov (United States)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

    As the most important core part of stereo vision, stereo matching technology still presents many problems to solve. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: corresponding points in the left and right camera images share the same phase, which enables rapid stereo matching. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method not only broadens the application fields of optical 3D measurement technology and enriches knowledge in the field, but also offers the potential for a commercialized measurement system in practical projects, which has very important scientific research significance and economic value.
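The phase-equality correspondence idea can be sketched along one epipolar row as follows; the function, tolerance, and nearest-phase search are illustrative (real systems interpolate to sub-pixel precision and handle phase unwrapping):

```python
def match_by_phase(left_phase_row, right_phase_row, tol=0.05):
    """For each pixel in the left epipolar row, find the right-row pixel
    whose unwrapped fringe phase is closest; equal phase identifies the
    same surface point. Pixels with no phase within tol are left unmatched."""
    matches = []
    for i, phi in enumerate(left_phase_row):
        j, best = min(enumerate(right_phase_row), key=lambda t: abs(t[1] - phi))
        if abs(best - phi) <= tol:
            matches.append((i, j))
    return matches
```

Because phase varies monotonically across the fringes, this lookup replaces the expensive window-based correlation search of conventional stereo matching.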

  17. PREDICTING THE MATCH OUTCOME IN ONE DAY INTERNATIONAL CRICKET MATCHES, WHILE THE GAME IS IN PROGRESS

    Directory of Open Access Journals (Sweden)

    Michael Bailey

    2006-12-01

    Full Text Available Millions of dollars are wagered on the outcome of one day international (ODI) cricket matches, with a large percentage of bets occurring after the game has commenced. Using match information gathered from all 2200 ODI matches played prior to January 2005, a range of variables that could independently explain statistically significant proportions of variation associated with the predicted run totals and match outcomes were created. Such variables include home ground advantage, past performances, match experience, performance at the specific venue, performance against the specific opposition, experience at the specific venue and current form. Using a multiple linear regression model, prediction variables were numerically weighted according to statistical significance and used to predict the match outcome. With the use of the Duckworth-Lewis method to determine resources remaining at the end of each completed over, the predicted run total of the batting team could be updated to provide a more accurate prediction of the match outcome. By applying this prediction approach to a holdout sample of matches, the efficiency of the "in the run" wagering market could be assessed. Preliminary results suggest that the market is prone to overreact to events occurring throughout the course of the match, thus creating brief inefficiencies in the wagering market.
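The over-by-over update described above can be sketched as a blend of the pre-match regression prediction with a Duckworth-Lewis style extrapolation of the live score; the function name and the linear weighting are illustrative assumptions, not the paper's fitted model:

```python
def updated_prediction(pre_match_pred, runs_so_far, resources_remaining_pct):
    """Update the predicted final run total during an innings.
    resources_remaining_pct is the Duckworth-Lewis resource percentage
    still available to the batting team (100 at the start, 0 at the end)."""
    resources_used = 100.0 - resources_remaining_pct
    if resources_used == 0:
        return pre_match_pred            # no play yet: keep the prior
    dl_extrapolation = runs_so_far * 100.0 / resources_used
    w = resources_used / 100.0           # trust the live score more as resources are consumed
    return w * dl_extrapolation + (1 - w) * pre_match_pred
```

Early in the innings the prior dominates; as resources are consumed, the prediction converges to the Duckworth-Lewis extrapolation of the actual score.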

  18. History Matching of 4D Seismic Data Attributes using the Ensemble Kalman Filter

    KAUST Repository

    Ravanelli, Fabio M.

    2013-05-01

    One of the most challenging tasks in the oil industry is the production of reliable reservoir forecast models. Because of different sources of uncertainties, the numerical models employed are often only crude approximations of the reality. This problem is tackled by conditioning the model with production data through data assimilation. This process is known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use of time-lapse seismic data and automated history matching software tools. One of the most promising data assimilation techniques employed in the oil industry is the ensemble Kalman filter (EnKF) because of its ability to deal with highly non-linear models, its low computational cost, and its easy computational implementation compared with other methods. A synthetic reservoir model was used in a history matching study designed to predict the peak production, allowing decision makers to properly plan field development actions. If only production data is assimilated, a total of 12 years of historical data is required to properly characterize the production uncertainty and consequently the correct moment to take actions and decommission the field. However, if time-lapse seismic data is available, this conclusion can be reached 4 years in advance due to the additional fluid displacement information obtained with the seismic data. Production data provides geographically sparse data, in contrast with seismic data, which is sparse in time. Several types of seismic attributes were tested in this study. Poisson’s ratio proved to be the most sensitive attribute to fluid displacement. In practical applications, however, the use of this attribute is usually avoided due to the poor quality of the data. Seismic impedance tends to be more reliable. Finally, a new conceptual idea was proposed to obtain time-lapse information for a history matching study. The use of crosswell time-lapse seismic
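The EnKF analysis step at the heart of such a study can be sketched for a scalar state and scalar observation; the function and the identity observation operator in the example are illustrative (a real history match updates large gridded permeability fields against many production and seismic measurements):

```python
def enkf_update(ensemble, observations, obs_error_var, h):
    """One EnKF analysis step for a scalar state (e.g. a permeability
    multiplier). ensemble: list of forecast states; observations: one
    (perturbed) observed value per member; h: state-to-observation map.
    The Kalman gain is built from ensemble sample covariances."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_mean = sum(ensemble) / n
    hx_mean = sum(hx) / n
    cov_x_hx = sum((x - x_mean) * (y - hx_mean)
                   for x, y in zip(ensemble, hx)) / (n - 1)
    var_hx = sum((y - hx_mean) ** 2 for y in hx) / (n - 1)
    gain = cov_x_hx / (var_hx + obs_error_var)
    return [x + gain * (d - y) for x, d, y in zip(ensemble, observations, hx)]
```

With an accurate observation (small error variance), the analysis pulls every member toward the data while the ensemble spread encodes the remaining uncertainty.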

  19. Equilibrium and matching under price controls

    NARCIS (Netherlands)

    Herings, P.J.J.

    2015-01-01

    The paper considers a one-to-one matching with contracts model in the presence of price controls. This set-up contains two important streams in the matching literature, those with and those without monetary transfers, as special cases and allows for intermediate cases with some restrictions on the

  20. Elastic Minutiae Matching by Means of Thin-Plate Spline Models

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2002-01-01

    This paper presents a novel minutiae matching method that deals with elastic distortions by normalizing the shape of the test fingerprint with respect to the template. The method first determines possible matching minutiae pairs by means of comparing local neighborhoods of the minutiae. Next a

  1. Matching of experimental and statistical-model thermonuclear reaction rates at high temperatures

    International Nuclear Information System (INIS)

    Newton, J. R.; Longland, R.; Iliadis, C.

    2008-01-01

    We address the problem of extrapolating experimental thermonuclear reaction rates toward high stellar temperatures (T>1 GK) by using statistical model (Hauser-Feshbach) results. Reliable reaction rates at such temperatures are required for studies of advanced stellar burning stages, supernovae, and x-ray bursts. Generally accepted methods are based on the concept of a Gamow peak. We follow recent ideas that emphasized the fundamental shortcomings of the Gamow peak concept for narrow resonances at high stellar temperatures. Our new method defines the effective thermonuclear energy range (ETER) by using the 8th, 50th, and 92nd percentiles of the cumulative distribution of fractional resonant reaction rate contributions. This definition is unambiguous and has a straightforward probability interpretation. The ETER is used to define a temperature at which Hauser-Feshbach rates can be matched to experimental rates. This matching temperature is usually much higher compared to previous estimates that employed the Gamow peak concept. We suggest that an increased matching temperature provides more reliable extrapolated reaction rates since Hauser-Feshbach results are more trustworthy the higher the temperature. Our ideas are applied to 21 (p,γ), (p,α), and (α,γ) reactions on A=20-40 target nuclei. For many of the cases studied here, our extrapolated reaction rates at high temperatures differ significantly from those obtained using the Gamow peak concept.
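The percentile definition of the ETER can be sketched directly; the function below is an illustrative reading of the definition (resonance energies paired with their fractional rate contributions), not the authors' code:

```python
def effective_energy_range(energies, contributions):
    """Return the energies at which the cumulative fractional reaction-rate
    contribution first reaches the 8th, 50th and 92nd percentiles, defining
    the effective thermonuclear energy range (ETER)."""
    total = sum(contributions)
    cum = 0.0
    bounds = {}
    for e, c in sorted(zip(energies, contributions)):
        cum += c / total
        for p in (0.08, 0.50, 0.92):
            if p not in bounds and cum >= p:
                bounds[p] = e
    return bounds[0.08], bounds[0.50], bounds[0.92]
```

Unlike a Gamow peak estimate, this range is driven entirely by where the resonances actually contribute to the rate at the temperature of interest.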

  2. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
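Tie point candidates from a SIFT-style detector are conventionally filtered with Lowe's nearest-neighbour ratio test; a minimal, dependency-free sketch of that filtering step (descriptor vectors as plain lists) might look like this:

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B, keeping a match only when
    the nearest neighbour is clearly closer than the second nearest
    (Lowe's ratio test, compared on squared distances)."""
    def d2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((d2(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < (ratio ** 2) * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

In production pipelines the brute-force loop is replaced by a k-d tree or GPU search, but the acceptance criterion is the same.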

  3. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is essentially a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used for generating virtual 3D city models: the first uses conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; and the third uses terrestrial images through close-range photogrammetry with DSMs and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques divide into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, the paper presents the conclusions of this research, together with a short justification and analysis and the present trends in 3D city modeling. It gives an overview of the techniques for generating virtual 3D city models using Geomatics and of the applications of such models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models.
Photo-realistic, Scalable, Geo-referenced virtual 3

  4. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources.

    Science.gov (United States)

    Sudhir, Dass; Bandyopadhyay, M; Chakraborty, A

    2016-02-01

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly at high power where the plasma load may vary for different reasons (e.g., pressure and power), online tuning of the impedance matching circuit and remote plasma density estimation are very useful. In some cases, power probes cannot be incorporated in the ion source design for plasma characterization because of remote interfaces, radio activation, and maintenance issues. Therefore, more remote schemes for characterization and impedance matching are envisaged. Two such schemes have been suggested by the same authors in this regard, based on an air core transformer model of the inductively coupled plasma (ICP) [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. To capture the influence of the RF field interaction with the plasma in determining its impedance, the physics code HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)] is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.

  5. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

    Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...

  6. IMLS-SLAM: scan-to-model matching based on 3D data

    OpenAIRE

    Deschaud, Jean-Emmanuel

    2018-01-01

    The Simultaneous Localization And Mapping (SLAM) problem has been well studied in the robotics community, especially using mono, stereo cameras or depth sensors. 3D depth sensors, such as Velodyne LiDAR, have proved in the last 10 years to be very useful to perceive the environment in autonomous driving, but few methods exist that directly use these 3D data for odometry. We present a new low-drift SLAM algorithm based only on 3D LiDAR data. Our method relies on a scan-to-model matching framew...

  7. Matching theory

    CERN Document Server

    Plummer, MD

    1986-01-01

    This study of matching theory deals with bipartite matching, network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.
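Among the algorithmic approaches the book surveys, the simplest for the bipartite case is Kuhn's augmenting-path method; a compact sketch (not taken from the book, which presents the theory rather than code) is:

```python
def maximum_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum matching in a bipartite
    graph. adj[u] lists the right-side vertices adjacent to left vertex u.
    Returns (matching size, match_right) where match_right[v] is the left
    vertex matched to right vertex v, or -1 if v is unmatched."""
    match_right = [-1] * n_right

    def try_augment(u, seen):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = sum(try_augment(u, set()) for u in range(len(adj)))
    return size, match_right
```

The Edmonds blossom algorithm discussed in the book extends the same augmenting-path idea to non-bipartite graphs by shrinking odd cycles.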

  8. Poor textural image tie point matching via graph theory

    Science.gov (United States)

    Yuan, Xiuxiao; Chen, Shiyu; Yuan, Wei; Cai, Yang

    2017-07-01

    Feature matching aims to find corresponding points to serve as tie points between images. Robust matching is still a challenging task when input images are characterized by low contrast or contain repetitive patterns, occlusions, or homogeneous textures. In this paper, a novel feature matching algorithm based on graph theory is proposed. This algorithm integrates both geometric and radiometric constraints into an edge-weighted (EW) affinity tensor. Tie points are then obtained by high-order graph matching. Four pairs of poor textural images covering forests, deserts, bare lands, and urban areas are tested. For comparison, three state-of-the-art matching techniques, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), and features from accelerated segment test (FAST), are also used. The experimental results show that the matching recall obtained by SIFT, SURF, and FAST varies from 0 to 35% in different types of poor textures. However, through the integration of both geometry and radiometry and the EW strategy, the recall obtained by the proposed algorithm is better than 50% in all four image pairs. The better matching recall improves the number of correct matches, dispersion, and positional accuracy.

  9. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
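The first step the paper describes, adding a continuation to the matching semantics, can be illustrated with a small continuation-passing matcher; the tuple-based regex representation is an assumption for the sketch, and this backtracking version precedes the pointer and lockstep refinements derived in the paper:

```python
def matches(re, s):
    """Continuation-passing regular-expression matcher. Regexes are tuples:
    ('chr', c), ('seq', r1, r2), ('alt', r1, r2), ('star', r).
    The continuation k receives the next input position to match from."""
    def m(re, i, k):
        tag = re[0]
        if tag == 'chr':
            return i < len(s) and s[i] == re[1] and k(i + 1)
        if tag == 'seq':
            return m(re[1], i, lambda j: m(re[2], j, k))
        if tag == 'alt':
            return m(re[1], i, k) or m(re[2], i, k)
        if tag == 'star':
            # try the body once (requiring progress, to avoid looping
            # on nullable bodies), else skip the star entirely
            return m(re[1], i, lambda j: j > i and m(re, j, k)) or k(i)
        raise ValueError(tag)
    return m(re, 0, lambda i: i == len(s))
```

Thompson's lockstep construction, which the paper arrives at later, trades this depth-first backtracking for breadth-first simulation of all alternatives at once.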

  10. A Condition Number for Non-Rigid Shape Matching

    KAUST Repository

    Ovsjanikov, Maks

    2011-08-01

    © 2011 The Author(s). Despite the large amount of work devoted in recent years to the problem of non-rigid shape matching, practical methods that can successfully be used for arbitrary pairs of shapes remain elusive. In this paper, we study the hardness of the problem of shape matching, and introduce the notion of the shape condition number, which captures the intuition that some shapes are inherently more difficult to match against than others. In particular, we make a connection between the symmetry of a given shape and the stability of any method used to match it while optimizing a given distortion measure. We analyze two commonly used classes of methods in deformable shape matching, and show that the stability of both types of techniques can be captured by the appropriate notion of a condition number. We also provide a practical way to estimate the shape condition number and show how it can be used to guide the selection of landmark correspondences between shapes. Thus we shed some light on the reasons why general shape matching remains difficult and provide a way to detect and mitigate such difficulties in practice.

  11. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  12. Using crosswell data to enhance history matching

    KAUST Repository

    Ravanelli, Fabio M.; Hoteit, Ibrahim

    2014-01-01

    of the reality. This problem is mitigated by conditioning the model with data through data assimilation, a process known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use

  13. Constructed-Response Matching to Sample and Spelling Instruction.

    Science.gov (United States)

    Dube, William V.; And Others

    1991-01-01

    This paper describes a computer-based spelling program grounded in programed instructional techniques and using constructed-response matching-to-sample procedures. Following use of the program, two mentally retarded men successfully spelled previously misspelled words. (JDD)

  14. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2004-08-31

    In the probabilistic approach for history matching, the information from the dynamic data is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, fluid properties, well configuration, flow constraints on wells, etc. This implies that the probabilistic approach should update different regions of the reservoir in different ways, which necessitates delineation of multiple reservoir domains in order to increase the accuracy of the approach. The research focuses on a probabilistic approach to integrate dynamic data that ensures consistency between reservoir models developed from one stage to the next. The algorithm relies on efficient parameterization of the dynamic data integration problem and permits rapid assessment of the updated reservoir model at each stage. The report also outlines various domain decomposition schemes from the perspective of increasing the accuracy of the probabilistic approach to history matching. Research progress in three important areas of the project is discussed: (1) validation and testing of the probabilistic approach to incorporating production data in reservoir models; (2) development of a robust scheme for identifying reservoir regions that will result in a more robust parameterization of the history matching process; and (3) testing commercial simulators for parallel capability and development of a parallel algorithm for history matching.

  15. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    constraints, followed by a refinement course with similarity constraint and robust checking. A series of temporal Single-Lens Reflex (SLR) and High-Speed Camera (HSC) stereo images captured during the simulated landslide experiment performed on the campus of Tongji University, Shanghai, were employed to illustrate the proposed method, and dense and reliable image matching results were obtained. Finally, a series of temporal Digital Surface Models (DSMs) of the landslide process were constructed using the close-range photogrammetry technique, followed by a discussion of the landslide volume changes and surface elevation changes during the simulation experiment.

  16. Broadband electrical impedance matching for piezoelectric ultrasound transducers.

    Science.gov (United States)

    Huang, Haiying; Paramo, Daniel

    2011-12-01

    This paper presents a systematic method for designing broadband electrical impedance matching networks for piezoelectric ultrasound transducers. The design process involves three steps: 1) determine the equivalent circuit of the unmatched piezoelectric transducer based on its measured admittance; 2) design a set of impedance matching networks using a computerized Smith chart; and 3) establish the simulation model of the matched transducer to evaluate the gain and bandwidth of the impedance matching networks. The effectiveness of the presented approach is demonstrated through the design, implementation, and characterization of impedance matching networks for a broadband acoustic emission sensor. The impedance matching network improved the power of the acquired signal by 9 times.
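Step 2 of the design process, sizing a matching network from the transducer's equivalent impedance, can be illustrated with the simplest case, a single L-section; the formulas are the standard lossless L-match equations, an illustrative special case rather than the paper's Smith-chart procedure or its final broadband network:

```python
import math

def l_match(r_load, x_load, r_source, freq_hz):
    """Size a lossless L-network (series inductor toward the load, shunt
    capacitor across the source) matching a load R+jX to a resistive source
    at one frequency. Assumes r_load < r_source, as is typical for a
    piezoelectric transducer near resonance. Returns (L henries, C farads)."""
    w = 2 * math.pi * freq_hz
    q = math.sqrt(r_source / r_load - 1.0)
    x_series = q * r_load - x_load   # series reactance cancels X and adds qR
    x_shunt = -r_source / q          # capacitive shunt on the source side
    return x_series / w, -1.0 / (w * x_shunt)
```

An L-section matches exactly at only one frequency; the broadband networks the paper designs cascade more elements to flatten the gain across the transducer's operating band.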

  17. Signature detection and matching for document image retrieval.

    Science.gov (United States)

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  18. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    Science.gov (United States)

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
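The core of error-tolerant match calling, comparing a "same individual allowing genotyping error" hypothesis against a "different individuals" hypothesis, can be sketched per locus; the function, its simple per-genotype error model, and the genotype-probability dictionaries are illustrative assumptions rather than the study's full likelihood:

```python
import math

def match_llr(sample1, sample2, genotype_probs, err=0.02):
    """Log-likelihood ratio that two multilocus genotypes come from the same
    individual (allowing a per-locus genotyping error rate err) versus from
    two unrelated individuals (genotype drawn from population frequencies).
    Positive values favour a recapture call."""
    llr = 0.0
    for g1, g2, probs in zip(sample1, sample2, genotype_probs):
        p_same = (1.0 - err) if g1 == g2 else err * probs.get(g2, 0.0)
        p_diff = probs.get(g2, 1e-9)
        llr += math.log(p_same / p_diff)
    return llr
```

Pairwise scores of this kind are what the clustering stage consumes when grouping samples into recapture sets within and across sampling occasions.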

  19. Automated side-chain model building and sequence assignment by template matching.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
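The sequence-alignment step can be sketched as sliding the segment's per-position amino-acid probability matrix along the protein sequence and scoring each offset; this is an illustrative reduction of the Bayesian procedure (RESOLVE also weights by prior alignment odds and keeps only high-confidence matches):

```python
import math

def best_alignment(prob_matrix, sequence):
    """prob_matrix: one dict per main-chain position mapping amino-acid codes
    to the template-matching probability at that position. Returns the offset
    into the sequence with the highest total log-probability."""
    best = (float('-inf'), None)
    n = len(prob_matrix)
    for off in range(len(sequence) - n + 1):
        score = sum(math.log(pos.get(sequence[off + i], 1e-6))
                    for i, pos in enumerate(prob_matrix))
        best = max(best, (score, off))
    return best[1]
```

Once the offset is fixed, each main-chain position acquires a residue identity and the most probable rotamer for that residue can be built into the model.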

  20. Explaining Match Outcome During The Men’s Basketball Tournament at The Olympic Games

    Science.gov (United States)

    Leicht, Anthony S.; Gómez, Miguel A.; Woods, Carl T.

    2017-01-01

    In preparation for the Olympics, coaches and athletes have limited opportunity to interact regularly, so team performance indicators provide important guidance for enhanced match success at the elite level. This study examined the relationship between match outcome and team performance indicators during men’s basketball tournaments at the Olympic Games. Twelve team performance indicators were collated from all men’s teams and matches during the basketball tournament of the 2004-2016 Olympic Games (n = 156). Linear and non-linear analyses examined the relationship between match outcome and team performance indicator characteristics; namely, binary logistic regression and a conditional inference (CI) classification tree. The most parsimonious logistic regression model retained ‘assists’, ‘defensive rebounds’, ‘field-goal percentage’, ‘fouls’, ‘fouls against’, ‘steals’ and ‘turnovers’, and explained most winning observations (93.2%). Match outcome during the men’s basketball tournaments at the Olympic Games was identified by a unique combination of performance indicators. Despite the average model accuracy being marginally higher for the logistic regression analysis, the CI classification tree offered greater practical utility for coaches through its resolution of non-linear phenomena to guide team success. Key points: A unique combination of team performance indicators explained 93.2% of winning observations in men’s basketball at the Olympics. Monitoring of these team performance indicators may provide coaches with the capability to devise multiple game plans or strategies to enhance their likelihood of winning. Incorporation of machine learning techniques with team performance indicators may provide a valuable and strategic approach to explaining patterns within multivariate datasets in sport science. PMID:29238245

  1. Improving the precision of the keyword-matching pornographic text filtering method using a hybrid model.

    Science.gov (United States)

    Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong

    2004-09-01

    With the flooding of pornographic information on the Internet, how to keep people away from that offensive information is becoming one of the most important research areas in network information security. Some applications which can block or filter such information are used. Approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content based filtering technologies will play a more and more important role in filtering systems. Keyword matching is a content based method used widely in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that the precision of the method is not satisfactory, though the recall of the method is rather high. According to the results, a new pornographic text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, has less loss of recall than the single keyword matching method, and has higher precision.
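    The two-stage idea can be sketched as follows: a cheap keyword screen for recall, followed by a stricter reconfirmation check for precision. The reconfirmation callable is a placeholder; the paper's actual reconfirming model is not reproduced here.

```python
def keyword_score(text, keywords):
    """Stage 1: count keyword hits (high recall, low precision)."""
    t = text.lower()
    return sum(t.count(k) for k in keywords)

def filter_text(text, keywords, reconfirm, hit_threshold=1):
    """Two-stage hybrid filter: keyword matching flags candidates, and a
    second check reconfirms before blocking. `reconfirm` is any
    callable(text) -> bool standing in for the reconfirming model."""
    if keyword_score(text, keywords) < hit_threshold:
        return "pass"                                    # stage 1: screen
    return "block" if reconfirm(text) else "pass"        # stage 2: reconfirm
```

    Texts rejected at stage 2 are exactly the false positives that pure keyword matching would have blocked, which is where the precision gain comes from.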

  2. Individuation instructions decrease the Cross-Race Effect in a face matching task

    Directory of Open Access Journals (Sweden)

    2015-09-01

    Conclusions: Individuation instructions are an effective moderator of the CRE even within a face matching paradigm. Since unfamiliar face matching tasks most closely simulate document verification tasks, specifically passport screening, instructional techniques such as these may improve task performance within applied settings of significant practical importance.

  3. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and for specifying execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  4. The Interaction Between Schema Matching and Record Matching in Data Integration

    KAUST Repository

    Gu, Binbin

    2016-09-20

    Schema Matching (SM) and Record Matching (RM) are two necessary steps in integrating multiple relational tables of different schemas, where SM unifies the schemas and RM detects records referring to the same real-world entity. The two processes have been thoroughly studied separately, but little attention has been paid to the interaction of SM and RM. In this work, we find that, even when alternating them in a simple manner, SM and RM can benefit from each other to reach a better integration performance (i.e., in terms of precision and recall). Therefore, combining SM and RM is a promising solution for improving data integration. To this end, we define novel matching rules for SM and RM, respectively; that is, every SM decision is made based on intermediate RM results, and vice versa, such that SM and RM can be performed alternately. The quality of integration is guaranteed by a Matching Likelihood Estimation model and the control of semantic drift, which prevent the effect of mismatch magnification. To reduce the computational cost, we design an index structure based on q-grams and a greedy search algorithm that together reduce the overhead of the interaction by around 90 percent. Extensive experiments on three data collections show that the combination and interaction of SM and RM significantly outperforms previous works that conduct SM and RM separately.
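    The core of a q-gram index is simple to show: candidate record pairs must share at least one q-gram, which prunes most of the quadratic pairwise comparisons. This sketch illustrates only that pruning idea; the paper's index and greedy search are more elaborate.

```python
from collections import defaultdict

def qgrams(s, q=2):
    """Set of q-grams of s, padded so short strings still yield grams."""
    pad = "#" * (q - 1)
    s = pad + s + pad
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def build_index(records, q=2):
    """Inverted index: q-gram -> set of record ids containing it."""
    idx = defaultdict(set)
    for rid, text in records.items():
        for g in qgrams(text, q):
            idx[g].add(rid)
    return idx

def candidates(query, idx, q=2):
    """Records sharing at least one q-gram with the query string."""
    out = set()
    for g in qgrams(query, q):
        out |= idx.get(g, set())
    return out
```

    Only the surviving candidates would then be scored by the full (expensive) matching rules.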

  5. Managing employee creativity and health in nursing homes : the moderating role of matching job resources and matching occupational rewards

    NARCIS (Netherlands)

    de Jonge, J.; Gevers, J.M.P.; Dollard, M.F.

    2014-01-01

    Health care staff in nursing homes are facing increasingly high job demands at work, which can have a detrimental impact on their health and work motivation. The Demand-Induced Strain Compensation (DISC) Model offers a theoretical framework to study how matching job resources and matching

  6. A photoacoustic technique to measure the properties of single cells

    Science.gov (United States)

    Strohm, Eric M.; Berndl, Elizabeth S. L.; Kolios, Michael C.

    2013-03-01

    We demonstrate a new technique to non-invasively determine the diameter and sound speed of single cells using a combined ultrasonic and photoacoustic technique. Two cell lines, B16-F1 melanoma cells and MCF7 breast cancer cells, were examined using this technique. Using a 200 MHz transducer, the ultrasound backscatter from a single cell in suspension was recorded. Immediately following, the cell was irradiated with a 532 nm laser and the resulting photoacoustic wave recorded by the same transducer. The melanoma cells contain optically absorbing melanin particles, which facilitated photoacoustic wave generation. MCF7 cells have negligible optical absorption at 532 nm; the cells were permeabilized and stained with trypan blue prior to measurements. The measured ultrasound and photoacoustic power spectra were compared to theoretical equations with the cell diameter and sound speed as variables (the Anderson scattering model for ultrasound, and a thermoelastic expansion model for photoacoustics). The diameter and sound speed were extracted from the models where the spectral shape matched the measured signals. However, the photoacoustic spectrum for the melanoma cell did not match theory, likely because melanin particles are located around the cytoplasm and not within the nucleus. Therefore a photoacoustic finite element model of a cell was developed where the central region was not used to generate a photoacoustic wave. The resulting power spectrum was in better agreement with the measured signal than the thermoelastic expansion model. The MCF7 cell diameter obtained using the spectral matching method was 17.5 μm, similar to the optical measurement of 16 μm, while the melanoma cell diameter obtained was 22 μm, similar to the optical measurement of 21 μm. The sound speed measured from the MCF7 and melanoma cell was 1573 and 1560 m/s, respectively, which is within the range of values published in the literature.
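    The spectral-matching step amounts to a parameter search: pick the (diameter, sound speed) pair whose model spectrum best fits the measured one. A minimal least-squares grid search is sketched below; `model_spectrum` stands in for the Anderson or thermoelastic model, which are assumed external here.

```python
def best_fit_params(measured, model_spectrum, diameters, speeds):
    """Grid search for the (diameter, sound speed) whose model power
    spectrum best matches the measured spectrum in the least-squares
    sense. `model_spectrum(d, c)` must return a spectrum sampled at the
    same frequencies as `measured`."""
    best, best_err = None, float("inf")
    for d in diameters:
        for c in speeds:
            model = model_spectrum(d, c)
            err = sum((m - s) ** 2 for m, s in zip(measured, model))
            if err < best_err:
                best, best_err = (d, c), err
    return best
```

    In practice the fit is driven by the spectral shape (minima positions), which is why the mismatch for melanoma cells motivated the finite element model described above.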

  7. The influence of successive matches on match-running performance during an under-23 international soccer tournament: The necessity of individual analysis.

    Science.gov (United States)

    Varley, Matthew C; Di Salvo, Valter; Modonutti, Mattia; Gregson, Warren; Mendez-Villanueva, Alberto

    2018-03-01

    This study investigated the effects of successive matches on match-running in elite under-23 soccer players during an international tournament. Match-running data was collected using a semi-automated multi-camera tracking system during an international under-23 tournament from all participating outfield players. Players who played 100% of all group stage matches were included (3 matches separated by 72 h, n = 44). Differences in match-running performance between matches were identified using a generalised linear mixed model. There were no clear effects for total, walking, jogging, running, high-speed running and sprinting distance between matches 1 and 3 (effect size (ES); -0.32 to 0.05). Positional analysis found that sprint distance was largely maintained from matches 1 to 3 across all positions. Attackers had a moderate decrease in total, jogging and running distance between matches 1 and 3 (ES; -0.72 to -0.66). Classifying players as increasers or decreasers in match-running revealed that match-running changes are susceptible to individual differences. Sprint performance appears to be maintained over successive matches regardless of playing position. However, reductions in other match-running categories vary between positions. Changes in match-running over successive matches affect individuals differently; thus, players should be monitored on an individual basis.

  8. Effective anisotropy through traveltime and amplitude matching

    KAUST Repository

    Wang, Hui

    2014-08-05

    Introducing anisotropy to seismic wave propagation reveals more realistic physics of our Earth's subsurface as compared to the isotropic assumption. However, wavefield modeling, the engine of seismic inverse problems, in anisotropic media still suffers from computational burdens, in particular with complex anisotropy such as transversely isotropic (TI) and orthorhombic anisotropy. We develop effective isotropic velocity and density models to package the effects of anisotropy such that the wave propagation behavior using these effective models approximates that of the original anisotropic model. We build these effective models through the high-frequency asymptotic approximation based on the eikonal and transport equations. We match the geometrical behavior of the wavefields, given by traveltimes, from the anisotropic and isotropic eikonal equations. This matching yields the effective isotropic velocity that approximates the kinematics of the anisotropic wavefield. Equivalently, we calculate the effective densities by equating the anisotropic and isotropic transport equations. The effective velocities and densities are then fed into the isotropic acoustic variable-density wave equation to obtain cheaper approximations of the anisotropic wavefields. We justify our approach by testing it on an elliptically anisotropic model. The numerical results demonstrate good agreement in both traveltime and amplitude between the anisotropic and effective isotropic wavefields.

  9. Text Character Extraction Implementation from Captured Handwritten Image to Text Conversionusing Template Matching Technique

    Directory of Open Access Journals (Sweden)

    Barate Seema

    2016-01-01

    Full Text Available Images contain various types of useful information that should be extracted whenever required. Various algorithms and methods have been proposed to extract text from a given image, so that users can access the text contained in any image. Variations in text size, style, orientation and alignment, low image contrast, and composite backgrounds all complicate text extraction. An application that extracts and recognizes such text accurately in real time can serve many important applications, such as document analysis, vehicle license plate extraction, and text-based image indexing, many of which have become realities in recent years. To overcome the above problems, we develop an application that converts an image into text using algorithms such as the bounding box, the HSV model, blob analysis, template matching, and template generation.
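    The template matching step can be shown in its simplest form: slide a template over the image and keep the position with the smallest sum of squared differences. This sketch covers only that one stage; the pipeline above also uses bounding boxes, the HSV model and blob analysis before matching.

```python
def match_template(image, template):
    """Return the top-left (row, col) where `template` best matches
    `image` (2-D lists of grayscale values), scored by the sum of
    squared differences -- the most basic template matching criterion."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_err = None, float("inf")
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            err = sum((image[y + i][x + j] - template[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if err < best_err:
                best, best_err = (y, x), err
    return best
```

    A character recognizer would run this against one template per character (generated in the template-generation stage) and pick the best-scoring template.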

  10. Source-to-accelerator quadrupole matching section for a compact linear accelerator

    Science.gov (United States)

    Seidl, P. A.; Persaud, A.; Ghiorso, W.; Ji, Q.; Waldron, W. L.; Lal, A.; Vinayakumar, K. B.; Schenkel, T.

    2018-05-01

    Recently, we presented a new approach for a compact radio-frequency (RF) accelerator structure and demonstrated the functionality of the individual components: acceleration units and focusing elements. In this paper, we combine these units to form a working accelerator structure: a matching section between the ion source extraction grids and the RF-acceleration unit and electrostatic focusing quadrupoles between successive acceleration units. The matching section consists of six electrostatic quadrupoles (ESQs) fabricated using 3D-printing techniques. The matching section enables us to capture more beam current and to match the beam envelope to conditions for stable transport in an acceleration lattice. We present data from an integrated accelerator consisting of the source, matching section, and an ESQ doublet sandwiched between two RF-acceleration units.

  11. A quantitative method for measuring the quality of history matches

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, T.S. [Kerr-McGee Corp., Oklahoma City, OK (United States); Knapp, R.M. [Univ. of Oklahoma, Norman, OK (United States)

    1997-08-01

    History matching can be an efficient tool for reservoir characterization. A "good" history matching job can generate reliable reservoir parameters. However, reservoir engineers are often frustrated when they try to select a "better" match from a series of history matching runs. Without a quantitative measurement, it is always difficult to tell the difference between a "good" and a "better" match. For this reason, we need a quantitative method for testing the quality of matches. This paper presents a method for such a purpose. The method uses three statistical indices to (1) test shape conformity, (2) examine bias errors, and (3) measure magnitude of deviation. The shape conformity test ensures that the shape of a simulated curve matches that of a historical curve. Examining bias errors ensures that model reservoir parameters have been calibrated to those of a real reservoir. Measuring the magnitude of deviation ensures that the difference between the model and the real reservoir parameters is minimized. The method was first tested on a hypothetical model and then applied to published field studies. The results showed that the method can efficiently measure the quality of matches. It also showed that the method can serve as a diagnostic tool for calibrating reservoir parameters during history matching.
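    The three indices can be sketched with common statistical stand-ins: Pearson correlation for shape conformity, mean signed error for bias, and RMSE for magnitude of deviation. These are assumptions for illustration; the paper's exact statistics are not reproduced here.

```python
from math import sqrt

def match_quality(simulated, historical):
    """Return (shape, bias, rmse) for a simulated vs. historical curve:
    shape conformity as Pearson correlation, bias as mean signed error,
    and magnitude of deviation as root-mean-square error."""
    n = len(simulated)
    ms, mh = sum(simulated) / n, sum(historical) / n
    cov = sum((s - ms) * (h - mh) for s, h in zip(simulated, historical))
    vs = sqrt(sum((s - ms) ** 2 for s in simulated))
    vh = sqrt(sum((h - mh) ** 2 for h in historical))
    shape = cov / (vs * vh) if vs and vh else 0.0
    bias = sum(s - h for s, h in zip(simulated, historical)) / n
    rmse = sqrt(sum((s - h) ** 2 for s, h in zip(simulated, historical)) / n)
    return shape, bias, rmse
```

    A run can then be ranked on all three axes at once: shape close to 1, bias close to 0, and RMSE minimized.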

  12. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    Science.gov (United States)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries due to the diffusion of the electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred because the modelling area of interest can be restricted to the target region, and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling area for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using the staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML achieves accuracy comparable to that using the Dirichlet boundary. Furthermore, the modelling algorithm using the CFS-PML offers savings in computational time and memory compared to the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.

  13. Automated side-chain model building and sequence assignment by template matching

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2002-01-01

    A method for automated macromolecular side-chain model building and for aligning the sequence to the map is described. An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer

  14. Detection, modeling and matching of pleural thickenings from CT data towards an early diagnosis of malignant pleural mesothelioma

    Science.gov (United States)

    Chaisaowong, Kraisorn; Kraus, Thomas

    2014-03-01

    Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While an early diagnosis plays the key role in early treatment, and therefore helps to reduce morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems to automatically assess pleural mesothelioma have been reported worldwide. In this paper, an image analysis pipeline to automatically detect pleural thickenings and measure their volume is described. We first delineate automatically the pleural contour in the CT images. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step based on a probabilistic Hounsfield unit model of pleural plaques then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on the volumetry of the 3D model, created by a mesh construction algorithm followed by Laplace-Beltrami eigenfunction expansion surface smoothing. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data is carried out based on semi-automatic lung registration towards the assessment of their growth rate. With these methods, a new computer-assisted diagnosis system is presented in order to assure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma in its early stage.

  15. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    -registration, depth correction, etc.) each geophysical profile was evaluated by matching the core data. Applying traditional geophysical techniques, the best profiles were inverted using the core data creating two-dimensional (2-D) stratigraphic regolith models for each transect, and evaluated using independent validation. Next, in a test of an alternative method borrowed from digital soil mapping, the best preprocessed geophysical profiles were co-registered and stratigraphic models for each property created using multivariate environmental correlation. After independent validation, the qualities of the latest models were compared to the traditionally derived 2-D inverted models. Finally, the best overall stratigraphic models were used in conjunction with local environmental data (e.g. geology, geochemistry, terrain, soils) to create conceptual regolith hillslope models for each transect highlighting important features and processes, e.g. morphology, hydropedology and weathering characteristics. Results are presented with recommendations regarding the use of geophysics in modelling regolith stratigraphy at fine scales.

  16. A Phase Matching, Adiabatic Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Lemery, Francois [Hamburg U.; Flöttmann, Klaus [DESY; Kärtner, Franz [CFEL, Hamburg; Piot, Philippe [Northern Illinois U.

    2017-05-01

    Tabletop accelerators are a thing of the future. Reducing their size will require scaling down electromagnetic wavelengths; however, without correspondingly high field gradients, particles will be more susceptible to phase-slippage – especially at low energy. We investigate how an adiabatically-tapered dielectric-lined waveguide could maintain phase-matching between the accelerating mode and electron bunch. We benchmark our simple model with CST and implement it into ASTRA; finally we provide a first glimpse into the beam dynamics in a phase-matching accelerator.

  17. Covariant diagrams for one-loop matching

    International Nuclear Information System (INIS)

    Zhang, Zhengkang

    2016-10-01

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  18. Covariant diagrams for one-loop matching

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan,450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY),Notkestraße 85, 22607 Hamburg (Germany)

    2017-05-30

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  19. Covariant diagrams for one-loop matching

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-10-15

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  20. Covariant diagrams for one-loop matching

    International Nuclear Information System (INIS)

    Zhang, Zhengkang

    2017-01-01

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  1. Anatomy Ontology Matching Using Markov Logic Networks

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2016-01-01

    Full Text Available The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a class of solutions for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
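    Combining matching strategies can be illustrated with a simple weighted score over a lexical and a structural signal. A real Markov logic network would learn weights over first-order formulas; the fixed weights and the Dice token similarity below are illustrative assumptions only.

```python
def name_sim(a, b):
    """Token-overlap (Dice) similarity between two entity labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 2 * len(ta & tb) / (len(ta) + len(tb))

def combined_score(label1, label2, parents_match, w_name=0.7, w_struct=0.3):
    """Weighted combination of a lexical strategy (label similarity) and a
    structural strategy (whether the parent concepts already match)."""
    return w_name * name_sim(label1, label2) + w_struct * (1.0 if parents_match else 0.0)
```

    Candidate correspondences above a threshold would then form the alignment, with the structural term propagating evidence through the ontology hierarchy.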

  2. Scanner OPC signatures: automatic vendor-to-vendor OPE matching

    Science.gov (United States)

    Renwick, Stephen P.

    2009-03-01

    As 193nm lithography continues to be stretched and the k1 factor decreases, optical proximity correction (OPC) has become a vital part of the lithographer's tool kit. Unfortunately, as is now well known, the design variations of lithographic scanners from different vendors cause them to have slightly different optical-proximity effect (OPE) behavior, meaning that they print features through pitch in distinct ways. This in turn means that their response to OPC is not the same, and that an OPC solution designed for a scanner from Company 1 may or may not work properly on a scanner from Company 2. Since OPC is not inexpensive, that causes trouble for chipmakers using more than one brand of scanner. Clearly a scanner-matching procedure is needed to meet this challenge. Previously, automatic matching has only been reported for scanners of different tool generations from the same manufacturer. In contrast, scanners from different companies have been matched using expert tuning and adjustment techniques, frequently requiring laborious test exposures. Automatic matching between scanners from Company 1 and Company 2 has remained an unsettled problem. We have recently solved this problem and introduce a novel method to perform the automatic matching. The success in meeting this challenge required three enabling factors. First, we recognized the strongest drivers of OPE mismatch and are thereby able to reduce the information needed about a tool from another supplier to that information readily available from all modern scanners. Second, we developed a means of reliably identifying the scanners' optical signatures, minimizing dependence on process parameters that can cloud the issue. Third, we carefully employed standard statistical techniques, checking for robustness of the algorithms used and maximizing efficiency. The result is an automatic software system that can predict an OPC matching solution for scanners from different suppliers without requiring expert intervention.

  3. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, shell, and mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  4. Optimization of technique factors for a silicon diode array full-field digital mammography system and comparison to screen-film mammography with matched average glandular dose

    International Nuclear Information System (INIS)

    Berns, Eric A.; Hendrick, R. Edward; Cutter, Gary R.

    2003-01-01

    Contrast-detail experiments were performed to optimize technique factors for the detection of low-contrast lesions using a silicon diode array full-field digital mammography (FFDM) system under the conditions of a matched average glandular dose (AGD) for different techniques. Optimization was performed for compressed breast thickness from 2 to 8 cm. FFDM results were compared to screen-film mammography (SFM) at each breast thickness. Four contrast-detail (CD) images were acquired on a SFM unit with optimal techniques at 2, 4, 6, and 8 cm breast thicknesses. The AGD for each breast thickness was calculated based on half-value layer (HVL) and entrance exposure measurements on the SFM unit. A computer algorithm was developed and used to determine FFDM beam current (mAs) that matched AGD between FFDM and SFM at each thickness, while varying target, filter, and peak kilovoltage (kVp) across the full range available for the FFDM unit. CD images were then acquired on FFDM for kVp values from 23-35 for a molybdenum-molybdenum (Mo-Mo), 23-40 for a molybdenum-rhodium (Mo-Rh), and 25-49 for a rhodium-rhodium (Rh-Rh) target-filter under the constraint of matching the AGD from screen-film for each breast thickness (2, 4, 6, and 8 cm). CD images were scored independently for SFM and each FFDM technique by six readers. CD scores were analyzed to assess trends as a function of target-filter and kVp and were compared to SFM at each breast thickness. For 2 cm thick breasts, optimal FFDM CD scores occurred at the lowest possible kVp setting for each target-filter, with significant decreases in FFDM CD scores as kVp was increased under the constraint of matched AGD. For 2 cm breasts, optimal FFDM CD scores were not significantly different from SFM CD scores. For 4-8 cm breasts, optimum FFDM CD scores were superior to SFM CD scores. For 4 cm breasts, FFDM CD scores decreased as kVp increased for each target-filter combination. For 6 cm breasts, CD scores decreased slightly as k

  5. 76 FR 5235 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA Internal Match)-Match Number 1014

    Science.gov (United States)

    2011-01-28

    ...; Computer Matching Program (SSA Internal Match)--Match Number 1014 AGENCY: Social Security Administration... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching....C. 552a, as amended, and the provisions of the Computer Matching and Privacy Protection Act of 1988...

  6. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    Science.gov (United States)

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
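The muscle-redundancy problem described above can be illustrated with a minimal sketch: a linear two-muscle toy model (hypothetical curve shapes, and least squares in place of the paper's genetic search) whose overlapping angle-force curves let the strength curve be matched well while the individual muscle parameters remain poorly determined:

```python
import numpy as np

# Toy isometric knee-extension model: two muscles with overlapping
# angle-force curves (hypothetical shapes; the paper's model differs).
angles = np.linspace(30, 120, 19)                      # knee angle (deg)
basis = np.column_stack([
    np.exp(-((angles - 70) / 25) ** 2),                # muscle 1 shape
    np.exp(-((angles - 85) / 25) ** 2),                # muscle 2 shape (overlaps)
])
true_fmax = np.array([900.0, 600.0])                   # "true" max forces (N)
rng = np.random.default_rng(0)
measured = basis @ true_fmax + rng.normal(0, 20, angles.size)  # add noise

# Solve for parameters that best match the measured strength curve
# (least squares suffices for this linear toy; the paper optimizes a
# nonlinear model with a genetic search).
fit_fmax, *_ = np.linalg.lstsq(basis, measured, rcond=None)

curve_rmse = np.sqrt(np.mean((basis @ fit_fmax - measured) ** 2))
param_err = np.abs(fit_fmax - true_fmax)
print(curve_rmse, param_err)
```

Because the two basis curves overlap strongly, quite different parameter pairs produce nearly the same strength curve, so measurement noise shifts the recovered parameters far more than it degrades the curve fit, which is the abstract's central observation.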

  7. Matching the reaction-diffusion simulation to dynamic [18F]FMISO PET measurements in tumors: extension to a flow-limited oxygen-dependent model.

    Science.gov (United States)

    Shi, Kuangyu; Bayer, Christine; Gaertner, Florian C; Astner, Sabrina T; Wilkens, Jan J; Nüsslin, Fridtjof; Vaupel, Peter; Ziegler, Sibylle I

    2017-02-01

    Positron-emission tomography (PET) with hypoxia specific tracers provides a noninvasive method to assess the tumor oxygenation status. Reaction-diffusion models have advantages in revealing the quantitative relation between in vivo imaging and the tumor microenvironment. However, there is no quantitative comparison of the simulation results with the real PET measurements yet. The lack of experimental support hampers further applications of computational simulation models. This study aims to compare the simulation results with a preclinical [ 18 F]FMISO PET study and to optimize the reaction-diffusion model accordingly. Nude mice with xenografted human squamous cell carcinomas (CAL33) were investigated with a 2 h dynamic [ 18 F]FMISO PET followed by immunofluorescence staining using the hypoxia marker pimonidazole and the endothelium marker CD 31. A large data pool of tumor time-activity curves (TAC) was simulated for each mouse by feeding the arterial input function (AIF) extracted from experiments into the model with different configurations of the tumor microenvironment. A measured TAC was considered to match a simulated TAC when the difference metric was below a certain, noise-dependent threshold. As an extension to the well-established Kelly model, a flow-limited oxygen-dependent (FLOD) model was developed to improve the matching between measurements and simulations. The matching rate between the simulated TACs of the Kelly model and the mouse PET data ranged from 0 to 28.1% (on average 9.8%). By modifying the Kelly model to an FLOD model, the matching rate between the simulation and the PET measurements could be improved to 41.2-84.8% (on average 64.4%). Using a simulation data pool and a matching strategy, we were able to compare the simulated temporal course of dynamic PET with in vivo measurements. By modifying the Kelly model to a FLOD model, the computational simulation was able to approach the dynamic [ 18 F]FMISO measurements in the investigated

  8. Technical performance and match-to-match variation in elite football teams.

    Science.gov (United States)

    Liu, Hongyou; Gómez, Miguel-Angel; Gonçalves, Bruno; Sampaio, Jaime

    2016-01-01

    Recent research suggests that match-to-match variation adds important information to performance descriptors in team sports, as it helps measure how players fine-tune their tactical behaviours and technical actions to the extreme dynamical environments. The current study aims to identify the differences in technical performance of players from strong and weak teams and to explore match-to-match variation of players' technical match performance. Performance data of all the 380 matches of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Twenty-one performance-related match actions and events were chosen as variables in the analyses. Players' technical performance profiles were established by unifying count values of each action or event of each player per match into the same scale. Means of these count values of players from Top3 and Bottom3 teams were compared and plotted into radar charts. Coefficient of variation of each match action or event within a player was calculated to represent his match-to-match variation of technical performance. Differences in the variation of technical performances of players across different match contexts (team and opposition strength, match outcome and match location) were compared. All the comparisons were achieved by the magnitude-based inferences. Results showed that technical performances differed between players of strong and weak teams from different perspectives across different field positions. Furthermore, the variation of the players' technical performance is affected by the match context, with effects from team and opposition strength greater than effects from match location and match outcome.
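The match-to-match variation measure used above is the coefficient of variation of an action count across matches; a minimal sketch with hypothetical pass counts:

```python
import numpy as np

# Match-to-match variation of one player's technical action, summarized by
# the coefficient of variation (CV = sd / mean), as in the study.
passes_per_match = np.array([42, 55, 38, 61, 47, 50, 44, 58])  # hypothetical
cv = passes_per_match.std(ddof=1) / passes_per_match.mean()
print(f"match-to-match CV of passes: {cv:.3f}")
```

Computing one CV per action and per player yields the variation profiles that the study then compares across match contexts.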

  9. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  10. Oncoplastic round block technique has comparable operative parameters as standard wide local excision: a matched case-control study.

    Science.gov (United States)

    Lim, Geok-Hoon; Allen, John Carson; Ng, Ruey Pyng

    2017-08-01

    Although oncoplastic breast surgery is used to resect larger tumors with lower re-excision rates compared to standard wide local excision (sWLE), criticisms of oncoplastic surgery include a longer (albeit well-concealed) scar, longer operating time and hospital stay, and increased risk of complications. Round block technique has been reported to be very suitable for patients with relatively smaller breasts and minimal ptosis. We aim to determine if the round block technique will result in operative parameters comparable with sWLE. Breast cancer patients who underwent a round block procedure from 1st May 2014 to 31st January 2016 were included in the study. These patients were then matched for the type of axillary procedure, on a one to one basis, with breast cancer patients who had undergone sWLE from 1st August 2011 to 31st January 2016. The operative parameters between the 2 groups were compared. 22 patients were included in the study. Patient demographics and histologic parameters were similar in the 2 groups. No complications were reported in either group. The mean operating time was 122 and 114 minutes in the round block and sWLE groups, respectively (P=0.64). Length of stay was similar in the 2 groups (P=0.11). Round block patients had better cosmesis and lower re-excision rates. A higher rate of recurrence was observed in the sWLE group. The round block technique has comparable operative parameters to sWLE with no evidence of increased complications. Lower re-excision rate and better cosmesis were observed in the round block patients, suggesting that the round block technique is not only comparable in general, but may have advantages over sWLE in selected cases.

  11. Optimal Quality Strategy and Matching Service on Crowdfunding Platforms

    Directory of Open Access Journals (Sweden)

    Wenqing Wu

    2018-04-01

    Full Text Available This paper develops a crowdfunding platform model incorporating quality and a matching service from the perspective of a two-sided market. It aims to explore how different factors affect the optimal quality threshold and matching service in a crowdfunding context. Two important influential factors are considered simultaneously: the quality threshold for admission and the matching efficiency of the crowdfunding platform. The model also incorporates the characteristics of crowdfunding campaigns. After solving the model by the derivative method, this paper identifies the mechanism by which the parameters influence the optimal quality threshold and matching service. Additionally, it compares the platform's profits in scenarios with and without an exclusion policy. The results demonstrate that excluding low-quality projects is profitable when funder preference for project quality is substantial enough, whereas platform managers would be unwise to impose a quality threshold on crowdfunding projects and charge entrance fees when funder preference for project quality is small.

  12. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    Science.gov (United States)

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation by enforcing sparsity in the correspondence matrix and offer its solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarities descriptors which can better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms compared with several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.
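A minimal sketch of landmark matching posed as a linear program (an LP relaxation of a binary correspondence matrix with unit row sums and at-most-unit column sums; illustrative only, not the paper's exact formulation, which jointly estimates a transformation):

```python
import numpy as np
from scipy.optimize import linprog

# Toy landmark sets: each source landmark should match its perturbed copy.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[0.1, 0.1], [1.1, -0.1], [0.0, 1.2]])
n, m = len(src), len(dst)
# cost[i*m + j]: Euclidean distance between source i and target j
cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2).ravel()

# Each source landmark matched exactly once (row sums = 1) ...
A_eq = np.zeros((n, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1
# ... and each target used at most once (column sums <= 1), keeping C sparse.
A_ub = np.zeros((m, n * m))
for j in range(m):
    A_ub[j, j::m] = 1

res = linprog(cost, A_ub=A_ub, b_ub=np.ones(m),
              A_eq=A_eq, b_eq=np.ones(n), bounds=(0, 1))
C = res.x.reshape(n, m).round().astype(int)   # relaxation is integral here
print(C)
```

The constraint matrix is totally unimodular, so the LP relaxation returns a binary correspondence matrix without any integer programming.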

  13. A Novel Artificial Bee Colony Algorithm Based on Internal-Feedback Strategy for Image Template Matching

    Directory of Open Access Journals (Sweden)

    Bai Li

    2014-01-01

    Full Text Available Image template matching refers to the technique of locating a given reference image over a source image such that they are the most similar. It is a fundamental mission in the field of visual target recognition. In general, there are two critical aspects of a template matching scheme. One is similarity measurement and the other is best-match location search. In this work, we choose the well-known normalized cross correlation model as a similarity criterion. The searching procedure for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm. IF-ABC algorithm is highlighted by its effort to fight against premature convergence. This purpose is achieved through discarding the conventional roulette selection procedure in the ABC algorithm so as to provide each employed bee an equal chance to be followed by the onlooker bees in the local search phase. Besides that, we also suggest efficiently utilizing the internal convergence states as feedback guidance for searching intensity in the subsequent cycles of iteration. We have investigated four ideal template matching cases as well as four actual cases using different searching algorithms. Our simulation results show that the IF-ABC algorithm is more effective and robust for this template matching mission than the conventional ABC algorithm and two state-of-the-art modified ABC algorithms.
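A minimal sketch of the normalized cross-correlation similarity criterion over hypothetical arrays (an exhaustive scan stands in for the paper's IF-ABC search of the best-match location):

```python
import numpy as np

# Normalized cross-correlation (NCC): score at offset (y, x) compares the
# template with the same-size source patch, invariant to brightness offset.
def ncc(patch, template):
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def best_match(source, template):
    # brute-force scan; the paper replaces this with an IF-ABC search
    th, tw = template.shape
    H, W = source.shape
    scores = np.array([[ncc(source[y:y + th, x:x + tw], template)
                        for x in range(W - tw + 1)]
                       for y in range(H - th + 1)])
    y, x = np.unravel_index(scores.argmax(), scores.shape)
    return (int(y), int(x)), scores[y, x]

rng = np.random.default_rng(0)
source = rng.random((30, 30))
template = source[10:15, 12:17].copy()   # plant the template at (10, 12)
loc, score = best_match(source, template)
print(loc, round(score, 3))              # → (10, 12) 1.0
```

An exact copy of the template scores 1.0; population-based searches such as IF-ABC aim to find this peak without evaluating every offset.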

  14. Privacy-Preserving Matching of Spatial Datasets with Protection against Background Knowledge

    DEFF Research Database (Denmark)

    Ghinita, Gabriel; Vicente, Carmen Ruiz; Shang, Ning

    2010-01-01

    should be disclosed. Previous research efforts focused on private matching for relational data, and rely either on space-embedding or on SMC techniques. Space-embedding transforms data points to hide their exact attribute values before matching is performed, whereas SMC protocols simulate complex digital circuits that evaluate the matching condition without revealing anything else other than the matching outcome. However, existing solutions have at least one of the following drawbacks: (i) they fail to protect against adversaries with background knowledge on data distribution, (ii) they compromise privacy by returning large amounts of false positives and (iii) they rely on complex and expensive SMC protocols. In this paper, we introduce a novel geometric transformation to perform private matching on spatial datasets. Our method is efficient and it is not vulnerable to background knowledge attacks. We consider...

  15. Study on electrical impedance matching for broadband ultrasonic transducer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Geon Woo [University of Science and Technology, Daejeon (Korea, Republic of); Kim, Ki Bok [Korea Research Institute of Standards and Science, Center for Safety Measurement, Daejeon (Korea, Republic of); Baek, Kwang Sae [Elache Co., Busan (Korea, Republic of)

    2017-02-15

    Ultrasonic transducers with high resolution and a high resonant frequency are required to detect small defects (less than hundreds of μm) by ultrasonic testing. The resonance frequency and resolution of an ultrasonic transducer are closely related to the thickness of the piezoelectric material, the backing material, and the electrical impedance matching technique. Among these factors, electrical impedance matching plays an important role because it can reduce the loss and reflection of ultrasonic energy caused by differences in electrical impedance between an ultrasonic transducer and an ultrasonic defect detection system. An LC matching circuit is the most frequently used electrical matching method. The electrical impedance of an ultrasonic transducer must be matched to approximately 50 Ω to compensate for the impedance difference between the two connections. In this study, a 15 MHz immersion ultrasonic transducer was fabricated and an LC electrical impedance matching circuit was applied to it to obtain a broadband frequency characteristic.
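A minimal sketch of sizing an L-section LC network for such a transducer, assuming a purely resistive (hypothetical) transducer impedance at resonance; a real transducer is reactive, so measured impedance would replace the assumed value:

```python
import math

# L-section LC network transforming an (assumed resistive) transducer
# impedance at resonance up to the 50-ohm system impedance.
f = 15e6          # transducer centre frequency (Hz)
r_load = 12.0     # hypothetical transducer resistance at resonance (ohm)
r_sys = 50.0      # system/cable impedance (ohm)

q = math.sqrt(r_sys / r_load - 1)      # loaded Q of the L-section
x_series = q * r_load                  # series reactance (inductive)
x_shunt = r_sys / q                    # shunt reactance (capacitive)

L = x_series / (2 * math.pi * f)       # series inductor value
C = 1 / (2 * math.pi * f * x_shunt)    # shunt capacitor value
print(f"L = {L*1e9:.1f} nH, C = {C*1e12:.1f} pF")  # → L = 226.6 nH, C = 377.6 pF
```

The loaded Q of a single L-section fixes its bandwidth, which is why broadband designs tune the section (or cascade sections) around the transducer's operating band.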

  16. Kappa statistic for clustered matched-pair data.

    Science.gov (United States)

    Yang, Zhao; Zhou, Ming

    2014-07-10

    Kappa statistic is widely used to assess the agreement between two procedures in the independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic without within-cluster correlation structure or distributional assumptions. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥0.3), with more pronounced improvement when ρ is further increased. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
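For reference, the ordinary (independent matched-pair) kappa statistic that the paper extends to clustered data can be sketched as follows, with hypothetical counts; the paper's contribution, the cluster-robust variance estimator, is not shown:

```python
# Cohen's kappa for an independent matched-pair (2x2) agreement table.
def cohens_kappa(table):
    # table[i][j]: count of pairs rated i by procedure 1 and j by procedure 2
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(2)) / n            # observed agreement
    pe = sum((sum(table[i]) / n) * (sum(r[i] for r in table) / n)
             for i in range(2))                            # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts: 40 agree-positive, 45 agree-negative, 15 discordant.
print(round(cohens_kappa([[40, 8], [7, 45]]), 3))          # → 0.699
```

When pairs are collected in clusters, this point estimate is still consistent, but its naive variance understates the uncertainty, which is the problem the proposed estimator addresses.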

  17. State of otolaryngology match: has competition increased since the "early" match?

    Science.gov (United States)

    Cabrera-Muffly, Cristina; Sheeder, Jeanelle; Abaza, Mona

    2015-05-01

    To examine fluctuations in supply and demand of otolaryngology residency positions after the shift from an "early match" coordinated by the San Francisco match to a "conventional" matching process through the National Residency Matching Program (NRMP). To determine whether competition among otolaryngology residency positions have changed during this time frame. Database analysis. Matching statistics from 1998 to 2013 were obtained for all first-year residency positions through the NRMP. Matching statistics from 1998 to 2005 were obtained for otolaryngology residency positions through the San Francisco match. Univariate analysis was performed, with a P value less than .05 determined as significant. The number of otolaryngology positions and applicants remained proportional to the overall number of positions and applicants in the NRMP match. Otolaryngology applicants per position and the matching rate of all applicants did not change between the 2 time periods studied. The overall match rate of US seniors applying to otolaryngology did not change, while the match rate of non-US seniors decreased significantly following initiation of the conventional match. There was no significant change in United States Medical Licensing Exam step 1 scores or percentage of unfilled otolaryngology residency positions between the 2 time periods. When comparing the early versus conventional otolaryngology match time periods, the only major change was the decreased percentage of matching among non-US senior applicants. Despite a significant shift in match timing after 2006, the supply, demand, and competitiveness of otolaryngology residency positions have not changed significantly. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  18. Introducing passive matched field acoustic tomography

    International Nuclear Information System (INIS)

    Gasparini, O.; Camporeale, C.; Crise, A.

    1997-01-01

    In acoustic tomography, sea-basin environmental parameters such as temperature profiles and current velocities are derived, when ray propagation models are adopted, from the travel time estimates relative to the identifiable ray paths. The transmitted signals are either single frequency, or impulsive, or intermittent and deterministic. When the wavelength is comparable with the scale lengths present in the propagation scenario, Matched Field Tomography (MFT) is used, entailing the consideration of waveguide modes instead of rays. A new concept in tomography is introduced in the paper, which passively employs the noise emitted by ships of opportunity (cargoes, ferries) as source signals. The passive technique is acoustic-pollution-free, and if a basin is selected in which regular ship traffic occurs, data can be received on a regular schedule, with no transmission cost. A novel array pre-processor for passive tomography is introduced, such that the signal structure at the pre-processor output is nearly the same as that obtainable in the case of single-frequency source signals.

  19. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  20. Cross matching of blood in carcharhiniform, lamniform, and orectolobiform sharks.

    Science.gov (United States)

    Hadfield, Catherine A; Haines, Ashley N; Clayton, Leigh A; Whitaker, Brent R

    2010-09-01

    The transfusion of whole blood in elasmobranchs could provide cardiovascular support following hemorrhage. Since donor and recipient compatibility is not known, a technique was established to allow cross matching of red blood cells and serum in sharks. Cross matching was carried out among 19 individuals from seven species: the nurse shark (Ginglymostoma cirratum), sandbar shark (Carcharhinus plumbeus), sandtiger shark (Carcharias taurus), white-spotted bamboo shark (Chiloscyllium plagiosum), brown-banded bamboo shark (Chiloscyllium punctatum), zebra shark (Stegostoma fasciatum), and spotted wobbegong (Orectolobus maculatus). Negative cross-matches showed no agglutination or hemolysis, suggesting that donor and recipient would be compatible. Cross-matches between conspecifics were all negative (sandbar, sandtiger, nurse, and white-spotted bamboo sharks). All cross-matches between sandbar and sandtiger sharks were also negative. Positive cross-matches consisted of agglutination or hemolysis of red blood cells, suggesting that the donor and recipient would be incompatible. Strong positive reactions occurred, for example, with red blood cells from sandtiger and sandbar sharks and serum from nurse sharks. Cross matching should be carried out in elasmobranchs prior to any blood transfusion.

  1. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...

  2. A simulated rugby match protocol induces physiological fatigue ...

    African Journals Online (AJOL)

    Background: A rugby union game consists of 80 minutes of strenuous exertion. Forwards are required to participate in the arduous activity of scrummaging throughout a game. Objectives: The purpose of this study was to identify whether rugby-match simulated fatigue modified individual scrummaging technique and ...

  3. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines together two stages: first, motion estimation and compensation predict each frame from the neighboring frames, then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is then coded using the matching pursuit algorithm, which decomposes the signal over an appropriately designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
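A minimal one-dimensional sketch of the matching pursuit decomposition (hypothetical spike-plus-step dictionary; the paper's dictionary is bidimensional and anisotropic):

```python
import numpy as np

# Minimal matching pursuit: greedily decompose a signal over a redundant
# dictionary of unit-norm atoms, one atom per iteration.
def matching_pursuit(signal, dictionary, n_atoms):
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        corr = dictionary @ residual          # inner products with residual
        k = int(np.argmax(np.abs(corr)))      # best-correlated atom
        atoms.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]   # peel off its contribution
    return atoms, residual

# Redundant dictionary: 16 identity spikes plus a few unit-norm step atoms.
n = 16
spikes = np.eye(n)
steps = np.array([np.r_[np.ones(k), np.zeros(n - k)] / np.sqrt(k)
                  for k in (4, 8, 12)])
D = np.vstack([spikes, steps])                # 19 atoms in a 16-dim space

signal = 3.0 * D[16] + 1.5 * D[2]             # a step plus a spike
atoms, residual = matching_pursuit(signal, D, n_atoms=2)
print(atoms, np.linalg.norm(residual))
```

The greedy selection picks the step atom first, then the spike; because the two atoms are not orthogonal, a small residual remains, which is characteristic of matching pursuit over redundant dictionaries.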

  4. A Novel Technique for Steganography Method Based on Improved Genetic Algorithm Optimization in Spatial Domain

    Directory of Open Access Journals (Sweden)

    M. Soleimanpour-moghadam

    2013-06-01

    Full Text Available This paper devotes itself to the study of secret message delivery using cover image and introduces a novel steganographic technique based on genetic algorithm to find a near-optimum structure for the pair-wise least-significant-bit (LSB matching scheme. A survey of the related literatures shows that the LSB matching method developed by Mielikainen, employs a binary function to reduce the number of changes of LSB values. This method verifiably reduces the probability of detection and also improves the visual quality of stego images. So, our proposal draws on the Mielikainen's technique to present an enhanced dual-state scoring model, structured upon genetic algorithm which assesses the performance of different orders for LSB matching and searches for a near-optimum solution among all the permutation orders. Experimental results confirm superiority of the new approach compared to the Mielikainen’s pair-wise LSB matching scheme.
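A minimal sketch of the pair-wise LSB matching embedding that the paper builds on (boundary clipping at 0/255 is omitted, and the genetic-algorithm search over embedding orders is not shown):

```python
# Pair-wise LSB matching (after Mielikainen): embed two message bits into a
# pixel pair while changing at most one pixel by +/-1.
def f(x1, x2):
    # binary function whose parity flips when either argument moves by 1
    return (x1 // 2 + x2) & 1

def embed_pair(x1, x2, m1, m2):
    if (x1 & 1) == m1:
        if f(x1, x2) != m2:
            x2 += 1                 # +1 or -1 both flip f; +1 chosen here
    else:
        # flipping LSB(x1) via -1 or +1 also selects the value of f
        x1 = x1 - 1 if f(x1 - 1, x2) == m2 else x1 + 1
    return x1, x2

def extract_pair(x1, x2):
    return x1 & 1, f(x1, x2)

# Round-trip check over all four bit combinations for one sample pair.
for m1 in (0, 1):
    for m2 in (0, 1):
        y1, y2 = embed_pair(100, 57, m1, m2)
        assert extract_pair(y1, y2) == (m1, m2)
        assert abs(y1 - 100) + abs(y2 - 57) <= 1
print("all four bit pairs embedded with at most one +/-1 change")
```

The scheme's appeal is that two bits cost at most one +/-1 change per pixel pair; the paper's genetic algorithm then searches for a pairing order that minimizes total distortion.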

  5. Extension of instance search technique by geometric coding and quantization error compensation

    OpenAIRE

    García Del Molino, Ana

    2013-01-01

    This PFC analyzes two ways of improving video retrieval techniques for the instance search problem. On the one hand, "Pairing Interest Points for a better Signature using Sparse Detector's Spatial Information" allows the Bag-of-Words model to keep some spatial information. On the other, "Study of the Hamming Embedding Signature Symmetry in Video Retrieval" provides binary signatures that refine the matching based on visual words, and aims to find the best way of matching taking into acc...

  6. Impedance-match experiments using high intensity lasers

    International Nuclear Information System (INIS)

    Holmes, N.C.; Trainor, R.J.; Anderson, R.A.; Veeser, L.R.; Reeves, G.A.

    1981-01-01

    The results of a series of impedance-match experiments using copper-aluminum targets irradiated at the Janus Laser Facility are discussed. The results are compared to extrapolations of data obtained at lower pressures using impact techniques. The sources of error are described and evaluated. The potential of lasers for high-accuracy equation-of-state investigations is discussed

  7. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  8. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  9. How convincing is a matching Y-chromosome profile?

    DEFF Research Database (Denmark)

    Andersen, Mikkel Meyer; Balding, David J.

    2017-01-01

    , yet the number of matching relatives is fixed as population size varies, it is typically infeasible to derive population-based match probabilities relevant to a specific crime. We propose a conceptually simple solution, based on a simulation model and software to approximate the distribution...

  10. Comparison of a commercial blood cross-matching kit to the standard laboratory method for establishing blood transfusion compatibility in dogs.

    Science.gov (United States)

    Guzman, Leo Roa; Streeter, Elizabeth; Malandra, Allison

    2016-01-01

    To evaluate the accuracy of a commercial blood transfusion cross-match kit when compared to the standard laboratory method for establishing blood transfusion compatibility. A prospective observational in vitro study performed from July 2009 to July 2013. Private referral veterinary center. Ten healthy dogs, 11 anemic dogs, and 24 previously transfused dogs. None. Forty-five dogs were enrolled in a prospective study in order to compare the standard blood transfusion cross-match technique to a commercial blood transfusion cross-matching kit. These dogs were divided into 3 groups: 10 healthy dogs (control group), 11 anemic dogs in need of a blood transfusion, and 24 sick dogs that had previously been transfused. Thirty-five dogs diagnosed with anemia secondary to multiple disease processes were cross-matched using both techniques. All dogs cross-matched via the kit had a compatible major and minor result, whereas 16 dogs out of 45 (35%) had an incompatible cross-match result when the standard laboratory technique was performed. The average time to perform the commercial kit was 15 minutes, roughly 3 times shorter than the manual cross-match laboratory technique, which averaged 45-50 minutes to complete. While the gel-based cross-match kit is quicker and less technically demanding than standard laboratory cross-match procedures, microagglutination and low-grade hemolysis are difficult to identify using the gel-based kits. This could result in transfusion reactions if the gel-based kits are used as the sole determinant of blood compatibility prior to transfusion. Based on our results, the standard manual cross-match technique remains the gold standard test to determine blood transfusion compatibility. © Veterinary Emergency and Critical Care Society 2016.

  11. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    O. Akcay

    2017-05-01

    Full Text Available A successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform very well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 x 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques were considered: histogram equalization, high-pass filtering and low-pass filtering, to increase the signal-to-noise ratio, sharpen the image, and remove noise, respectively. The pre-processed images were then evaluated using the current feature detection and extraction methods Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF). The obtained results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
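Of the pre-processing steps mentioned, histogram equalization is the simplest to sketch. A minimal pure-Python version for a flattened greyscale image (assuming the image is not constant, so the denominator below is nonzero) could be:

```python
def equalize(pixels, levels=256):
    """Histogram equalization: remap grey levels so the cumulative
    histogram becomes roughly linear, stretching contrast."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first nonzero CDF value
    scale = (levels - 1) / (n - cdf_min)      # assumes a non-constant image
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# A dark 4-level image gets stretched to the full range.
bright = equalize([0, 0, 0, 1], levels=4)
```

Real pipelines would apply this per 2D image (e.g. via a library routine), but the remapping logic is the same.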

  12. Job Searchers, Job Matches and the Elasticity of Matching

    NARCIS (Netherlands)

    Broersma, L.; van Ours, J.C.

    1998-01-01

    This paper stresses the importance of a specification of the matching function in which the measure of job matches corresponds to the measure of job searchers. In many empirical studies on the matching function this requirement has not been fulfilled because it is difficult to find information about

  13. CRISP. Market-oriented online supply-demand matching

    International Nuclear Information System (INIS)

    Kamphuis, I.G.; Kester, J.C.P.; Carlsson, P; Akkermans, H.

    2004-04-01

    Current power distribution systems are operated in a top-down manner. Power production control and price formation take place on a central level on the basis of relatively static data from a data collection and dispatching network with a limited scope and granularity. When incorporating a more considerable fraction of small-scale producers based on, for instance, renewable energy, operation of the distribution grid requires more data to be collected from a more extensive information and data communication network. Furthermore, increased local flows, in the form of two-way communication with distributed computation techniques, enable a more dynamic adaptation of power supply and demand patterns, paving the way to flexible embedding of the hard-to-predict supply of some types of renewable energy sources. DSM programs have been in use in the utility sector for years now. In this document, first, current Demand Side Management (DSM) and Demand Response Resource (DRR) techniques are discussed; then, supply side management, especially in a Distributed Generation (DG) context, is treated. A framework of novel concepts and possible technology directions is presented subsequently, and some preliminary scenarios are shown to illustrate these concepts. An overview of more flexible supply and demand matching schemes is given, essentially based on four distinct types of SDM clusters. It appears that it is possible to fulfil the requirements of these distributed environments in terms of the needed information and communication technology (ICT) if these are paralleled with the expected future penetration of ever-smaller scale data-exchange networks at power customer sites. Agent technology using algorithms from micro-economic market theory offers a promising possibility for managing the complexity of price formation and supply-demand matching in these fine-grained bottom-up controlled distribution networks.
Implication of these technical developments in terms of market and business

  14. Multi-image Matching of Airborne SAR Imagery by SANCC

    Directory of Open Access Journals (Sweden)

    DING Hao

    2015-03-01

    Full Text Available In order to improve the accuracy of SAR matching, a multi-image matching method based on the sum of adaptive normalized cross-correlation (SANCC) is proposed. It effectively utilizes the geometrical and radiometric information of multi-baseline synthetic aperture radar (SAR) images. Firstly, imaging parameters, platform parameters and an approximate digital surface model (DSM) are used to predict the matching line. Secondly, similarity and proximity from Gestalt theory are introduced into SANCC, and SANCC measures of potential matching points along the matching line are calculated. Thirdly, multi-image matching results and object coordinates of matching points are obtained by a winner-take-all (WTA) optimization strategy. The approach has been demonstrated with airborne SAR images acquired by a Chinese airborne SAR system (CASMSAR). The experimental results indicate that the proposed algorithm is effective in providing dense and accurate matching points, reducing the number of mismatches caused by repeated textures, and offering a better solution for matching in poorly textured areas.
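The building block of SANCC is the normalized cross-correlation (NCC) between corresponding intensity windows; SANCC additionally sums such measures over the multi-baseline images and weights them with Gestalt similarity/proximity terms, which are omitted in this rough sketch of the NCC core:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity windows.
    Returns a value in [-1, 1]; 1 means identical up to gain and offset."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den
```

Because NCC subtracts the window means and divides by the window energies, it is insensitive to the linear radiometric differences that are common between SAR acquisitions.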

  15. A Novel Real-Time Reference Key Frame Scan Matching Method

    Directory of Open Access Journals (Sweden)

    Haytham Mohamed

    2017-05-01

    Full Text Available Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan-matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan-matching technique that combines feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key-frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm during the lack of linear features, which is typical of unstructured environments, and switches back to RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, which indicates the potential use of the new algorithm in real-time systems.
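The iterative closest point step that RKF falls back on can be sketched for the 2D case: associate each scan point with its nearest reference point, solve the closed-form 2D rigid alignment, and repeat. This is a bare-bones illustration (no outlier rejection, feature detection, or key-frame logic):

```python
import math

def best_rigid_2d(P, Q):
    """Closed-form 2D rotation + translation minimizing sum |R p + t - q|^2
    over corresponding point pairs (p, q)."""
    n = len(P)
    cpx = sum(p[0] for p in P) / n; cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n; cqy = sum(q[1] for q in Q) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay, bx, by = px - cpx, py - cpy, qx - cqx, qy - cqy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    th = math.atan2(s_sin, s_cos)
    c, s = math.cos(th), math.sin(th)
    return th, cqx - (c * cpx - s * cpy), cqy - (s * cpx + c * cpy)

def icp(P, Q, iters=10):
    """Point-to-point ICP: associate by nearest neighbour, then align."""
    cur = [tuple(p) for p in P]
    for _ in range(iters):
        matched = [min(Q, key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)
                   for x, y in cur]
        th, tx, ty = best_rigid_2d(cur, matched)
        c, s = math.cos(th), math.sin(th)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur

# Demo: recover a known small rotation + translation.
P = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
th0, tx0, ty0 = 0.1, 0.3, -0.2
c0, s0 = math.cos(th0), math.sin(th0)
Q = [(c0 * x - s0 * y + tx0, s0 * x + c0 * y + ty0) for x, y in P]
aligned = icp(P, Q)
```

The nearest-neighbour association step is exactly where the outliers mentioned in the abstract enter, which is why RKF prefers feature-to-feature matching when linear features are available.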

  16. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Science.gov (United States)

    2013-12-05

    ... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS... Privacy Act of 1974 (5 U.S.C. 552a), as amended, this notice announces the renewal of a CMP that CMS plans...

  17. An expert system for automated flavour matching - Prioritizer

    DEFF Research Database (Denmark)

    Silva, Bárbara Santos; Tøstesen, Marie; Petersen, Mikael Agerlin

    2017-01-01

    Flavour matching can be viewed as trying to reproduce a specific flavour. This is a time consuming task and may lead to flavour mixtures that are too complex or too expensive to be commercialized. In order to facilitate the matching, we have developed a new mathematical model, called Prioritizer....

  18. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear-space static data structures … a bound of t_u t_q = Ω(lg^(d-1) n). For ball range searching, we get a lower bound of t_u t_q = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds…

  19. One-loop effective lagrangians after matching

    Energy Technology Data Exchange (ETDEWEB)

    Aguila, F. del; Santiago, J. [Universidad de Granada, Departamento de Fisica Teorica y del Cosmos and CAFPE, Granada (Spain); Kunszt, Z. [ETH Zuerich, Institute for Theoretical Physics, Zuerich (Switzerland)

    2016-05-15

    We discuss the limitations of the covariant derivative expansion prescription advocated to compute the one-loop Standard Model (SM) effective lagrangian when the heavy fields couple linearly to the SM. In particular, one-loop contributions resulting from the exchange of both heavy and light fields must be explicitly taken into account through matching because the proposed functional approach alone does not account for them. We review a simple case with a heavy scalar singlet of charge -1 to illustrate the argument. As two other examples where this matching is needed and this functional method gives a vanishing result, up to renormalization of the heavy sector parameters, we re-evaluate the one-loop corrections to the T-parameter due to a heavy scalar triplet with vanishing hypercharge coupling to the Brout-Englert-Higgs boson and to a heavy vector-like quark singlet of charge 2/3 mixing with the top quark, respectively. In all cases we make use of a new code for matching fundamental and effective theories in models with arbitrary heavy field additions. (orig.)

  20. Meningkatkan Aktivitas Belajar Siswa dengan Menggunakan Model Make A Match Pada Mata Pelajaran Matematika di Kelas V SDN 050687 Sawit Seberang

    Directory of Open Access Journals (Sweden)

    Daitin Tarigan

    2014-06-01

    Full Text Available Abstract: This study aims to determine student learning activity in Mathematics, on the topic of converting fractions to percent and decimal form and vice versa, using the make a match model in grade V of SD Negeri 050687 Sawit Seberang, academic year 2013/2014. This is a classroom action research (PTK) study; the data collection instruments were teacher and student activity observation sheets. Based on the data analysis, in cycle I meeting I the teacher activity score was 82.14 (good) and learning activity was active. The action was continued through cycle II. In meeting II of cycle II, the teacher activity score was 96.42 (very good) and classical learning activity was very active. From these results it can be concluded that the research action succeeded, because the student learning activity indicator and the proportion of students classified as classically active reached 80%. Thus, the make a match model can improve student learning activity in grade V of SD Negeri 050687 Sawit Seberang in Mathematics on the topic of converting fractions to percent and decimal form. Keywords: Make a Match model; student learning activity.

  1. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. … The modelling techniques studied fall into three groups: (1) UML-based modelling techniques, in which both the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact to companies from increased availability of product knowledge and improved control…

  2. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.
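A symmetrized dot pattern maps a 1D vibration signal to mirrored polar points on several symmetric arms, so that waveform changes show up as visible changes in the resulting snowflake-like pattern. A rough sketch, with the arm count, gain and lag as free parameters (the record above does not specify its settings, so these defaults are illustrative):

```python
def sdp_points(signal, arms=6, gain=40.0, lag=1):
    """Symmetrized dot pattern: sample i sets the radius, sample i+lag sets
    the angular offset; each point is mirrored on every symmetric arm."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0                      # guard against constant input
    pts = []
    for i in range(len(signal) - lag):
        r = (signal[i] - lo) / span              # normalized radius in [0, 1]
        dth = gain * (signal[i + lag] - lo) / span   # angular offset (degrees)
        for k in range(arms):
            base = k * 360.0 / arms
            pts.append((r, base + dth))          # arm
            pts.append((r, base - dth))          # mirrored arm ("symmetrized")
    return pts

sig = [0, 1, 2, 3]
pts = sdp_points(sig)
```

Fault diagnosis then reduces to comparing the dot pattern of a measured signal against the per-state template patterns, as in the image-matching step described above.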

  3. Stability Analysis of Positive Polynomial Fuzzy-Model-Based Control Systems with Time Delay under Imperfect Premise Matching

    OpenAIRE

    Li, Xiaomiao; Lam, Hak Keung; Song, Ge; Liu, Fucai

    2017-01-01

    This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which are formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own set of premise membership functions. A sum-of-squares (SOS)-based stability ana...

  4. Automatic enhancement of skin fluorescence localization due to refractive index matching

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2004-07-01

    Fluorescence diagnostic techniques are notable amongst many other optical methods, as they offer high sensitivity and non-invasive measurements of tissue properties. However, the combination of multiple scattering and physical heterogeneity of biological tissues hampers the interpretation of fluorescence measurements. Analysis of the spatial distribution of endogenous and exogenous fluorophore excitation within tissues, and of its contribution to the localization of the detected signal, is essential for many applications. We have developed a novel Monte Carlo technique that gives a graphical perception of how the excitation and the detected fluorescence signal are localized in tissues. Our model takes into account the spatial distribution of fluorophores and their quantum yields. We demonstrate that matching the refractive indices of the ambient medium and the topical skin layer improves spatial localization of the detected fluorescence signal within the tissue. This result is consistent with the recent conclusion that administering biocompatible agents results in higher image contrast.

  5. Transanal pullthrough for Hirschsprung disease: matched case-control comparison of Soave and Swenson techniques.

    Science.gov (United States)

    Nasr, Ahmed; Haricharan, Ramanath N; Gamarnik, Julie; Langer, Jacob C

    2014-05-01

    Both the Swenson and the Soave procedures have been adapted to a transanal approach. The purpose of this study was to compare outcomes following the transanal Swenson and Soave procedures using a matched case control analysis. A retrospective chart review was performed to identify all transanal Soave and Swenson pullthroughs done at 2 tertiary care children's hospitals between 2000 and 2010. Patients were matched for gestational age, mean weight at time of the operation, level of aganglionosis, and presence of co-morbidities. Student's t-test and chi-squared analysis were performed. Fifty-four patients (Soave 27, Swenson 27) had adequate data for matching and analysis. Mean follow-up was 4±1.6 years and 3.2 ±2.7 years for the Soave and Swenson groups, respectively. No significant differences in mean operating time (Soave:191±55, Swenson:167±61 min, p=0.6), overall hospital stay (6±4 vs 7.8±5 days, p=0.7), and number with intra-operative complications (3 vs 4, p=1.0), post-operative obstructive symptoms (6 vs 9, p=0.5), enterocolitis episodes (4 vs 4, p=1.0), or fecal incontinence (0 vs 2, p=0.4) were noted. After controlling for potential confounders, there were no significant differences in the short and intermediate term outcome between transanal Soave and transanal Swenson pullthrough procedures. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. 
Qualitatively, correct
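The match-by-fingerprint idea above can be illustrated with a deliberately crude descriptor; the actual system uses a 37-dimensional filter-bank response, but any compact vector compared by Euclidean distance follows the same pattern. Everything below (the 3-number descriptor, the toy images) is invented for illustration:

```python
def texture_descriptor(img):
    """Crude 3-number appearance fingerprint of a greyscale image
    (list of rows): mean, spread, and horizontal gradient energy."""
    flat = [p for row in img for p in row]
    n = len(flat)
    mean = sum(flat) / n
    std = (sum((p - mean) ** 2 for p in flat) / n) ** 0.5
    grad = sum(abs(row[i + 1] - row[i])
               for row in img for i in range(len(row) - 1)) / n
    return (mean, std, grad)

def best_match(query, database):
    """Return the database key whose descriptor is nearest (L2) to the query's."""
    qd = texture_descriptor(query)
    return min(database, key=lambda k: sum(
        (a - b) ** 2 for a, b in zip(qd, texture_descriptor(database[k]))))

# Toy database: a flat patch and a high-contrast striped patch.
db = {"flat": [[5, 5], [5, 5]], "stripes": [[0, 9], [0, 9]]}
```

In the real system the descriptors are precomputed offline for thousands of images, so a query only costs one descriptor computation plus cheap vector comparisons.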

  7. Statistical primer: propensity score matching and its alternatives.

    Science.gov (United States)

    Benedetto, Umberto; Head, Stuart J; Angelini, Gianni D; Blackstone, Eugene H

    2018-06-01

    Propensity score (PS) methods offer certain advantages over more traditional regression methods to control for confounding by indication in observational studies. Although multivariable regression models adjust for confounders by modelling the relationship between covariates and outcome, the PS methods estimate the treatment effect by modelling the relationship between confounders and treatment assignment. Therefore, methods based on the PS are not limited by the number of events, and their use may be warranted when the number of confounders is large, or the number of outcomes is small. The PS is the probability for a subject to receive a treatment conditional on a set of baseline characteristics (confounders). The PS is commonly estimated using logistic regression, and it is used to match patients with a similar distribution of confounders so that the difference in outcomes gives an unbiased estimate of the treatment effect. This review summarizes basic concepts of PS matching and provides guidance in implementing matching and other methods based on the PS, such as stratification, weighting and covariate adjustment.
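Once propensity scores have been estimated (e.g. by logistic regression), the matching step itself is simple. A sketch of greedy 1:1 nearest-neighbour matching with a caliper, run on hypothetical pre-computed scores:

```python
def greedy_ps_match(treated, control, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on propensity scores.
    treated/control: {unit_id: propensity score}. Each control is used
    at most once; pairs farther apart than the caliper are discarded."""
    pairs, pool = [], dict(control)
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not pool:
            break
        c_id = min(pool, key=lambda c: abs(pool[c] - t_ps))
        if abs(pool[c_id] - t_ps) <= caliper:   # caliper guards against bad matches
            pairs.append((t_id, c_id))
            del pool[c_id]
    return pairs

# Hypothetical scores: two treated units, three controls.
treated = {"t1": 0.30, "t2": 0.70}
control = {"c1": 0.32, "c2": 0.69, "c3": 0.10}
pairs = greedy_ps_match(treated, control)
```

Greedy matching is order-dependent; optimal (e.g. assignment-based) matching and the stratification/weighting alternatives mentioned in the review avoid that sensitivity at higher computational cost.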

  8. Automated main-chain model building by template matching and iterative fragment extension

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2003-01-01

    A method for automated macromolecular main-chain model building is described. An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C α positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition

  9. Surface registration technique for close-range mapping applications

    Science.gov (United States)

    Habib, Ayman F.; Cheng, Rita W. T.

    2006-08-01

    Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with extracted features from the point cloud. Some examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm that is based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimates the transformation parameters relating them. Experiment with two partially overlapping laser scans of a small object is performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved and improvement is found when compared to the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.

  10. University Reactor Matching Grants Program

    International Nuclear Information System (INIS)

    John Valentine; Farzad Rahnema; Said Abdel-Khalik

    2003-01-01

    During the 2002 Fiscal year, funds from the DOE matching grant program, along with matching funds from the industrial sponsors, have been used to support research in the area of thermal-hydraulics. Both experimental and numerical research projects have been performed. Experimental research focused on two areas: (1) Identification of the root cause mechanism for axial offset anomaly in pressurized water reactors under prototypical reactor conditions, and (2) Fluid dynamic aspects of thin liquid film protection schemes for inertial fusion reactor chambers. Numerical research focused on two areas: (1) Multi-fluid modeling of both two-phase and two-component flows for steam conditioning and mist cooling applications, and (2) Modeling of bounded Rayleigh-Taylor instability with interfacial mass transfer and fluid injection through a porous wall simulating the ''wetted wall'' protection scheme in inertial fusion reactor chambers. Details of activities in these areas are given

  11. Alternative Payment Models Should Risk-Adjust for Conversion Total Hip Arthroplasty: A Propensity Score-Matched Study.

    Science.gov (United States)

    McLawhorn, Alexander S; Schairer, William W; Schwarzkopf, Ran; Halsey, David A; Iorio, Richard; Padgett, Douglas E

    2017-12-06

    For Medicare beneficiaries, hospital reimbursement for nonrevision hip arthroplasty is anchored to either diagnosis-related group code 469 or 470. Under alternative payment models, reimbursement for care episodes is not further risk-adjusted. This study's purpose was to compare outcomes of primary total hip arthroplasty (THA) vs conversion THA to explore the rationale for risk adjustment for conversion procedures. All primary and conversion THAs from 2007 to 2014, excluding acute hip fractures and cancer patients, were identified in the National Surgical Quality Improvement Program database. Conversion and primary THA patients were matched 1:1 using propensity scores, based on preoperative covariates. Multivariable logistic regressions evaluated associations between conversion THA and 30-day outcomes. A total of 2018 conversions were matched to 2018 primaries. There were no differences in preoperative covariates. Conversions had longer operative times (148 vs 95 minutes, P …). As reimbursement models shift toward bundled payment paradigms, conversion THA appears to be a procedure for which risk adjustment is appropriate. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Automated main-chain model building by template matching and iterative fragment extension.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C(alpha) positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 A. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.

  13. A new template matching method based on contour information

    Science.gov (United States)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are too time consuming to be used in many real-time applications. Closed-contour matching is a popular class of template matching methods. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the model construction stage, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour information extracted from the template image; the three points are chosen so that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point of the distance image stores the distance from that point to the nearest point on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Using the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to those of the model are found, yielding the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. To speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected onto the distance image, and the mean distance can be computed rapidly by simple additions and multiplications. In the fine searching process
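
    The distance-image verification described in this abstract can be sketched as follows (an illustrative toy, with a brute-force distance image standing in for a real distance transform):

```python
import numpy as np

def distance_image(shape, contour_pts):
    """Offline stage: each pixel stores the distance to the nearest
    template-contour point (brute force here; a true distance transform
    computes the same image far more efficiently)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d = np.full(shape, np.inf)
    for cy, cx in contour_pts:
        d = np.minimum(d, np.hypot(ys - cy, xs - cx))
    return d

def match_score(dist_img, candidate_pts):
    """Online verification: mean distance of a transformed candidate contour
    sampled on the distance image; small values confirm the RST hypothesis."""
    return float(np.mean([dist_img[int(round(y)), int(round(x))]
                          for y, x in candidate_pts]))

contour = [(5, 5), (5, 10), (10, 7)]
D = distance_image((16, 16), contour)
perfect = match_score(D, contour)                       # 0.0: exact overlay
offset = match_score(D, [(6, 5), (5, 11), (10, 8)])     # > 0: shifted contour
```

    Because the distance image is precomputed offline, each RST hypothesis costs only one lookup per sampled contour point.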

  14. Design and development of PLC based offline impedance matching system for ICRH experiment

    International Nuclear Information System (INIS)

    Joshi, Ramesh; Jadav, H.M.; Mali, Aniruddh; Kulkarni, S.V.

    2015-01-01

    The Ion Cyclotron Resonance Heating (ICRH) transmission line has two impedance matching networks: an offline matching network, employed before an experimental shot, and an online matching network, employed during the shot. The offline matching network consists of two static stubs, a coarse tuner and a coarse phase shifter, identical in both transmission lines. Motorized arrangements are installed in each stub and phase shifter. The stubs are used to vary the transmission line length, and the phase shifter is used to match the frequency of the generated RF power. A Programmable Logic Controller (PLC) based automation and control technique has been designed and developed for the system. Offline matching should be operated below 1 kHz in order to move the stepper motors; the program generates the required square pulses, which are fed to the motor controller to drive the motors in either the upward or downward direction. In the existing system this operation was carried out using VME. To reduce the load on the VME, a PLC-based system has been designed and integrated with the main DAC system. WinCC software has been used (as SCADA/HMI) to develop the front-end GUI, which communicates with an OPC server; OPC in turn communicates with the PLC to control the motorized arrangements. This paper describes the technical details, design and development of the PLC-based offline matching system using WinCC as the user interface. The communication between the WinCC application and the hardware devices was realized by the OPC technique. The developed system has a friendly graphical user interface, high-level automation and comprehensive functions such as experimental process control, and proved to be reliable and accurate in practical application. (author)

  15. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
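
    The finite-difference solution of the one-dimensional Schrödinger equation mentioned above can be sketched as follows (a minimal illustration assuming a constant effective mass; the 0.067 m_e value is a GaAs-like placeholder, not taken from the review, and real QCL solvers also iterate the Schrödinger-Poisson loop):

```python
import numpy as np

HBAR = 1.0545718e-34   # J*s
M_E = 9.1093837e-31    # kg

def quantized_states(V, dx, m_eff=0.067 * M_E, n_states=3):
    """Finite-difference solution of the 1-D effective-mass Schroedinger
    equation on a uniform grid, the basic step for obtaining quantized
    subband states. V: potential on the grid [J]; dx: grid spacing [m].
    Dirichlet boundaries are implied by truncating the tridiagonal matrix."""
    n = len(V)
    t = HBAR**2 / (2.0 * m_eff * dx**2)
    H = (np.diag(V + 2.0 * t)
         + np.diag(-t * np.ones(n - 1), 1)
         + np.diag(-t * np.ones(n - 1), -1))
    E, psi = np.linalg.eigh(H)
    return E[:n_states], psi[:, :n_states]

# Sanity check: a 10 nm infinite well has E_n proportional to n^2
E, psi = quantized_states(np.zeros(400), 10e-9 / 400)
```

    For a heterostructure one would supply the conduction-band profile as V and a position-dependent effective mass; the eigenvalue problem itself is unchanged.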

  16. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  17. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using rotation transform. For modelling and recognition process, geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
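
    A minimal sketch of 2-D geometric hashing follows (illustrative only; the paper hashes line features extracted from 2-D views rather than raw points):

```python
import numpy as np
from collections import defaultdict

def basis_coords(points, i, j):
    """Coordinates of all points in the frame defined by basis pair (i, j):
    origin at p_i, unit x-axis along p_j - p_i. The result is invariant to
    rotation, translation and scale, which is what makes it hashable."""
    p = points[i]
    ex = points[j] - p
    s = np.hypot(ex[0], ex[1])
    ex = ex / s
    ey = np.array([-ex[1], ex[0]])          # perpendicular axis
    rel = (points - p) / s
    return np.round(rel @ np.stack([ex, ey], axis=1), 2)

def build_table(model_pts):
    """Offline stage: hash every point's invariant coordinates under every
    ordered basis pair of the model."""
    table = defaultdict(list)
    n = len(model_pts)
    for i in range(n):
        for j in range(n):
            if i != j:
                for c in map(tuple, basis_coords(model_pts, i, j)):
                    table[c].append((i, j))
    return table

def recognize(table, scene_pts):
    """Online stage: pick one scene basis (here points 0 and 1) and vote for
    model bases whose hashed coordinates agree. A full system tries several
    scene bases and verifies the winning hypothesis."""
    votes = defaultdict(int)
    for c in map(tuple, basis_coords(scene_pts, 0, 1)):
        for basis in table.get(c, []):
            votes[basis] += 1
    return max(votes, key=votes.get) if votes else None
```

    The offline table is what lets recognition stay fast: a scene basis retrieves candidate model bases by lookup rather than by search over all model views.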

  18. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify, with 98.2% accuracy, the origins of 549 explosions conducted by closely spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers; when viewed from ARCES, many are separated by just tenths of a degree in azimuth. This classification performance results from adapting to transient seismic signals techniques developed in underwater acoustics for the localization of continuous sound sources. Matched field processing is a potential competitor to the frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
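
    The calibration and power-estimation steps can be sketched for a single narrow band as follows (a toy with synthetic array patterns and a Bartlett-style power estimate, not the ARCES processing chain):

```python
import numpy as np

def calibrate_weights(train):
    """Empirical matched field calibration for one narrow frequency band:
    the principal eigenvector of the training sample covariance captures the
    source-specific amplitude/phase pattern across the array.
    train: complex array, shape (n_sensors, n_snapshots)."""
    R = train @ train.conj().T / train.shape[1]
    _, vecs = np.linalg.eigh(R)
    w = vecs[:, -1]                      # principal eigenvector
    return w / np.linalg.norm(w)

def bartlett_power(w, snapshot):
    """Bartlett (matched field) power estimate |w^H x|^2 for unit-norm x."""
    x = snapshot / np.linalg.norm(snapshot)
    return float(abs(np.vdot(w, x)) ** 2)

# Toy calibration: two synthetic 16-sensor wavefield patterns
rng = np.random.default_rng(0)
pat_a = rng.standard_normal(16) + 1j * rng.standard_normal(16)
pat_b = rng.standard_normal(16) + 1j * rng.standard_normal(16)
amps = rng.standard_normal(50) + 1j * rng.standard_normal(50)
noise = 0.01 * (rng.standard_normal((16, 50)) + 1j * rng.standard_normal((16, 50)))
w = calibrate_weights(np.outer(pat_a, amps) + noise)
p_same = bartlett_power(w, pat_a)    # high: snapshot from the calibrated source
p_other = bartlett_power(w, pat_b)   # low: snapshot from a different source
```

    In the empirical scheme, one weight vector is calibrated per source and per frequency band from repeating events; classification picks the source whose weights maximize the summed band powers.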

  19. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Guoyan [Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014 Bern (Switzerland)

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark

  20. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for the different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  1. Competitive debate classroom as a cooperative learning technique for the human resources subject

    Directory of Open Access Journals (Sweden)

    Guillermo A. SANCHEZ PRIETO

    2018-01-01

    Full Text Available The paper presents an academic debate model as a cooperative learning technique for teaching human resources at university. The general objective of the paper is to determine whether academic debate can be included in the category of cooperative learning; the specific objective is to present a model for implementing this technique. The first part of the paper introduces the concept of cooperative learning and its main characteristics. The second part presents the debate model proposed as a candidate for cooperative learning. The last part concludes by identifying which characteristics of the model match aspects of cooperative learning and which do not.

  2. Quick probabilistic binary image matching: changing the rules of the game

    Science.gov (United States)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
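
    The core idea, that a few random pixel comparisons suffice to reject dissimilar images, can be sketched as follows (an illustrative reading of the model, not the authors' PMMBI code):

```python
import random

def dissimilar(img_a, img_b, max_checks=32, rng=random):
    """Probabilistic mismatch test for equal-size binary images: compare
    randomly chosen pixels and stop at the first disagreement. If a fraction
    s of pixels agree, the chance of surviving k checks is about s**k, so
    dissimilar images are rejected after only a few comparisons, regardless
    of image size."""
    h, w = len(img_a), len(img_a[0])
    for _ in range(max_checks):
        y, x = rng.randrange(h), rng.randrange(w)
        if img_a[y][x] != img_b[y][x]:
            return True    # provably different
    return False           # no evidence of difference; likely similar
```

    Because the sample positions are random, the expected work depends only on the similarity level, which is the size-invariance property the abstract describes.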

  3. Optimally matching support and perceived spousal sensitivity.

    Science.gov (United States)

    Cutrona, Carolyn E; Shaffer, Philip A; Wesner, Kristin A; Gardner, Kelli A

    2007-12-01

    Partner sensitivity is an important antecedent of both intimacy (H. T. Reis & P. Shaver, 1988) and attachment (M. D. S. Ainsworth, 1989). On the basis of the optimal matching model of social support (C. E. Cutrona & D. Russell, 1990), support behaviors that "matched" the support goals of the stressed individual were predicted to lead to the perception of partner sensitivity. Predictions were tested with 59 married couples, who engaged in a videotaped self-disclosure task. Matching support was defined as the disclosure of emotions followed by emotional support or a request for information followed by informational support. Partial evidence was found for the predictions. Matching support following the disclosure of emotions was predictive of perceived partner sensitivity. Mismatched support following the disclosure of emotions predicted lower marital satisfaction, through the mediation of partner sensitivity. Matching support following a request for information was not predictive of perceived partner sensitivity, but negative partner responses (e.g., criticism or sarcasm) following a request for information negatively predicted perceptions of partner sensitivity. The importance of considering the context of support transactions is discussed.

  4. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange rates forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
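
    The simplest of the listed techniques, Simple Exponential Smoothing, can be sketched as follows (toy code with made-up rate values, not the paper's data):

```python
def simple_exponential_smoothing(series, alpha):
    """Simple Exponential Smoothing: level update
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}; the final level serves as the
    one-step-ahead forecast. Larger alpha weights recent observations more."""
    level = series[0]
    fitted = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        fitted.append(level)
    return fitted

# Hypothetical daily exchange-rate values (illustrative numbers only)
rates = [4.20, 4.25, 4.23, 4.30, 4.28, 4.31]
smoothed = simple_exponential_smoothing(rates, alpha=0.3)
forecast = smoothed[-1]
```

    Double exponential smoothing and the Holt-Winters variants extend this recursion with trend (and seasonal) components, while ARIMA instead models the differenced series with autoregressive and moving-average terms.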

  5. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification
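
    A crude, hypothetical stand-in for the parameter-selection step, ranking parameters by local finite-difference sensitivity, might look like this (the dissertation's verified procedures are more sophisticated):

```python
import numpy as np

def sensitivity_ranking(model, theta, delta=1e-3):
    """Finite-difference sensitivity screen for parameter selection:
    rank parameters by the scaled norm of the response derivative.
    Parameters at the bottom of the ranking have minimal impact on the
    model response and are candidates for removal before calibration."""
    base = np.asarray(model(theta), dtype=float)
    sens = []
    for i in range(len(theta)):
        t = np.array(theta, dtype=float)
        h = delta * max(abs(t[i]), 1.0)
        t[i] += h
        deriv = (np.asarray(model(t), dtype=float) - base) / h
        sens.append(np.linalg.norm(deriv) * abs(theta[i]))  # scaled sensitivity
    return np.argsort(sens)[::-1]    # most influential first
```

    Fixing the low-ranked parameters at nominal values shrinks the space the Bayesian calibration must explore, which is exactly why parameter selection precedes calibration in the workflow described above.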

  6. LINE-BASED MULTI-IMAGE MATCHING FOR FAÇADE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    T. A. Teo

    2012-07-01

    Full Text Available This research integrates existing LOD 2 building models and multiple close-range images for façade structural line extraction. The major tasks are orientation determination and multiple-image matching. In the orientation determination, Speeded Up Robust Features (SURF) is applied to extract tie points automatically; tie points and control points are then combined for block adjustment. An object-based multi-image matching is proposed to extract the façade structural lines. The 2D lines in image space are extracted by a Canny operator followed by a Hough transform. The role of the LOD 2 building models is to correct the tilt displacement of images from different views. The wall of the LOD 2 model is also used to generate hypothesis planes for similarity measurement. Finally, the average normalized cross correlation is calculated to obtain the best location in object space. The test images were acquired by a nonmetric camera (Nikon D2X); the total number of images is 33. The experimental results indicate that the accuracy of the orientation determination is about 1 pixel from 2515 tie points and 4 control points. They also indicate that line-based matching is more flexible than point-based matching.
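
    The normalized cross correlation used as the similarity measure can be sketched as follows (illustrative only):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross correlation of two equal-size patches. Subtracting
    the means and dividing by the norms makes the score invariant to affine
    intensity changes, so patches from differently exposed views can still
    be compared; the result lies in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

    In the matching scheme above, candidate positions along a hypothesis plane are scored by averaging this measure over the participating images, and the position with the highest average wins.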

  7. Magnetic Quasi-Phase Matching All-Fiber Isolator

    Directory of Open Access Journals (Sweden)

    Chunte A. Lu

    2010-01-01

    Full Text Available We have experimentally demonstrated an all-fiber optical isolator with 20 dB isolation. The result shows that the quasi-phase matching technique via a meter-long magnet array is highly feasible to generate more than 45 degrees of Faraday rotation in the fibers. The all-fiber isolator can also be temperature tuned to operate between 1048 nm and 1066 nm wavelength.

  8. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by the NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle for identifying damage is to utilize the changes in vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it serves as an interface to generate structural models of particular engine parts or assemblies, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a relatively new technique called "scanning and mapping". The basic idea is to produce a full and accurate 3D structural model by tracing on multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes, and photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  9. Matching IMRT fields with static photon field in the treatment of head-and-neck cancer

    International Nuclear Information System (INIS)

    Li, Jonathan G.; Liu, Chihray; Kim, Siyong; Amdur, Robert J.; Palta, Jatinder R.

    2005-01-01

    Radiation treatment with intensity-modulated radiation therapy (IMRT) for head-and-neck cancer usually involves treating the superior aspects of the target volume with intensity-modulated (IM) fields and the inferior portion of the target volume (the low neck nodes) with a static anterior-posterior field (commonly known as the low anterior neck, or LAN, field). A match line between the IM and LAN fields is created, with possibly large dose inhomogeneities that are clinically undesirable. We propose a practical method to properly match these fields with minimal dependence on patient setup errors. The method requires a mono-isocentric setup of the IM and LAN fields with half-beam blocks as defined by the asymmetric jaws. The inferior jaws of the IM fields, which extend ∼1 cm inferiorly past the isocenter, are changed manually before patient treatment so that they match the superior jaw of the LAN field at the isocenter. The matching of these fields therefore does not depend on the particular IMRT treatment plan and depends only on the matching of the asymmetric jaws. Measurements in a solid water phantom were performed to verify the field-matching technique. Dose inhomogeneities of less than 5% were obtained in the match-line region. Feathering of the match line is done twice during the course of treatment by moving the matching jaw positions superiorly in 3-mm increments each time, which further reduces the dose inhomogeneity. Compared to the method of including the lower neck nodes in the IMRT fields, the field-matching technique increases delivery efficiency and significantly reduces the total treatment time

  10. Engineered Quasi-Phase Matching for Nonlinear Quantum Optics in Waveguides

    Science.gov (United States)

    Van Camp, Mackenzie A.

    Entanglement is the hallmark of quantum mechanics. Quantum entanglement--putting two or more identical particles into a non-factorable state--has been leveraged for applications ranging from quantum computation and encryption to high-precision metrology. Entanglement is a practical engineering resource and a tool for sidestepping certain limitations of classical measurement and communication. Engineered nonlinear optical waveguides are an enabling technology for generating entangled photon pairs and manipulating the state of single photons. This dissertation reports on: i) frequency conversion of single photons from the mid-infrared to 843nm as a tool for incorporating quantum memories in quantum networks, ii) the design, fabrication, and test of a prototype broadband source of polarization and frequency entangled photons; and iii) a roadmap for further investigations of this source, including applications in quantum interferometry and high-precision optical metrology. The devices presented herein are quasi-phase-matched lithium niobate waveguides. Lithium niobate is a second-order nonlinear optical material and can mediate optical energy conversion to different wavelengths. This nonlinear effect is the basis of both quantum frequency conversion and entangled photon generation, and is enhanced by i) confining light in waveguides to increase conversion efficiency, and ii) quasi-phase matching, a technique for engineering the second-order nonlinear response by locally altering the direction of a material's polarization vector. Waveguides are formed by diffusing titanium into a lithium niobate wafer. Quasi-phase matching is achieved by electric field poling, with multiple stages of process development and optimization to fabricate the delicate structures necessary for broadband entangled photon generation. The results presented herein update and optimize past fabrication techniques, demonstrate novel optical devices, and propose future avenues for device development

  11. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  12. An Implementation of Bigraph Matching

    DEFF Research Database (Denmark)

    Glenstrup, Arne John; Damgaard, Troels Christoffer; Birkedal, Lars

    We describe a provably sound and complete matching algorithm for bigraphical reactive systems. The algorithm has been implemented in our BPL Tool, a first implementation of bigraphical reactive systems. We describe the tool and present a concrete example of how it can be used to simulate a model...

  13. IMPROVED MOCK GALAXY CATALOGS FOR THE DEEP2 GALAXY REDSHIFT SURVEY FROM SUBHALO ABUNDANCE AND ENVIRONMENT MATCHING

    Energy Technology Data Exchange (ETDEWEB)

    Gerke, Brian F.; Wechsler, Risa H.; Behroozi, Peter S. [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, M/S 29, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Cooper, Michael C. [Center for Galaxy Evolution, Department of Physics and Astronomy, University of California-Irvine, Irvine, CA 92697 (United States); Yan, Renbin [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Coil, Alison L., E-mail: bgerke@slac.stanford.edu [Center for Astrophysics and Space Sciences, University of California, San Diego, 9500 Gilman Dr., MC 0424, La Jolla, CA 92093 (United States)

    2013-09-15

    We develop empirical methods for modeling the galaxy population and populating cosmological N-body simulations with mock galaxies according to the observed properties of galaxies in survey data. We use these techniques to produce a new set of mock catalogs for the DEEP2 Galaxy Redshift Survey based on the output of the high-resolution Bolshoi simulation, as well as two other simulations with different cosmological parameters, all of which we release for public use. The mock-catalog creation technique uses subhalo abundance matching to assign galaxy luminosities to simulated dark-matter halos. It then adds color information to the resulting mock galaxies in a manner that depends on the local galaxy density, in order to reproduce the measured color-environment relation in the data. In the course of constructing the catalogs, we test various models for including scatter in the relation between halo mass and galaxy luminosity, within the abundance-matching framework. We find that there is no constant-scatter model that can simultaneously reproduce both the luminosity function and the autocorrelation function of DEEP2. This result has implications for galaxy-formation theory, and it restricts the range of contexts in which the mock catalogs can be usefully applied. Nevertheless, careful comparisons show that our new mock catalogs accurately reproduce a wide range of the other properties of the DEEP2 catalog, suggesting that they can be used to gain a detailed understanding of various selection effects in DEEP2.
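
    The core subhalo abundance-matching step, in its zero-scatter form, can be sketched as follows (illustrative; the paper's catalogs additionally model scatter in the mass-luminosity relation and density-dependent color assignment):

```python
import numpy as np

def abundance_match(halo_masses, luminosities):
    """Zero-scatter subhalo abundance matching: the i-th most massive
    (sub)halo receives the i-th most luminous galaxy, so the mock reproduces
    the observed luminosity function by construction. Returns the luminosity
    assigned to each halo, in the input halo order."""
    order = np.argsort(halo_masses)[::-1]       # halo indices, most massive first
    lum_sorted = np.sort(luminosities)[::-1]    # luminosities, brightest first
    assigned = np.empty_like(lum_sorted)
    assigned[order] = lum_sorted
    return assigned
```

    Adding scatter amounts to perturbing the rank-ordering before assignment; the paper's finding is that no single constant-scatter choice matches both the DEEP2 luminosity function and its autocorrelation function at once.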

  14. IMPROVED MOCK GALAXY CATALOGS FOR THE DEEP2 GALAXY REDSHIFT SURVEY FROM SUBHALO ABUNDANCE AND ENVIRONMENT MATCHING

    International Nuclear Information System (INIS)

    Gerke, Brian F.; Wechsler, Risa H.; Behroozi, Peter S.; Cooper, Michael C.; Yan, Renbin; Coil, Alison L.

    2013-01-01

We develop empirical methods for modeling the galaxy population and populating cosmological N-body simulations with mock galaxies according to the observed properties of galaxies in survey data. We use these techniques to produce a new set of mock catalogs for the DEEP2 Galaxy Redshift Survey based on the output of the high-resolution Bolshoi simulation, as well as two other simulations with different cosmological parameters, all of which we release for public use. The mock-catalog creation technique uses subhalo abundance matching to assign galaxy luminosities to simulated dark-matter halos. It then adds color information to the resulting mock galaxies in a manner that depends on the local galaxy density, in order to reproduce the measured color-environment relation in the data. In the course of constructing the catalogs, we test various models for including scatter in the relation between halo mass and galaxy luminosity, within the abundance-matching framework. We find that there is no constant-scatter model that can simultaneously reproduce both the luminosity function and the autocorrelation function of DEEP2. This result has implications for galaxy-formation theory, and it restricts the range of contexts in which the mock catalogs can be usefully applied. Nevertheless, careful comparisons show that our new mock catalogs accurately reproduce a wide range of the other properties of the DEEP2 catalog, suggesting that they can be used to gain a detailed understanding of various selection effects in DEEP2.

  15. Are Current Physical Match Performance Metrics in Elite Soccer Fit for Purpose or is the Adoption of an Integrated Approach Needed?

    Science.gov (United States)

    Bradley, Paul S; Ade, Jack D

    2018-01-18

Time-motion analysis is a valuable data-collection technique used to quantify the physical match performance of elite soccer players. For over 40 years, researchers have adopted a 'traditional' approach when evaluating match demands by simply reporting the distance covered or time spent along a motion continuum from walking through to sprinting. This methodology quantifies physical metrics in isolation, without integrating other factors, and ultimately yields a one-dimensional insight into match performance. This commentary therefore proposes a novel 'integrated' approach that focuses on a sensitive physical metric, such as high-intensity running, but contextualizes it in relation to key tactical activities for each position and collectively for the team. In the example presented, the 'integrated' model clearly unveils the unique high-intensity profile that arises from distinct tactical roles, rather than the one-dimensional 'blind' distances produced by 'traditional' models. Intuitively, this innovative concept may aid coaches' understanding of physical performance in relation to the tactical roles and instructions given to players. Additionally, it will enable practitioners to translate match metrics into training and testing protocols more effectively. This innovative model may also aid advances in other team sports that incorporate similar intermittent movements with tactical purpose. Evidence of the merits and application of this new concept is needed before the scientific community accepts this model, as it may well add complexity to an area that conceivably needs simplicity.

  16. Pre-analysis techniques applied to area-based correlation aiming Digital Terrain Model generation

    Directory of Open Access Journals (Sweden)

    Maurício Galo

    2005-12-01

Full Text Available Area-based matching is a useful procedure in some photogrammetric processes, and its results are of crucial importance in applications such as relative orientation, phototriangulation, and Digital Terrain Model generation. The successful determination of correspondence depends on radiometric and geometric factors. Considering these aspects, the use of procedures that estimate in advance the quality of the parameters to be computed is a relevant issue. This paper describes such procedures and shows that the quality prediction can be computed before performing matching by correlation, through the analysis of the reference window. This procedure can be incorporated in the correspondence process for Digital Terrain Model generation and phototriangulation. The proposed approach comprises the estimation of the variance matrix of the translations from the gray levels in the reference window and the reduction of the search space using knowledge of the epipolar geometry. As a consequence, the correlation process becomes more reliable, avoiding the application of matching procedures in doubtful areas. Some experiments with simulated and real data are presented, demonstrating the efficiency of the proposed strategy.
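Area-based matching of this kind reduces to sliding a reference window over a search window and scoring each position, typically with the normalized cross-correlation coefficient; the pre-analysis idea can be approximated by rejecting reference windows whose gray-level variance is too low to yield a reliable match. A simplified sketch, where the variance threshold stands in for the paper's covariance-based quality prediction and all values are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_window(ref, search, min_std=1.0):
    """Slide ref over search; return (row, col) of the best NCC match.

    min_std is a crude pre-analysis check: a nearly uniform reference
    window (low gray-level variance) cannot be matched reliably, so it
    is rejected instead of returning a doubtful correspondence.
    """
    if ref.std() < min_std:
        return None  # doubtful area: matching not attempted
    rh, rw = ref.shape
    sh, sw = search.shape
    best, best_pos = -2.0, None
    for i in range(sh - rh + 1):
        for j in range(sw - rw + 1):
            score = ncc(ref, search[i:i + rh, j:j + rw])
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# A 5x5 textured patch embedded at (8, 11) inside a flat 20x20 search image.
search = np.zeros((20, 20))
search[8:13, 11:16] = np.arange(25, dtype=float).reshape(5, 5)
ref = np.arange(25, dtype=float).reshape(5, 5)
print(match_window(ref, search))  # -> (8, 11)
```

In a photogrammetric pipeline the search region would additionally be restricted to a band around the epipolar line, which is the second search-space reduction the paper describes.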

  17. Study of chromatic adaptation using memory color matches, Part I: neutral illuminants

    OpenAIRE

    Smet, Kevin A.G.; Zhai, Qiyan; Luo, Ming R.; Hanselaer, Peter

    2017-01-01

    Twelve corresponding color data sets have been obtained using the long-term memory colors of familiar objects as target stimuli. Data were collected for familiar objects with neutral, red, yellow, green and blue hues under 4 approximately neutral illumination conditions on or near the blackbody locus. The advantages of the memory color matching method are discussed in light of other more traditional asymmetric matching techniques. Results were compared to eight corresponding color data sets a...

  18. Reliability of visual and instrumental color matching.

    Science.gov (United States)

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggest the use of color matching instruments as a supplementary tool in
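The inter-rater statistic used here, Fleiss' kappa, measures agreement among a fixed number of raters beyond what chance category proportions would produce. A small self-contained sketch of the computation (the counts are toy values, not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories matrix of rating counts.

    counts[i, j] = number of raters who assigned category j to subject i;
    every row must sum to the same number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                      # raters per subject
    N = counts.shape[0]                            # number of subjects
    p_j = counts.sum(axis=0) / (N * n)             # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement: all 5 raters pick the same shade for each of 3 crowns.
perfect = np.array([[5, 0], [0, 5], [5, 0]])
print(round(fleiss_kappa(perfect), 3))  # -> 1.0
```

Values near 1 indicate near-perfect agreement, values near 0 chance-level agreement, and negative values worse-than-chance agreement, which is the scale on which the study's visual versus instrumental comparisons are interpreted.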

  19. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    Science.gov (United States)

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of features along the path between them based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics, such as normalized cross-correlation, the squared sum of intensity differences, and the correlation coefficient, are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  20. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features.

    Directory of Open Access Journals (Sweden)

    Seyed Mostafa Mousavi Kahaki

Full Text Available An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of features along the path between them based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics, such as normalized cross-correlation, the squared sum of intensity differences, and the correlation coefficient, are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence.

  1. Magnetic safety matches

    Science.gov (United States)

    Lindén, J.; Lindberg, M.; Greggas, A.; Jylhävuori, N.; Norrgrann, H.; Lill, J. O.

    2017-07-01

In addition to the main ingredients (sulfur, potassium chlorate, and carbon), ordinary safety matches contain various dyes, glues, etc., giving the head of the match an even texture and appealing color. Among the common reddish-brown matches there are several types which, after ignition, can be attracted by a strong magnet; before ignition the match head is generally not attracted. An elemental analysis based on proton-induced x-ray emission was performed to single out iron as the element responsible for the observed magnetism. 57Fe Mössbauer spectroscopy was used to identify the iron compounds, present before and after ignition, responsible for the macroscopic magnetism: Fe2O3 before and Fe3O4 after. The reaction was verified by mixing the main chemicals of the match head with Fe2O3 in glue and mounting the mixture on a match stick. The ash residue after igniting the mixture was magnetic.

  2. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances, in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.
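A piecewise linear representation based on local extreme points, as used in the feature-extraction step above, can be sketched by retaining the series endpoints plus every strict local extremum whose deviation from its neighbors exceeds a threshold; the retained points become the breakpoints of the linear approximation. The `eps` threshold and the test signal below are illustrative choices, not taken from the paper:

```python
import numpy as np

def plr_extrema(series, eps=0.0):
    """Piecewise linear representation: keep endpoints and local extrema.

    A point is kept when it is a strict local maximum or minimum whose
    deviation from both neighbors exceeds eps; the retained indices are
    the segment breakpoints of the linear approximation.
    """
    idx = [0]
    for i in range(1, len(series) - 1):
        left, mid, right = series[i - 1], series[i], series[i + 1]
        is_max = mid > left + eps and mid > right + eps
        is_min = mid < left - eps and mid < right - eps
        if is_max or is_min:
            idx.append(i)
    idx.append(len(series) - 1)
    return idx

# A smooth oscillation: 50 samples collapse to a handful of breakpoints.
t = np.linspace(0, 2 * np.pi, 50)
x = np.sin(3 * t)
keep = plr_extrema(x, eps=1e-6)
print(len(keep), len(x))  # far fewer breakpoints than samples
```

Raising `eps` suppresses small fluctuations, which is how the representation can be tuned to keep only disturbances of a given severity while discarding noise.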

  3. Early outcome in renal transplantation from large donors to small and size-matched recipients - a porcine experimental model

    DEFF Research Database (Denmark)

    Ravlo, Kristian; Chhoden, Tashi; Søndergaard, Peter

    2012-01-01

Kidney transplantation from a large donor to a small recipient, as in pediatric transplantation, is associated with an increased risk of thrombosis and DGF. We established a porcine model for renal transplantation from an adult donor to a small or size-matched recipient with a high risk of DGF and studied GFR, RPP using MRI, and markers of kidney injury within 10 h after transplantation. After induction of BD, kidneys were removed from ∼63-kg donors and kept in cold storage for ∼22 h until transplanted into small (∼15 kg, n = 8) or size-matched (n = 8) recipients. A reduction in GFR was observed in small recipients within 60 min after reperfusion. Interestingly, this was associated with a significant reduction in medullary RPP, while there was no significant change in the size-matched recipients. No difference was observed in urinary NGAL excretion between the groups. A significantly higher level...

  4. Probabilistic Matching of Deidentified Data From a Trauma Registry and a Traumatic Brain Injury Model System Center: A Follow-up Validation Study.

    Science.gov (United States)

    Kumar, Raj G; Wang, Zhensheng; Kesinger, Matthew R; Newman, Mark; Huynh, Toan T; Niemeier, Janet P; Sperry, Jason L; Wagner, Amy K

    2018-04-01

In a previous study, individuals from a single Traumatic Brain Injury Model Systems center and a trauma center were matched using a novel probabilistic matching algorithm. The Traumatic Brain Injury Model Systems is a multicenter prospective cohort study containing more than 14,000 participants with traumatic brain injury, following them from inpatient rehabilitation to the community over the remainder of their lifetime. The National Trauma Databank is the largest aggregation of trauma data in the United States, including more than 6 million records. Linking these two databases offers a broad range of opportunities to explore research questions not otherwise possible. Our objective was to refine and validate the previous protocol at another independent center. An algorithm generation and a validation data set were created, and potential matches were blocked by age, sex, and year of injury; total probabilistic weight was calculated based on 12 common data fields. Validity metrics were calculated using a minimum probabilistic weight of 3. The positive predictive value was 98.2% and 97.4%, and sensitivity was 74.1% and 76.3%, in the algorithm generation and validation sets, respectively. These metrics were similar to those of the previous study. Future work will apply the refined probabilistic matching algorithm to the Traumatic Brain Injury Model Systems and the National Trauma Databank to generate a merged data set for clinical traumatic brain injury research use.
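Probabilistic matching of this kind generally follows the Fellegi-Sunter pattern: candidate pairs are restricted by blocking variables (here age, sex, and injury year), and each remaining pair receives a summed log-likelihood-ratio weight over the comparison fields, with a link declared when the total exceeds a threshold (the study uses a minimum weight of 3). The field names and m/u probabilities below are hypothetical stand-ins for the 12 fields actually used:

```python
import math

# Illustrative m- and u-probabilities for three hypothetical fields:
# m = P(field agrees | records truly match), u = P(field agrees | non-match).
FIELDS = {
    "injury_severity": (0.95, 0.30),
    "admission_month": (0.90, 0.08),
    "discharge_dispo": (0.85, 0.20),
}

def block_key(rec):
    # Blocking: only compare record pairs that agree on age, sex, and injury year.
    return (rec["age"], rec["sex"], rec["year"])

def match_weight(a, b):
    """Fellegi-Sunter log2 weight summed over the comparison fields."""
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if a[field] == b[field]:
            w += math.log2(m / u)          # agreement weight (positive)
        else:
            w += math.log2((1 - m) / (1 - u))  # disagreement weight (negative)
    return w

a = {"age": 34, "sex": "M", "year": 2010,
     "injury_severity": 25, "admission_month": 6, "discharge_dispo": "rehab"}
b = dict(a)  # an identical record from the other registry
if block_key(a) == block_key(b) and match_weight(a, b) >= 3.0:
    print("link")  # -> link
```

Agreements on rare, discriminating fields (low u) contribute large positive weights, while disagreements push the total negative, so a single threshold trades off positive predictive value against sensitivity, exactly the two metrics reported above.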

  5. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization, by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  6. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum

    International Nuclear Information System (INIS)

    Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H

    2015-01-01

The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, which makes it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active-set deconvolution to derive a transit time spectrum from a coded-excitation chirp signal and the measured output signal. The ultrasound wave travels along a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched filtering has better accuracy (standard deviations of 0.13 μs versus 0.18 μs), deconvolution has a 3.5 times better side-lobe to main-lobe ratio. Higher side-lobe suppression is important for further improving image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)
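The matched-filtering baseline amounts to correlating the received signal with the known chirp excitation: each propagation path compresses to a peak whose position gives that path's transit time. A toy sketch with two overlapping arrivals; the sampling rate, chirp parameters, and delays are invented for illustration:

```python
import numpy as np

fs = 10e6                        # 10 MHz sampling rate (illustrative)
t = np.arange(200) / fs          # 20 us linear chirp, 0.5 -> 1.5 MHz
k = (1.5e6 - 0.5e6) / 20e-6      # chirp rate in Hz/s
chirp = np.sin(2 * np.pi * (0.5e6 * t + 0.5 * k * t ** 2))

# Received signal: direct path arriving at sample 300 (30 us) overlapped
# by a weaker reflected path arriving at sample 340 (34 us).
rx = np.zeros(800)
rx[300:500] += chirp
rx[340:540] += 0.6 * chirp

# Matched filtering = correlating the output with the known excitation.
mf = np.correlate(rx, chirp, mode="valid")

# The two largest (well-separated) peaks give the two transit times.
first = int(np.argmax(np.abs(mf)))
masked = np.abs(mf).copy()
masked[max(0, first - 20):first + 20] = 0.0
second = int(np.argmax(masked))
print(sorted([first, second]))   # two transit times, near samples 300 and 340
```

The overlap that motivates the paper is visible here: the two arrivals are closer together than the chirp duration, yet pulse compression still separates them, at the cost of side lobes that deconvolution suppresses better.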

  7. Influence of a Prolonged Tennis Match Play on Serve Biomechanics.

    Directory of Open Access Journals (Sweden)

    Caroline Martin

Full Text Available The aim of this study was to quantify kinematic, kinetic, and performance changes that occur in the serve throughout a prolonged tennis match. Serves of eight male advanced tennis players were recorded with a motion capture system before, at mid-match, and after a 3-hour tennis match. Before and after each match, electromyographic data of 8 upper limb muscles obtained during isometric maximal voluntary contraction were compared to determine the presence of muscular fatigue. Vertical ground reaction forces, rating of perceived exertion, ball speed, and ball impact height were measured. Kinematic and upper limb kinetic variables were computed. The results show a decrease in mean power frequency values for several upper limb muscles, which is an indicator of local muscular fatigue. Decreases in serve ball speed, ball impact height, and maximal angular velocities, and an increase in rating of perceived exertion, were also observed between the beginning and the end of the match. With fatigue, the majority of the upper limb joint kinetics decreased at the end of the match. No change in timing of maximal angular velocities was observed between the beginning and the end of the match. Prolonged tennis match play may induce fatigue in upper limb muscles, which decreases performance and causes changes in serve maximal angular velocities and joint kinetics. The consistency in timing of maximal angular velocities suggests that advanced tennis players are able to maintain the temporal pattern of their serve technique in spite of muscular fatigue development.

  8. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, with productivity assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can easily be applied to both manufacturing and service industries.
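The selection problem reduces to a 0/1 program: each binary variable indicates whether a technique is adopted, the objective is the (linear) total productivity gain, and resource limits form the constraints. For a handful of techniques the same model can be solved exactly by enumeration, which the sketch below does with hypothetical gains, costs, and a single budget constraint:

```python
from itertools import combinations

# Hypothetical techniques: (name, productivity gain, implementation cost).
TECHNIQUES = [
    ("operator training", 0.08, 3.0),
    ("preventive maintenance", 0.12, 5.0),
    ("line rebalancing", 0.10, 4.0),
    ("equipment upgrade", 0.20, 9.0),
]
BUDGET = 9.0

def best_combination(techniques, budget):
    """Exhaustive 0/1 selection: maximize total (linear) gain within budget.

    A stand-in for the paper's mixed integer program; for a handful of
    binary variables, enumeration solves the same model exactly.
    """
    best_gain, best_set = 0.0, ()
    for r in range(1, len(techniques) + 1):
        for combo in combinations(techniques, r):
            cost = sum(c for _, _, c in combo)
            gain = sum(g for _, g, _ in combo)
            if cost <= budget and gain > best_gain:
                best_gain, best_set = gain, combo
    return best_gain, [name for name, _, _ in best_set]

gain, chosen = best_combination(TECHNIQUES, BUDGET)
print(round(gain, 2), chosen)  # -> 0.22 ['preventive maintenance', 'line rebalancing']
```

With the paper's fifty-four techniques, enumeration (2^54 subsets) is infeasible, which is why the problem is handed to a mixed-integer-programming solver instead; the model itself is unchanged.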

  9. UltiMatch-NL: a Web service matchmaker based on multiple semantic filters.

    Science.gov (United States)

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely Signature-based and Description-based, on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. It is thus a further step towards fully automated Web service discovery, making this process more semantic-aware. In addition, a new technique is proposed to weight and combine the results of the different filters of UltiMatch-NL automatically. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. To evaluate UltiMatch-NL, the OWLS-TC repository is used. Performance evaluation based on standard measures from the information retrieval field shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters.

  10. UltiMatch-NL: A Web Service Matchmaker Based on Multiple Semantic Filters

    Science.gov (United States)

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely Signature-based and Description-based, on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. It is thus a further step towards fully automated Web service discovery, making this process more semantic-aware. In addition, a new technique is proposed to weight and combine the results of the different filters of UltiMatch-NL automatically. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. To evaluate UltiMatch-NL, the OWLS-TC repository is used. Performance evaluation based on standard measures from the information retrieval field shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters. PMID:25157872

  11. Matching Games with Additive Externalities

    DEFF Research Database (Denmark)

    Branzei, Simina; Michalak, Tomasz; Rahwan, Talal

    2012-01-01

    Two-sided matchings are an important theoretical tool used to model markets and social interactions. In many real life problems the utility of an agent is influenced not only by their own choices, but also by the choices that other agents make. Such an influence is called an externality. Whereas ...

  12. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  13. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  14. Study of hydrogen-molecule guests in type II clathrate hydrates using a force-matched potential model parameterised from ab initio molecular dynamics

    Science.gov (United States)

    Burnham, Christian J.; Futera, Zdenek; English, Niall J.

    2018-03-01

    The force-matching method has been applied to parameterise an empirical potential model for water-water and water-hydrogen intermolecular interactions for use in clathrate-hydrate simulations containing hydrogen guest molecules. The underlying reference simulations constituted ab initio molecular dynamics (AIMD) of clathrate hydrates with various occupations of hydrogen-molecule guests. It is shown that the resultant model is able to reproduce AIMD-derived free-energy curves for the movement of a tagged hydrogen molecule between the water cages that make up the clathrate, thus giving us confidence in the model. Furthermore, with the aid of an umbrella-sampling algorithm, we calculate barrier heights for the force-matched model, yielding the free-energy barrier for a tagged molecule to move between cages. The barrier heights are reasonably large, being on the order of 30 kJ/mol, and are consistent with our previous studies with empirical models [C. J. Burnham and N. J. English, J. Phys. Chem. C 120, 16561 (2016) and C. J. Burnham et al., Phys. Chem. Chem. Phys. 19, 717 (2017)]. Our results are in opposition to the literature, which claims that this system may have very low barrier heights. We also compare results to that using the more ad hoc empirical model of Alavi et al. [J. Chem. Phys. 123, 024507 (2005)] and find that this model does very well when judged against the force-matched and ab initio simulation data.
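Force matching itself is a least-squares fit: choose the empirical potential's parameters so that the forces it predicts best reproduce the reference (here AIMD) forces over many configurations. When the model force is linear in its parameters, the fit is a single linear least-squares solve, as in this 1D toy version with an invented two-term pair-force basis (nothing here is the paper's actual water-hydrogen model):

```python
import numpy as np

rng = np.random.default_rng(1)

def basis(r):
    # Two candidate pair-force basis functions of interparticle distance.
    return np.array([np.exp(-r), r ** -2])

def net_forces(x, coeffs):
    """Net 1D force on each particle from pairwise forces f(r) = coeffs . basis(r)."""
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = abs(x[i] - x[j])
                f[i] += np.sign(x[i] - x[j]) * coeffs @ basis(r)
    return f

true = np.array([2.0, 0.0])              # "reference" (AIMD stand-in) force field
rows, targets = [], []
for _ in range(50):                      # 50 random 5-particle configurations
    x = np.cumsum(rng.uniform(0.5, 2.0, size=5))   # spacings >= 0.5
    for i in range(5):
        # The force on particle i is linear in the unknown coefficients:
        row = np.zeros(2)
        for j in range(5):
            if i != j:
                r = abs(x[i] - x[j])
                row += np.sign(x[i] - x[j]) * basis(r)
        rows.append(row)
    targets.extend(net_forces(x, true))

fit, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
print(np.round(fit, 6))  # recovers the reference coefficients [2, 0]
```

Because the synthetic reference forces are generated from the same basis, the fit recovers the reference coefficients essentially exactly; with real AIMD data the residual instead measures how well the chosen functional form can represent the ab initio forces.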

  15. Job Matching and On-the-Job Training.

    OpenAIRE

    Barron, John M; Black, Dan A; Loewenstein, Mark A

    1989-01-01

Conventional analysis predicts that workers pay part of their on-the-job training costs by accepting a lower starting wage and subsequently realize a return to this investment in the form of greater wage growth. Missing from the conventional treatment of on-the-job training is a discussion of the process by which heterogeneous workers are matched to jobs requiring varying amounts of training. This matching process constitutes a key feature of the on-the-job training model that is presented i...

  16. mr: A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
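The running that mr performs at three loops can be illustrated at one loop: the strong coupling obeys a simple renormalization group equation in t = ln μ², which can be integrated numerically and checked against the closed-form one-loop solution. The sketch below uses RK4 and PDG-like input values; it is far cruder than the library's multi-loop matching and running:

```python
import math

def beta0(nf):
    # One-loop QCD beta-function coefficient for nf active flavors.
    return 11.0 - 2.0 * nf / 3.0

def run_alpha_s(alpha0, mu0, mu, nf=5, steps=1000):
    """One-loop running of alpha_s by RK4 integration in t = ln(mu^2).

    Far simpler than mr's three-loop running with multi-loop matching,
    but it shows the structure: integrate the RGE from mu0 up to mu.
    """
    t0, t1 = math.log(mu0 ** 2), math.log(mu ** 2)
    h = (t1 - t0) / steps
    a = alpha0
    rhs = lambda a: -beta0(nf) / (4.0 * math.pi) * a * a
    for _ in range(steps):
        k1 = rhs(a)
        k2 = rhs(a + 0.5 * h * k1)
        k3 = rhs(a + 0.5 * h * k2)
        k4 = rhs(a + h * k3)
        a += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return a

alpha_mz = 0.1181                 # alpha_s(M_Z), PDG-like input value
num = run_alpha_s(alpha_mz, 91.19, 1000.0)
# Closed-form one-loop solution, used to cross-check the integrator:
exact = alpha_mz / (1 + alpha_mz * beta0(5) / (4 * math.pi)
                    * math.log(1000.0 ** 2 / 91.19 ** 2))
print(round(num, 6), round(exact, 6))
```

The coupling decreases towards higher scales (asymptotic freedom); at higher loop orders the right-hand side gains α³ and α⁴ terms, and crossing a quark threshold requires the matching conditions that mr implements.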

  17. mr: A C++ library for the matching and running of the Standard Model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kniehl, Bernd A.; Veretin, Oleg L. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Pikelner, Andrey F. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2016-01-15

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  18. Application of Convolution Perfectly Matched Layer in MRTD scattering model for non-spherical aerosol particles and its performance analysis

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Jiang, Zidong; Liu, Lei; Chen, Ming

    2017-10-01

    The performance of the absorbing boundary condition (ABC) is an important factor influencing the simulation accuracy of the MRTD (Multi-Resolution Time-Domain) scattering model for non-spherical aerosol particles. To this end, the Convolution Perfectly Matched Layer (CPML), an excellent ABC in the FDTD scheme, is generalized and applied to the MRTD scattering model developed by our team. In this model, the time domain is discretized by an exponential differential scheme, and the discretization of the space domain is implemented by the Galerkin principle. To evaluate the performance of CPML, its simulation results are compared with those of BPML (Berenger's Perfectly Matched Layer) and ADE-PML (Perfectly Matched Layer with Auxiliary Differential Equation) for spherical and non-spherical particles, and their simulation errors are analyzed as well. The simulation results show that, for scattering phase matrices, the performance of CPML is better than that of BPML; the computational accuracy of CPML is comparable to that of ADE-PML on the whole, but at scattering angles where phase matrix elements fluctuate sharply, the performance of CPML is slightly better than that of ADE-PML. After the orientation averaging process, the differences among the results of different ABCs are reduced to some extent. It can also be found that ABCs have a much weaker influence on integral scattering parameters (such as extinction and absorption efficiencies) than on scattering phase matrices; this phenomenon can be explained by the error averaging process in the numerical volume integration.

  19. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    Science.gov (United States)

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely, Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data achieved, on average, higher matching percentages of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
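The upsampling idea can be sketched in a few lines of pure Python, using cubic Hermite segments with finite-difference tangents as a stand-in for the PCHIP/spline interpolators the paper uses; the signal and upsampling factor below are illustrative, not the paper's ECG data:

```python
# Minimal sketch: upsample a uniformly sampled signal by an integer factor
# using cubic Hermite segments (a simplified stand-in for PCHIP/SPLINE).
import math

def hermite_upsample(samples, factor):
    """Upsample with cubic Hermite segments; tangents are finite differences
    (central in the interior, one-sided at the ends)."""
    n = len(samples)
    m = [0.0] * n
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        m[i] = (samples[hi] - samples[lo]) / (hi - lo or 1)
    out = []
    for i in range(n - 1):
        for k in range(factor):
            t = k / factor
            h00 = 2 * t**3 - 3 * t**2 + 1   # Hermite basis functions
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out.append(h00 * samples[i] + h10 * m[i]
                       + h01 * samples[i + 1] + h11 * m[i + 1])
    out.append(samples[-1])
    return out

# Toy demonstration: a coarsely sampled sine wave upsampled 4x.
coarse = [math.sin(2 * math.pi * i / 8) for i in range(9)]
fine = hermite_upsample(coarse, 4)
print(len(coarse), len(fine))  # 9 33
```

The interpolant reproduces the original samples exactly at the knots and fills in smooth values between them, which is what lets the downstream matching methods (CC, PRD, WDM) operate as if the recording had been acquired at a higher rate.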

  20. Employment Effects of Service Offshoring: Evidence from Matched Firms

    OpenAIRE

    Rosario Crinò

    2009-01-01

    This paper studies the effects of service offshoring on the level and skill composition of domestic employment, using a rich data set of Italian firms and propensity score matching techniques. The results show that service offshoring has no effect on the level of employment but changes its composition in favor of high skilled workers.

  1. PENGARUH MODEL PEMBELAJARAN KOOPERATIF MAKE A MATCH BERBANTUAN SLIDE SHARE TERHADAP HASIL BELAJAR KOGNITIF IPS DAN KETERAMPILAN SOSIAL

    Directory of Open Access Journals (Sweden)

    Udin Cahya Ari Prastya

    2016-08-01

    Full Text Available This research was conducted due to problems faced by the fifth graders of Ampelgading 01 Public Elementary School. They find it difficult to understand the social science subject, as indicated by the students' learning outcomes: only 5% of the students in the class pass the Minimum Passing Criterion of 70. Teacher-centered learning reduces interaction between teacher and students and among students, which hinders the development of social skills. Therefore, an interactive learning model is needed to build a good classroom atmosphere and improve students' interactions. One such interactive model is Make a Match. This research used a quantitative, quasi-experimental method; the quasi-experimental design used was a nonequivalent control group design, with independent t-tests computed in SPSS 16 for data analysis. Following the treatment in the experimental class using the cooperative 'Make a Match' model assisted by slide share, the average posttest grade of the control group was 66.15, while the experimental class averaged 75.18; the control class obtained an average social skills score of 45, versus 61 for the experimental class. The t-test on the gain scores between pretest and posttest yields a significance value of 0.000 for cognitive learning, and social skills likewise show a significance value of 0.000. Since 0.000 < 0.05, the cooperative 'Make a Match' model assisted by slide share has a significant effect on social science cognitive outcomes and social skills. (Abstract in Indonesian, translated:) This research was carried out because of problems faced by the fifth-grade students of SDN Ampelgading 01. They find it difficult to understand the social science (IPS) material, as shown by the students' learning outcomes: only 5% of all students score above the Minimum Passing Criterion (KKM) of 70. Teacher-centered instruction results in a lack of interaction between

  2. Efficient Topological Localization Using Global and Local Feature Matching

    Directory of Open Access Journals (Sweden)

    Junqiu Wang

    2013-03-01

    Full Text Available We present an efficient vision-based global topological localization approach in which different image features are used in a coarse-to-fine matching framework. The Orientation Adjacency Coherence Histogram (OACH), a novel image feature, is proposed to improve the coarse localization. The coarse localization results are taken as inputs for the fine localization, which is carried out by matching Harris-Laplace interest points characterized by the SIFT descriptor. The computation of OACHs and interest points is efficient due to the fact that these features are computed in an integrated process. The matching of local features is improved by using an approximate nearest neighbor search technique. We have implemented and tested the localization system in real environments. The experimental results demonstrate that our approach is efficient and reliable in both indoor and outdoor environments. This work has also been compared with previous works. The comparison results show that our approach has better performance, with a higher correct ratio and lower computational complexity.

  3. Best matching theory & applications

    CERN Document Server

    Moghaddam, Mohsen

    2017-01-01

    Mismatch or best match? This book demonstrates that best matching of individual entities to each other is essential to ensure smooth conduct and successful competitiveness in any distributed system, natural and artificial. Interactions must be optimized through best matching in planning and scheduling, enterprise network design, transportation and construction planning, recruitment, problem solving, selective assembly, team formation, sensor network design, and more. Fundamentals of best matching in distributed and collaborative systems are explained by providing: § Methodical analysis of various multidimensional best matching processes § Comprehensive taxonomy, comparing different best matching problems and processes § Systematic identification of systems’ hierarchy, nature of interactions, and distribution of decision-making and control functions § Practical formulation of solutions based on a library of best matching algorithms and protocols, ready for direct applications and apps development. Design...

  4. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  5. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
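The core of the procedure described above — cumulating residuals over a covariate and comparing the observed process with zero-mean perturbed realizations — can be sketched as follows. This uses simulated data and a simple Gaussian-multiplier approximation to the null distribution, not the paper's exact Gaussian-process construction or its medical data:

```python
# Sketch of a cumulative-residual model check: sup|cumsum of residuals|
# over a covariate, compared against Gaussian-multiplier realizations.
import random

random.seed(42)
n = 200
x = [random.uniform(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 0.3) for xi in x]  # truly linear model

# Ordinary least squares fit of y = a + b*x.
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

def sup_cumsum(r, order):
    """Supremum of |cumulative sum| of r taken in the given order."""
    s, sup = 0.0, 0.0
    for i in order:
        s += r[i]
        sup = max(sup, abs(s))
    return sup

order = sorted(range(n), key=lambda i: x[i])  # cumulate over the covariate
observed = sup_cumsum(resid, order)

# Approximate the null distribution by multiplying residuals with N(0,1)
# draws and recomputing the supremum statistic.
sims = []
for _ in range(500):
    g = [random.gauss(0, 1) for _ in range(n)]
    sims.append(sup_cumsum([ri * gi for ri, gi in zip(resid, g)], order))
p_value = sum(s >= observed for s in sims) / len(sims)
print(round(p_value, 2))
```

A small p-value would indicate that the observed cumulative-residual process wanders further than natural variation allows, i.e., evidence of a misspecified functional form; here the model is correctly specified, so no systematic trend is expected.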

  6. The Basic Economics of Match Fixing in Sport Tournaments

    OpenAIRE

    Raul Caruso

    2009-01-01

    Match-fixing is a recurring phenomenon in sport contests. This paper presents a simple formal model in order to explain it. The intuition behind it is that an asymmetry in the evaluation of the stake is the key factor leading to match-fixing or to tacit collusion. In particular, it will be demonstrated that an asymmetry in the evaluation of the stake can lead to a concession from one agent to the other and then to match-fixing. It is also demonstrated that when the asymmetry in the evaluatio...

  7. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  8. Team Performance Indicators Explain Outcome during Women’s Basketball Matches at the Olympic Games

    Directory of Open Access Journals (Sweden)

    Anthony S. Leicht

    2017-12-01

    Full Text Available The Olympic Games is the pinnacle international sporting competition, with team sport coaches interested in key performance indicators to assist the development of match strategies for success. This study examined the relationship between team performance indicators and match outcome during the women’s basketball tournament at the Olympic Games. Team performance indicators were collated from all women’s basketball matches during the 2004–2016 Olympic Games (n = 156) and analyzed via linear (binary logistic regression) and non-linear (conditional inference (CI) classification tree) statistical techniques. The most parsimonious linear model retained “defensive rebounds”, “field-goal percentage”, “offensive rebounds”, “fouls”, “steals”, and “turnovers” with a classification accuracy of 85.6%. The CI classification tree retained four performance indicators with a classification accuracy of 86.2%. The combination of “field-goal percentage”, “defensive rebounds”, “steals”, and “turnovers” provided the greatest probability of winning (91.1%), while a combination of “field-goal percentage”, “steals”, and “turnovers” provided the greatest probability of losing (96.7%). Shooting proficiency and defensive actions were identified as key team performance indicators for Olympic female basketball success. The development of key defensive strategies and/or the selection of athletes highly proficient in defensive actions may strengthen Olympic match success. Incorporation of non-linear analyses may provide teams with superior/practical approaches for elite sporting success.

  9. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction

    Science.gov (United States)

    Iwaszczuk, Dorota; Stilla, Uwe

    2017-10-01

    Thermal infrared (TIR) images are often used to picture damaged and weak spots in the insulation of the building hull, and are widely used in thermal inspections of buildings. Such inspection of large-scale areas can be carried out by combining TIR imagery and 3D building models. This combination can be achieved via texture mapping. Automating texture mapping avoids time-consuming imaging and manual analysis of each face independently. It also provides a spatial reference for façade structures extracted from the thermal textures. In order to capture all faces, including the roofs, façades, and façades in the inner courtyard, an oblique looking camera mounted on a flying platform is used. Direct geo-referencing is usually not sufficient for precise texture extraction. In addition, 3D building models also have uncertain geometry. In this paper, therefore, a methodology for co-registration of uncertain 3D building models with airborne oblique view images is presented. For this purpose, a line-based model-to-image matching is developed, in which the uncertainties of the 3D building model, as well as of the image features, are considered. Matched linear features are used for the refinement of the exterior orientation parameters of the camera in order to ensure optimal co-registration. Moreover, this study investigates whether line tracking through the image sequence supports the matching. The accuracy of the extraction and the quality of the textures are assessed. For this purpose, appropriate quality measures are developed. The tests showed good co-registration results, particularly in cases where tracking between neighboring frames had been applied.

  10. A matching-allele model explains host resistance to parasites.

    Science.gov (United States)

    Luijckx, Pepijn; Fienberg, Harris; Duneau, David; Ebert, Dieter

    2013-06-17

    The maintenance of genetic variation and sex despite its costs has long puzzled biologists. A popular idea, the Red Queen Theory, is that under rapid antagonistic coevolution between hosts and their parasites, the formation of new rare host genotypes through sex can be advantageous as it creates host genotypes to which the prevailing parasite is not adapted. For host-parasite coevolution to lead to an ongoing advantage for rare genotypes, parasites should infect specific host genotypes and hosts should resist specific parasite genotypes. The most prominent genetics capturing such specificity are matching-allele models (MAMs), which have the key feature that resistance for two parasite genotypes can reverse by switching one allele at one host locus. Despite the lack of empirical support, MAMs have played a central role in the theoretical development of antagonistic coevolution, local adaptation, speciation, and sexual selection. Using genetic crosses, we show that resistance of the crustacean Daphnia magna against the parasitic bacterium Pasteuria ramosa follows a MAM. Simulation results show that the observed genetics can explain the maintenance of genetic variation and contribute to the maintenance of sex in the facultatively sexual host as predicted by the Red Queen Theory. Copyright © 2013 Elsevier Ltd. All rights reserved.
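The key feature described above — that resistance against two parasite genotypes reverses by switching one allele at one host locus — can be sketched in a few lines. The genotype labels here are abstract stand-ins, not the actual Daphnia/Pasteuria loci:

```python
# Matching-allele model (MAM) sketch: infection succeeds only when the
# parasite's alleles match the host's at every locus.
def infects(host, parasite):
    return all(h == p for h, p in zip(host, parasite))

p1, p2 = ("A", "B"), ("a", "B")
host = ("A", "B")       # infected by p1, resists p2
flipped = ("a", "B")    # one allele switched at one locus
print(infects(host, p1), infects(host, p2))        # True False
print(infects(flipped, p1), infects(flipped, p2))  # False True
```

No host genotype is universally resistant: switching a single allele exactly reverses which parasite succeeds, which is what gives rare host genotypes their Red Queen advantage when parasites are adapted to the common genotype.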

  11. Application of the stress wave method to automatic signal matching and to statnamic predictions

    NARCIS (Netherlands)

    Esposito, G.; Courage, W.M.G.; Foeken, R.J. van

    2000-01-01

    The Statnamic method is an increasingly popular technique to carry out loading tests on cast in-situ piles. The method has proved to be a cost-effective alternative to a static loading test. Associated with the Unloading Point Method (UPM) and with automatic signal matching, the Statnamic testing technique

  12. Dispersion and betatron matching into the linac

    International Nuclear Information System (INIS)

    Decker, F.J.; Adolphsen, C.; Corbett, W.J.; Emma, P.; Hsu, I.; Moshammer, H.; Seeman, J.T.; Spence, W.L.

    1991-05-01

    In high energy linear colliders, the low emittance beam from a damping ring has to be preserved all the way to the linac, in the linac and to the interaction point. In particular, the Ring-To-Linac (RTL) section of the SLAC Linear Collider (SLC) should provide an exact betatron and dispersion match from the damping ring to the linac. A beam with a non-zero dispersion shows up immediately as an increased emittance, while with a betatron mismatch the beam filaments in the linac. Experimental tests and tuning procedures have shown that the linearized beta matching algorithms are insufficient if the actual transport line has some unknown errors not included in the model. Also, adjusting quadrupole strengths steers the beam if it is offset in the quadrupole magnets. These and other effects have led to a lengthy tuning process, which in the end improves the matching, but is not optimal. Different ideas will be discussed which should improve this matching procedure and make it a more reliable, faster and simpler process. 5 refs., 2 figs

  13. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, a plug-in that could link with the corresponding version of SPSS, and the propensity score matching package were installed. A PS Matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest neighbor matching were achieved with the PS Matching module, and the results of qualitative and quantitative statistical description and evaluation of the matching were presented in graphical form. Propensity score matching can be accomplished conveniently using SPSS software.
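The two steps the module automates — propensity score estimation followed by nearest-neighbor matching — can be sketched in pure Python. The data are toy values and the logistic fit is a plain gradient-ascent loop, not the estimation SPSS/R performs internally:

```python
# Sketch: (1) estimate propensity scores with logistic regression,
# (2) 1:1 nearest-neighbor match each treated unit to a control.
import math
import random

random.seed(1)
# Toy cohort: one covariate x; treatment more likely for larger x.
x = [random.gauss(0, 1) for _ in range(100)]
treat = [1 if random.random() < 1 / (1 + math.exp(-xi)) else 0 for xi in x]

# Fit logistic regression P(treat | x) by plain gradient ascent.
w0, w1 = 0.0, 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for xi, ti in zip(x, treat):
        p = 1 / (1 + math.exp(-(w0 + w1 * xi)))
        g0 += ti - p
        g1 += (ti - p) * xi
    w0 += 0.01 * g0 / len(x)
    w1 += 0.01 * g1 / len(x)

score = [1 / (1 + math.exp(-(w0 + w1 * xi))) for xi in x]

# Nearest-neighbor matching on the propensity score, without replacement.
treated = [i for i, t in enumerate(treat) if t == 1]
controls = {i for i, t in enumerate(treat) if t == 0}
pairs = []
for i in treated:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(score[c] - score[i]))
    pairs.append((i, j))
    controls.remove(j)

print(len(pairs), round(w1, 2))
```

Matching without replacement, as here, removes each matched control from the pool; production implementations add options such as calipers, matching with replacement, and balance diagnostics.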

  14. Plasma focus matching conditions

    International Nuclear Information System (INIS)

    Soliman, H.M.; Masoud, M.M.; Elkhalafawy, T.A.

    1988-01-01

    Snow-plough and slug models have been used to obtain the optimum matching conditions of the plasma in the focus. The dimensions of the plasma focus device are: inner electrode radius = 2 cm, outer electrode radius = 5.5 cm, and length = 8 cm. It was found that a maximum magnetic energy of 12.26 kJ has to be delivered to the plasma focus, whose density is 10^19 cm^-3, at a focusing time of 2.55 μs and with a total external inductance of 24.2 nH. The same method is used to evaluate the optimum matching conditions for the previous coaxial discharge system, which had inner electrode radius = 1.6 cm, outer electrode radius = 3.3 cm and length = 31.5 cm. These conditions are: charging voltage = 12 kV, capacity of the condenser bank = 430 μF, plasma focus density = 10^19 cm^-3, focusing time = 8 μs and total external inductance = 60.32 nH. 3 figs., 2 tabs

  15. Matching Students to Schools

    Directory of Open Access Journals (Sweden)

    Dejan Trifunovic

    2017-08-01

    Full Text Available In this paper, we present the problem of matching students to schools by using different matching mechanisms. This market is specific since public schools are free and the price mechanism cannot be used to determine the optimal allocation of children to schools. Therefore, it is necessary to use different matching algorithms that mimic the market mechanism and enable us to determine the core of the cooperative game. We show that it is possible to apply cooperative game theory to matching problems. This review paper is based on illustrative examples aiming to compare matching algorithms in terms of incentive compatibility, stability and efficiency of the matching. We also present some specific problems that may occur in matching, such as improving the quality of schools, favoring minority students, the limited length of the list of preferences, and generating strict priorities from weak priorities.
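One classic mechanism compared in such reviews is student-proposing deferred acceptance (Gale-Shapley), which produces a stable matching and is strategy-proof for students. A minimal sketch with made-up preference lists and one seat per school:

```python
# Student-proposing deferred acceptance with unit school capacities.
def deferred_acceptance(student_prefs, school_prefs):
    # Lower rank index = more preferred by the school.
    rank = {s: {st: r for r, st in enumerate(p)} for s, p in school_prefs.items()}
    held = {}                              # school -> tentatively admitted student
    nxt = {st: 0 for st in student_prefs}  # next school each student proposes to
    free = list(student_prefs)
    while free:
        st = free.pop()
        if nxt[st] >= len(student_prefs[st]):
            continue                       # student has exhausted their list
        sc = student_prefs[st][nxt[st]]
        nxt[st] += 1
        cur = held.get(sc)
        if cur is None:
            held[sc] = st                  # seat was empty: hold the proposer
        elif rank[sc][st] < rank[sc][cur]:
            held[sc] = st                  # school prefers the new proposer
            free.append(cur)               # displaced student proposes again
        else:
            free.append(st)                # rejected: try the next school
    return {st: sc for sc, st in held.items()}

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s2", "s1", "s3"], "B": ["s1", "s2", "s3"]}
match = deferred_acceptance(students, schools)
print(match)  # s3 is left unmatched: only two seats, and both schools rank s3 last
```

Tentative ("deferred") acceptance is what guarantees stability: no student-school pair exists that would prefer each other over their assigned outcome.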

  16. Myocardium tracking via matching distributions.

    Science.gov (United States)

    Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali

    2009-01-01

    The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on the pixelwise information as in existing approaches, distribution matching compares intensity distributions, and consequently, is well-suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve to the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high gradient of intensity. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, and efficiently implemented via level-set. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods. The algorithm also yields a satisfying reproducibility. 
Distribution matching leads to a myocardium tracking which is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., does not
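The similarity measure named above, the Bhattacharyya coefficient, is straightforward to compute for discrete histograms; it is 1 for identical distributions and 0 for distributions with disjoint support. The histograms below are synthetic, not sampled from cardiac MR frames:

```python
# Bhattacharyya coefficient between two normalized intensity histograms.
import math

def bhattacharyya(p, q):
    """Both inputs must be discrete distributions over the same bins."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

inside  = [0.1, 0.4, 0.3, 0.2]   # e.g., histogram inside the evolving curve
model   = [0.1, 0.4, 0.3, 0.2]   # model distribution from the previous frame
shifted = [0.4, 0.3, 0.2, 0.1]

b_same = bhattacharyya(inside, model)
b_diff = bhattacharyya(inside, shifted)
print(round(b_same, 3), round(b_diff, 3))
```

In the tracking functional, maximizing this coefficient drives the curve toward a region whose intensity distribution matches the model distribution estimated from the previous frame.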

  17. Matching methods evaluation framework for stereoscopic breast x-ray images.

    Science.gov (United States)

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

    Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 indicates a perfect match). LSAD was selected for generating the disparity maps.
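The core of SAD-family block matching (to which LSAD belongs) can be sketched for one-dimensional scanlines: for each pixel in the left line, find the horizontal shift in the right line that minimizes the sum of absolute differences over a window. Real stereo matchers add local intensity scaling, 2-D windows, and the multiresolution pyramid described above; the scanlines below are toy data:

```python
# SAD block matching on 1-D scanlines: disparity d means the right-image
# pixel at position i-d corresponds to the left-image pixel at position i.
def sad_disparity(left, right, window=1, max_disp=4):
    n = len(left)
    disp = []
    for i in range(n):
        best, best_d = float("inf"), 0
        for d in range(max_disp + 1):
            cost = 0.0
            for k in range(-window, window + 1):
                li, ri = i + k, i + k - d
                if 0 <= li < n and 0 <= ri < n:
                    cost += abs(left[li] - right[ri])
                else:
                    cost += 255  # penalize out-of-bounds comparisons
            if cost < best:
                best, best_d = cost, d
        disp.append(best_d)
    return disp

# The right scanline is the left one shifted by 2 pixels.
left = [0, 0, 0, 0, 50, 200, 50, 0]
right = [0, 0, 50, 200, 50, 0, 0, 0]
disp = sad_disparity(left, right)
print(disp)
```

Around the textured peak the minimum-cost shift recovers the true disparity of 2; in the flat regions the cost is ambiguous, which is exactly why local scaling, larger windows, and coarse-to-fine pyramids are used in practice.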

  18. Hybrid silicon mode-locked laser with improved RF power by impedance matching

    Science.gov (United States)

    Tossoun, Bassem; Derickson, Dennis; Srinivasan, Sudharsanan; Bowers, John

    2015-02-01

    We design and discuss an impedance matching solution for a hybrid silicon mode-locked laser diode (MLLD) to improve peak optical power coming from the device. In order to develop an impedance matching solution, a thorough measurement and analysis of the MLLD as a function of bias on each of the laser segments was carried out. A passive component impedance matching network was designed at the operating frequency of 20 GHz to optimize RF power delivery to the laser. The hybrid silicon laser was packaged together in a module including the impedance matching circuit. The impedance matching design resulted in a 6 dB (electrical) improvement in the detected modulation spectrum power, as well as approximately a 10 dB phase noise improvement, from the MLLD. Also, looking ahead to possible future work, we discuss a Step Recovery Diode (SRD) driven impulse generator, which wave-shapes the RF drive to achieve efficient injection. This novel technique addresses the time varying impedance of the absorber as the optical pulse passes through it, to provide optimum optical pulse shaping.

  19. 3D OBJECT COORDINATES EXTRACTION BY RADARGRAMMETRY AND MULTI STEP IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    A. Eftekhari

    2013-09-01

    Full Text Available Nowadays, with high resolution SAR imaging systems such as Radarsat-2, TerraSAR-X and COSMO-SkyMed, three-dimensional terrain data extraction using SAR images is growing. InSAR and Radargrammetry are the two most common approaches for extracting 3D object coordinates from SAR images. Research has shown that extracting terrain elevation data using the satellite repeat-pass SAR interferometry technique is problematic due to atmospheric factors and the lack of coherence between the images in areas with dense vegetation cover, so the Radargrammetry technique can be effective. Generally, height derivation by Radargrammetry consists of two stages: image matching and space intersection. In this paper we propose a multi-stage algorithm founded on the combination of feature-based and area-based image matching. The RPCs calculated for each image are then used to extract 3D coordinates at the matched points. Finally, the calculated coordinates are compared with coordinates extracted from a 1-meter DEM. The results show a root mean square error of 3.09 meters for 360 points. We use a pair of spotlight TerraSAR-X images from JAM (IRAN) in this article.

  20. Matching Properties of Femtofarad and Sub-Femtofarad MOM Capacitors

    KAUST Repository

    Omran, Hesham

    2016-04-21

    Small metal-oxide-metal (MOM) capacitors are essential to energy-efficient mixed-signal integrated circuit design. However, only a few reports discuss their matching properties based on large sets of measured data. In this paper, we report matching properties of femtofarad and sub-femtofarad MOM vertical-field parallel-plate capacitors and lateral-field fringing capacitors. We study the effect of both the finger-length and finger-spacing on the mismatch of lateral-field capacitors. In addition, we compare the matching properties and the area efficiency of vertical-field and lateral-field capacitors. We use a direct mismatch measurement technique, and we illustrate its feasibility using experimental measurements and Monte Carlo simulations. The test-chips are fabricated in a 0.18 μm CMOS process. A large number of test structures (4800) is characterized, which improves the statistical reliability of the extracted mismatch information. Despite conventional wisdom, extensive measurements show that vertical-field and lateral-field MOM capacitors have the same matching properties when the actual capacitor area is considered. Measurements show that the mismatch depends on the capacitor area but not on the spacing; thus, for a given mismatch specification, the lateral-field MOM capacitor can have arbitrarily small capacitance by increasing the spacing between the capacitor fingers, at the expense of increased chip area.

  1. Matching Properties of Femtofarad and Sub-Femtofarad MOM Capacitors

    KAUST Repository

    Omran, Hesham; Alahmadi, Hamzah; Salama, Khaled N.

    2016-01-01

    Small metal-oxide-metal (MOM) capacitors are essential to energy-efficient mixed-signal integrated circuit design. However, only a few reports discuss their matching properties based on large sets of measured data. In this paper, we report matching properties of femtofarad and sub-femtofarad MOM vertical-field parallel-plate capacitors and lateral-field fringing capacitors. We study the effect of both the finger-length and finger-spacing on the mismatch of lateral-field capacitors. In addition, we compare the matching properties and the area efficiency of vertical-field and lateral-field capacitors. We use a direct mismatch measurement technique, and we illustrate its feasibility using experimental measurements and Monte Carlo simulations. The test-chips are fabricated in a 0.18 μm CMOS process. A large number of test structures (4800) is characterized, which improves the statistical reliability of the extracted mismatch information. Despite conventional wisdom, extensive measurements show that vertical-field and lateral-field MOM capacitors have the same matching properties when the actual capacitor area is considered. Measurements show that the mismatch depends on the capacitor area but not on the spacing; thus, for a given mismatch specification, the lateral-field MOM capacitor can have arbitrarily small capacitance by increasing the spacing between the capacitor fingers, at the expense of increased chip area.

  2. Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)

    Science.gov (United States)

    Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

    2017-08-01

    The issue of modeling association football prediction has become increasingly popular in the last few years, and many different prediction approaches have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches and Bayesian approaches. Lately, many studies on football prediction models have used Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of home win (H), away win (A) and draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012 and 2012-2013 has been selected and reviewed. K-fold cross validation has been used to test the accuracy of the prediction model. The required football data are sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that the results can serve as benchmark output for future research in predicting football match results.
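    The k-fold procedure described above can be sketched in a few lines. The snippet below is an illustrative stand-in only: it uses a trivial majority-class predictor in place of the paper's Bayesian network, and the synthetic match outcomes are invented for the example.

```python
import random
from collections import Counter

def k_fold_accuracy(data, k=10, seed=0):
    """Estimate predictive accuracy via k-fold cross validation.

    `data` is a list of (features, label) pairs; the majority-class rule
    here is a placeholder for the paper's Bayesian network classifier."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        majority = Counter(label for _, label in train).most_common(1)[0][0]
        correct = sum(1 for _, label in test if label == majority)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

# Synthetic results: H (home win) is the most common outcome.
matches = [(None, "H")] * 50 + [(None, "A")] * 30 + [(None, "D")] * 20
print(round(k_fold_accuracy(matches), 2))  # → 0.5: "H" covers half the matches
```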

  3. Evaluation of goal kicking performance in international rugby union matches.

    Science.gov (United States)

    Quarrie, Kenneth L; Hopkins, Will G

    2015-03-01

    Goal kicking is an important element in rugby but has been the subject of minimal research. Our aim was to develop and apply a method to describe the on-field pattern of goal kicking and to rank the goal-kicking performance of players in international rugby union matches, in a longitudinal observational study. A generalized linear mixed model was used to analyze goal-kicking performance in a sample of 582 international rugby matches played from 2002 to 2011. The model adjusted for kick distance, kick angle, a rating of the importance of each kick, and venue-related conditions. Overall, 72% of the 6769 kick attempts were successful. Forty-five percent of points scored during the matches resulted from goal kicks, and in 5.7% of the matches the result hinged on the outcome of a kick attempt. There was an extremely large decrease in success with increasing distance (odds ratio for 2 SD distance 0.06, 90% confidence interval 0.05-0.07) and a small decrease with increasingly acute angle away from the mid-line of the goal posts (odds ratio for 2 SD angle 0.44, 0.39-0.49). Differences between players were typically small (odds ratio for 2 between-player SD 0.53, 0.45-0.65). The generalized linear mixed model with its random-effect solutions provides a tool for ranking the performance of goal kickers in rugby. This modelling approach could be applied to other performance indicators in rugby and to other sports in which discrete outcomes are measured repeatedly on players or teams. Copyright © 2015. Published by Elsevier Ltd.
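    The reported effect sizes can be reproduced mechanically from a fitted logistic model: the odds ratio for a 2 SD change in a predictor is exp(2 * SD * beta). The coefficients and standard deviation below are hypothetical placeholders, not the paper's estimates.

```python
import math

def odds_ratio_2sd(beta, sd):
    """Odds ratio for a 2-standard-deviation increase in a predictor
    with logistic regression coefficient `beta` (per unit)."""
    return math.exp(beta * 2 * sd)

def success_prob(intercept, coefs, values):
    """Logistic success probability for a kick with given predictor values."""
    z = intercept + sum(b * x for b, x in zip(coefs, values))
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficient (per metre) and SD; not the paper's fit.
beta_dist, sd_dist = -0.10, 14.0
print(round(odds_ratio_2sd(beta_dist, sd_dist), 3))        # → 0.061, same order as the paper's 0.06
print(round(success_prob(2.0, [beta_dist], [30.0]), 2))    # → 0.27 for a 30 m kick
```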

  4. Matched case-control studies: a review of reported statistical methodology

    Directory of Open Access Journals (Sweden)

    Niven DJ

    2012-04-01

    Daniel J Niven1, Luc R Berthiaume2, Gordon H Fick1, Kevin B Laupland1; 1Department of Critical Care Medicine, Peter Lougheed Centre, Calgary, 2Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada. Background: Case-control studies are a common and efficient means of studying rare diseases or illnesses with long latency periods. Matching of cases and controls is frequently employed to control the effects of known potential confounding variables. The analysis of matched data requires specific statistical methods. Methods: The objective of this study was to determine the proportion of published, peer-reviewed matched case-control studies that used statistical methods appropriate for matched data. Using a comprehensive set of search criteria we identified 37 matched case-control studies for detailed analysis. Results: Among these 37 articles, only 16 studies (43%) were analyzed with proper statistical techniques. Studies that were properly analyzed were more likely to have included case patients with cancer and cardiovascular disease compared to those that did not use proper statistics (10/16 or 63%, versus 5/21 or 24%; P = 0.02). They were also more likely to have matched multiple controls for each case (14/16 or 88%, versus 13/21 or 62%; P = 0.08). In addition, studies with properly analyzed data were more likely to have been published in a journal with an impact factor listed in the top 100 according to the Journal Citation Reports index (12/16 or 69%, versus 1/21 or 5%; P ≤ 0.0001). Conclusion: The findings of this study raise concern that the majority of matched case-control studies report results that are derived from improper statistical analyses. This may lead to errors in estimating the relationship between a disease and exposure, as well as the incorrect adaptation of emerging medical literature. Keywords: case-control, matched, dependent data, statistics
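    For matched pairs with a binary exposure, one appropriate technique of the kind the authors allude to is McNemar's test, which uses only the discordant pairs. A minimal stdlib sketch (the counts are invented for illustration):

```python
import math

def mcnemar(b, c):
    """McNemar's chi-square test for 1:1 matched pairs.

    b = pairs where only the case is exposed, c = pairs where only the
    control is exposed; concordant pairs carry no information here."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)   # with continuity correction
    p = math.erfc(math.sqrt(chi2 / 2))       # survival function of chi-square, 1 df
    return chi2, p

chi2, p = mcnemar(b=25, c=10)
print(round(chi2, 2), round(p, 4))
```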

  5. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, which may make it difficult to achieve a personnel-job match that satisfies each party. Therefore, this paper puts forward a new personnel selection method based on bilateral matching. Starting from the employment principle of "satisfaction", satisfaction evaluation indicator systems for each party are constructed. The multi-objective optimization model is given according to ...

  6. GRAVTool, Advances on the Package to Compute Geoid Model path by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    Science.gov (United States)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data, or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and Global Geopotential Models (GGM), respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents the advances on the package called GRAVTool to compute geoid models by the RCR technique, following Helmert's condensation method, and its application in a study area. The study area comprises the Federal District of Brazil, with 6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show a geoid model computed by the GRAVTool package, after analysis of the density, DTM and GGM values, more adequate to the reference values used in the study area. The accuracy of the computed model (σ = ±0.058 m, RMS = 0.067 m, maximum = 0.124 m and minimum = -0.155 m), using a density value of 2.702 ± 0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ±0.073 m) of 26 randomly spaced points where the geoid was computed by the geometrical leveling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.076 m, RMS = 0.098 m, maximum = 0.320 m and minimum = -0.061 m).

  7. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread…

  8. Study of chromatic adaptation using memory color matches, Part I: neutral illuminants.

    Science.gov (United States)

    Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter

    2017-04-03

    Twelve corresponding color data sets have been obtained using the long-term memory colors of familiar objects as target stimuli. Data were collected for familiar objects with neutral, red, yellow, green and blue hues under 4 approximately neutral illumination conditions on or near the blackbody locus. The advantages of the memory color matching method are discussed in light of other, more traditional asymmetric matching techniques. Results were compared to eight corresponding color data sets available in the literature. The corresponding color data were used to test several linear (von Kries, RLAB, etc.) and nonlinear (Hunt & Nayatani) chromatic adaptation transforms (CAT). It was found that a simple two-step von Kries transform, whereby the degree of adaptation D is optimized to minimize the ΔEu'v' prediction errors, outperformed all other tested models for both the memory color and the literature corresponding color sets, with prediction errors being lower for the memory color sets. The predictive errors were substantially smaller than the standard uncertainty on the average observer and were comparable to what are considered just-noticeable differences in the CIE u'v' chromaticity diagram, supporting the use of memory color based internal references to study chromatic adaptation mechanisms.
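    A von Kries adaptation step simply rescales each cone response by the ratio of the destination and source white points, attenuated by the degree of adaptation D; the two-step variant routes the transform through an intermediate reference white. The sketch below assumes cone-like LMS triplets and made-up white points; it is not the paper's calibrated transform.

```python
def von_kries(lms, white_src, white_dst, D=1.0):
    """One-step von Kries adaptation of a cone-response triplet.

    Each cone signal is scaled by a partially adapted ratio of the
    destination and source white points; D is the degree of adaptation."""
    return [s * (D * wd / ws + (1 - D))
            for s, ws, wd in zip(lms, white_src, white_dst)]

def two_step_von_kries(lms, white_src, white_dst, white_ref, D=1.0):
    """Two-step transform: adapt source -> reference, then reference -> destination."""
    to_ref = von_kries(lms, white_src, white_ref, D)
    return von_kries(to_ref, white_ref, white_dst, D)

# Invented stimulus and white points, for illustration only.
stim = [0.4, 0.3, 0.2]
w_a, w_b, w_ref = [0.95, 1.0, 1.09], [1.02, 1.0, 0.85], [1.0, 1.0, 1.0]
print([round(v, 3) for v in two_step_von_kries(stim, w_a, w_b, w_ref)])  # → [0.429, 0.3, 0.156]
```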

  9. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  10. Mix-and-match holography

    KAUST Repository

    Peng, Yifan; Dun, Xiong; Sun, Qilin; Heidrich, Wolfgang

    2017-01-01

    …target images into pairs of front and rear phase-distorting surfaces. Different target holograms can be decoded by mixing and matching different front and rear surfaces under specific geometric alignments. We call this approach mix-and-match holography. We derive a detailed image formation model for the setting of holographic projection displays, as well as a multiplexing method based on a combination of phase retrieval methods and complex matrix factorization. We demonstrate several application scenarios in both simulation and physical prototypes.

  11. Numerical experiment on finite element method for matching data

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Kumakura, Toshimasa; Yoshimura, Koichi.

    1993-03-01

    Numerical experiments are presented on the finite element method by Pletzer-Dewar for matching data of an ordinary differential equation with regular singular points, using a model equation. Matching data play an important role in nonideal MHD stability analysis of a magnetically confined plasma. In the Pletzer-Dewar method, the Frobenius series for the 'big solution', the fundamental solution which is not square-integrable at the regular singular point, is prescribed. The experiments include studies of the convergence rate of the matching data obtained by the finite element method, and of the effect of truncating the Frobenius series at finite terms on the results of computation. It is shown from the present study that the finite element method is an effective method for obtaining the matching data with high accuracy. (author)

  12. New Ghost-node method for linking different models with varied grid refinement

    International Nuclear Information System (INIS)

    Mehl, Steffen W.; Hill, Mary Catherine; James, Scott Carlton; Leake, Stanley A.; Zyvoloski, George A.; Dickinson, Jesse E.; Eddebbarh, Al A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined 'child' model that is contained within a larger and coarser 'parent' model that is based on the iterative method of Mehl and Hill (2002, 2004). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has either matching grids (parent cells border an integer number of child cells; Figure 2a) or non-matching grids (parent cells border a non-integer number of child cells; Figure 2b). The coupled grids are simulated using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models (Mehl and Hill, 2002). When the grids are non-matching, model accuracy is slightly increased over matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to accurately couple distinct models because the overall error is less than if only the regional model was used to simulate flow in the child model's domain

  13. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    Science.gov (United States)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) at the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of…
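    Bootstrap filtering of the kind referred to above can be sketched as importance resampling: weight prior parameter samples by the likelihood of the observed data under the (surrogate) forward model, then resample. The forward model and numbers below are toy placeholders, not the site model.

```python
import math
import random

def bootstrap_filter(prior_samples, forward, observed, noise_sd, seed=0):
    """Bayesian updating by importance resampling (bootstrap filtering).

    Each prior parameter sample is weighted by the Gaussian likelihood of
    the observation under the forward model (in practice an aPC surrogate),
    then the ensemble is resampled proportionally to the weights."""
    rng = random.Random(seed)
    weights = [math.exp(-0.5 * ((forward(p) - observed) / noise_sd) ** 2)
               for p in prior_samples]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(prior_samples, weights=probs, k=len(prior_samples))

# Toy example: infer a permeability multiplier m from pressure ~ 10 / m.
prior = [0.5 + 0.01 * i for i in range(200)]          # uniform prior on [0.5, 2.5)
posterior = bootstrap_filter(prior, lambda m: 10.0 / m, observed=8.0, noise_sd=0.5)
print(round(sum(posterior) / len(posterior), 2))       # concentrates near 10/8 = 1.25
```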

  14. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  15. On Resolution Complexity of Matching Principles

    DEFF Research Database (Denmark)

    Dantchev, Stefan S.

    …proof system. The results in the thesis fall in this category. We study the Resolution complexity of some Matching Principles. The three major contributions of the thesis are as follows. Firstly, we develop a general technique for proving resolution lower bounds for the perfect matching principles based on … Chessboard, as well as for Tseitin tautologies based on the rectangular grid graph. We reduce these problems to Tiling games, a concept introduced by us, which may be of interest on its own. Secondly, we find the exact Tree-Resolution complexity of the Weak Pigeon-Hole Principle. It is the most studied…

  16. Production Efficiency and Market Orientation in Food Crops in North West Ethiopia: Application of Matching Technique for Impact Assessment.

    Directory of Open Access Journals (Sweden)

    Habtamu Yesigat Ayenew

    Agricultural technologies developed by national and international research institutions were not benefiting the rural population of Ethiopia to the extent desired. As a response, integrated agricultural extension approaches are proposed as a key strategy to transform the smallholder farming sector. The Improving Productivity and Market Success (IPMS) of Ethiopian Farmers project is one of the development projects initiated by integrating productivity-enhancing technological schemes with a market development model. This paper explores the impact of the project intervention on smallholder farmers' wellbeing. To test the research hypothesis of whether the project brought a significant change in the input use, marketed surplus, efficiency and income of farm households, we use cross-section data from 200 smallholder farmers in Northwest Ethiopia, collected through a multi-stage sampling procedure. To control for self-selection on observable characteristics of the farm households, we employ Propensity Score Matching (PSM). We then use Data Envelopment Analysis (DEA) techniques to estimate the technical efficiency of farm households. The outcome of the research is in line with the premise that participation of the household in the IPMS project improves purchased input use, marketed surplus, efficiency of farms and the overall gain from farming. The participant households on average employ more purchased agricultural inputs and gain higher gross margins from production activities as compared to the non-participant households. The non-participant households on average supply less output (measured both in monetary terms and as a proportion of total produce) to the market as compared to their participant counterparts. Except for the technical efficiency of production in potato, project participant households are better off in production efficiency compared with their non-participant counterparts. We verified the idea that Improving Productivity and Market…
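    Propensity score matching of the kind used here can be sketched end-to-end with the standard library: fit a logistic model of participation, then match each participant to the nearest non-participant on the estimated score. Everything below (single covariate, data-generating process, effect size) is synthetic for illustration, not the study's data.

```python
import math
import random

def fit_propensity(x, treated, lr=0.1, iters=2000):
    """Logistic regression of treatment on one covariate, via gradient ascent."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for xi, ti in zip(x, treated):
            p = 1 / (1 + math.exp(-(a + b * xi)))
            grad_a += ti - p
            grad_b += (ti - p) * xi
        a += lr * grad_a / n
        b += lr * grad_b / n
    return a, b

def psm_att(x, treated, outcome):
    """Average treatment effect on the treated via 1-NN matching on the score."""
    a, b = fit_propensity(x, treated)
    score = [1 / (1 + math.exp(-(a + b * xi))) for xi in x]
    controls = [i for i, t in enumerate(treated) if t == 0]
    diffs = []
    for i, t in enumerate(treated):
        if t == 1:
            j = min(controls, key=lambda c: abs(score[c] - score[i]))
            diffs.append(outcome[i] - outcome[j])
    return sum(diffs) / len(diffs)

rng = random.Random(1)
x = [rng.uniform(0, 10) for _ in range(200)]                 # e.g. farm size
treated = [1 if rng.random() < xi / 10 else 0 for xi in x]   # selection on x
outcome = [2 * xi + 5 * ti + rng.gauss(0, 1) for xi, ti in zip(x, treated)]
print(round(psm_att(x, treated, outcome), 1))                # true effect is 5
```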

  17. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    Science.gov (United States)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.

  18. Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.

    Science.gov (United States)

    Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun

    2017-07-01

    In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.

  19. Steelmaking-Casting of Molten Steel by Decarburization Ladle Matching

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2018-01-01

    Steelmaking–continuous casting is a complex process. The method of selecting a ladle, which also functions as a storage device, follows a specific process of the production plan. In ladle matching, several ladle attributes are considered; however, the matching objectives are difficult to achieve simultaneously. Different molten steel properties have contributed to the complexity of matching constraints, and thus matching optimization is regarded as a multi-conflict goal problem. In the process of optimization, the first-order rule learning method is first used to extract key ladle attributes (performance indicators), including highest temperature, usage frequency, lowest-level material, and outlet. On the basis of a number of indicators, such as ladle temperature, quantity, material, and usage frequency, as well as skateboard quantity, the ladle matching model is established. Second, the rule of ladle selection is determined by the method of least-generalization rule learning. Third, a simulation experiment is carried out according to various scheduling order strategies and matching priority combinations. Finally, the heuristic ladle matching method based on rule priority (RP) is determined for possible industrial applications. Results show that the accuracy of ladle selection can be improved. In particular, the number of ladles and maintenance times are reduced. Consequently, furnace production efficiency is also enhanced.

  20. An effective approach for iris recognition using phase-based image matching.

    Science.gov (United States)

    Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi

    2008-10-01

    This paper presents an efficient algorithm for iris recognition using phase-based image matching, an image matching technique that uses phase components in 2D Discrete Fourier Transforms (DFTs) of given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that the use of phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. This paper also discusses major implementation issues of our algorithm. In order to reduce the size of iris data and to prevent the visibility of iris images, we introduce the idea of a 2D Fourier Phase Code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art Digital Signal Processing (DSP) technology.
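    The core of phase-based matching is phase-only correlation: normalize the cross-power spectrum of the two images to unit magnitude, transform back, and read off the correlation peak. A 1-D stdlib sketch with a naive DFT (the 2-D iris pipeline adds normalization and band selection not shown here):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation_shift(f, g):
    """Estimate the circular shift between two signals from the phase of
    their cross-power spectrum (the 1-D core of phase-based matching)."""
    F, G = dft(f), dft(g)
    cross = []
    for a, b in zip(F, G):
        prod = a * b.conjugate()
        cross.append(prod / abs(prod) if abs(prod) > 1e-12 else 0)
    corr = idft(cross)
    return max(range(len(corr)), key=lambda i: corr[i].real)

f = [0, 1, 3, 7, 3, 1, 0, 0]
g = f[-2:] + f[:-2]                   # f circularly shifted by 2 samples
print(phase_correlation_shift(g, f))  # → 2
```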

  1. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH

    International Nuclear Information System (INIS)

    Volk, Jochen; Herrmann, Torsten; Wuethrich, Kurt

    2008-01-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness

  2. Multi-Criterion Two-Sided Matching of Public–Private Partnership Infrastructure Projects: Criteria and Methods

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-04-01

    Two kinds of evaluative criteria are associated with Public–Private Partnership (PPP) infrastructure projects, i.e., private evaluative criteria and public evaluative criteria. These evaluative criteria are inversely related: the higher the public benefit, the lower the private surplus. To balance evaluative criteria in the Two-Sided Matching (TSM) decision, this paper develops a quantitative matching decision model to select an optimal matching scheme for PPP infrastructure projects based on the Hesitant Fuzzy Set (HFS) under unknown evaluative criterion weights. In the model, HFS is introduced to describe the values of the evaluative criteria, and the multi-criterion information given by groups is fully considered. The optimization model is built and solved by maximizing the whole deviation of each criterion so that the evaluative criterion weights are determined objectively. Then, the match degree of the two sides is calculated and a multi-objective optimization model is introduced to select an optimal matching scheme via a min-max approach. The results provide new insights into and implications of the influence of evaluative criteria in the TSM decision.
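    The maximizing-deviation idea for objective criterion weights gives each criterion a weight proportional to how much the alternatives disagree on it. A crisp-valued sketch (the paper works with hesitant fuzzy elements; the scores here are invented):

```python
def max_deviation_weights(scores):
    """Objective criterion weights by the maximizing-deviation method.

    `scores` is a matrix (alternatives x criteria); a criterion on which
    the alternatives differ more receives a larger weight."""
    n_alt = len(scores)
    n_crit = len(scores[0])
    dev = [sum(abs(scores[i][j] - scores[k][j])
               for i in range(n_alt) for k in range(n_alt))
           for j in range(n_crit)]
    total = sum(dev)
    return [d / total for d in dev]

# Hypothetical scores of three PPP schemes on two criteria (public, private).
scores = [[0.9, 0.2], [0.5, 0.5], [0.1, 0.8]]
print([round(w, 2) for w in max_deviation_weights(scores)])  # → [0.57, 0.43]
```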

  3. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  4. Multi-data reservoir history matching of crosswell seismic, electromagnetics and gravimetry data

    KAUST Repository

    Katterbauer, Klemens

    2014-01-01

    Reservoir engineering has become of prime importance for oil and gas field development projects. With rising complexity, reservoir simulation and history matching have become critical for fine-tuning reservoir production strategies, improving subsurface formation knowledge and forecasting remaining reserves. The sparse spatial sampling of production data has posed a significant challenge for reducing the uncertainty of subsurface parameters. Seismic, electromagnetic and gravimetry techniques have found widespread application in enhancing exploration for oil and gas and in monitoring reservoirs; however, these data have mostly been interpreted and analyzed separately, rarely exploiting the synergies that may be attainable. With the incorporation of multiple data types into the reservoir history matching process comes the need to know the impact each incorporated observation has on the estimation. We present a multi-data ensemble-based history matching framework that incorporates multiple data types such as seismic, electromagnetics and gravimetry for improved reservoir history matching, and provide an adjoint-free ensemble sensitivity method to compute the impact of each observation on the estimated reservoir parameters. The incorporation of all data sets demonstrates the advantages multiple data types can provide for enhancing reservoir understanding and matching, with the impact of each data set on the matching improvement being determined by the ensemble sensitivity method.
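
    The adjoint-free idea can be illustrated in miniature: the influence of an observation type on a parameter is estimated from ensemble statistics alone, with no simulator gradients. This is a generic, hedged sketch; the variable names, the synthetic "seismic" response, and the single-parameter setting are all illustrative assumptions, not the paper's implementation.

    ```python
    import random

    # Ensemble sensitivity sketch: regress a parameter on a predicted
    # observation across ensemble members, using only sample statistics
    # (cross-covariance / variance), i.e. no adjoint of the simulator.
    def ensemble_sensitivity(params, preds):
        n = len(params)
        pm = sum(params) / n
        dm = sum(preds) / n
        cov = sum((p - pm) * (d - dm) for p, d in zip(params, preds)) / (n - 1)
        var = sum((d - dm) ** 2 for d in preds) / (n - 1)
        return cov / var

    random.seed(0)
    perm = [random.gauss(100.0, 10.0) for _ in range(200)]    # parameter ensemble
    seis = [2.0 * k + random.gauss(0.0, 1.0) for k in perm]   # toy "seismic" data
    impact = ensemble_sensitivity(perm, seis)                 # ~ 0.5 here
    ```

    The same ratio appears as the Kalman-gain building block in ensemble filters, which is why it serves as an observation-impact measure.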

  5. Hierarchical Matching of Traffic Information Services Using Semantic Similarity

    Directory of Open Access Journals (Sweden)

    Zongtao Duan

    2018-01-01

    Service matching aims to find information similar to a given query, which has numerous applications in web search. Although existing methods yield promising results, they are not applicable to transportation. In this paper, we propose a multilevel matching method based on semantic technology for efficiently searching the requested traffic information. Our approach is divided into two stages: service clustering, which prunes candidate services that are not promising, and functional matching. The function-level similarity between services is computed by grouping the connections between services into inheritance and non-inheritance relationships. We also developed a three-layer framework with a semantic similarity measure that requires less time and space than existing methods, since the set of candidate services is significantly smaller than the whole transportation network. The OWL_TC4-based service set was used to verify the proposed approach. The accuracy of offline service clustering reached 93.80%, and the response time was reduced to 651 ms when the total number of candidate services was 1000. Moreover, given different thresholds for the semantic similarity measure, the proposed mixed matching model performed better in terms of recall and precision (up to 72.7% and 80%, respectively, for more than 1000 services) than models based on information theory and taxonomic distance. These experimental results confirm the effectiveness and validity of service matching for responding quickly and accurately to user queries.
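
    The two-stage flow (prune by cluster, then rank by similarity) can be sketched as follows. This is a hedged toy: Jaccard overlap of term sets stands in for the paper's OWL-based semantic similarity, and the service registry and names are invented for illustration.

    ```python
    # Stage 1 prunes candidates to the query's cluster; stage 2 ranks the
    # survivors by a similarity score. Jaccard is a placeholder measure.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    services = {                      # hypothetical traffic-service registry
        "bus_eta":    {"cluster": "transit", "terms": ["bus", "arrival", "time"]},
        "route_plan": {"cluster": "transit", "terms": ["route", "plan", "bus"]},
        "weather":    {"cluster": "env",     "terms": ["rain", "forecast"]},
    }

    def match(query_terms, query_cluster, threshold=0.2):
        candidates = {n: s for n, s in services.items()
                      if s["cluster"] == query_cluster}        # stage 1: prune
        scored = {n: jaccard(query_terms, s["terms"])
                  for n, s in candidates.items()}
        return sorted((n for n, v in scored.items() if v >= threshold),
                      key=lambda n: -scored[n])                # stage 2: rank

    hits = match(["bus", "arrival"], "transit")
    ```

    Pruning first is what makes the response time scale with the cluster size rather than with the whole network, which is the efficiency argument of the abstract.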

  6. Matching Behavior as a Tradeoff Between Reward Maximization and Demands on Neural Computation [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Jan Kubanek

    2015-10-01

    Full Text Available When faced with a choice, humans and animals commonly distribute their behavior in proportion to the frequency of payoff of each option. Such behavior is referred to as matching and has been captured by the matching law. However, matching is not a general law of economic choice. Matching in its strict sense seems to be specifically observed in tasks whose properties make matching an optimal or a near-optimal strategy. We engaged monkeys in a foraging task in which matching was not the optimal strategy. Over-matching the proportions of the mean offered reward magnitudes would yield more reward than matching, yet, surprisingly, the animals almost exactly matched them. To gain insight into this phenomenon, we modeled the animals' decision-making using a mechanistic model. The model accounted for the animals' macroscopic and microscopic choice behavior. When the models' three parameters were not constrained to mimic the monkeys' behavior, the model over-matched the reward proportions and in doing so, harvested substantially more reward than the monkeys. This optimized model revealed a marked bottleneck in the monkeys' choice function that compares the value of the two options. The model featured a very steep value comparison function relative to that of the monkeys. The steepness of the value comparison function had a profound effect on the earned reward and on the level of matching. We implemented this value comparison function through responses of simulated biological neurons. We found that due to the presence of neural noise, steepening the value comparison requires an exponential increase in the number of value-coding neurons. Matching may be a compromise between harvesting satisfactory reward and the high demands placed by neural noise on optimal neural computation.
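
    The role of the value-comparison steepness can be shown with a minimal logistic choice rule. This is a hedged sketch under assumed numbers, not the paper's mechanistic model: a shallow slope yields choice proportions near the income proportions (matching), while a steep slope over-matches toward the richer option.

    ```python
    import math

    # Logistic (softmax-style) comparison of two option values; `beta` is the
    # steepness of the value comparison function discussed above.
    def choice_prob(v1, v2, beta):
        return 1.0 / (1.0 + math.exp(-beta * (v1 - v2)))

    v_rich, v_lean = 0.7, 0.3                       # illustrative option values
    matching_level = v_rich / (v_rich + v_lean)     # strict matching predicts 0.7
    shallow = choice_prob(v_rich, v_lean, beta=2.0)   # near matching
    steep = choice_prob(v_rich, v_lean, beta=20.0)    # strong over-matching
    ```

    With noisy value-coding neurons, achieving a large effective `beta` is what the abstract argues requires exponentially many neurons.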

  7. Latent palmprint matching.

    Science.gov (United States)

    Jain, Anil K; Feng, Jianjiang

    2009-06-01

    The evidential value of palmprints in forensic applications is clear as about 30 percent of the latents recovered from crime scenes are from palms. While biometric systems for palmprint-based personal authentication in access control type of applications have been developed, they mostly deal with low-resolution (about 100 ppi) palmprints and only perform full-to-full palmprint matching. We propose a latent-to-full palmprint matching system that is needed in forensic applications. Our system deals with palmprints captured at 500 ppi (the current standard in forensic applications) or higher resolution and uses minutiae as features to be compatible with the methodology used by latent experts. Latent palmprint matching is a challenging problem because latent prints lifted at crime scenes are of poor image quality, cover only a small area of the palm, and have a complex background. Other difficulties include a large number of minutiae in full prints (about 10 times as many as fingerprints), and the presence of many creases in latents and full prints. A robust algorithm to reliably estimate the local ridge direction and frequency in palmprints is developed. This facilitates the extraction of ridge and minutiae features even in poor quality palmprints. A fixed-length minutia descriptor, MinutiaCode, is utilized to capture distinctive information around each minutia and an alignment-based minutiae matching algorithm is used to match two palmprints. Two sets of partial palmprints (150 live-scan partial palmprints and 100 latent palmprints) are matched to a background database of 10,200 full palmprints to test the proposed system. Despite the inherent difficulty of latent-to-full palmprint matching, rank-1 recognition rates of 78.7 and 69 percent, respectively, were achieved in searching live-scan partial palmprints and latent palmprints against the background database.

  8. AN AERIAL-IMAGE DENSE MATCHING APPROACH BASED ON OPTICAL FLOW FIELD

    Directory of Open Access Journals (Sweden)

    W. Yuan

    2016-06-01

    Dense matching plays an important role in many fields, such as DEM (digital elevation model) production, robot navigation and 3D environment reconstruction. Traditional approaches may meet the demand for accuracy, but their computation time and output density are hardly acceptable. Focusing on matching efficiency and the feasibility of matching complex terrain surfaces, an aerial-image dense matching method based on the optical flow field is proposed in this paper. First, highly accurate and uniformly distributed control points are extracted using a feature-based matching method. The optical flow is then calculated from these control points so as to determine the similar region between two images. Second, the optical flow field is interpolated in the similar region using multi-level B-spline interpolation, accomplishing pixel-by-pixel coarse matching. Finally, the coarse matching results are refined based on a combined constraint, which identifies the corresponding points between the images. The experimental results show that our method achieves per-pixel dense matching with sub-pixel accuracy, fully meeting the requirements of three-dimensional reconstruction and automatic DSM generation. Comparison experiments demonstrate that our approach's matching efficiency is higher than that of semi-global matching (SGM) and patch-based multi-view stereo matching (PMVS), which verifies the feasibility and effectiveness of the algorithm.

  9. Effects of Different Missing Data Imputation Techniques on the Performance of Undiagnosed Diabetes Risk Prediction Models in a Mixed-Ancestry Population of South Africa.

    Directory of Open Access Journals (Sweden)

    Katya L Masconi

    Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods and assess their effect on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. The models' discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performance, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation yielded the highest C-statistic only for the Rotterdam Predictive model, and was matched there by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes compared to multiple imputation.
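
    Two of the simplest strategies compared in such studies can be sketched directly. This is a hedged toy with invented data: listwise deletion discards any record with a missing field, while mean imputation fills the gap and keeps the record available for model evaluation.

    ```python
    # Listwise deletion: keep only complete records.
    def listwise_delete(rows):
        return [r for r in rows if None not in r]

    # Mean imputation: replace each missing value with its column mean.
    def mean_impute(rows):
        cols = list(zip(*rows))
        means = [sum(v for v in c if v is not None) /
                 sum(v is not None for v in c) for c in cols]
        return [[v if v is not None else means[j] for j, v in enumerate(r)]
                for r in rows]

    # hypothetical records: [fasting glucose, age], None marks a missing entry
    data = [[5.1, 30], [6.2, None], [None, 25], [5.8, 40]]
    kept = listwise_delete(data)        # only 2 of 4 records survive deletion
    filled = mean_impute(data)          # all 4 records retained
    ```

    The contrast in retained sample size is the practical reason deletion tends to degrade external-validation performance.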

  10. Application of nonlinear reduction techniques in chemical process modeling: a review

    International Nuclear Information System (INIS)

    Muhaimin, Z; Aziz, N.; Abd Shukor, S.R.

    2006-01-01

    Model reduction techniques have been widely used in engineering fields such as electrical, mechanical and chemical engineering. The basic idea of a reduction technique is to replace the original system with an approximating system of much smaller state-space dimension. A reduced-order model is more beneficial to process and industrial applications for control purposes. This paper provides a review of applications of nonlinear reduction techniques in chemical processes. The advantages and disadvantages of each technique reviewed are also highlighted

  11. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
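
    One of the key techniques named above, spatial interpolation of scattered data onto a surface, can be sketched with inverse-distance weighting (IDW). IDW is one common choice among many (kriging, splines); the function and borehole samples below are illustrative assumptions.

    ```python
    # IDW interpolation of scattered elevation samples, e.g. borehole picks of
    # a geological interface, onto an arbitrary (x, y) location.
    def idw(known, x, y, power=2.0):
        """known: list of (x, y, z) samples; returns interpolated z at (x, y)."""
        num = den = 0.0
        for kx, ky, kz in known:
            d2 = (kx - x) ** 2 + (ky - y) ** 2
            if d2 == 0.0:
                return kz                       # exact hit on a sample point
            w = 1.0 / d2 ** (power / 2.0)       # weight ~ 1 / distance^power
            num += w * kz
            den += w
        return num / den

    samples = [(0, 0, 100.0), (10, 0, 110.0), (0, 10, 90.0)]
    z_exact = idw(samples, 0, 0)     # reproduces the sample: 100.0
    z_mid = idw(samples, 5, 0)       # blended elevation between samples
    ```

    Evaluating such an interpolant over a planar mesh yields the discrete geometric surface that later participates in surface–surface intersection.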

  12. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  13. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
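
    One way to reuse existing code runs for sensitivity studies, in the spirit of avoiding a response surface, is importance reweighting: stored Monte Carlo outputs are re-weighted to mimic a changed input distribution without re-running the physics code. This is a generic hedged illustration, not the SPARC-specific procedure of the paper.

    ```python
    import math
    import random

    # Self-normalized importance reweighting: estimate the output mean under a
    # new input distribution using only the original samples and outputs.
    def reweighted_mean(xs, ys, old_pdf, new_pdf):
        w = [new_pdf(x) / old_pdf(x) for x in xs]
        return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

    random.seed(1)
    xs = [random.gauss(0.0, 1.0) for _ in range(20000)]   # original input samples
    ys = [x * x for x in xs]                              # stored "code" outputs

    # unnormalized N(m, 1) density (normalization cancels in the ratio)
    pdf = lambda m: (lambda x: math.exp(-0.5 * (x - m) ** 2))
    base = reweighted_mean(xs, ys, pdf(0.0), pdf(0.0))    # ~ E[x^2] = 1
    shift = reweighted_mean(xs, ys, pdf(0.0), pdf(0.5))   # ~ 0.5^2 + 1 = 1.25
    ```

    The shift in the estimated mean quantifies the sensitivity of the output distribution to the assumed input distribution, at zero additional code-run cost.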

  14. Statistical Mechanics of a Simplified Bipartite Matching Problem: An Analytical Treatment

    Science.gov (United States)

    Dell'Erba, Matías Germán

    2012-03-01

    We perform an analytical study of a simplified bipartite matching problem in which there exists a constant matching energy, and both heterosexual and homosexual pairings are allowed. We obtain the partition function in a closed analytical form and calculate the corresponding thermodynamic functions of this model. We conclude that the model is favored at high temperatures, for which the probabilities of heterosexual and homosexual pairs tend to become equal. In the limits of low and high temperatures the system is extensive; however, this property is lost in the general case. There exists a relation between the matching energies for which the system becomes more stable under external (thermal) perturbations. As the difference between the energies of the two possible matches increases, the system becomes more ordered, while the maximum of entropy is achieved when these energies are equal. In this limit, there is a first-order phase transition between two phases with constant entropy.

  15. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating, from failure-on-demand data, the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low-failure-probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best
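
    Method (1), matching the moments of the prior to those of the data, can be sketched concretely. The fixed-point formulas below are the standard method-of-moments inversion of the Beta mean and variance; the failure-fraction data are invented for illustration.

    ```python
    # Method-of-moments fit of a Beta(alpha, beta) prior: equate the Beta
    # mean m = a/(a+b) and variance v = ab/((a+b)^2 (a+b+1)) to the sample
    # mean and variance of the observed failure fractions, then solve.
    def beta_matching_moments(fracs):
        n = len(fracs)
        m = sum(fracs) / n
        v = sum((f - m) ** 2 for f in fracs) / (n - 1)
        common = m * (1.0 - m) / v - 1.0
        return m * common, (1.0 - m) * common       # (alpha, beta)

    # hypothetical failure-on-demand fractions from several components
    alpha, beta = beta_matching_moments([0.02, 0.05, 0.01, 0.04, 0.03])
    prior_mean = alpha / (alpha + beta)             # recovers the sample mean
    ```

    By construction the fitted prior reproduces the sample mean exactly, which is what makes this estimator simple and, per the abstract, competitive at small sample sizes.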

  16. Multi-data reservoir history matching and uncertainty quantification framework

    KAUST Repository

    Katterbauer, Klemens

    2015-11-26

    A multi-data reservoir history matching and uncertainty quantification framework is provided. The framework can utilize multiple data sets such as production, seismic, electromagnetic, gravimetric and surface deformation data for improving the history matching process. The framework can consist of a geological model that is interfaced with a reservoir simulator. The reservoir simulator can interface with seismic, electromagnetic, gravimetric and surface deformation modules to predict the corresponding observations. The observations can then be incorporated into a recursive filter that subsequently updates the model state and parameter distributions, providing a general framework to quantify, and eventually reduce with the data, the uncertainty in the estimated reservoir state and parameters.

  17. Quantity precommitment and price matching

    DEFF Research Database (Denmark)

    Tumennasan, Norovsambuu

    We revisit the question of whether price matching is anti-competitive in a capacity-constrained duopoly setting. We show that the effect of price matching depends on capacity. Specifically, price matching has no effect when capacity is relatively low, but it benefits the firms when capacity is relatively high. Interestingly, when capacity is in an intermediate range, price matching benefits only the small firm but does not affect the large firm in any way. Therefore, one has to consider capacity seriously when evaluating whether price matching is anti-competitive. If the firms choose their capacities simultaneously before pricing decisions, then the effect of price matching is either pro-competitive or ambiguous. We show that if the cost of capacity is high, then price matching can only (weakly) decrease the market price. On the other hand, if the cost of capacity is low, then the effect of price matching...

  18. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model using an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of unilateral segments of the carotid arteries were damaged with a home-made corneous guide wire; twenty-four carotid artery stenosis models were thus created. DSA examination was performed at postprocedural weeks 2, 4, 8 and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with the endovascular method, with variations in pathologic characteristics and hemodynamic changes similar to those in humans. This is useful for further research on new techniques and new materials for interventional treatment. (authors)

  19. Kolmogorov and Zabih’s Graph Cuts Stereo Matching Algorithm

    Directory of Open Access Journals (Sweden)

    Vladimir Kolmogorov

    2014-10-01

    Binocular stereovision estimates the three-dimensional shape of a scene from two photographs taken from different points of view. In rectified epipolar geometry, this is equivalent to a matching problem. This article describes a method proposed by Kolmogorov and Zabih in 2001, which puts forward an energy-based formulation. The aim is to minimize a four-term energy. This energy is not convex and cannot be minimized except among a class of perturbations called expansion moves, in which case an exact minimization can be done with graph-cut techniques. One noteworthy feature of this method is that it handles occlusion: the algorithm detects points that cannot be matched with any point in the other image. In this method displacements are pixel accurate (no subpixel refinement).

  20. Matching tomographic IMRT fields with static photon fields

    International Nuclear Information System (INIS)

    Sethi, A.; Leybovich, L.; Dogan, N.; Emami, B.

    2001-01-01

    The matching of abutting radiation fields presents a challenging problem in radiation therapy. Due to the sharp penumbra of linear accelerator beams, small (1-2 mm) errors in field positioning can lead to large (>30%) hot or cold spots in the abutment region. With head and neck immobilization devices (thermoplastic mask/aquaplast), an average setup error of 3 mm has been reported; therefore, hot or cold spots approaching 50% of the prescription dose may occur along the matchline. Although abutting radiation fields have been investigated for static fields, there is no reported study on the matching of tomographic IMRT fields with static fields. Compared to static fields, the matching of tomographic IMRT fields with static fields is more complicated: since IMRT and static fields are planned on separate treatment planning computers, the dose in the abutment region is not specified. In addition, commonly used techniques for matching fields, such as feathering of junctions, are not practical. We have developed a method that substantially reduces dose inhomogeneity in the abutment region. In this method, a 'buffer zone' around the matchline was created and included as part of the target for both the IMRT and static field plans. In both fields, a small dose gradient (≤3%/mm) was created in the buffer zone. In the IMRT plan, the buffer zone was divided into three sections with dose varying from 83% to 25% of the prescription dose. The static field dose profile was modified using either a specially designed physical (hard) or a dynamic (soft) wedge. When these modified fields were matched, the combined dose in the abutment region varied by ≤10% in the presence of setup errors spanning 4 mm (±2 mm) when the hard wedge was used and 10 mm (±5 mm) with the soft wedge.
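
    The buffer-zone idea can be checked with a one-dimensional numeric sketch: each field falls off linearly across a shared buffer, so the summed dose stays near 100% even when one field is shifted by a small setup error. The widths and gradients below are illustrative assumptions, not the clinical values of the study.

    ```python
    # Each profile ramps linearly across the buffer; the two ramps are
    # complementary, so without setup error they sum to exactly 100%.
    def ramp_down(x, start, width):
        """100% before `start`, linear fall to 0% over `width` mm."""
        if x <= start:
            return 100.0
        if x >= start + width:
            return 0.0
        return 100.0 * (1.0 - (x - start) / width)

    def total_dose(x, shift=0.0, buffer_start=0.0, width=30.0):
        imrt = ramp_down(x, buffer_start, width)                   # falls left->right
        static = 100.0 - ramp_down(x - shift, buffer_start, width) # rises left->right
        return imrt + static

    # worst-case deviation from 100% for a 2 mm setup error against a
    # ~3.3%/mm gradient: roughly 2 mm * 3.3%/mm ~ 7%
    worst = max(abs(total_dose(x, shift=2.0) - 100.0) for x in range(-10, 45))
    ```

    With a sharp penumbra instead of a gentle ramp, the same 2 mm shift would produce the >30% hot/cold spots mentioned above, which is exactly what the gradient trades away.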

  1. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for the treatment of short bowel syndrome. The modification of Bianchi's technique is an alternative; the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report comparing both techniques, so we performed the present study. Twelve creole dogs were operated on with the Bianchi intestinal lengthening technique (group A), and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). The two groups were compared with respect to operating time, technical difficulty, cost, intestinal lengthening and anastomosis diameter. There was no statistical difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulty were lower in group B. The anastomoses of group B and the intestinal segments had a good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical issues.

  2. The Interaction Between Schema Matching and Record Matching in Data Integration

    KAUST Repository

    Gu, Binbin; Li, Zhixu; Zhang, Xiangliang; Liu, An; Liu, Guanfeng; Zheng, Kai; Zhao, Lei; Zhou, Xiaofang

    2016-01-01

    Schema Matching (SM) and Record Matching (RM) are two necessary steps in integrating multiple relational tables of different schemas, where SM unifies the schemas and RM detects records referring to the same real-world entity. The two processes have

  3. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by items.

  4. The choice of a 'Best' assisted history matching algorithm

    NARCIS (Netherlands)

    Hanea, R.G.; Przybysz-Jarnut, J.K.; Krymskaya, M.V.; Heemink, A.W.; Jansen, J.D.

    2010-01-01

    Computer-assisted history matching is the act of systematically adjusting a ‘prior’ reservoir model using measured data until its simulated production response closely reproduces the past behavior of the reservoir. Thereafter, the updated, ‘posterior’, model is expected to predict future reservoir

  5. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Science.gov (United States)

    2013-07-15

    ... 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match No. 18 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION... Act of 1974, as amended, this notice announces the establishment of a CMP that CMS plans to conduct...

  6. The use of neural networks on the production history matching process; Uso de redes neurais no processo de ajuste de historico de producao

    Energy Technology Data Exchange (ETDEWEB)

    Maschio, Celio; Nakajima, Lincoln; Schiozer, Denis J. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil)

    2008-07-01

    The purpose of this work is to present a methodology for production history matching using proxy models generated with artificial neural networks. Optimization processes through a genetic algorithm using the proxy models and using the flow simulator are compared. The methodology was tested on three reservoir models with 4, 8 and 16 variables, and on one realistic synthetic model with 20 parameters, in order to evaluate the performance of the technique as the number of variables increases. The results obtained with the proxy models are very similar to those obtained with the simulator, showing as the main advantage the reduction in the number of simulations allowed by the proposed methodology. (author)
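
    The proxy-model workflow can be sketched in miniature: fit a cheap surrogate to a few expensive "simulator" runs, then optimize the surrogate instead of the simulator. The paper trains neural networks; for brevity this hedged stand-in fits an exact quadratic through three runs, and the misfit function and numbers are invented.

    ```python
    # Stand-in for one expensive reservoir simulation: misfit vs. a single
    # hypothetical parameter k, with its (unknown to the proxy) optimum at 150.
    def simulator_misfit(k):
        return (k - 150.0) ** 2 / 100.0 + 2.0

    xs = [50.0, 150.0, 250.0]                 # three "simulation" runs only
    ys = [simulator_misfit(x) for x in xs]

    # exact quadratic proxy y = a*k^2 + b*k + c through the three runs
    # (evenly spaced points, spacing h = 100)
    a = (ys[0] - 2 * ys[1] + ys[2]) / (2 * 100.0 ** 2)
    b = (ys[2] - ys[0]) / (2 * 100.0) - 2 * a * xs[1]
    c = ys[1] - a * xs[1] ** 2 - b * xs[1]

    k_best = -b / (2 * a)     # optimize the proxy: zero extra simulator calls
    ```

    A genetic algorithm searching this proxy, as in the paper, would likewise touch the simulator only for the training runs, which is the claimed reduction in simulation count.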

  7. Word-level recognition of multifont Arabic text using a feature vector matching approach

    Science.gov (United States)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
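
    The core matching step, scoring a query feature vector against a precomputed lexicon database and returning the best words as hypotheses, can be sketched as nearest-neighbor retrieval. This is a hedged toy: Euclidean distance stands in for the system's match score, and the words and vectors are invented.

    ```python
    import math

    # word -> precomputed feature vector (toy stand-ins for the paper's
    # image-morphological features; several vectors per word are possible)
    lexicon = {
        "kitab": [0.9, 0.1, 0.4],
        "qalam": [0.2, 0.8, 0.5],
        "bayt":  [0.5, 0.5, 0.1],
    }

    def hypotheses(query, top_n=2):
        """Return the top_n lexicon words closest to the query vector."""
        return sorted(lexicon, key=lambda w: math.dist(query, lexicon[w]))[:top_n]

    hyps = hypotheses([0.85, 0.15, 0.35])
    ```

    Returning a ranked hypothesis list rather than a single answer is what lets downstream language models recover from local feature noise.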

  8. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    Science.gov (United States)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data that has image source density peaks over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. An index strategy in which the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence the pipeline achieved when processing IPHAS data is 25 s.
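
    The one-degree declination partitioning can be sketched in memory: sources are bucketed per declination degree, and a positional cross-match scans only the query's own band and its neighbours instead of the full catalogue. This is a hedged illustration of the partitioning idea only (it assumes positive declinations and uses a small-angle flat-sky distance), not the column-store implementation of the paper.

    ```python
    import math
    from collections import defaultdict

    def build_index(sources):                 # sources: list of (ra, dec) in degrees
        bands = defaultdict(list)
        for ra, dec in sources:
            bands[int(dec)].append((ra, dec))   # one-degree declination bands
        return bands

    def crossmatch(bands, ra, dec, radius_deg=0.001):
        best, best_d = None, radius_deg
        for band in (int(dec) - 1, int(dec), int(dec) + 1):   # neighbour bands only
            for sra, sdec in bands.get(band, ()):
                d = math.hypot((sra - ra) * math.cos(math.radians(dec)),
                               sdec - dec)                    # small-angle approx
                if d <= best_d:
                    best, best_d = (sra, sdec), d
        return best

    cat = build_index([(10.0, 20.0005), (10.5, 20.3), (11.0, 21.2)])
    nearest = crossmatch(cat, 10.0, 20.0)
    ```

    Limiting the scan to three bands is what turns the per-source look-up cost from catalogue-sized to band-sized, mirroring the sublinear query times reported above.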

  9. Inhibitory mechanism of the matching heuristic in syllogistic reasoning.

    Science.gov (United States)

    Tse, Ping Ping; Moreno Ríos, Sergio; García-Madruga, Juan Antonio; Bajo Molina, María Teresa

    2014-11-01

    A number of heuristic-based hypotheses have been proposed to explain how people solve syllogisms with automatic processes. In particular, the matching heuristic employs the congruency of the quantifiers in a syllogism—by matching the quantifier of the conclusion with those of the two premises. When the heuristic leads to an invalid conclusion, successful solving of these conflict problems requires the inhibition of automatic heuristic processing. Accordingly, if the automatic processing were based on processing the set of quantifiers, no semantic contents would be inhibited. The mental model theory, however, suggests that people reason using mental models, which always involves semantic processing. Therefore, whatever inhibition occurs in the processing implies the inhibition of the semantic contents. We manipulated the validity of the syllogism and the congruency of the quantifier of its conclusion with those of the two premises according to the matching heuristic. A subsequent lexical decision task (LDT) with related words in the conclusion was used to test any inhibition of the semantic contents after each syllogistic evaluation trial. In the LDT, the facilitation effect of semantic priming diminished after correctly solved conflict syllogisms (match-invalid or mismatch-valid), but was intact after no-conflict syllogisms. The results suggest the involvement of an inhibitory mechanism of semantic contents in syllogistic reasoning when there is a conflict between the output of the syntactic heuristic and actual validity. Our results do not support a uniquely syntactic process of syllogistic reasoning but fit with the predictions based on mental model theory. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Supporting Opportunities for Context-Aware Social Matching: An Experience Sampling Study

    DEFF Research Database (Denmark)

    Mayer, Julia; Barkhuus, Louise; Hiltz, Starr Roxanne

    2016-01-01

    Mobile social matching systems aim to bring people together in the physical world by recommending people nearby to each other. Going beyond simple similarity and proximity matching mechanisms, we explore a proposed framework of relational, social and personal context as predictors of match...... opportunities to map out the design space of opportunistic social matching systems. We contribute insights gained from a study combining Experience Sampling Method (ESM) with 85 students of a U.S. university and interviews with 15 of these participants. A generalized linear mixed model analysis (n=1704) showed...... that personal context (mood and busyness) as well as sociability of others nearby are the strongest predictors of contextual match interest. Participant interviews suggest operationalizing relational context using social network rarity and discoverable rarity, and incorporating skill level and learning...

  11. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Science.gov (United States)

    2013-08-07

    ... 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match No. 12 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice... of 1974, as amended, this notice establishes a CMP that CMS plans to conduct with the Department of...

  12. Generating Models of a Matched Formula with a Polynomial Delay

    Czech Academy of Sciences Publication Activity Database

    Savický, Petr; Kučera, P.

    2016-01-01

    Roč. 56, č. 6 (2016), s. 379-402 ISSN 1076-9757 R&D Projects: GA ČR GBP202/12/G061 Grant - others:GA ČR(CZ) GA15-15511S Institutional support: RVO:67985807 Keywords : conjunctive normal form * matched formula * pure literal satisfiable formula Subject RIV: BA - General Mathematics Impact factor: 2.284, year: 2016

  13. Neural network based pattern matching and spike detection tools and services--in the CARMEN neuroinformatics project.

    Science.gov (United States)

    Fletcher, Martyn; Liang, Bojian; Smith, Leslie; Knowles, Alastair; Jackson, Tom; Jessop, Mark; Austin, Jim

    2008-10-01

    In the study of information flow in the nervous system, component processes can be investigated using a range of electrophysiological and imaging techniques. Although data is difficult and expensive to produce, it is rarely shared and collaboratively exploited. The Code Analysis, Repository and Modelling for e-Neuroscience (CARMEN) project addresses this challenge through the provision of a virtual neuroscience laboratory: an infrastructure for sharing data, tools and services. Central to the CARMEN concept are federated CARMEN nodes, which provide data and metadata storage; new, third-party, and legacy services; and tools. In this paper, we describe the CARMEN project as well as the node infrastructure and an associated thick client tool for pattern visualisation and searching, the Signal Data Explorer (SDE). We also discuss new spike detection methods, which are central to the services provided by CARMEN. The SDE is a client application which can be used to explore data in the CARMEN repository, providing data visualization, signal processing and a pattern matching capability. It performs extremely fast pattern matching and can be used to search for complex conditions composed of many different patterns across the large datasets that are typical in neuroinformatics. Searches can also be constrained by specifying text-based metadata filters. Spike detection services which use wavelet and morphology techniques are discussed, and have been shown to outperform traditional thresholding and template-based systems. A number of different spike detection and sorting techniques will be deployed as services within the CARMEN infrastructure, to allow users to benchmark their performance against a wide range of reference datasets.
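A minimal stand-in for the SDE's pattern-matching capability is sliding-window normalized cross-correlation on a 1-D trace. The spike template and threshold below are invented, and the real SDE uses far faster search structures than this linear scan.

```python
import math

def ncc(a, b):
    # Zero-mean normalized cross-correlation of equal-length windows.
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def find_pattern(signal, template, threshold=0.9):
    # Slide the template and report offsets whose NCC exceeds threshold.
    n = len(template)
    return [i for i in range(len(signal) - n + 1)
            if ncc(signal[i:i + n], template) >= threshold]

# A spike-like template embedded twice in an otherwise flat trace.
spike = [0.0, 1.0, 3.0, 1.0, 0.0]
trace = [0.0] * 4 + spike + [0.0] * 6 + spike + [0.0] * 3
print(find_pattern(trace, spike))  # → [4, 15]
```

The same scoring generalizes to searching for "complex conditions composed of many different patterns" by running several templates and combining their hit lists.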

  14. Calculating scattering matrices by wave function matching

    International Nuclear Information System (INIS)

    Zwierzycki, M.; Khomyakov, P.A.; Starikov, A.A.; Talanana, M.; Xu, P.X.; Karpan, V.M.; Marushchenko, I.; Brocks, G.; Kelly, P.J.; Xia, K.; Turek, I.; Bauer, G.E.W.

    2008-01-01

    The conductance of nanoscale structures can be conveniently related to their scattering properties expressed in terms of transmission and reflection coefficients. Wave function matching (WFM) is a transparent technique for calculating transmission and reflection matrices for any Hamiltonian that can be represented in tight-binding form. A first-principles Kohn-Sham Hamiltonian represented on a localized orbital basis or on a real space grid has such a form. WFM is based upon direct matching of the scattering-region wave function to the Bloch modes of ideal leads used to probe the scattering region. The purpose of this paper is to give a pedagogical introduction to WFM and present some illustrative examples of its use in practice. We briefly discuss WFM for calculating the conductance of atomic wires, using a real space grid implementation. A tight-binding muffin-tin orbital implementation very suitable for studying spin-dependent transport in layered magnetic materials is illustrated by looking at spin-dependent transmission through ideal and disordered interfaces. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
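The relation between conductance and transmission coefficients the abstract alludes to is the standard Landauer formula (not spelled out in the abstract itself); for a spin-degenerate two-terminal system,

```latex
G \;=\; \frac{2e^2}{h}\,\mathrm{Tr}\!\left(t\,t^{\dagger}\right)
  \;=\; \frac{2e^2}{h}\sum_{nm}\lvert t_{nm}\rvert^{2},
```

where t is the transmission matrix that WFM obtains by matching the scattering-region wave function to the Bloch modes of the ideal leads.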

  15. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. 
The Ecuador case illustrates that
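The matching-plus-difference-in-means estimator in approach (1) can be sketched in a few lines. The single covariate, the outcomes, and the treatment effect of roughly 2 below are all invented for illustration.

```python
def nearest_neighbor_match(treated, controls):
    """Match each treated unit to the control with the closest
    covariate value, then average the outcome differences -- a toy
    one-covariate version of matching + difference in means."""
    effects = []
    for cov_t, y_t in treated:
        cov_c, y_c = min(controls, key=lambda c: abs(c[0] - cov_t))
        effects.append(y_t - y_c)
    return sum(effects) / len(effects)

# (covariate, outcome) pairs; "treatment" adds roughly 2 to the outcome.
treated = [(1.0, 5.1), (2.0, 6.0), (3.0, 7.2)]
controls = [(0.9, 3.0), (2.1, 4.1), (3.2, 5.0), (9.0, 20.0)]
print(nearest_neighbor_match(treated, controls))
```

The off-support control at covariate 9.0 is never selected, which is the intuition behind matching: it discards comparison units unlike any treated unit. Fixed-effects panel methods, by contrast, difference out time-invariant unobservables rather than relying on covariate similarity.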

  16. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Directory of Open Access Journals (Sweden)

    Kelly W Jones

    Full Text Available Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness.
The Ecuador case

  17. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied on the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes that were related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
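The information-gain ranking used for attribute selection can be sketched directly. The land-use attributes and labels below are invented toy data, not the Belgrade study's variables.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a class-label list, in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Information gain of a discrete attribute with respect to the
    class labels: H(labels) minus the attribute-conditional entropy."""
    total = entropy(labels)
    n = len(labels)
    cond = 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        cond += len(subset) / n * entropy(subset)
    return total - cond

# Toy data: two candidate attributes for a binary change/stable label.
labels = ["change", "change", "stable", "stable"]
attrs = {
    "near_road": ["yes", "yes", "no", "no"],  # perfectly informative
    "soil_type": ["a", "b", "a", "b"],        # uninformative
}
ranked = sorted(attrs, reverse=True,
                key=lambda name: information_gain(attrs[name], labels))
print(ranked)  # → ['near_road', 'soil_type']
```

Ranking attributes this way, then dropping the least informative ones, is the essence of the attribute-selection step the abstract describes.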

  18. Presentation Technique

    International Nuclear Information System (INIS)

    Froejmark, M.

    1992-10-01

    The report presents a wide, easily understandable description of presentation technique and man-machine communication. General fundamentals for the man-machine interface are illustrated, and the factors that affect the interface are described. A model is presented for describing the operators work situation, based on three different levels in the operators behaviour. The operator reacts routinely in the face of simple, known problems, and reacts in accordance with predetermined plans in the face of more complex, recognizable problems. Deep fundamental knowledge is necessary for truly complex questions. Today's technical status and future development have been studied. In the future, the operator interface will be based on standard software. Functions such as zooming, integration of video pictures, and sound reproduction will become common. Video walls may be expected to come into use in situations in which several persons simultaneously need access to the same information. A summary of the fundamental rules for the design of good picture ergonomics and design requirements for control rooms are included in the report. In conclusion, the report describes a presentation technique within the Distribution Automation and Demand Side Management area and analyses the know-how requirements within Vattenfall. If different systems are integrated, such as geographical information systems and operation monitoring systems, strict demands are made on the expertise of the users for achieving a user-friendly technique which is matched to the needs of the human being. (3 figs.)

  19. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  20. WE-AB-303-01: FEATURED PRESENTATION: A Dual-Detector Phase-Matched Digital Tomosynthesis (DTS) Imaging Scheme Using Aggregated KV and MV Projections for Intra-Treatment Lung Tumor Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y; Yin, F; Mao, R; Gao, R; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: To develop a dual-detector phase-matched DTS technique for continuous and fast intra-treatment lung tumor localization. Methods: Tumor localization accuracy of limited-angle DTS imaging is affected by low inter-slice resolution. The dual-detector DTS technique aims to overcome this limitation through combining orthogonally acquired beam’s eye view MV projections and kV projections for intra-treatment DTS reconstruction and localization. To aggregate the kV and MV projections for reconstruction, the MV projections were linearly converted to synthesize corresponding kV projections. To further address the lung motion induced localization errors, this technique uses respiratory phase-matching to match the motion information between on-board DTS and reference DTS to offset the adverse effects of motion blurriness in tumor localization.A study was performed using the CIRS008A lung phantom to simulate different on-board target variation scenarios for localization. The intra-treatment kV and MV acquisition was achieved through the Varian TrueBeam Developer Mode. Four methods were compared for their localization accuracy: 1. the proposed dual-detector phase-matched DTS technique; 2. the single-detector phase-matched DTS technique; 3. the dual-detector 3D-DTS technique without phase-matching; and 4. the single-detector 3D-DTS technique without phase-matching. Results: For scan angles of 2.5°, 5°, 10°, 20° and 30°, the dual-detector phase-matched DTS technique localized the tumor with average(±standard deviations) errors of 0.4±0.3 mm, 0.5±0.3 mm, 0.6±0.2 mm, 0.9±0.4 mm and 1.0±0.3 mm, respectively. The corresponding values of single-detector phase-matched DTS technique were 4.0±2.5 mm, 2.7±1.1 mm, 1.7±1.2 mm, 2.2±0.9 mm and 1.5±0.8 mm, respectively. The values of dual-detector 3D-DTS technique were 6.2±1.7 mm, 6.3±1.2 mm, 5.3±1.3 mm, 2.0±2.2 mm and 1.5±0.5 mm, respectively. And the values of single-detector 3D-DTS technique were 9.7±8.9 mm, 9

  1. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building, based on visual assessment.

  2. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Full Text Available Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI) plays an important role in network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability. Either the central processing unit (CPU) or the graphic processing unit (GPU) was involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.
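The core dispatch idea of a length-bounded hybrid matcher, routing each pattern to one of two engines by a length bound and merging the results, can be sketched sequentially. The naive scanner below stands in for both engines, which in the real LHPMA are distinct CPU and GPU implementations running concurrently; the bound of 4 is arbitrary.

```python
def match_patterns(payload, patterns):
    # Naive multi-pattern scan; stands in for either engine.
    return {p: [i for i in range(len(payload) - len(p) + 1)
                if payload[i:i + len(p)] == p]
            for p in patterns}

def length_bounded_dispatch(payload, patterns, bound=4):
    """Route short patterns to the 'CPU' engine and long ones to the
    'GPU' engine, then merge the hit lists -- the dispatch idea only."""
    short = [p for p in patterns if len(p) <= bound]
    long_ = [p for p in patterns if len(p) > bound]
    results = match_patterns(payload, short)        # CPU side
    results.update(match_patterns(payload, long_))  # GPU side
    return results

hits = length_bounded_dispatch("abxyzababxyz", ["ab", "xyzab"])
print(hits)  # → {'ab': [0, 5, 7], 'xyzab': [2]}
```

Splitting the pattern set by length lets each engine be tuned for its workload, which is the motivation the abstract gives for the hybrid design.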

  3. Fingerprint recognition system by use of graph matching

    Science.gov (United States)

    Shen, Wei; Shen, Jun; Zheng, Huicheng

    2001-09-01

    Fingerprint recognition is an important subject in biometrics to identify or verify persons by physiological characteristics, and has found wide applications in different domains. In the present paper, we present a fingerprint recognition system that combines singular points and structures. The principal steps of processing in our system are: preprocessing and ridge segmentation, singular point extraction and selection, graph representation, and fingerprint recognition by graph matching. Our fingerprint recognition system is implemented and tested on many fingerprint images and the experimental results are satisfactory. Different techniques are used in our system, such as fast calculation of the orientation field, local fuzzy dynamical thresholding, algebraic analysis of connections, and fingerprint representation and matching by graphs. We find that for a fingerprint database that is not very large, the recognition rate is very high even without using a prior coarse category classification. This system works well for both one-to-few and one-to-many problems.

  4. A knowledge based approach to matching human neurodegenerative disease and animal models

    Directory of Open Access Journals (Sweden)

    Maryann E Martone

    2013-05-01

    Full Text Available Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology and an associated Phenotype Knowledge Base using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework and qualities are drawn from the Phenotype and Trait Ontology. We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal
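The common-subsumer comparison that bridges phenotypes across scales can be illustrated on a toy is-a hierarchy. The terms below are invented and far simpler than the community ontologies the project draws from.

```python
def ancestors(term, parents):
    # All ancestors of a term in a small is-a hierarchy, plus the
    # term itself.
    seen = {term}
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def common_subsumers(a, b, parents):
    """Shared ancestors of two phenotype terms -- the kind of
    non-trivial bridge between scales the abstract describes."""
    return ancestors(a, parents) & ancestors(b, parents)

# Invented mini-hierarchy of structural phenotype terms.
parents = {
    "neuron loss in CA1": ["neuron loss in hippocampus"],
    "neuron loss in dentate gyrus": ["neuron loss in hippocampus"],
    "neuron loss in hippocampus": ["neuron loss in brain"],
    "neuron loss in brain": ["abnormal cell population"],
}
print(common_subsumers("neuron loss in CA1",
                       "neuron loss in dentate gyrus", parents))
```

Two phenotypes that never match literally still share "neuron loss in hippocampus" as a subsumer, which is how an animal-model phenotype can be matched to a human one recorded at a different anatomical scale.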

  5. Vision sensing techniques in aeronautics and astronautics

    Science.gov (United States)

    Hall, E. L.

    1988-01-01

    The close relationship between sensing and other tasks in orbital space, and the integral role of vision sensing in practical aerospace applications, are illustrated. Typical space mission-vision tasks encompass the docking of space vehicles, the detection of unexpected objects, the diagnosis of spacecraft damage, and the inspection of critical spacecraft components. Attention is presently given to image functions, the 'windowing' of a view, the number of cameras required for inspection tasks, the choice of incoherent or coherent (laser) illumination, three-dimensional-to-two-dimensional model-matching, edge- and region-segmentation techniques, and motion analysis for tracking.

  6. Iris recognition using possibilistic fuzzy matching on local features.

    Science.gov (United States)

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

  7. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysms suitable for evaluating endovascular devices for interventional therapy. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, then embolized with a detachable balloon at its origin. DSA examinations were performed 1, 2, and 3 days after the procedure. Results: 6 aneurysm models were successfully created in the six dogs, with the mean width and height of the aneurysms decreasing over the 3 days. Conclusions: This canine aneurysm model reproduces the size and shape of human cerebral bifurcation saccular aneurysms on DSA images, and is suitable for exploring endovascular devices for aneurysmal therapy. The procedure is quick, reliable and reproducible. (authors)

  8. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
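The RANSAC loop for refining false matches can be sketched with a pure translation model standing in for the full four-submodel projective decomposition; the point correspondences below are invented, and a translation needs only a one-match minimal sample.

```python
import random

def ransac_translation(src, dst, iters=200, tol=0.5, seed=0):
    """Estimate a 2-D translation between matched point sets with
    RANSAC, discarding false matches -- the consensus loop only,
    with a translation model in place of the projective one."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        i = rng.randrange(len(src))      # minimal sample: one match
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        inliers = [j for j, (s, d) in enumerate(zip(src, dst))
                   if abs(d[0] - s[0] - dx) < tol
                   and abs(d[1] - s[1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

src = [(0, 0), (1, 0), (0, 1), (5, 5)]
dst = [(2, 3), (3, 3), (2, 4), (9, 1)]   # last pair is a false match
model, inliers = ransac_translation(src, dst)
print(model, inliers)  # → (2, 3) [0, 1, 2]
```

The hardware architecture in the abstract runs this hypothesize-and-verify loop in fixed point, which is why the bit widths of the model coefficients matter so much there.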

  9. A patch-based method for the evaluation of dense image matching quality

    NARCIS (Netherlands)

    Zhang, Zhenchao; Gerke, Markus; Vosselman, George; Yang, Michael Ying

    2018-01-01

    Airborne laser scanning and photogrammetry are two main techniques to obtain 3D data representing the object surface. Due to the high cost of laser scanning, we want to explore the potential of using point clouds derived by dense image matching (DIM), as effective alternatives to laser scanning

  10. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    Science.gov (United States)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause dramatic damages along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined though optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
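A scalar Kalman filter shows the key contrast with optimal interpolation that the abstract highlights: the gain is re-computed from an evolving covariance at every step rather than fixed in advance. The process and measurement noise values below are arbitrary, and this is a generic sketch, not the tsunami code.

```python
def kalman_filter(observations, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state: predict, then
    correct using a gain derived from the evolving covariance."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q                # predict: state modelled as random walk
        k = p / (p + r)          # Kalman gain from evolving covariance
        x = x + k * (z - x)      # correct with the innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a constant true value of 1.0.
obs = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
est = kalman_filter(obs)
print(est[-1])
```

In the data-assimilation setting, x becomes the gridded tsunami wavefield, z the ocean-bottom pressure readings, and the scalar covariance p a full matrix, which is where the extra computational cost relative to a static gain comes from.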

  11. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. This book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  12. Multi-data reservoir history matching for enhanced reservoir forecasting and uncertainty quantification

    KAUST Repository

    Katterbauer, Klemens

    2015-04-01

    Reservoir simulations and history matching are critical for fine-tuning reservoir production strategies, improving understanding of the subsurface formation, and forecasting remaining reserves. Production data have long been incorporated for adjusting reservoir parameters. However, the sparse spatial sampling of this data set has posed a significant challenge for efficiently reducing uncertainty of reservoir parameters. Seismic, electromagnetic, gravity and InSAR techniques have found widespread applications in enhancing exploration for oil and gas and monitoring reservoirs. These data have however been interpreted and analyzed mostly separately, rarely exploiting the synergy effects that could result from combining them. We present a multi-data ensemble Kalman filter-based history matching framework for the simultaneous incorporation of various reservoir data such as seismic, electromagnetics, gravimetry and InSAR for best possible characterization of the reservoir formation. We apply an ensemble-based sensitivity method to evaluate the impact of each observation on the estimated reservoir parameters. Numerical experiments for different test cases demonstrate considerable matching enhancements when integrating all data sets in the history matching process. Results from the sensitivity analysis further suggest that electromagnetic data exhibit the strongest impact on the matching enhancements due to their strong differentiation between water fronts and hydrocarbons in the test cases.
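A minimal stochastic ensemble-Kalman-filter update of this kind can be sketched as follows. This is a toy, not the authors' framework: the scalar parameter m, the two-component forward model, and all noise levels are invented stand-ins for the multiple geophysical data types.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(m):
    # Hypothetical forward model: one parameter -> two observable "data types"
    return np.array([2.0 * m, m ** 2])

m_true = 3.0
d_obs = forward(m_true) + rng.normal(0.0, 0.05, 2)   # noisy joint observations
R = 0.05 ** 2 * np.eye(2)                            # data error covariance

ens = rng.normal(1.0, 1.0, 200)                      # prior parameter ensemble
for _ in range(4):                                   # a few assimilation sweeps
    D = np.array([forward(m) for m in ens]).T        # predicted data, shape (2, N)
    dm = ens - ens.mean()
    dD = D - D.mean(axis=1, keepdims=True)
    C_mD = dm @ dD.T / (len(ens) - 1)                # parameter-data cross-covariance
    C_DD = dD @ dD.T / (len(ens) - 1)                # data auto-covariance
    K = C_mD @ np.linalg.inv(C_DD + R)               # ensemble Kalman gain
    # perturb the observations per ensemble member (stochastic EnKF)
    D_pert = d_obs[:, None] + rng.normal(0.0, 0.05, (2, len(ens)))
    ens = ens + K @ (D_pert - D)

m_est = float(ens.mean())
```

The gain here weights each data type by its sampled covariance with the parameter, which is the mechanism by which a more informative data set (electromagnetics in the paper's tests) dominates the update.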

  13. Determine Conjugate Points of an Aerial Photograph Stereopair using Separate Channel Mean Value Technique

    Directory of Open Access Journals (Sweden)

    Andri Hernandi

    2009-11-01

    Full Text Available In the development of digital photogrammetric systems, the automatic image matching process plays an important role. Automatic image matching is used to find the conjugate points of an aerial photograph stereopair automatically. This matching technique makes a significant contribution, especially in the development of 3D photogrammetry, to obtaining exact and precise topographic information during stereo restitution. Two image matching methods have been developed so far, i.e. the area-based approach for the gray-level environment and the feature-based approach for the natural feature environment. This research implements area-based matching with the normalized cross-correlation technique to obtain the correlation coefficient between the spectral values of the left image and its pair on the right. Based on previous research, the use of color images can increase the quality of matching. One such color image matching technique is known as Separate Channel Mean Value. To assess the performance of the technique, a number of sampling areas with different characteristics were chosen, i.e. heterogeneous, homogeneous, textured, shadowed, and contrasting. The results show that the highest similarity measure is obtained on the heterogeneous sample area at both reference and search image sizes, i.e. (11 pixels x 11 pixels) and (23 pixels x 23 pixels). In these areas the correlation coefficient exceeded 0.7 and the highest percentage of similarity measure was obtained. The average total similarity measure of conjugate images in the sampling image areas only reaches about 41.43% success. Therefore, this technique has weaknesses, and some treatment to overcome the problems is still needed.
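The area-based NCC criterion the study builds on can be sketched as an exhaustive search for the best-correlated 11 x 11 window (toy data, not the paper's implementation):

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation of two equally sized patches
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match(ref, search):
    """ref: reference window (e.g. 11x11); search: larger search image.
    Returns the top-left offset of the best-matching window and its score."""
    rh, rw = ref.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(search.shape[0] - rh + 1):
        for j in range(search.shape[1] - rw + 1):
            score = ncc(ref, search[i:i+rh, j:j+rw])
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(2)
right = rng.random((40, 40))                 # synthetic right image
template = right[12:23, 17:28].copy()        # 11x11 patch around a left-image point
pos, score = match(template, right)
```

For a color technique such as Separate Channel Mean Value, the same search would be run per channel (or on channel means) and the scores combined.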

  14. Inverse modeling as a step in the calibration of the LBL-USGS site-scale model of Yucca Mountain

    International Nuclear Information System (INIS)

    Finsterle, S.; Bodvarsson, G.S.; Chen, G.

    1995-05-01

    Calibration of the LBL-USGS site-scale model of Yucca Mountain is initiated. Inverse modeling techniques are used to match the results of simplified submodels to the observed pressure, saturation, and temperature data. Hydrologic and thermal parameters are determined and compared to the values obtained from laboratory measurements and conventional field test analysis

  15. Spectral matching research for light-emitting diode-based neonatal jaundice therapeutic device light source

    Science.gov (United States)

    Gan, Ruting; Guo, Zhenning; Lin, Jieben

    2015-09-01

    To decrease the risk of bilirubin encephalopathy and minimize the need for exchange transfusions, we report a novel design for the light source of a light-emitting diode (LED)-based neonatal jaundice therapeutic device (NJTD). The in vivo bilirubin absorption spectrum was taken as the target. Based on spectral constructing theory, we used commercially available LEDs with different peak wavelengths and full widths at half maximum as matching light sources. A simple genetic algorithm was first proposed as the spectral matching method. The required number of LEDs at each peak wavelength was calculated, and a sample model of the device's light source was then fabricated to confirm the spectral matching technology. In addition, the corresponding spectrum was measured and the effect was analyzed. The results showed that the fitted spectrum was very similar to the target spectrum, with a 98.86% matching degree, and the actual device model has a spectrum close to the target, with a 96.02% matching degree. With its high fitting degree and efficiency, this matching algorithm is well suited to light source matching for LED-based spectral distributions, and the in vivo bilirubin absorption spectrum is a promising candidate for the target spectrum of new LED-based NJTD light sources.
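The spectral-matching step can be illustrated with a bare-bones genetic algorithm. This is a sketch only: the LED peak wavelengths, FWHM, target spectrum, and GA settings below are invented, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(400, 550, 151)                      # wavelength grid, nm

def led(peak, fwhm=20.0):
    # idealized Gaussian LED spectrum (invented parameters)
    sigma = fwhm / 2.355
    return np.exp(-0.5 * ((wl - peak) / sigma) ** 2)

bank = np.array([led(p) for p in (430, 450, 470, 490, 510)])
target = 3 * led(450) + led(490)                     # stand-in "absorption" target

def fitness(counts):                                 # negative squared fit error
    return -float(np.sum((counts @ bank - target) ** 2))

pop = rng.integers(0, 6, (60, 5))                    # population: LED counts per channel
best = max(pop, key=fitness)
f0 = fitness(best)
for _ in range(80):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]          # truncation selection
    idx = rng.integers(0, 20, (60, 5))
    kids = parents[idx, np.arange(5)]                # uniform crossover
    mut = rng.random((60, 5)) < 0.1                  # random-reset mutation
    kids[mut] = rng.integers(0, 6, int(mut.sum()))
    kids[0] = best                                   # elitism: keep best-so-far
    pop = kids
    gen_best = max(pop, key=fitness)
    if fitness(gen_best) > fitness(best):
        best = gen_best
```

With elitism the best fitness is non-decreasing across generations; the resulting count vector `best` plays the role of the required number of LEDs at each peak wavelength.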

  16. Best Practices for NPT Transit Matching

    International Nuclear Information System (INIS)

    Gilligan, Kimberly V.; Whitaker, J. Michael; Oakberg, John A.; Snow, Catherine

    2016-01-01

    Transit matching is the process for relating or matching reports of shipments and receipts submitted to the International Atomic Energy Agency (IAEA). Transit matching is a component used by the IAEA in drawing safeguards conclusions and performing investigative analysis. Transit matching is part of IAEA safeguards activities and the State evaluation process, and it is included in the annual Safeguards Implementation Report (SIR). Annually, the IAEA currently receives reports of ~900,000 nuclear material transactions, of which ~500,000 are for domestic and foreign transfers. Of these the IAEA software can automatically match (i.e., machine match) about 95% of the domestic transfers and 25% of the foreign transfers. Given the increasing demands upon IAEA resources, it is highly desirable for the machine-matching process to match as many transfers as possible. Researchers at Oak Ridge National Laboratory (ORNL) have conducted an investigation funded by the National Nuclear Security Administration through the Next Generation Safeguards Initiative to identify opportunities to strengthen IAEA transit matching. Successful matching, and more specifically machine matching, is contingent on quality data from the reporting States. In February 2016, ORNL hosted representatives from three States, the IAEA, and Euratom to share results from past studies and to discuss the processes, policies, and procedures associated with State reporting for transit matching. Drawing on each entity's experience and knowledge, ORNL developed a best practices document to be shared with the international safeguards community to strengthen transit matching. This paper shares the recommendations that resulted from this strategic meeting and the next steps being taken to strengthen transit matching.
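A machine-matching rule of this kind can be illustrated with a toy matcher that pairs shipment and receipt reports by material code, quantity tolerance, and a receipt-date window. All field names and tolerances here are hypothetical, not the IAEA's.

```python
from datetime import date, timedelta

def match_transfers(shipments, receipts, qty_tol=0.01, max_days=60):
    """Greedily pair each shipment with at most one compatible receipt;
    shipments with no compatible receipt are flagged for manual review."""
    matches, unmatched = [], []
    pool = list(receipts)
    for s in shipments:
        hit = next(
            (r for r in pool
             if r["material"] == s["material"]
             and abs(r["qty_kg"] - s["qty_kg"]) <= qty_tol * s["qty_kg"]
             and timedelta(0) <= r["date"] - s["date"] <= timedelta(days=max_days)),
            None,
        )
        if hit:
            matches.append((s["id"], hit["id"]))
            pool.remove(hit)                 # each receipt matches at most once
        else:
            unmatched.append(s["id"])
    return matches, unmatched

ships = [
    {"id": "S1", "material": "LEU", "qty_kg": 100.0, "date": date(2016, 2, 1)},
    {"id": "S2", "material": "NU",  "qty_kg": 50.0,  "date": date(2016, 2, 5)},
]
recs = [
    {"id": "R1", "material": "LEU", "qty_kg": 100.2, "date": date(2016, 2, 20)},
]
m, u = match_transfers(ships, recs)
```

The quality-of-data point in the text is visible even in this toy: a mistyped material code or quantity outside the tolerance drops a transfer out of machine matching and into the manual-review pile.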

  17. Best Practices for NPT Transit Matching

    Energy Technology Data Exchange (ETDEWEB)

    Gilligan, Kimberly V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Whitaker, J. Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oakberg, John A. [Tetra Tech, Inc., Oak Ridge, TN (United States); Snow, Catherine [Sno Consulting, LLC, Sandy, UT (United States)

    2016-09-01

    Transit matching is the process for relating or matching reports of shipments and receipts submitted to the International Atomic Energy Agency (IAEA). Transit matching is a component used by the IAEA in drawing safeguards conclusions and performing investigative analysis. Transit matching is part of IAEA safeguards activities and the State evaluation process, and it is included in the annual Safeguards Implementation Report (SIR). Annually, the IAEA currently receives reports of ~900,000 nuclear material transactions, of which ~500,000 are for domestic and foreign transfers. Of these the IAEA software can automatically match (i.e., machine match) about 95% of the domestic transfers and 25% of the foreign transfers. Given the increasing demands upon IAEA resources, it is highly desirable for the machine-matching process to match as many transfers as possible. Researchers at Oak Ridge National Laboratory (ORNL) have conducted an investigation funded by the National Nuclear Security Administration through the Next Generation Safeguards Initiative to identify opportunities to strengthen IAEA transit matching. Successful matching, and more specifically machine matching, is contingent on quality data from the reporting States. In February 2016, ORNL hosted representatives from three States, the IAEA, and Euratom to share results from past studies and to discuss the processes, policies, and procedures associated with State reporting for transit matching. Drawing on each entity's experience and knowledge, ORNL developed a best practices document to be shared with the international safeguards community to strengthen transit matching. This paper shares the recommendations that resulted from this strategic meeting and the next steps being taken to strengthen transit matching.

  18. Automatic Epileptic Seizure Onset Detection Using Matching Pursuit

    DEFF Research Database (Denmark)

    Sorensen, Thomas Lynggaard; Olsen, Ulrich L.; Conradsen, Isa

    2010-01-01

    . The combination of Matching Pursuit and SVM for automatic seizure detection has never been tested before, making this a pilot study. Data from six different patients with 6 to 49 seizures are used to test our model. Three patients are recorded with scalp electroencephalography (sEEG) and three with intracranial...... electroencephalography (iEEG). A sensitivity of 78-100% and a detection latency of 5-18 s have been achieved, while holding the false detection rate at 0.16-5.31/h. Our results show the potential of Matching Pursuit as a feature extractor for detection of epileptic seizures....
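Generic matching pursuit, the feature extractor named above, greedily decomposes a signal over a dictionary of unit-norm atoms, subtracting the best-correlated atom at each step. A minimal sketch (with an invented sinusoid dictionary, not the paper's EEG atoms):

```python
import numpy as np

def matching_pursuit(signal, atoms, n_iter=10):
    """atoms: array of unit-norm dictionary atoms, one per row.
    Returns greedy coefficients and the final residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(len(atoms))
    for _ in range(n_iter):
        proj = atoms @ residual                  # correlation with each atom
        k = int(np.argmax(np.abs(proj)))
        coeffs[k] += proj[k]                     # record contribution of atom k
        residual -= proj[k] * atoms[k]           # peel it off the residual
    return coeffs, residual

t = np.linspace(0, 1, 128, endpoint=False)
atoms = np.array([np.sin(2 * np.pi * f * t) for f in (3, 7, 12, 20)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
signal = 2.0 * atoms[1] + 0.5 * atoms[3]         # known sparse combination
coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
```

The recovered coefficients (or atom energies) are then the kind of features one could feed to a classifier such as an SVM.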

  19. MatchingTools: A Python library for symbolic effective field theory calculations

    Science.gov (United States)

    Criado, Juan C.

    2018-06-01

    MatchingTools is a Python library for doing symbolic calculations in effective field theory. It provides the tools to construct general models by defining their field content and their interaction Lagrangian. Once a model is given, the heavy particles can be integrated out at the tree level to obtain an effective Lagrangian in which only the light particles appear. After integration, some of the terms of the resulting Lagrangian might not be independent. MatchingTools contains functions for transforming these terms to rewrite them in terms of any chosen set of operators.

  20. Evaluation of the hydrological cycle of MATCH driven by NCEP reanalysis data: comparison with GOME water vapor measurements

    Directory of Open Access Journals (Sweden)

    R. Lang

    2005-01-01

    Full Text Available This study examines two key parameters of the hydrological cycle, water vapor (WV and precipitation rates (PR, as modelled by the chemistry transport model MATCH (Model of Atmospheric Transport and Chemistry driven by National Centers for Environmental Prediction (NCEP reanalysis data (NRA. For model output evaluation we primarily employ WV total column data from the Global Ozone Monitoring Experiment (GOME on ERS-2, which is the only instrument capable of measuring WV on a global scale and over all surface types with a substantial data record from 1995 to the present. We find that MATCH and NRA WV and PR distributions are closely related, but that significant regional differences in both parameters exist in magnitude and distribution patterns when compared to the observations. We also find that WV residual patterns between model and observations show remarkable similarities to residuals observed in the PR when comparing MATCH and NRA output to observations comprised by the Global Precipitation Climatology Project (GPCP. We conclude that deficiencies in model parameters shared by MATCH and NRA, such as in the surface evaporation rates and regional transport patterns, are likely to lead to the observed differences. Monthly average regional differences between MATCH modelled WV columns and the observations can be as large as 2 cm, based on the analysis of three years. Differences in the global mean WV values are, however, below 0.1 cm. Regional differences in the PR between MATCH and GPCP can be above 0.5 cm per day, and MATCH computes on average a higher PR than what has been observed. The lower water vapor content of MATCH is related to shorter model WV residence times, by up to 1 day, as compared to the observations. We find that MATCH has problems in modelling the WV content in regions of strong upward convection, for example along the Inter Tropical Convergence Zone, where it appears to be generally too dry as compared to the observations. We

  1. The abrasive blasting technique. Matching the waste minimisation precept

    International Nuclear Information System (INIS)

    Welbers, Philipp; Noll, Thomas; Braehler, Georg; Sohnius, Bern

    2010-01-01

    Nowadays, the main challenges in the nuclear industry are, besides the development and design of new facilities, the dismantling of outlived nuclear installations and the subsequent waste handling. Not only Germany but all countries and institutions involved in this business face similar problems: a large quantity of slightly contaminated waste, equipment and civil structures arises inevitably during operation and, especially, during dismantling. This waste occurs in huge volumes due to its bulky nature, e.g. pipe-work. Storage of bulky items is very expensive and would not be compatible with the waste minimisation precept. Treatment in an ecologically sound and economically beneficial way is the key factor in dealing with this waste. This means decontamination of the waste up to clearance levels where possible. A suitable solution is the abrasive blasting technique. (orig.)

  2. Prism-coupled Cherenkov phase-matched terahertz wave generation using a DAST crystal.

    Science.gov (United States)

    Suizu, Koji; Shibuya, Takayuki; Uchida, Hirohisa; Kawase, Kodo

    2010-02-15

    Terahertz (THz) wave generation based on nonlinear frequency conversion is a promising method for realizing a tunable monochromatic high-power THz-wave source. Unfortunately, many nonlinear crystals have strong absorption in the THz frequency region. This limits efficient and widely tunable THz-wave generation. The Cherenkov phase-matching method is one of the most promising techniques for overcoming these problems. Here, we propose a prism-coupled Cherenkov phase-matching (PCC-PM) method, in which a prism with a suitable refractive index at THz frequencies is coupled to a nonlinear crystal. This has the following advantages: many crystals can be used as THz-wave emitters; the phase-matching condition inside the crystal does not have to be observed; the absorption of the crystal does not prevent efficient generation of radiation; and pump sources with arbitrary wavelengths can be employed. Here we demonstrate PCC-PM THz-wave generation using the organic crystal 4-dimethylamino-N-methyl-4-stilbazolium tosylate (DAST) and a Si prism coupler. We obtain THz-wave radiation with tunability of approximately 0.1 to 10 THz and with no deep absorption features resulting from the absorption spectrum of the crystal. The obtained spectra did not depend on the pump wavelength in the range 1300 to 1450 nm. This simple technique shows promise for generating THz radiation using a wide variety of nonlinear crystals.

  3. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    Science.gov (United States)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is undermined by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, resulting in a deficient estimate of the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects. However, consistent compliance with sub-decimetre accuracy, as well as correction of errors in height, remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the diminished exterior orientation parameters of the MM platform will be utilised, as they enable the application of the accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct the affected MM data. This paper presents an advanced feature-matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. Originating from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and a different original perspective. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability

  4. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies through their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and the possible improvements over these models through 3D modeling, are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open channel flows, but not yet for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  5. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by a mixture of two or three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on real image sequences have given good results, with a PSNR gain reaching 3 dB.
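For contrast with the mixture-based criterion, the underlying block-matching search can be sketched with the common sum-of-absolute-differences (SAD) criterion (toy frames, not the article's method):

```python
import numpy as np

def block_motion(ref, cur, y, x, bs=8, sr=4):
    """Motion vector for the bs x bs block of `cur` at (y, x): exhaustive
    search over a +/-sr window in `ref` for the minimum-SAD displacement."""
    block = cur[y:y+bs, x:x+bs]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + bs <= ref.shape[0] and 0 <= xx and xx + bs <= ref.shape[1]:
                sad = np.abs(ref[yy:yy+bs, xx:xx+bs] - block).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(4)
ref = rng.random((32, 32))
cur = np.roll(ref, (2, -3), axis=(0, 1))    # frame shifted down 2, left 3
mv = block_motion(ref, cur, 12, 12)
```

The article's contribution replaces the per-pixel SAD comparison with a cluster-level extended Mahalanobis distance between the fitted Gaussian mixtures; the search structure is the same.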

  6. Privacy‐Preserving Friend Matching Protocol approach for Pre‐match in Social Networks

    DEFF Research Database (Denmark)

    Ople, Shubhangi S.; Deshmukh, Aaradhana A.; Mihovska, Albena Dimitrova

    2016-01-01

    Social services make the most use of user profile matching to help users discover friends with similar social attributes (e.g. interests, location, age). However, there are many privacy concerns that prevent enabling this functionality. Privacy-preserving encryption is not suitable...... for use in social networks due to its data-sharing problems and information leakage. In this paper, we propose a novel framework for privacy-preserving profile matching. We implement both the client and server portions of the secure match and evaluate its performance on a network dataset. The results show......

  7. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and its compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique, in which detailed models of individual systems are processed rather than a single, plant-level model. In addition, the development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  8. Thermal-depth matching in dynamic scene based on affine projection and feature registration

    Science.gov (United States)

    Wang, Hongyu; Jia, Tong; Wu, Chengdong; Li, Yongqiang

    2018-03-01

    This paper studies the construction of a 3D temperature distribution reconstruction system based on depth and thermal infrared information. A traditional calibration method cannot be used directly, because depth and thermal infrared cameras are not sensitive to a color calibration board. Therefore, this paper designs a calibration board suited to both depth and thermal infrared cameras to complete their calibration. Meanwhile, a local feature descriptor for thermal and depth images is proposed. A belief propagation matching algorithm is also investigated, based on spatial affine transformation matching and local feature matching. The 3D temperature distribution model is built by matching the 3D point cloud with the 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.

  9. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media

  10. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  11. History Matching of 4D Seismic Data Attributes using the Ensemble Kalman Filter

    KAUST Repository

    Ravanelli, Fabio M.

    2013-01-01

    . This problem is tackled by the conditioning of the model with production data through data assimilation. This process is known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably

  12. SU-E-T-622: Identification and Improvement of Patients Eligible for Dose Escalation with Matched Plans

    International Nuclear Information System (INIS)

    Bush, K; Holcombe, C; Kapp, D; Buyyounouski, M; Hancock, S; Xing, L; Atwood, T; King, M

    2014-01-01

    Purpose: Radiation-therapy dose-escalation beyond 80Gy may improve tumor control rates for patients with localized prostate cancer. Since toxicity remains a concern, treatment planners must achieve dose-escalation while still adhering to dose-constraints for surrounding structures. Patient-matching is a machine-learning technique that identifies prior patients that dosimetrically match DVH parameters of target volumes and critical structures prior to actual treatment planning. We evaluated the feasibility of patient-matching in (1) identifying candidates for safe dose-escalation; and (2) improving DVH parameters for critical structures in actual dose-escalated plans. Methods: We analyzed DVH parameters from 319 historical treatment plans to determine which plans could achieve dose-escalation (8640cGy) without exceeding Zelefsky dose-constraints (rectal and bladder V47Gy<53%, and V75.6Gy<30%, max-point dose to rectum of 8550cGy, max dose to PTV < 9504cGy). We then estimated the percentage of cases that could achieve safe dose-escalation using software that enables patient matching (QuickMatch, Siris Medical, Mountain View, CA). We then replanned a case that had violated DVH constraints with DVH parameters from patient matching, in order to determine whether this previously unacceptable plan could be made eligible with this automated technique. Results: Patient-matching improved the percentage of patients eligible for dose-escalation from 40% to 63% (p=4.7e-4, t-test). Using a commercial optimizer augmented with patient-matching, we demonstrated a case where patient-matching improved the toxicity profile such that dose-escalation would have been possible; this plan was rapidly achieved using patient-matching software. In this patient, all lower-dose constraints were met with both the de novo and patient-matching plans. In the patient-matching plan, maximum dose to the rectum was 8385cGy, while the de novo plan failed to meet the maximum rectal constraint at 8571c
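The constraint screen described in Methods reduces to a simple predicate over a plan's DVH parameters. A hypothetical sketch (field names invented; constraint values taken from the abstract):

```python
# Screen a plan's DVH summary against the Zelefsky-style constraints quoted
# above (doses in cGy, volumes in %). Dictionary keys are hypothetical.
def meets_constraints(p):
    return (p["rectum_V47Gy"] < 53 and p["bladder_V47Gy"] < 53
            and p["rectum_V75.6Gy"] < 30 and p["bladder_V75.6Gy"] < 30
            and p["rectum_max_cGy"] <= 8550 and p["ptv_max_cGy"] < 9504)

plan = {"rectum_V47Gy": 48.0, "bladder_V47Gy": 40.0,
        "rectum_V75.6Gy": 22.0, "bladder_V75.6Gy": 15.0,
        "rectum_max_cGy": 8385, "ptv_max_cGy": 9200}
eligible = meets_constraints(plan)
```

Patient-matching, in these terms, predicts achievable values for such a dictionary from prior patients before any plan is optimized, so the screen can be run up front.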

  13. Estimation of Initial Position Using Line Segment Matching in Maps

    Directory of Open Access Journals (Sweden)

    Chongyang Wei

    2016-06-01

    Full Text Available While navigating in a typical traffic scene, with a drastic drift or sudden jump in its Global Positioning System (GPS) position, localization based on such an initial position is unable to extract precise overlapping data from the prior map to match against the current data, rendering localization infeasible. In this paper, we first propose a new method to estimate an initial position by matching infrared reflectivity maps. The maps consist of a highly precise prior map, built with the offline simultaneous localization and mapping (SLAM) technique, and a smooth current map, built by integrating over velocities. Considering the attributes of the maps, we propose to exploit stable, rich line segments to match the lidar maps. To evaluate the consistency of candidate line pairs in both maps, we adopt the local appearance, pairwise geometric attributes and structural likelihood to construct an affinity graph, and employ a spectral algorithm to solve the graph efficiently. The initial position is obtained from the relationship between the vehicle's current position and the matched lines. Experiments on a campus with a GPS error of dozens of metres show that our algorithm can provide an accurate initial value, with average longitudinal and lateral errors of 1.68 m and 1.04 m, respectively.
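The affinity-graph-plus-spectral step can be sketched on toy data: a minimal version in the spirit of Leordeanu-Hebert spectral matching, using only line angles as the pairwise attribute, not the paper's full descriptor.

```python
import numpy as np

lines_a = np.array([0.0, 45.0, 90.0])        # prior-map line angles (deg), invented
lines_b = np.array([92.0, 1.0, 44.0])        # current-map angles: permuted + noisy
cand = [(i, j) for i in range(3) for j in range(3)]  # candidate assignments

# Pairwise consistency: assignments (i->j) and (k->l) agree when the angle
# difference between lines i,k in map A matches that between lines j,l in map B.
n = len(cand)
M = np.zeros((n, n))
for p, (i, j) in enumerate(cand):
    for q, (k, l) in enumerate(cand):
        if i != k and j != l:
            d = (lines_a[i] - lines_a[k]) - (lines_b[j] - lines_b[l])
            M[p, q] = np.exp(-d ** 2 / 50.0)

# The principal eigenvector of the affinity matrix scores assignments;
# extract a one-to-one matching greedily in decreasing score order.
score = np.abs(np.linalg.eigh(M)[1][:, -1])
matching, used_a, used_b = {}, set(), set()
for p in np.argsort(-score):
    i, j = cand[p]
    if i not in used_a and j not in used_b:
        matching[i] = j
        used_a.add(i); used_b.add(j)
```

Mutually consistent assignments form a strongly connected cluster in M, so the leading eigenvector concentrates its mass on the correct permutation.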

  14. The Brown-Servranckx matching transformer for simultaneous RFQ to DTL H+ and H- matching

    International Nuclear Information System (INIS)

    Wadlinger, E.A.; Garnett, R.W.

    1996-01-01

    The issue involved in the simultaneous matching of H+ and H- beams between an RFQ and DTL lies in the fact that both beams experience the same electric-field forces at a given position in the RFQ. Hence, the two beams are focused to the same correlation. However, matching to a DTL requires correlation of the opposite sign. The Brown-Servranckx quarter-wave (λ/4) matching transformer system, which requires four quadrupoles, provides a method to simultaneously match H+ and H- beams between an RFQ and a DTL. The method requires the use of a special RFQ section to obtain the Twiss parameter conditions βx = βy and αx = αy = 0 at the exit of the RFQ. This matching between the RFQ and DTL is described. (author)

  15. The Brown-Servranckx matching transformer for simultaneous RFQ to DTL H+ and H- matching

    International Nuclear Information System (INIS)

    Wadlinger, E.A.; Garnett, R.W.

    1996-01-01

    The issue involved in simultaneous matching of H+ and H- beams between an RFQ and DTL lies in the fact that both beams experience the same electric-field forces at a given position in the RFQ. Hence, the two beams are focused to the same correlation. However, matching to a DTL requires correlation of the opposite sign. The Brown-Servranckx quarter-wave (λ/4) matching transformer system, which requires four quadrupoles, provides a method to simultaneously match H+ and H- beams between an RFQ and a DTL. The method requires the use of a special RFQ section to obtain the Twiss parameter conditions βx = βy and αx = αy = 0 at the exit of the RFQ. This matching between the RFQ and DTL is described.

  16. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in this field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluating new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45 ± 8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were created by injecting tumor cells directly into the SD rat femur with an inoculation needle. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 μl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining survival time. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r = 0.942; p < 0.01), demonstrating that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median survival time was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  17. Comparison of self-written waveguide techniques and bulk index matching for low-loss polymer waveguide interconnects

    Science.gov (United States)

    Burrell, Derek; Middlebrook, Christopher

    2016-03-01

    Polymer waveguides (PWGs) are used within photonic interconnects as inexpensive and versatile substitutes for traditional optical fibers. The PWGs are typically aligned to silica-based optical fibers for coupling. An epoxide elastomer is then applied and cured at the interface for index matching and rigid attachment. Self-written waveguides (SWWs) are proposed as an alternative to further reduce connection insertion loss (IL) and alleviate marginal misalignment issues. Elastomer material is deposited after the initial alignment, and SWWs are formed by injecting ultraviolet (UV) light into the fiber or waveguide. The coupled UV light cures a channel between the two differing structures. A suitable cladding layer can be applied after development. Factors such as longitudinal gap distance, UV cure time, input power level, polymer material selection and choice of solvent affect the resulting SWWs. Experimental data are compared between purely index-matched samples and those with SWWs at the fiber-PWG interface. Successfully fabricated SWWs reduce overall processing time and enable an effectively continuous low-loss rigid interconnect.

  18. In vivo kinematics of healthy male knees during squat and golf swing using image-matching techniques.

    Science.gov (United States)

    Murakami, Koji; Hamai, Satoshi; Okazaki, Ken; Ikebe, Satoru; Shimoto, Takeshi; Hara, Daisuke; Mizu-uchi, Hideki; Higaki, Hidehiko; Iwamoto, Yukihide

    2016-03-01

    Participation in specific activities requires complex ranges of knee movement and activity-dependent kinematics. The purpose of this study was to investigate dynamic knee kinematics during the squat and the golf swing using image-matching techniques. Five healthy males performed squats and golf swings while periodic X-ray images were acquired at 10 frames per second. We analyzed the in vivo three-dimensional kinematic parameters of the subjects' knees, namely the tibiofemoral flexion angle, anteroposterior (AP) translation, and internal-external rotation, using serial X-ray images and computed tomography-derived, digitally reconstructed radiographs. During the squat from 0° to 140° of flexion, the femur moved about 25 mm posteriorly and rotated 19° externally relative to the tibia. Screw-home movement near extension, bicondylar rollback between 20° and 120° of flexion, and medial pivot motion at further flexion were observed. During the golf swing, the leading and trailing knees (the left and right knees, respectively, in a right-handed golfer) showed approximately 5 mm and 4 mm of AP translation with 18° and 26° of axial rotation, respectively. A central pivot motion from set-up to the top of the backswing, lateral pivot motion from the top to ball impact, and medial pivot motion from impact to the end of the follow-through were observed. The medial pivot motion was not always recognized during both activities, but a large range of axial rotation with bilateral condylar AP translation occurs during the golf swing. This finding has important implications regarding the amount of acceptable AP translation and axial rotation at low flexion in replaced knees. Level of evidence: IV. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Rapid Calibration of High Resolution Geologic Models to Dynamic Data Using Inverse Modeling: Field Application and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Akhil Datta-Gupta

    2008-03-31

    Streamline-based assisted and automatic history matching techniques have shown great potential in reconciling high-resolution geologic models to production data. However, a major drawback of these approaches has been the assumption of incompressible or slightly compressible flow, which has limited their application to two-phase water-oil displacements only. We propose an approach to history matching three-phase flow using a novel compressible streamline formulation and streamline-derived analytic sensitivities. First, we utilize a generalized streamline model to account for compressible flow by introducing an 'effective density' of total fluids along streamlines. Second, we analytically compute parameter sensitivities that define the relationship between the reservoir properties and the production response, viz. water-cut and gas/oil ratio (GOR). These sensitivities are an integral part of history matching, and streamline models permit efficient computation of these sensitivities through a single flow simulation. We calibrate geologic models to production data by matching the water-cut and gas/oil ratio using our previously proposed generalized travel time inversion (GTTI) technique. For field applications, however, the highly non-monotonic profile of the gas/oil ratio data often presents a challenge to this technique. In this work we present a transformation of the field production data that makes it more amenable to GTTI. Further, we generalize the approach to incorporate bottom-hole flowing pressure during three-phase history matching. We examine the practical feasibility of the method using a field-scale synthetic example (the SPE-9 comparative study) and a field application. Recently, ensemble Kalman filtering (EnKF) has gained increased attention for history matching and continuous reservoir model updating using data from permanent downhole sensors. It is a sequential Monte-Carlo approach that works with an ensemble of reservoir models. Specifically, the method

  20. Wide baseline stereo matching based on double topological relationship consistency

    Science.gov (United States)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented, called double topological relationship consistency (DCTR), combining the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods that depend on powerful invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on image pairs.
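
    The abstract mentions recovering epipolar geometry with RANSAC. The hypothesize-and-verify loop at the heart of RANSAC can be sketched on a simpler model; for the fundamental matrix, the 2-point line fit below would be replaced by a minimal 7- or 8-point solver. All data and parameter values here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ransac_line(points, n_iters=200, thresh=0.05):
        """Generic RANSAC loop, here fitting y = a*x + b to 2-D points.

        Fundamental-matrix estimation for wide-baseline matching uses the
        same hypothesize-and-verify loop, with a minimal multi-point solver
        in place of the 2-point line fit below.
        """
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            i, j = rng.choice(len(points), size=2, replace=False)
            (x1, y1), (x2, y2) = points[i], points[j]
            if abs(x2 - x1) < 1e-12:
                continue                      # degenerate sample
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
            inliers = residuals < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refit on the consensus set by least squares.
        a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
        return (a, b), best_inliers

    # 80 points on y = 2x + 1 plus 20 gross outliers ("mismatches").
    x = rng.uniform(0, 1, 80)
    inlier_pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.01, 80)])
    outlier_pts = rng.uniform(-5, 5, (20, 2))
    pts = np.vstack([inlier_pts, outlier_pts])
    (a, b), mask = ransac_line(pts)
    print(round(a, 2), round(b, 2))
    ```

    The inlier mask is exactly what a matcher needs: correspondences flagged as outliers by the recovered epipolar geometry are discarded as mismatches.
    
    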

  1. FEATURE MATCHING OF HISTORICAL IMAGES BASED ON GEOMETRY OF QUADRILATERALS

    Directory of Open Access Journals (Sweden)

    F. Maiwald

    2018-05-01

    Full Text Available This contribution shows an approach to matching historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photographs exhibit diverse factors that make automatic image analysis (feature detection, feature matching and relative orientation of images) difficult. Due to, e.g., film grain, dust particles or the digitization process, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space, as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to detect quadrilaterals in images in general. Subsequently, the properties of the quadrilaterals as well as their relationships to neighbouring quadrilaterals are used for the description and matching of feature points. The results show that most of the matches are robust and correct but still small in number.

  2. MARG1D: One dimensional outer region matching data code

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Watanabe, Tomoko.

    1995-08-01

    A code MARG1D has been developed which computes outer-region matching data for the one-dimensional Newcomb equation. Matching data play an important role in the resistive (and non-ideal) magnetohydrodynamic (MHD) stability analysis of a tokamak plasma. The MARG1D code computes matching data by the boundary value method or by the eigenvalue method. Variational principles are derived for the problems to be solved and a finite element method is applied. Except for the case of marginal stability, the eigenvalue method is equivalent to the boundary value method. However, the eigenvalue method has several advantages: it is a new method of ideal MHD stability analysis for which the marginally stable state can be identified, and it guarantees numerical stability when computing matching data close to marginal stability. We perform detailed numerical experiments for a model equation with analytical solutions and for the Newcomb equation in the m=1 mode theory. The numerical experiments show that the MARG1D code gives matching data with numerical stability and high accuracy. (author)

  3. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan, J. Zhang

    2012-12-01

    Full Text Available Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create significant distortion to the shape of the correlation function, leading to an error in the receiver's position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are many channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are kept out of consideration when analyzing the performance of these techniques. It is therefore of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite (LMS) channel model.

  4. Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis

    Directory of Open Access Journals (Sweden)

    Benn eMacdonald

    2015-11-01

    Full Text Available Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
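
    The two-step scheme (smooth the data, then tune the ODE parameters so the interpolant's slopes match the ODE right-hand side) can be sketched for a one-parameter logistic ODE. The model, data, and tolerances below are illustrative, not taken from the review.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize_scalar

    # Gradient matching for the logistic ODE dx/dt = r*x*(1 - x), r unknown.
    # Synthetic data generated with r = 1.5; the ODE is never solved during
    # inference, we only compare interpolant slopes to the ODE right-hand side.
    r_true = 1.5
    t = np.linspace(0, 5, 25)
    x = 0.1 / (0.1 + 0.9 * np.exp(-r_true * t))   # closed-form logistic solution

    spline = CubicSpline(t, x)     # step 1: smooth/interpolate the data
    slopes = spline(t, 1)          # tangent slopes of the interpolant

    def mismatch(r):
        # step 2: distance between interpolant slopes and ODE time derivatives
        return np.sum((slopes - r * x * (1 - x)) ** 2)

    r_hat = minimize_scalar(mismatch, bounds=(0.1, 5.0), method="bounded").x
    print(round(r_hat, 2))
    ```

    In the methods reviewed, the cubic spline would typically be replaced by a Gaussian process or penalized spline, and the mismatch metric tuned accordingly, but the structure of the computation is the same.
    
    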

  5. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  6. Cyclic Matching Pursuits with Multiscale Time-frequency Dictionaries

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Christensen, Mads Græsbøll

    2010-01-01

    We generalize cyclic matching pursuit (CMP), propose an orthogonal variant, and examine their performance using multiscale time-frequency dictionaries in the sparse approximation of signals. Overall, we find that the cyclic approach of CMP produces signal models that have a much lower approximation...
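
    A minimal sketch of matching pursuit with a cyclic refinement pass, in the spirit of CMP but with a generic random dictionary rather than multiscale time-frequency atoms; all values below are illustrative.

    ```python
    import numpy as np

    def matching_pursuit(signal, D, n_atoms, n_cycles=2):
        """Plain matching pursuit followed by cyclic refinement.

        D: dictionary with unit-norm atoms as columns. The cyclic pass
        re-selects each atom in turn with the others held fixed, which can
        only lower (never raise) the residual energy.
        """
        residual = signal.copy()
        idx, coef = [], []
        for _ in range(n_atoms):                    # standard MP
            c = D.T @ residual
            k = np.argmax(np.abs(c))
            idx.append(k)
            coef.append(c[k])
            residual -= c[k] * D[:, k]
        for _ in range(n_cycles):                   # cyclic refinement
            for m in range(n_atoms):
                residual += coef[m] * D[:, idx[m]]  # put atom m back
                c = D.T @ residual
                k = np.argmax(np.abs(c))            # re-select best atom
                idx[m], coef[m] = k, c[k]
                residual -= c[k] * D[:, k]
        return idx, coef, residual

    # Toy dictionary: identity atoms plus a few random unit-norm atoms.
    rng = np.random.default_rng(1)
    D = np.column_stack([np.eye(8), rng.normal(size=(8, 4))])
    D /= np.linalg.norm(D, axis=0)
    s = 3.0 * D[:, 2] + 0.5 * D[:, 9]
    idx, coef, res = matching_pursuit(s, D, n_atoms=2)
    print(sorted(idx), round(float(np.linalg.norm(res)), 4))
    ```

    Because each cyclic re-selection minimizes the residual over all single-atom choices (including keeping the current atom), the residual norm is non-increasing, which is the property the paper exploits.
    
    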

  7. Rotation-invariant fingerprint matching using radon and DCT

    Indian Academy of Sciences (India)

    Sangita Bharkad

    2017-11-20

    Nov 20, 2017 ... [6] Bazen A and Gerez S 2003 Fingerprint matching by thin-plate spline modeling of elastic deformations. Pattern Recogn. 36(8): 1859–1867. [7] Jain A, Hong L and Bolle R 1997 On-line fingerprint verification. IEEE Trans.

  8. Spatial competition with intermediated matching

    NARCIS (Netherlands)

    van Raalte, C.L.J.P.; Webers, H.M.

    1995-01-01

    This paper analyzes the spatial competition in commission fees between two match makers. These match makers serve as middlemen between buyers and sellers who are located uniformly on a circle. The profits of the match makers are determined by their respective market sizes. A limited willingness to

  9. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    Science.gov (United States)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

    Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using serial filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from the ideal and in terms of envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of 10^-6.

  10. Modulational-instability gain bands in quasi-phase-matched materials

    International Nuclear Information System (INIS)

    Corney, J.F.; Bang, O.

    2002-01-01

    Full text: Quadratically nonlinear materials are of significant technological interest in optics because of their strong and fast cascaded nonlinearities, which are accessed most efficiently with quasi-phase-matching (QPM) techniques. We study the gain spectra of modulational instabilities (MI) in quadratic materials where the linear and nonlinear properties are modulated with QPM gratings. The periods and intensity dependence of the MI can now be measured in the laboratory. Using an exact Floquet theory, we find that novel low- and high-frequency bands appear in the gain spectrum (gain versus transverse spatial frequency). The high-frequency gain bands are a general feature of gain spectra for QPM gratings. They form part of an extensive series of bands that correspond to MI in the non-phase-matched, quickly varying components of the fields. The low-frequency bands correspond to MI in the phase-matched DC components of the fields and are accurately predicted by a simple average theory. This theory includes the effect of the quickly varying components as induced cubic terms, which can be strong enough to suppress the low-frequency bands, in which case dark solitons and other broad beams may be effectively stable, since the high-frequency bands are typically small.

  11. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  13. Do Targeted Hiring Subsidies and Profiling Techniques Reduce Unemployment?

    DEFF Research Database (Denmark)

    Jahn, Elke; Wagner, Thomas

    2008-01-01

    To reduce equilibrium unemployment, targeted hiring subsidies and profiling techniques for the long-term unemployed are often recommended. To analyze the effects of these two instruments, our model combines two search methods: the public employment service and random search; jobseekers choose between...... an active and a passive search strategy, while labour market policy has two options available. First, only the long-term unemployed placed by the public employment service are subsidized. Second, the subsidy is paid for each match with a long-term unemployed person irrespective of the search method used. We show...

  14. Do Targeted Hiring Subsidies and Profiling Techniques Reduce Unemployment?

    DEFF Research Database (Denmark)

    Jahn, Elke; Wagner, Thomas

    To reduce equilibrium unemployment, targeted hiring subsidies and profiling techniques for the long-term unemployed are often recommended. To analyze the effects of these two instruments, our model combines two search methods: the public employment service and random search; jobseekers choose between...... an active and a passive search strategy, while labour market policy has two options available. First, only the long-term unemployed placed by the public employment service are subsidized. Second, the subsidy is paid for each match with a long-term unemployed person irrespective of the search method used. We show...

  15. The Comparison of Matching Methods Using Different Measures of Balance: Benefits and Risks Exemplified within a Study to Evaluate the Effects of German Disease Management Programs on Long-Term Outcomes of Patients with Type 2 Diabetes.

    Science.gov (United States)

    Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje

    2016-10-01

    To present a case study on how to compare various matching methods applying different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different conclusions about the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, changing the baseline characteristics of the study sample and also the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted model instead of logistic regression showed slightly better performance for balance diagnostics that take into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
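
    As an illustration of the kind of balance diagnostic the study compares, the sketch below computes the standardized mean difference of a single confounder before and after 1:1 nearest-neighbour matching on a propensity score (known here for simplicity; real studies would estimate it, e.g. by logistic regression). The data and parameter values are synthetic and hypothetical, not the study's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def smd(x_t, x_c):
        """Standardized mean difference, a common covariate balance diagnostic."""
        pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
        return abs(x_t.mean() - x_c.mean()) / pooled_sd

    # Hypothetical data: one confounder x shifts the treatment probability.
    n = 2000
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(x - 0.5)))       # true propensity score
    treated = rng.uniform(size=n) < p

    # 1:1 nearest-neighbour matching on the propensity score, without replacement.
    t_idx = np.where(treated)[0]
    c_idx = list(np.where(~treated)[0])
    matches = []
    for i in t_idx:
        j = min(c_idx, key=lambda j: abs(p[j] - p[i]))  # closest control
        matches.append(j)
        c_idx.remove(j)                                  # without replacement

    before = smd(x[treated], x[~treated])
    after = smd(x[t_idx], x[np.array(matches)])
    print(round(before, 2), round(after, 2))
    ```

    The SMD should drop markedly after matching; diagnostics sensitive to higher moments (e.g. variance ratios or Kolmogorov-Smirnov statistics) can rank the same matching methods differently, which is the pitfall the study highlights.
    
    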

  16. On the refractive index of sodium iodide solutions for index matching in PIV

    Science.gov (United States)

    Bai, Kunlun; Katz, Joseph

    2014-04-01

    Refractive index matching has become a popular technique for facilitating applications of modern optical diagnostic techniques, such as particle image velocimetry, in complex systems. By matching the refractive index of solid boundaries with that of the liquid, unobstructed optical paths can be achieved for illumination and image acquisition. In this research note, we extend previously provided data for the refractive index of aqueous solutions of sodium iodide (NaI) for concentrations reaching the temperature-dependent solubility limit. Results are fitted onto a quadratic empirical expression relating the concentration to the refractive index. Temperature effects are also measured. The present range of indices, 1.333-1.51, covers that of typical transparent solids, from silicone elastomers to several recently introduced materials that could be manufactured using rapid prototyping. We also review briefly previous measurements of the refractive index, viscosity, and density of NaI solutions, as well as prior research that has utilized this fluid.
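
    The quadratic calibration described in the note can be illustrated as follows; the concentration-index pairs below are made-up placeholders, not the note's measurements, and the target index of 1.41 is merely a typical silicone-elastomer value.

    ```python
    import numpy as np

    # Illustrative (not the paper's) data: refractive index of aqueous NaI
    # at a few concentrations, fitted to a quadratic calibration n(c).
    c = np.array([0.0, 0.15, 0.30, 0.45, 0.60])        # mass fraction NaI
    n = np.array([1.333, 1.365, 1.402, 1.448, 1.500])  # refractive index

    coeffs = np.polyfit(c, n, 2)       # n(c) ≈ a*c**2 + b*c + n0
    fit = np.poly1d(coeffs)

    # Invert the calibration: concentration needed to match a solid of n ≈ 1.41
    # (roughly a silicone elastomer, used here purely as an example target).
    target = 1.41
    roots = (fit - target).roots
    c_match = roots[(roots.real >= 0) & (roots.real <= 0.6)].real[0]
    print(round(float(c_match), 3))
    ```

    In practice the temperature dependence reported in the note would add a correction term to the fitted coefficients.
    
    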

  17. Dehydration in the tropical tropopause layer estimated from the water vapor match

    Directory of Open Access Journals (Sweden)

    Y. Inai

    2013-09-01

    Full Text Available We apply the match technique, whereby the same air mass is observed more than once and such cases are termed a "match", to study the dehydration process associated with horizontal advection in the tropical tropopause layer (TTL) over the western Pacific. The matches are obtained from profile data taken by the Soundings of Ozone and Water in the Equatorial Region (SOWER) campaign network observations, using isentropic trajectories calculated from European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses. For the matches identified, extensive screening procedures are performed to verify the representativeness of the air parcel and the validity of the isentropic treatment, to check for possible water injection by deep convection, and to check consistency between the sonde data and the analysis field with reference to ozone conservation. Among the matches that passed the screening tests, we identified some cases yielding the first quantitative estimates of dehydration associated with horizontal advection in the TTL. The statistical features of dehydration for air parcels advected in the lower TTL are derived from the matches. The threshold for nucleation is estimated to be 146 ± 1% (1σ) in relative humidity with respect to ice (RHice), while dehydration seems to continue until RHice reaches about 75 ± 23% (1σ) in the altitude region from 350 to 360 K. The efficiency of dehydration, expressed by the relaxation time required for the supersaturated air parcel to approach saturation, is empirically determined from the matches. A relaxation time of approximately one hour reproduces the second water vapor observation reasonably well, given the first observed water vapor amount and the history of the saturation mixing ratio during advection in the lower TTL.
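
    The empirical relaxation model (supersaturation decaying toward saturation with a time constant of about one hour) can be sketched numerically. All values below are illustrative, not the campaign's measurements.

    ```python
    import numpy as np

    # Sketch of the relaxation model implied by the abstract: a supersaturated
    # air parcel's water vapour mixing ratio w relaxes toward the saturation
    # mixing ratio w_sat with time constant tau ≈ 1 hour. The saturation
    # mixing ratio is held fixed here; along a real trajectory it varies.
    tau = 1.0                      # relaxation time [h]
    dt = 0.1                       # time step [h]
    w_sat = 3.0                    # saturation mixing ratio [ppmv], illustrative
    w = 1.46 * w_sat               # start at the ~146% RHice nucleation threshold

    trajectory = [w]
    for _ in range(int(24 / dt)):  # one day of advection
        w += -(w - w_sat) / tau * dt      # forward-Euler relaxation step
        trajectory.append(w)

    rh = 100 * np.array(trajectory) / w_sat
    # % RHice at 0 h, 1 h, and 24 h:
    print(int(round(float(rh[0]))), int(round(float(rh[10]))), int(round(float(rh[-1]))))
    ```

    After one relaxation time the supersaturation has decayed by roughly a factor e, and by the end of the day the parcel is effectively at saturation, consistent with the abstract's picture of dehydration continuing until RHice approaches saturation.
    
    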

  18. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions. The goal of this work is to analyse the adequacy of the model, which allows estimating the distribution of the time a user takes to process record versions and the distribution of the record version count. The second variant of the model was used, according to which the time a client takes to process record versions depends explicitly on the number of updates performed by other users between the sequential updates performed by the current client. In order to prove the model adequacy, a real experiment was conducted in a cloud cluster. The cluster contains 10 virtual nodes provided by the DigitalOcean Company, with Ubuntu Server 14.04 as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating in parallel on a record, which is very important when conducting experiments. The application was developed using the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record-update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the treated record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and reports the results of processing. In the case of a conflict emerging because of simultaneous updates of the RZ record, the client obtains all versions of that

  19. Characterization-Based Molecular Design of Bio-Fuel Additives Using Chemometric and Property Clustering Techniques

    International Nuclear Information System (INIS)

    Hada, Subin; Solvason, Charles C.; Eden, Mario R.

    2014-01-01

    In this work, multivariate characterization data such as infrared spectroscopy were used as a source of descriptor data carrying information on molecular architecture for designing structured molecules with tailored properties. Application of multivariate statistical techniques such as principal component analysis allowed capturing important features of the molecular architecture from an enormous amount of complex data to build appropriate latent variable models. Combining the property clustering techniques and group contribution methods based on characterization data (cGCM) in a reverse problem formulation enabled identifying candidate components by combining or mixing molecular fragments until the resulting properties match the targets. The developed methodology is demonstrated using the molecular design of a biodiesel additive, which when mixed with off-spec biodiesel produces biodiesel that meets the desired fuel specifications. The contribution of this work is that the complex structures and orientations of the molecule can be included in the design, thereby allowing enumeration of all feasible candidate molecules that matched the identified target but were not part of the original training set of molecules.
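
    The chemometric step (compressing many correlated spectral channels into a few latent variables via principal component analysis) can be sketched on synthetic data; the dimensions and noise levels below are hypothetical.

    ```python
    import numpy as np

    # Sketch of the PCA step: spectra driven by a small number of underlying
    # molecular factors are compressed into a few latent variables on which
    # a property model can then be built.
    rng = np.random.default_rng(0)
    n_samples, n_channels = 30, 200
    latent = rng.normal(size=(n_samples, 2))            # 2 underlying factors
    loadings = rng.normal(size=(2, n_channels))
    spectra = latent @ loadings + 0.01 * rng.normal(size=(n_samples, n_channels))

    X = spectra - spectra.mean(axis=0)                  # mean-centre
    U, s, Vt = np.linalg.svd(X, full_matrices=False)    # PCA via SVD
    explained = s**2 / np.sum(s**2)                     # variance explained
    scores = U[:, :2] * s[:2]                           # latent-variable scores
    print(round(float(explained[:2].sum()), 3))
    ```

    With nearly all variance captured by two components, the reverse (design) problem can be posed on the low-dimensional scores instead of the raw spectra, which is the premise of the cGCM formulation described above.
    
    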

  20. Characterization-Based Molecular Design of Bio-Fuel Additives Using Chemometric and Property Clustering Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hada, Subin; Solvason, Charles C.; Eden, Mario R., E-mail: edenmar@auburn.edu [Department of Chemical Engineering, Auburn University, Auburn, AL (United States)

    2014-06-10

    In this work, multivariate characterization data such as infrared spectroscopy were used as a source of descriptor data carrying information on molecular architecture for designing structured molecules with tailored properties. Application of multivariate statistical techniques such as principal component analysis allowed capturing important features of the molecular architecture from an enormous amount of complex data to build appropriate latent variable models. Combining the property clustering techniques and group contribution methods based on characterization data (cGCM) in a reverse problem formulation enabled identifying candidate components by combining or mixing molecular fragments until the resulting properties match the targets. The developed methodology is demonstrated using the molecular design of a biodiesel additive, which when mixed with off-spec biodiesel produces biodiesel that meets the desired fuel specifications. The contribution of this work is that the complex structures and orientations of the molecule can be included in the design, thereby allowing enumeration of all feasible candidate molecules that matched the identified target but were not part of the original training set of molecules.

  1. Comparison of elective inguinal node irradiation techniques in anal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Ji Hye; Seong, Jin Sil; Keum, Ki Chang; Lee, Chang Geol; Koom, Woong Sub [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2011-12-15

    To compare the photon thunderbird with deep match technique (technique 1) with the 3-field technique with electron inguinal boost (technique 2) in terms of acute skin toxicity, toxicity-related treatment breaks, and patterns of failure in elective inguinal radiation therapy (RT) for curative chemoradiation in anal cancer. Seventeen patients treated between January 2008 and September 2010 without evidence of inguinal and distant metastasis were retrospectively reviewed. In the 9 patients treated with technique 1, the dose to the inguinal and whole-pelvis area was 41.4 to 45 Gy and the total dose was 59.4 Gy. In the 8 patients treated with technique 2, the doses to the inguinal area, whole pelvis, and gross tumor were 36 to 41.4 Gy, 36 to 41.4 Gy, and 45 to 54 Gy, respectively. The median follow-up period was 27.6 and 14.8 months in the technique 1 and technique 2 groups, respectively. The incidences of grade 3 radiation dermatitis were 56% (5 patients) and 50% (4 patients), and the dose ranges at which grade 3 dermatitis appeared were 41.4 to 50.4 Gy and 45 to 54 Gy in the technique 1 and technique 2 groups, respectively (p = 0.819). The areas affected by grade 3 dermatitis in the two groups were as follows: perianal and perineal areas in 40% and 25%, perianal and inguinal areas in 0% and 50%, and the perianal area only in 60% and 25%, respectively (p = 0.196). No inguinal failure has been observed. The photon thunderbird with deep match technique and the 3-field technique with electron inguinal boost showed a similar incidence of radiation dermatitis. However, photon thunderbird with deep match seems to increase the possibility of severe perineal dermatitis.

  2. Comparison of elective inguinal node irradiation techniques in anal cancer

    International Nuclear Information System (INIS)

    Cha, Ji Hye; Seong, Jin Sil; Keum, Ki Chang; Lee, Chang Geol; Koom, Woong Sub

    2011-01-01

    To compare the photon thunderbird with deep match technique (technique 1) with the 3-field technique with electron inguinal boost (technique 2) in terms of acute skin toxicity, toxicity-related treatment breaks, and patterns of failure in elective inguinal radiation therapy (RT) for curative chemoradiation in anal cancer. Seventeen patients treated between January 2008 and September 2010 without evidence of inguinal and distant metastasis were retrospectively reviewed. In the 9 patients treated with technique 1, the dose to the inguinal and whole-pelvis area was 41.4 to 45 Gy and the total dose was 59.4 Gy. In the 8 patients treated with technique 2, the doses to the inguinal area, whole pelvis, and gross tumor were 36 to 41.4 Gy, 36 to 41.4 Gy, and 45 to 54 Gy, respectively. The median follow-up period was 27.6 and 14.8 months in the technique 1 and technique 2 groups, respectively. The incidences of grade 3 radiation dermatitis were 56% (5 patients) and 50% (4 patients), and the dose ranges at which grade 3 dermatitis appeared were 41.4 to 50.4 Gy and 45 to 54 Gy in the technique 1 and technique 2 groups, respectively (p = 0.819). The areas affected by grade 3 dermatitis in the two groups were as follows: perianal and perineal areas in 40% and 25%, perianal and inguinal areas in 0% and 50%, and the perianal area only in 60% and 25%, respectively (p = 0.196). No inguinal failure has been observed. The photon thunderbird with deep match technique and the 3-field technique with electron inguinal boost showed a similar incidence of radiation dermatitis. However, photon thunderbird with deep match seems to increase the possibility of severe perineal dermatitis.

  3. Interpreting cost of ownership for mix-and-match lithography

    Science.gov (United States)

    Levine, Alan L.; Bergendahl, Albert S.

    1994-05-01

    Cost of ownership modeling is a critical and emerging tool that provides significant insight into the ways to optimize device manufacturing costs. The development of a model to deal with a particular application, mix-and-match lithography, was performed in order to determine the level of cost savings and the optimum ways to create these savings. The use of sensitivity analysis with cost of ownership allows the user to make accurate trade-offs between technology and cost. The use and interpretation of the model results are described in this paper. Parameters analyzed include several manufacturing considerations -- depreciation, maintenance, engineering and operator labor, floorspace, resist, consumables and reticles. Inherent in this study is the ability to customize this analysis for a particular operating environment. Results demonstrate the clear advantages of a mix-and-match approach for three different operating environments. These case studies also demonstrate various methods to efficiently optimize cost savings strategies.
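    The kind of cost-of-ownership sensitivity analysis described above can be sketched as a simple per-good-wafer cost function perturbed one parameter at a time. All figures and parameter names below are made-up placeholders, not values from the paper.

    ```python
    # Hypothetical cost-of-ownership sketch: annualized cost per good wafer.
    def cost_per_good_wafer(depreciation, maintenance, labor, floorspace,
                            consumables, wafers_per_year, yield_, utilization):
        """Annualized cost divided by good-wafer output (all inputs hypothetical)."""
        annual_cost = depreciation + maintenance + labor + floorspace + consumables
        good_wafers = wafers_per_year * yield_ * utilization
        return annual_cost / good_wafers

    base = dict(depreciation=1.2e6, maintenance=2.0e5, labor=1.5e5,
                floorspace=5.0e4, consumables=3.0e5,
                wafers_per_year=100_000, yield_=0.95, utilization=0.80)
    c0 = cost_per_good_wafer(**base)
    print(round(c0, 2))

    # One-at-a-time sensitivity: +10% on selected cost drivers.
    for k in ("depreciation", "maintenance", "consumables"):
        hi = dict(base, **{k: base[k] * 1.10})
        delta = 100 * (cost_per_good_wafer(**hi) / c0 - 1)
        print(k, round(delta, 2), "%")
    ```

    Ranking the percentage deltas shows which driver dominates the cost structure, which is the trade-off the paper's sensitivity analysis supports.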

  4. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
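    A minimal sketch of model-based identification of the resistive and inductive parts: assume the line obeys v = R·i + L·di/dt, sample both waveforms, and solve for R and L by least squares. The waveforms and parameter values below are synthetic assumptions, not the paper's quasi-passive excitation scheme.

    ```python
    import numpy as np

    # Assumed "true" line parameters and a synthetic current waveform.
    R_true, L_true = 0.5, 1.5e-3        # ohm, henry (illustrative values)
    fs = 10_000.0                        # sampling frequency, Hz
    t = np.arange(0, 0.1, 1 / fs)
    i = 10 * np.sin(2 * np.pi * 50 * t) + 2 * np.sin(2 * np.pi * 250 * t)
    di = np.gradient(i, 1 / fs)          # numerical di/dt
    v = R_true * i + L_true * di         # voltage drop across the line

    # Model-based identification: v = R*i + L*di/dt, solved by least squares.
    A = np.column_stack([i, di])
    (R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
    print(round(float(R_est), 3), round(float(L_est) * 1e3, 3))
    ```

    With noise-free synthetic data the regression recovers R and L essentially exactly; a real measurement would add filtering and excitation design on top of this core step.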

  5. Stinging Insect Matching Game

    Science.gov (United States)

    Stinging insects can ruin summer fun for those who are ... the difference between the different kinds of stinging insects in order to keep your summer safe and ...

  6. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  7. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques are used to analyze individual characteristics or organizational effectiveness, such as SWOT analysis, SWOC analysis, and PEST analysis. These techniques provide an easy and systematic way of identifying the various issues affecting a system and provide an opportunity for further development. Whereas these provide a broad-based assessment of individual institutions and systems, they suffer limitations when applied to a business context. The success of any business model depends on ...

  8. Matchings with Externalities and Attitudes

    DEFF Research Database (Denmark)

    Branzei, Simina; Michalak, Tomasz; Rahwan, Talal

    2013-01-01

    Two-sided matchings are an important theoretical tool used to model markets and social interactions. In many real-life problems the utility of an agent is influenced not only by their own choices, but also by the choices that other agents make. Such an influence is called an externality. Whereas ...... where agents take different attitudes when reasoning about the actions of others. In particular, we study optimistic, neutral and pessimistic attitudes and provide both computational hardness results and polynomial-time algorithms for computing stable outcomes....

  9. 3D shape measurement of moving object with FFT-based spatial matching

    Science.gov (United States)

    Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun

    2018-03-01

    This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
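    The core of FFT-based spatial matching is estimating a shift via the cross-correlation theorem: the circular cross-correlation of two signals is the inverse FFT of one spectrum times the conjugate of the other. The 1D signal and shift below are synthetic stand-ins for an image row from the camera.

    ```python
    import numpy as np

    # Synthetic "image row" and a known translational shift.
    rng = np.random.default_rng(1)
    row = rng.normal(size=512)
    shift_true = 37
    shifted = np.roll(row, shift_true)

    # Cross-correlation through the frequency domain: IFFT(F(a) * conj(F(b))).
    xcorr = np.fft.ifft(np.fft.fft(shifted) * np.conj(np.fft.fft(row))).real
    shift_est = int(np.argmax(xcorr))
    print(shift_est)
    ```

    The argmax of the correlation recovers the displacement; in the paper's setting such estimates align successive fringe images for coherent combining.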

  10. Frequency domain finite-element and spectral-element acoustic wave modeling using absorbing boundaries and perfectly matched layer

    Science.gov (United States)

    Rahimi Dalkhani, Amin; Javaherian, Abdolrahim; Mahdavi Basir, Hadi

    2018-04-01

    Wave propagation modeling, a vital tool in seismology, can be done via several different numerical methods, among them the finite-difference, finite-element, and spectral-element methods (FDM, FEM, and SEM). Some advanced applications in seismic exploration benefit from frequency-domain modeling. Regarding flexibility in complex geological models and the treatment of the free-surface boundary condition, we studied the frequency-domain acoustic wave equation using FEM and SEM. The results demonstrated that the frequency-domain FEM and SEM have good accuracy and numerical efficiency with second-order interpolation polynomials. Furthermore, we developed the second-order Clayton and Engquist absorbing boundary condition (CE-ABC2) and compared it with the perfectly matched layer (PML) for the frequency-domain FEM and SEM. Unlike the PML method, CE-ABC2 does not add any additional computational cost to the modeling beyond assembling boundary matrices. As a result, CE-ABC2 is more efficient than PML for frequency-domain acoustic wave propagation modeling, especially when the computational cost is high and high-level absorbing performance is unnecessary.

  11. Automatic history matching of an offshore field in Brazil; Ajuste automatico de historico de um campo offshore no Brasil

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Jose P.M. dos [PETROBRAS S.A., Macae, RJ (Brazil). Exploracao e Producao. Bacia de Campos]. E-mail: zepedro@ep-bc.petrobras.com.br; Schiozer, Denis J. [Universidade Estadual de Campinas, SP (Brazil). Dept. de Engenharia de Petroleo]. E-mail: denis@cepetro.unicamp.br

    2000-07-01

    Efficient reservoir management is strongly influenced by good production prediction, which in turn depends on good reservoir characterization. Due to the complexity of the dynamics of multiphase flow in porous media and to the several geological uncertainties involved in the process, this characterization is validated through a history match associated with the study of the reservoir in question. History matching is usually a very complex task and most of the time it can be a frustrating experience, due to the high number of variables to be adjusted to reach a final objective that can be a combination of several matches. Automated history matching techniques have been the object of several studies, but with limited acceptance due to the large computational effort required. Nowadays, they are becoming more attractive, motivated by recent hardware and software developments. This work shows an example of the application of automatic history matching to an offshore field in Brazil, with emphasis on the benefits of parallel computing and optimization techniques in reducing the total time of the process. It is shown that although the computational effort is higher, the total time of a reservoir study can be significantly reduced, with a higher quality of results. (author)
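    Automatic history matching boils down to an optimization loop: repeatedly run the reservoir model with trial parameters and minimize the mismatch against observed production history. The sketch below replaces the simulator with a one-parameter exponential-decline proxy and the optimizer with a grid search; both are illustrative assumptions, and real workflows parallelize the simulator runs as the paper emphasizes.

    ```python
    import numpy as np

    def proxy_rate(t, q0, d):
        """Exponential-decline proxy standing in for the reservoir simulator."""
        return q0 * np.exp(-d * t)

    t = np.linspace(0, 10, 50)                   # production time, years
    observed = proxy_rate(t, q0=1000.0, d=0.30)  # synthetic "history" (d = 0.30)

    # Grid search as a stand-in for the optimization driver.
    candidates = np.linspace(0.1, 0.5, 401)
    mismatch = [np.sum((proxy_rate(t, 1000.0, d) - observed) ** 2)
                for d in candidates]
    d_best = float(candidates[int(np.argmin(mismatch))])
    print(round(d_best, 3))
    ```

    Each mismatch evaluation is independent, which is why distributing the runs over parallel hardware cuts the wall-clock time of the study almost linearly.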

  12. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques

    Science.gov (United States)

    Jones, Kelly W.; Lewis, David J.

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented—from protected areas to payments for ecosystem services (PES)—to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing ‘matching’ to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods—an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators—due to the presence of unobservable bias—that lead to differences in conclusions about effectiveness. The Ecuador case
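    The contrast between a naive difference in means and a matched difference in means can be sketched on simulated data: when program participation depends on a covariate that also drives forest cover, matching each treated unit to its nearest control on that covariate removes the observable bias. Everything below (covariate, selection rule, effect size) is a made-up illustration, not the Ecuador or Russia data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    x = rng.uniform(0, 1, n)                        # covariate (e.g., remoteness)
    treated = rng.uniform(0, 1, n) < 0.3 + 0.4 * x  # selection depends on x
    effect = 0.20                                   # true program effect
    forest = 0.5 + 0.3 * x + effect * treated + 0.05 * rng.normal(size=n)

    t_idx = np.where(treated)[0]
    c_idx = np.where(~treated)[0]

    # Nearest-neighbor matching on x, then difference in means over matched pairs.
    matches = c_idx[np.argmin(np.abs(x[t_idx, None] - x[c_idx][None, :]), axis=1)]
    att = float(np.mean(forest[t_idx] - forest[matches]))
    naive = float(forest[t_idx].mean() - forest[c_idx].mean())
    print(round(att, 3), round(naive, 3))
    ```

    The matched estimate lands near the true effect while the naive contrast absorbs the covariate imbalance; fixed-effects panel methods, as the paper discusses, additionally difference out time-invariant unobservables that matching alone cannot control.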

  13. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10^14 g/cm^3, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of the effect of fundamental interactions on the microscopic structure of the matter. (author)

  14. A mono isocentric radiotherapy technique for craniospinal irradiation using asymmetric jaws

    International Nuclear Information System (INIS)

    Isin, G; Oezyar, E.; Guerdalli, S.; Arslan, G.; Uzal, D.; Atahan, I. L.

    1995-01-01

    Dose distribution across the junction of the matching craniospinal fields (lateral cranial fields and posterior spinal field) is important, as severe complications may result if the beams overlap, and disease may recur if the gap is too conservative. Various techniques have been used to achieve an effective transverse-plane match, the half-beam block technique being one of them. Here, we describe a mono-isocentric technique for the treatment of craniospinal fields using the asymmetric jaws of our linear accelerator (Philips SL-25). Before the clinical application of this non-standard technique, basic dosimetry parameters were evaluated. Asymmetric collimator dose distributions for various asymmetric field sizes were obtained and compared with symmetric dose distributions for 6 MV x-rays. A computerized 3-D water phantom with a pair of ionization chambers (reference and field) was used to obtain dose profiles, isodose distributions and percentage depth dose (PDD) curves for various asymmetric field sizes and different off-axis distances. The measured values of the off-axis ratios at the depths of interest were used in the MU calculations. This new mono-isocentric technique provides an ideal dose distribution at the match line, as there is no need to move the patient during treatment. The use of heavy secondary cerrobend blocks (beam splitters) is eliminated. This technique eases subsequent daily set-ups and fulfills the requirements for conformal radiotherapy.

  15. Haptic spatial matching in near peripersonal space.

    Science.gov (United States)

    Kaas, Amanda L; Mier, Hanneke I van

    2006-04-01

    Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

  16. Sulphur simulations for East Asia using the MATCH model with meteorological data from ECMWF

    Energy Technology Data Exchange (ETDEWEB)

    Engardt, Magnuz

    2000-03-01

    As part of a model intercomparison exercise, with participants from a number of Asian, European and American institutes, sulphur transport and conversion calculations were conducted over an East Asian domain for 2 different months in 1993. All participants used the same emission inventory and simulated concentration and deposition at a number of prescribed geographic locations. The participants were asked to run their respective models both with standard parameters and with a set of given parameters, in order to examine the different behaviour of the models. The study included comparison with measured data and model-to-model intercomparisons, notably source-receptor relationships. We hereby describe the MATCH model used in the study and report some typical results. We find that although the standard and the prescribed sets of model parameters differed significantly in terms of sulphur conversion and wet scavenging rates, the resulting atmospheric concentrations and surface depositions change only marginally. We show that it is often more critical to choose a representative gridbox value than to select a parameter from the suite available. The modelled near-surface atmospheric concentration of sulphur in eastern China is typically 5-10 µg S m⁻³, with large areas exceeding 20 µg S m⁻³. In southern Japan the values range from 2-5 µg S m⁻³. Atmospheric SO₂ dominates over sulphate near the emission regions, while sulphate concentrations are higher over e.g. the western Pacific. The sulphur deposition exceeds several g of sulphur m⁻² year⁻¹ in large areas of China. Southern Japan receives 0.3-1 g S m⁻² year⁻¹. In January, the total wet deposition roughly equals the dry deposition; in May, when it rains more in the domain, total wet deposition is ca. 50% larger than total dry deposition.

  17. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
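    Conventional (Bartlett) MFP can be sketched in a toy setting: compute replica fields on a grid of candidate source positions and take the position whose normalized replica best correlates with the measured array data. The sketch below uses free-space Green's functions rather than a shallow-water waveguide model, and all geometry and frequency values are illustrative assumptions.

    ```python
    import numpy as np

    c, f = 1500.0, 300.0                     # sound speed (m/s), frequency (Hz)
    k = 2 * np.pi * f / c
    z_rcv = np.linspace(10, 100, 16)         # 16-element vertical array depths (m)
    src_r, src_z = 2000.0, 55.0              # "true" source range and depth (m)

    def pressure(r, z):
        """Free-space Green's function from (r, z) to the array (toy model)."""
        d = np.sqrt(r**2 + (z_rcv - z) ** 2)
        return np.exp(1j * k * d) / d

    data = pressure(src_r, src_z)            # simulated array measurement

    # Bartlett ambiguity surface: B = |w^H d|^2 with unit-norm replicas w.
    ranges = np.linspace(1000, 3000, 101)
    depths = np.linspace(10, 100, 91)
    B = np.zeros((len(ranges), len(depths)))
    for i, r in enumerate(ranges):
        for j, z in enumerate(depths):
            w = pressure(r, z)
            w = w / np.linalg.norm(w)
            B[i, j] = np.abs(np.conj(w) @ data) ** 2

    i_best, j_best = np.unravel_index(int(np.argmax(B)), B.shape)
    print(float(ranges[i_best]), float(depths[j_best]))
    ```

    With a perfectly known environment the ambiguity surface peaks at the true position; the paper's frequency-difference variant evaluates an analogous processor at a much lower difference frequency to tolerate environmental mismatch.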

  18. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
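    The surrogate idea above can be sketched by fitting a cheap response surface to a handful of "simulation" samples and evaluating it in place of the code. The toy physics, sample count, and polynomial degree below are all illustrative assumptions, not the RISMC codes.

    ```python
    import numpy as np

    def expensive_sim(p):
        """Stand-in for a long-running thermal-hydraulic simulation."""
        return np.sin(3 * p) + 0.5 * p**2

    train_x = np.linspace(0, 1, 12)               # 12 "simulation runs"
    train_y = expensive_sim(train_x)

    # Degree-5 polynomial surrogate fit by least squares.
    surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=5))

    # Validate the surrogate against the "code" on a dense grid.
    test_x = np.linspace(0, 1, 200)
    err = float(np.max(np.abs(surrogate(test_x) - expensive_sim(test_x))))
    print(f"max surrogate error: {err:.2e}")
    ```

    Once validated, the surrogate is sampled millions of times for risk quantification at negligible cost; production workflows typically use Gaussian processes or similar models rather than a single polynomial.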

  19. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

    Full Text Available Abstract: In this work, we present a highly accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of the heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments is the dimension of the reduced model, which is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamic simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.
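    For reference, the continuous Legendre-Gauss-Lobatto rule that the paper adapts to a discrete stage variable can be sketched as follows: the nodes are the endpoints ±1 plus the roots of P'ₙ₋₁, with weights 2/(n(n-1)·Pₙ₋₁(xᵢ)²). This is only the standard continuous rule, not the paper's discrete analogue.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    def gauss_lobatto(n):
        """n-point Legendre-Gauss-Lobatto nodes and weights on [-1, 1]."""
        # Interior nodes: roots of P'_{n-1}; endpoints are fixed at +/-1.
        p_nm1 = leg.Legendre.basis(n - 1)
        nodes = np.concatenate(([-1.0], p_nm1.deriv().roots(), [1.0]))
        # w_i = 2 / (n (n-1) [P_{n-1}(x_i)]^2)
        weights = 2.0 / (n * (n - 1) * leg.legval(nodes, [0] * (n - 1) + [1]) ** 2)
        return nodes, weights

    x, w = gauss_lobatto(5)
    # A 5-point rule is exact up to degree 2n-3 = 7, so x^6 integrates exactly.
    print(round(float(np.sum(w * x**6)), 6))   # exact value is 2/7
    ```

    Including the endpoints is what makes the rule convenient for staged systems, where the boundary stages (condenser/reboiler) carry their own balance equations.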

  20. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques we developed for constructing discrimination-free classifiers. In discrimination-free classification, the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be