WorldWideScience

Sample records for model matching techniques

  1. A New Approach for Design of Model Matching Controllers for Time Delay Systems by Using GA Technique

    Directory of Open Access Journals (Sweden)

    K. K. D Priyanka

    2015-01-01

    Full Text Available Modeling of physical systems usually results in complex, high-order dynamic representations. The simulation and design of controllers for higher order systems is a difficult problem: the cost and complexity of the controller normally increase with the system order. Hence it is desirable to approximate these models by reduced order models that preserve all salient features of the higher order model. Lower order models also simplify the understanding of the original higher order system. Modern controller design methods such as the Model Matching Technique and LQG produce controllers of order at least equal to that of the plant, usually higher. These control laws may be too complex for practical implementation, and simpler designs are then sought. For this purpose, one can either reduce the order of the plant model prior to controller design, reduce the controller in the final stage, or both. In the present work, a controller is designed such that the response(s) of the closed loop system, which includes a delay, match those of the chosen model with the same time delay as closely as possible. Based on the desired model, a controller (of higher order) is designed using the model matching method and is approximated to a lower order one using the Approximate Generalized Time Moments (AGTM) / Approximate Generalized Markov Moments (AGMM) matching technique and the Optimal Pade Approximation technique. A Genetic Algorithm (GA) optimization technique is used to obtain the expansion points which yield a response similar to that of the model, minimizing the error between the response of the model and that of the designed closed loop system.
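    As a rough illustration of the GA step described in this abstract (not the authors' code), the sketch below uses a minimal real-coded genetic algorithm to fit a reduced first-order step response to a synthetic higher-order one by minimizing the squared output error; the responses, bounds, and GA settings are all assumptions.

```python
import math
import random

# Illustrative sketch only: a real-coded GA fits a reduced first-order model
# y(t) = 1 - exp(-t / tau) to a synthetic "higher order" step response by
# minimizing the summed squared error.  In the paper the GA instead selects
# AGTM/AGMM expansion points; the fitness idea is the same.
random.seed(0)

T = [0.2 * k for k in range(50)]
full = [1.0 - 1.5 * math.exp(-t) + 0.5 * math.exp(-3.0 * t) for t in T]

def cost(tau):
    return sum((1.0 - math.exp(-t / tau) - y) ** 2 for t, y in zip(T, full))

def ga(pop_size=30, gens=60):
    pop = [random.uniform(0.1, 5.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                         # rank by fitness
        elite = pop[: pop_size // 2]               # selection (elitist)
        parents = list(elite)
        while len(elite) < pop_size:
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                  # blend crossover
            child += random.gauss(0.0, 0.1)        # Gaussian mutation
            elite.append(min(max(child, 0.05), 10.0))
        pop = elite
    return min(pop, key=cost)

best_tau = ga()
```

    The fitted time constant lands near the dominant pole of the synthetic full-order response; swapping `cost` for an expansion-point error functional recovers the paper's setting.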

  2. Hierarchical model of matching

    Science.gov (United States)

    Pedrycz, Witold; Roventa, Eugene

    1992-01-01

    The issue of matching two fuzzy sets becomes an essential design aspect of many algorithms including fuzzy controllers, pattern classifiers, knowledge-based systems, etc. This paper introduces a new model of matching. Its principal features involve the following: (1) matching carried out with respect to the grades of membership of fuzzy sets as well as some functionals defined on them (like energy, entropy, transom); (2) concepts of hierarchies in the matching model leading to a straightforward distinction between 'local' and 'global' levels of matching; and (3) a distributed character of the model realized as a logic-based neural network.

  3. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
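    A sketch (our illustration, not the authors' code) of the two-stage MCMC idea: a cheap coarse-scale likelihood screens each proposal, and only proposals that survive pay for the expensive fine-scale evaluation. Both "models" here are toy 1-D Gaussian log-densities standing in for the coarse and resolved simulators.

```python
import math
import random

# Two-stage (delayed-acceptance) Metropolis step.  Stage 1 uses only the
# coarse model; stage 2 corrects the acceptance ratio so the chain still
# targets the fine-scale posterior exactly.
random.seed(1)

def fine_logp(x):    # stands in for a resolved flow simulation
    return -0.5 * x * x

def coarse_logp(x):  # stands in for the upscaled model plus error model
    return -0.5 * x * x / 1.2

def two_stage_step(x, sigma=1.0):
    y = x + random.gauss(0.0, sigma)
    # Stage 1: screen with the coarse model only (cheap rejection).
    if math.log(random.random()) > coarse_logp(y) - coarse_logp(x):
        return x, False
    # Stage 2: fine-model correction preserving detailed balance.
    a = (fine_logp(y) - fine_logp(x)) + (coarse_logp(x) - coarse_logp(y))
    return (y, True) if math.log(random.random()) <= a else (x, True)

x, fine_calls, samples = 0.0, 0, []
for _ in range(5000):
    x, used_fine = two_stage_step(x)
    fine_calls += used_fine
    samples.append(x)
```

    Because stage-1 rejections never invoke `fine_logp`, the number of fine-scale evaluations is strictly smaller than the chain length, which is the efficiency gain the abstract describes.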

  4. PATTERN MATCHING IN MODELS

    Directory of Open Access Journals (Sweden)

    Cristian GEORGESCU

    2005-01-01

    Full Text Available The goal of this paper is to investigate how such a pattern matching could be performed on models, including the definition of the input language as well as the elaboration of efficient matching algorithms. Design patterns can be considered reusable micro-architectures that contribute to an overall system architecture. Frameworks are also closely related to design patterns. Components offer the possibility to radically change the behaviors and services offered by an application by substitution or addition of new components, even a long time after deployment. Software testing is another aspect of reliable development. Testing activities mainly consist in ensuring that a system implementation conforms to its specifications.

  5. Cascade trailing-edge noise modeling using a mode-matching technique and the edge-dipole theory

    Science.gov (United States)

    Roger, Michel; François, Benjamin; Moreau, Stéphane

    2016-11-01

    An original analytical approach is proposed to model the broadband trailing-edge noise produced by high-solidity outlet guide vanes in an axial turbomachine. The model is formulated in the frequency domain and first in two dimensions for a preliminary assessment of the method. In a first step the trailing-edge noise sources of a single vane are shown to be equivalent to the onset of a so-called edge dipole, the direct field of which is expanded in a series of plane-wave modes. A criterion for the distance of the dipole to the trailing-edge and a scaling of its amplitude is defined to yield a robust model. In a second step the diffraction of each plane-wave mode is derived considering the cascade as an array of bifurcated waveguides and using a mode-matching technique. The cascade response is finally synthesized by summing the diffracted fields of all cut-on modes to yield upstream and downstream sound power spectral densities. The obtained spectral shapes are physically consistent and the present results show that upstream radiation is typically 3 dB higher than downstream radiation, which has been experimentally observed previously. Even though the trailing-edge noise sources are not vane-to-vane correlated their radiation is strongly determined by a cascade effect that consequently must be accounted for. The interest of the approach is that it can be extended to a three-dimensional annular configuration without resorting to a strip theory approach. As such it is a promising and versatile alternative to previously published methods.

  6. Weed Identification Using An Automated Active Shape Matching (AASM) Technique

    DEFF Research Database (Denmark)

    Swain, K C; Nørremark, Michael; Jørgensen, R N

    2011-01-01

    Weed identification and control is a challenge for intercultural operations in agriculture. As an alternative to chemical pest control, a smart weed identification technique followed by a mechanical weed control system could be developed. The proposed smart identification technique works...... on the concept of 'active shape modelling' to identify weed and crop plants based on their morphology. The automated active shape matching (AASM) system consisted of: (i) a Pixelink camera, (ii) an LTI (Lehrstuhl für Technische Informatik) image processing library, and (iii) a laptop PC with the Linux OS. A 2......-leaf growth stage model for Solanum nigrum L. (nightshade) is generated from 32 segmented training images in the Matlab software environment. Using the AASM algorithm, the leaf model was aligned and placed at the centre of the target plant and a model deformation process carried out. The parameters used...

  7. Parikh Matching in the Streaming Model

    DEFF Research Database (Denmark)

    Lee, Lap-Kei; Lewenstein, Moshe; Zhang, Qin

    2012-01-01

    |-length count vector. In the streaming model one seeks space-efficient algorithms for problems in which there is one pass over the data. We consider Parikh matching in the streaming model. To make this viable we search for substrings whose Parikh-mappings approximately match the input vector. In this paper we...... present upper and lower bounds on the problem of approximate Parikh matching in the streaming model....

  8. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
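    A minimal sketch of the most basic technique in this family, grey-level template matching by normalized cross-correlation (our example; the image and template below are synthetic):

```python
import numpy as np

# Exhaustive NCC template matching: score every placement of the template
# over the image and return the position with the highest correlation.
def ncc_match(image, template):
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * tnorm
            if denom == 0.0:
                continue                      # flat window, NCC undefined
            score = float((wc * t).sum() / denom)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = img[12:20, 25:33].copy()               # template cut from a known spot
pos, score = ncc_match(img, tpl)
```

    Because NCC is invariant to affine changes in brightness, the exact-patch template scores 1 at its source location; the book's later chapters cover faster and more robust variants of this brute-force scan.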

  9. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    Science.gov (United States)

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm is relatively time-consuming, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area, so that matching between two corresponding features from different images can be performed efficiently. With this approach, the matching time can be greatly reduced. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and average errors of the reconstructed models. Finally, original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for reconstructing a nasal cavity imaged by a rigid nasal endoscope.
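    A toy version (our construction) of the zoning idea: rather than comparing a feature against every feature in the next frame, only candidates inside a zone around its position are considered. Positions, descriptors, and the zone radius are all synthetic, and the paper uses fuzzy memberships rather than this hard radius.

```python
import numpy as np

# Zone-restricted nearest-descriptor matching.  Restricting candidates to a
# small spatial zone cuts the number of descriptor comparisons per feature
# from n to a handful, which is the speed-up the abstract reports.
rng = np.random.default_rng(2)
n = 200
desc1 = rng.random((n, 32))                      # frame-1 descriptors
pos1 = rng.random((n, 2)) * 100                  # frame-1 positions
pos2 = pos1 + rng.normal(0.0, 1.0, (n, 2))       # small camera motion
desc2 = desc1 + rng.normal(0.0, 0.01, (n, 32))   # slightly perturbed descriptors

def zoned_match(i, radius=5.0):
    d = np.linalg.norm(pos2 - pos1[i], axis=1)
    cand = np.where(d < radius)[0]               # zone-restricted candidates
    if cand.size == 0:
        return -1
    dist = np.linalg.norm(desc2[cand] - desc1[i], axis=1)
    return int(cand[np.argmin(dist)])

correct = sum(zoned_match(i) == i for i in range(n))
```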

  10. Stability of the bipartite matching model

    CERN Document Server

    Bušić, Ana; Mairesse, Jean

    2010-01-01

    We consider the bipartite matching model of customers and servers introduced by Caldentey, Kaplan, and Weiss (Adv. Appl. Probab., 2009). Customers and servers play symmetrical roles. There are finite sets C and S of customer and server classes, respectively. Time is discrete and at each time step, one customer and one server arrive in the system according to a joint probability measure on C×S, independently of the past. Also, at each time step, pairs of matched customers and servers, if they exist, depart from the system. Authorized matchings are given by a fixed bipartite graph. A matching policy is chosen, which decides how to match when there are several possibilities. Customers/servers that cannot be matched are stored in a buffer. The evolution of the model can be described by a discrete-time Markov chain. We study its stability under various admissible matching policies including: ML (Match the Longest), MS (Match the Shortest), FIFO (match the oldest), priorities. There exist natural necessary conditions for st...
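    A toy simulation (our construction, not from the paper) of this model: classes C = S = {0, 1}, authorized matchings {(0,0), (0,1), (1,1)}, arrival rates chosen so the natural stability conditions hold strictly, and an ML-style rule that sends an arrival to the longest compatible stored queue.

```python
import random
from collections import deque

# Discrete-time bipartite matching model.  Each step one customer and one
# server arrive; unmatched items wait in per-class buffers.
random.seed(3)
ALLOWED = {(0, 0), (0, 1), (1, 1)}
cust = {0: deque(), 1: deque()}
serv = {0: deque(), 1: deque()}

def arrive_customer(c):
    compat = [s for s in (0, 1) if (c, s) in ALLOWED and serv[s]]
    if compat:
        serv[max(compat, key=lambda s: len(serv[s]))].popleft()  # match
    else:
        cust[c].append(c)                                        # buffer

def arrive_server(s):
    compat = [c for c in (0, 1) if (c, s) in ALLOWED and cust[c]]
    if compat:
        cust[max(compat, key=lambda c: len(cust[c]))].popleft()
    else:
        serv[s].append(s)

for _ in range(10000):
    arrive_customer(0 if random.random() < 0.6 else 1)   # P(c=0) = 0.6
    arrive_server(0 if random.random() < 0.4 else 1)     # P(s=0) = 0.4

buffered = sum(len(q) for q in (*cust.values(), *serv.values()))
```

    Since one customer and one server arrive per step and matches always remove one of each, the numbers of buffered customers and servers stay equal; under the chosen stable rates the buffers remain small over the whole run.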

  11. An improved perfectly matched layer in the eigenmode expansion technique

    DEFF Research Database (Denmark)

    Gregersen, Niels; Mørk, Jesper

    2008-01-01

    When employing the eigenmode expansion technique (EET), parasitic reflections at the boundary of the computational domain can be suppressed by introducing a perfectly matched layer (PML). However, the traditional PML suffers from an artificial field divergence limiting its usefulness. We propose...

  12. Role model and prototype matching

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    images and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models and that traditional school subjects catered more resistant prototype images than unfamiliar ones did......Previous research has found that young people’s prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes....... The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-month-long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students...

  13. Graphical models and point pattern matching.

    Science.gov (United States)

    Caetano, Tibério S; Caelli, Terry; Schuurmans, Dale; Barone, Dante A C

    2006-10-01

    This paper describes a novel solution to the rigid point pattern matching problem in Euclidean spaces of any dimension. Although we assume rigid motion, jitter is allowed. We present a noniterative, polynomial time algorithm that is guaranteed to find an optimal solution for the noiseless case. First, we model point pattern matching as a weighted graph matching problem, where weights correspond to Euclidean distances between nodes. We then formulate graph matching as a problem of finding a maximum probability configuration in a graphical model. By using graph rigidity arguments, we prove that a sparse graphical model yields equivalent results to the fully connected model in the noiseless case. This allows us to obtain an algorithm that runs in polynomial time and is provably optimal for exact matching between noiseless point sets. For inexact matching, we can still apply the same algorithm to find approximately optimal solutions. Experimental results obtained by our approach show improvements in accuracy over current methods, particularly when matching patterns of different sizes.

  14. An iterative matching and locating technique for borehole microseismic monitoring

    Science.gov (United States)

    Chen, H.; Meng, X.; Niu, F.; Tang, Y.

    2016-12-01

    Microseismic monitoring has been proven to be an effective and valuable technology for imaging hydraulic fracture geometry. The success of hydraulic fracturing monitoring relies on the detection and characterization (i.e., location and focal mechanism estimation) of a maximum number of induced microseismic events. All the events are important to quantify the stimulated reservoir volume (SRV) and characterize the newly created fracture network. Detecting and locating low magnitude events, however, are notoriously difficult, particularly in a noisy production environment. Here we propose an iterative matching and locating technique (iMLT) to obtain a maximum detection of small events and the best determination of their locations from continuous data recorded by a single-azimuth downhole geophone array. As the downhole array is located at one azimuth, regular M&L using P-wave cross-correlation only is not able to resolve the location of a matched event relative to the template event. We thus introduce the polarization direction into the matching, which significantly improves the lateral resolution of the M&L method, based on numerical simulations with synthetic data. Our synthetic tests further indicate that the inclusion of S-wave cross-correlation data can help better constrain the focal depth of the matched events. We apply this method to a dataset recorded during hydraulic fracturing treatment of a pilot horizontal well within the shale play in southwest China. Our approach yields a more than fourfold increase in the number of located events, compared with the original event catalog from traditional downhole processing.
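    A sketch of the matched-filter core behind M&L-style detection (our toy example): slide a template waveform along a continuous trace, compute the normalized cross-correlation at every lag, and declare a detection where it exceeds a threshold. The trace and template are synthetic; the paper's method adds polarization and S-wave terms on top of this.

```python
import numpy as np

# Matched-filter detection by normalized cross-correlation on a 1-D trace.
rng = np.random.default_rng(4)
template = np.sin(2 * np.pi * np.arange(50) / 10) * np.hanning(50)
trace = rng.normal(0.0, 0.2, 2000)
trace[700:750] += template                # bury a low-SNR "event" at lag 700

def ncc(trace, tmpl):
    # Normalize so each output sample is a Pearson correlation coefficient.
    t = (tmpl - tmpl.mean()) / (tmpl.std() * len(tmpl))
    out = np.empty(len(trace) - len(tmpl) + 1)
    for i in range(len(out)):
        w = trace[i:i + len(tmpl)]
        out[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12))
    return out

cc = ncc(trace, template)
peak = int(np.argmax(cc))                 # detected event lag
```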

  15. History Matching: Towards Geologically Reasonable Models

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Cordua, Knud Skou; Mosegaard, Klaus

    This work focuses on the development of a new method for the history matching problem that, through a deterministic search, finds a geologically feasible solution. Complex geology is taken into account by evaluating multiple point statistics from earth model prototypes - training images. Further a functio...

  16. A surface-matching technique for robot-assisted registration.

    Science.gov (United States)

    Glozman, D; Shoham, M; Fischer, A

    2001-01-01

    Successful implementation of robot-assisted surgery (RAS) requires coherent integration of spatial image data with sensing and actuating devices, each having its own coordinate system. Hence, accurate estimation of the geometric relationships between relevant reference frames, known as registration, is a crucial procedure in all RAS applications. The purpose of this paper is to present a new registration scheme, along with the results of an experimental evaluation of a robot-assisted registration method for RAS applications in orthopedics. The accuracy of the proposed registration is appropriate for specified orthopedic surgical applications such as Total Knee Replacement. The registration method is based on a surface-matching algorithm that does not require marker implants, thereby reducing surgical invasiveness. Points on the bone surface are sampled by the robot, which in turn directs the surgical tool. This technique eliminates additional coordinate transformations to an external device (such as a digitizer), resulting in increased surgical accuracy. The registration technique was tested on an RSPR six-degrees-of-freedom parallel robot specifically designed for medical applications. A six-axis force sensor attached to the robot's moving platform enables fast and accurate acquisition of positions and surface normal directions at sampled points. Sampling with a robot probe was shown to be accurate, fast, and easy to perform. The whole procedure takes about 2 min, with the robot performing most of the registration procedures, leaving the surgeon's hands free. Robotic registration was shown to provide a flawless link between preoperative planning and robotic assistance during surgery.

  17. Feature Matching in Time Series Modelling

    CERN Document Server

    Xia, Yingcun

    2011-01-01

    Using a time series model to mimic an observed time series has a long history. However, with regard to this objective, conventional estimation methods for discrete-time dynamical models are frequently found to be wanting. In the absence of a true model, we prefer an alternative approach to conventional model fitting that typically involves one-step-ahead prediction errors. Our primary aim is to match the joint probability distribution of the observable time series, including long-term features of the dynamics that underpin the data, such as cycles, long memory and others, rather than short-term prediction. For want of a better name, we call this specific aim 'feature matching'. The challenges of model mis-specification, measurement errors and the scarcity of data are forever present in real time series modelling. In this paper, by synthesizing earlier attempts into an extended-likelihood, we develop a systematic approach to empirical time series analysis to address these challenges and to aim at achieving...

  18. Matching models of left ventricle and systemic artery

    Institute of Scientific and Technical Information of China (English)

    柳兆荣; 吴驰

    1997-01-01

    To reveal how the matching models of the left ventricle and its afterload affect the pressure and flow in the aortic root, the differences between the measured pressure and flow waveforms and those determined by three kinds of matching model were compared. The results showed that, compared with the results by both matching models 1 and 2, the pressure and flow waveforms determined by matching model 3 established in this work were in the closest agreement with the corresponding experimental waveforms, therefore indicating that matching model 3 was a matching model that closely and rationally characterized the match between the left ventricle and the systemic artery.

  19. Object matching using a locally affine invariant and linear programming techniques.

    Science.gov (United States)

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
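    The core of the locally affine-invariant constraint can be shown in a few lines (our sketch, with synthetic points): each template point is written as an affine combination of its k nearest neighbours, with weights solved by least squares, and the same weights reconstruct the point exactly after any affine map of the whole set.

```python
import numpy as np

# Affine-combination weights for one template point, and a check that they
# survive an arbitrary affine transform of the entire point set.
rng = np.random.default_rng(5)
pts = rng.random((30, 2))

def affine_weights(i, k=5):
    d = np.linalg.norm(pts - pts[i], axis=1)
    nbrs = np.argsort(d)[1:k + 1]                  # k nearest neighbours
    A = np.vstack([pts[nbrs].T, np.ones(k)])       # rows: x, y, sum(w) = 1
    b = np.append(pts[i], 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)      # min-norm exact solution
    return nbrs, w

M = np.array([[1.2, 0.3], [-0.4, 0.9]])            # arbitrary affine map
t = np.array([2.0, -1.0])
tpts = pts @ M.T + t
nbrs, w = affine_weights(0)
err = float(np.linalg.norm(w @ tpts[nbrs] - tpts[0]))
```

    With k greater than 3 the linear system is underdetermined, so an exact affine representation always exists; in the paper, the reconstruction error of each matched point under these weights is what penalizes geometric disagreement.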

  20. PM-PM: PatchMatch with Potts Model for object segmentation and stereo matching.

    Science.gov (United States)

    Xu, Shibiao; Zhang, Feihu; He, Xiaofei; Shen, Xukun; Zhang, Xiaopeng

    2015-07-01

    This paper presents a unified variational formulation for joint object segmentation and stereo matching which takes both accuracy and efficiency into account. In our approach, the depth map consists of compact objects; each object is represented through three different aspects: 1) the perimeter in image space; 2) the slanted object depth plane; and 3) the planar bias, which adds an additional level of detail on top of each object plane in order to model depth variations within an object. Compared with traditional high-quality low-level solvers, we use a convex formulation of the multilabel Potts model with PatchMatch stereo techniques to generate a depth map for each image at the object level, and show that accurate multiple-view reconstruction can be achieved with our formulation by means of induced homography, without discretization or staircasing artifacts. Our model is formulated as an energy minimization that is optimized via a fast primal-dual algorithm which can handle several hundred object depth segments efficiently. Performance evaluations on the Middlebury benchmark data sets show that our method outperforms the traditional integer-valued disparity strategy as well as the original PatchMatch algorithm and its variants in subpixel-accurate disparity estimation. The proposed algorithm is also evaluated and shown to produce consistently good results for various real-world data sets (KITTI benchmark data sets and multiview benchmark data sets).

  1. Strong solutions of semilinear matched microstructure models

    CERN Document Server

    Escher, Joachim

    2011-01-01

    The subject of this article is a matched microstructure model for Newtonian fluid flows in fractured porous media. This is a homogenized model which takes the form of two coupled parabolic differential equations with boundary conditions in a given (two-scale) domain in Euclidean space. The main objective is to establish the local well-posedness in the strong sense of the flow. Two main settings are investigated: semi-linear systems with linear boundary conditions and semi-linear systems with nonlinear boundary conditions. With the help of analytic semigroups we establish local well-posedness and investigate the long-time behaviour of the solutions in the first case: we establish global existence and show that solutions converge to zero at an exponential rate.

  2. Mask process matching using a model based data preparation solution

    Science.gov (United States)

    Dillon, Brian; Saib, Mohamed; Figueiro, Thiago; Petroni, Paolo; Progler, Chris; Schiavone, Patrick

    2015-10-01

    Process matching is the ability to precisely reproduce the signature of a given fabrication process while using a different one. A process signature is typically described as systematic CD variation driven by feature geometry as a function of feature size, local density or distance to neighboring structures. The interest of performing process matching is usually to address differences in the mask fabrication process without altering the signature of the mask, which is already validated by OPC models and already used in production. The need for such process matching typically arises from the expansion of the production capacity within the same or different mask fabrication facilities, from the introduction of new, perhaps more advanced, equipment to deliver same process of record masks and/or from the re-alignment of processes which have altered over time. For state-of-the-art logic and memory mask processes, such matching requirements can be well below 2 nm and are expected to drop below 1 nm in the near future. In this paper, a data preparation solution for process matching is presented and discussed. Instead of adapting the physical process itself, a calibrated model is used to modify the data to be exposed by the source process in order to induce the results to match those obtained while running the target process. This strategy consists in using the differences between measurements from the source and target processes to calibrate a single differential model. In this approach, no information other than the metrology results is required from either process. Experimental results were obtained by matching two different processes at Photronics. The standard deviation between both processes was 2.4 nm. After applying the process matching technique, the average absolute difference between the processes was reduced to 1.0 nm with a standard deviation of 1.3 nm.
The methods used to achieve the result will be described along with implementation considerations, to

  3. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    Full Text Available In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, Template Matching techniques like Dynamic Time Warping (DTW), Statistical Pattern Matching techniques such as Hidden Markov Models (HMM) and Gaussian Mixture Models (GMM), and Machine Learning techniques such as Neural Networks (NN), Support Vector Machines (SVM), and Decision Trees (DT) are the most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made from the experiments are presented in this paper. For the developed system, the highest word recognition accuracy is achieved with the HMM technique: it offered 100% accuracy during the training process and 97.92% during testing.
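    Of the techniques named above, DTW is the easiest to sketch in full (our illustration; real ASR front ends compare sequences of MFCC frames, not raw numbers):

```python
import math

# Minimal dynamic time warping (DTW) distance between two 1-D sequences:
# a dynamic program over all monotone alignments of the two sequences,
# allowing one-to-many matches so that time-stretched copies align at cost 0.
def dtw(a, b):
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

    For instance, `dtw([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 3, 2, 1])` is 0, since the second sequence is a time-warped copy of the first; this tolerance to speaking-rate variation is why DTW works for isolated-word template matching.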

  4. Motion estimation based on an improved block matching technique

    Institute of Scientific and Technical Information of China (English)

    Tangfei Tao; Chongzhao Han; Yanqi Wu; Xin Kang

    2006-01-01

    An improved block-matching algorithm for fast motion estimation is proposed. The matching criterion is the sum of absolute differences. The basic idea is to obtain the best estimate of the motion vectors through an optimization of the search process, which terminates the time-consuming matching evaluation between the current block and an ineligible candidate block as early as possible and eliminates as many search positions as possible in the search area. The performance of this algorithm is evaluated by theoretical analysis and compared with the full search algorithm (FSA). The simulation results demonstrate that the computational load of this algorithm is much less than that of FSA, and the motion vectors obtained by this algorithm are identical to those of FSA.
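    A sketch (our example, with synthetic frames) of SAD block matching with the early-termination idea: a candidate's running SAD is abandoned as soon as it exceeds the best SAD found so far, so ineligible candidates are never evaluated in full, while the returned vector still equals the full-search result.

```python
import numpy as np

# SAD block matching over a +/-7 pixel search window with row-by-row
# early termination.
rng = np.random.default_rng(6)
prev = rng.integers(0, 256, (64, 64)).astype(int)
curr = np.roll(prev, (3, -2), axis=(0, 1))   # frame shifted by a known motion

def match_block(y, x, bs=8, search=7):
    block = curr[y:y + bs, x:x + bs]
    best, mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if not (0 <= py <= 64 - bs and 0 <= px <= 64 - bs):
                continue
            sad = 0
            for r in range(bs):              # accumulate SAD row by row
                sad += int(np.abs(block[r] - prev[py + r, px:px + bs]).sum())
                if best is not None and sad >= best:
                    break                    # early termination: ineligible
            else:
                best, mv = sad, (dy, dx)     # candidate survived all rows
    return mv, best

mv, sad = match_block(24, 24)
```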

  5. Consideration of impedance matching techniques for efficient piezoelectric energy harvesting.

    Science.gov (United States)

    Kim, Hyeoungwoo; Priya, Shashank; Stephanou, Harry; Uchino, Kenji

    2007-09-01

    This study investigates multiple levels of impedance-matching methods for piezoelectric energy harvesting in order to enhance the conversion of mechanical to electrical energy. First, the transduction rate was improved by using a high piezoelectric voltage constant (g) ceramic material having a magnitude of g33 = 40 × 10^(-3) V·m/N. Second, a transducer structure, cymbal, was optimized and fabricated to match the mechanical impedance of the vibration source to that of the piezoelectric transducer. The cymbal transducer was found to exhibit approximately 40 times higher effective strain coefficient than the piezoelectric ceramics. Third, the electrical impedance matching for the energy harvesting circuit was considered to allow the transfer of generated power to a storage media. It was found that, by using the 10-layer ceramics instead of the single layer, the output current can be increased by 10 times, and the output load can be reduced by 40 times. Furthermore, by using the multilayer ceramics the output power was found to increase by 100%. A direct current (DC)-DC buck converter was fabricated to transfer the accumulated electrical energy in a capacitor to a lower output load. The converter was optimized such that it required less than 5 mW for operation.
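    The electrical side of this matching problem reduces to the textbook maximum-power-transfer relation (our generic illustration, not the paper's specific circuit): a source of open-circuit voltage V and internal resistance Rs delivers P = V²·Rl / (Rs + Rl)² to a resistive load Rl, maximized at Rl = Rs.

```python
# Power delivered to a resistive load from a source with internal
# resistance Rs; peak value V**2 / (4 * Rs) occurs at the matched load.
def load_power(V, Rs, Rl):
    return V ** 2 * Rl / (Rs + Rl) ** 2

V, Rs = 10.0, 50.0     # assumed example values
powers = {Rl: load_power(V, Rs, Rl) for Rl in (10.0, 50.0, 250.0)}
```

    With these numbers the matched load (50 Ω) receives 0.5 W, while loads mismatched by a factor of five in either direction receive the same smaller power, which is why the paper restacks the ceramic into layers to shift the source impedance toward the load.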

  6. An accelerated image matching technique for UAV orthoimage registration

    Science.gov (United States)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the feature-matching stage, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.

  7. An improved perfectly matched layer for the eigenmode expansion technique

    DEFF Research Database (Denmark)

    Gregersen, Niels; Mørk, Jesper

    2008-01-01

    When performing optical simulations for rotationally symmetric geometries using the eigenmode expansion technique, it is necessary to place the geometry under investigation inside a cylinder with perfectly conducting walls. The parasitic reflections at the boundary of the computational domain can...

  8. Synthesizing MR contrast and resolution through a patch matching technique

    Science.gov (United States)

    Roy, Snehashis; Carass, Aaron; Prince, Jerry L.

    2010-03-01

    Tissue contrast and resolution of magnetic resonance neuroimaging data have strong impacts on the utility of the data in clinical and neuroscience tasks such as registration and segmentation. Lengthy acquisition times typically prevent routine acquisition of multiple MR tissue contrast images at high resolution, and the opportunity for detailed analysis using these data would seem to be irrevocably lost. This paper describes an example-based approach using patch matching from a multiple-resolution, multiple-contrast atlas in order to change an image's resolution as well as its MR tissue contrast from one pulse sequence to that of another. The use of this approach to generate different tissue contrasts (T2/PD/FLAIR) from a single T1-weighted image is demonstrated on both phantom and real images.
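    The patch-matching idea can be sketched in one dimension: for each patch of the subject image, find the most similar patch in a source-contrast atlas and copy the co-registered target-contrast value. This toy version (hypothetical signals, not the authors' implementation) is:

```python
def synthesize(subject, atlas_src, atlas_dst, size=3):
    """For every patch in `subject`, find the most similar patch in the
    source-contrast atlas and copy the center value of the co-registered
    target-contrast atlas (edges are left unchanged in this sketch)."""
    half = size // 2
    out = list(subject)
    for i in range(half, len(subject) - half):
        patch = subject[i - half:i + half + 1]
        best_j, best_d = None, float("inf")
        for j in range(half, len(atlas_src) - half):
            cand = atlas_src[j - half:j + half + 1]
            d = sum((a - b) ** 2 for a, b in zip(patch, cand))
            if d < best_d:
                best_d, best_j = d, j
        out[i] = atlas_dst[best_j]
    return out
```

With an atlas whose target contrast inverts the source contrast, the synthesized signal inherits that inversion, which is the essence of changing the tissue contrast by example.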

  9. A Computer Aided Broad Band Impedance Matching Technique Using a Comparison Reflectometer. Ph.D. Thesis

    Science.gov (United States)

    Gordy, R. S.

    1972-01-01

    An improved broadband impedance matching technique was developed. The technique is capable of resolving points in the waveguide which generate reflected energy. A version of the comparison reflectometer was developed and fabricated to determine the mean amplitude of the reflection coefficient excited at points in the guide as a function of distance, and the complex reflection coefficient of a specific discontinuity in the guide as a function of frequency. An impedance matching computer program was developed which is capable of impedance matching the characteristics of each disturbance independent of other reflections in the guide. The characteristics of four standard matching elements were compiled, and their associated curves of reflection coefficient and shunt susceptance as a function of frequency are presented. It is concluded that an economical, fast, and reliable impedance matching technique has been established which can provide broadband impedance matches.

  10. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    Energy Technology Data Exchange (ETDEWEB)

    Ferragut, Erik M [ORNL; Laska, Jason A [ORNL

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
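    A drastically simplified stand-in for this idea (categorical maximum-likelihood models instead of the paper's nonparametric Bayesian ones) compares the likelihood that one shared model generated both fields against separate per-field models:

```python
from collections import Counter
from math import log

def log_likelihood(values, probs):
    return sum(log(probs[v]) for v in values)

def match_score(field_a, field_b):
    """Log-likelihood ratio: one shared model for both fields vs. a separate
    model per field. Higher (closer to 0) means the fields look more alike.
    This is a simplified categorical stand-in, not the paper's method."""
    def mle(values):
        total = len(values)
        return {v: c / total for v, c in Counter(values).items()}
    pooled = mle(field_a + field_b)
    joint = log_likelihood(field_a, pooled) + log_likelihood(field_b, pooled)
    separate = (log_likelihood(field_a, mle(field_a))
                + log_likelihood(field_b, mle(field_b)))
    return joint - separate  # <= 0; equals 0 when the fields are distributed alike
```

Fields with matching value distributions score 0; disjoint fields score strongly negative, so ranking candidate field pairs by this score yields a schema matching.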

  11. Interface Matching and Combining Techniques for Services Integration

    CERN Document Server

    Mouël, Frédéric Le; Frénot, Stéphane

    2008-01-01

    The development of many highly dynamic environments, like pervasive environments, introduces the possibility to use geographically close-related services. Dynamically integrating and unintegrating these services in running applications is a key challenge for this use. In this article, we classify service integration issues according to interfaces exported by services and internal combining techniques. We also propose a contextual integration service, IntegServ, and an interface, Integrable, for developing services.

  12. Generic Energy Matching Model and Figure of Matching Algorithm for Combined Renewable Energy Systems

    Directory of Open Access Journals (Sweden)

    J.C. Brezet

    2009-08-01

    Full Text Available In this paper the Energy Matching Model and Figure of Matching Algorithm, which were originally dedicated only to photovoltaic (PV) systems [1], are extended towards a model and algorithm suitable for combined systems that integrate two or more renewable energy sources into one. The systems under investigation range from mobile portable devices up to the large renewable energy system conceivably to be applied at the Afsluitdijk (Closure Dike) in the north of the Netherlands. The Afsluitdijk is the major dam in the Netherlands, damming off the Zuiderzee, a salt water inlet of the North Sea, and turning it into the fresh water lake of the IJsselmeer. The energy chain of power supplies based on a combination of renewable energy sources can be modeled by using one generic Energy Matching Model as starting point.

  13. Generic Energy Matching Model and Figure of Matching Algorithm for Combined Renewable Energy Systems

    Directory of Open Access Journals (Sweden)

    S. Y. Kan

    2009-08-01

    Full Text Available In this paper the Energy Matching Model and Figure of Matching Algorithm, which were originally dedicated only to photovoltaic (PV) systems [1], are extended towards a model and algorithm suitable for combined systems that integrate two or more renewable energy sources into one. The systems under investigation range from mobile portable devices up to the large renewable energy system conceivably to be applied at the Afsluitdijk (Closure Dike) in the north of the Netherlands. The Afsluitdijk is the major dam in the Netherlands, damming off the Zuiderzee, a salt water inlet of the North Sea, and turning it into the fresh water lake of the IJsselmeer. The energy chain of power supplies based on a combination of renewable energy sources can be modeled by using one generic Energy Matching Model as starting point.

  14. The application of computer color matching techniques to the matching of target colors in a food substrate: a first step in the development of foods with customized appearance.

    Science.gov (United States)

    Kim, Sandra; Golding, Matt; Archer, Richard H

    2012-06-01

    A predictive color matching model based on the colorimetric technique was developed and used to calculate the concentrations of primary food dyes needed in a model food substrate to match a set of standard tile colors. This research is the first stage in the development of novel three-dimensional (3D) foods in which color images or designs can be rapidly reproduced in 3D form. Absorption coefficients were derived for each dye, from a concentration series in the model substrate, a microwave-baked cake. When used in a linear, additive blending model these coefficients were able to predict cake color from selected dye blends to within 3 ΔE*(ab,10) color difference units, or within the limit of a visually acceptable match. Absorption coefficients were converted to pseudo X₁₀, Y₁₀, and Z₁₀ tri-stimulus values (X₁₀(P), Y₁₀(P), Z₁₀(P)) for colorimetric matching. The Allen algorithm was used to calculate dye concentrations to match the X₁₀(P), Y₁₀(P), and Z₁₀(P) values of each tile color. Several recipes for each color were computed with the tile specular component included or excluded, and tested in the cake. Some tile colors proved out-of-gamut, limited by legal dye concentrations; these were scaled to within legal range. Actual differences suggest reasonable visual matches could be achieved for within-gamut tile colors. The Allen algorithm, with appropriate adjustments of concentration outputs, could provide a sufficiently rapid and accurate calculation tool for 3D color food printing. The predictive color matching approach shows potential for use in a novel embodiment of 3D food printing in which a color image or design could be rendered within a food matrix through the selective blending of primary dyes to reproduce each color element. The on-demand nature of this food application requires rapid color outputs which could be provided by the color matching technique, currently used in nonfood industries, rather than by empirical food
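    The linear, additive blending model described here amounts to solving a small linear system: per-unit absorption coefficients times unknown dye concentrations equals the target absorption. A two-dye, two-band sketch with hypothetical coefficients:

```python
def solve2(a, b):
    """Solve the 2x2 linear system a c = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    c0 = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    c1 = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return c0, c1

# Hypothetical per-unit absorption of two dyes in two spectral bands.
k = [[0.8, 0.1],   # band 1: dye A absorbs strongly, dye B weakly
     [0.2, 0.9]]   # band 2: the reverse
target = [0.5, 0.6]          # absorption needed to hit the target colour
conc = solve2(k, target)     # dye concentrations to blend
```

In practice the solution must also be clamped or scaled to the legal concentration range, which is exactly the out-of-gamut handling the paper describes.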

  15. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely the Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  16. Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2015-01-01

    Full Text Available This paper is concerned with the topic of gravity matching aided inertial navigation technology using a Kalman filter. The dynamic state-space model for the Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation, while the local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways. First, the marginalization technique is employed to exploit the conditionally linear substructure and reduce the computational load; specifically, the number of needed sigma points is reduced from 15 to 5 after this technique is used. Second, a robust technique based on a Chi-square test is employed to make the filter insensitive to the uncertainties in the observation model constructed above. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.
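    The observation equation interpolates a gridded gravity map at the navigated position. As a simplified stand-in for the paper's 9-point surface interpolation, a 4-point bilinear sketch:

```python
import math

def bilinear(grid, x, y):
    """Interpolate a gridded gravity map at fractional position (x, y).
    The paper fits a 9-point surface; 4-point bilinear interpolation is shown
    here as a simpler stand-in for the observation equation."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    g00, g10 = grid[y0][x0], grid[y0][x0 + 1]
    g01, g11 = grid[y0 + 1][x0], grid[y0 + 1][x0 + 1]
    return (g00 * (1 - fx) * (1 - fy) + g10 * fx * (1 - fy)
            + g01 * (1 - fx) * fy + g11 * fx * fy)
```

In the filter, the measured gravity is compared against this interpolated value at the position predicted by the inertial error model.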

  17. Model Reduction by Moment Matching for Linear Switched Systems

    DEFF Research Database (Denmark)

    Bastug, Mert; Petreczky, Mihaly; Wisniewski, Rafal;

    2014-01-01

    A moment-matching method for the model reduction of linear switched systems (LSSs) is developed. The method is based upon a partial realization theory of LSSs and is similar to the Krylov subspace methods used for moment matching for linear systems. The results are illustrated by numeric...
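    The essence of moment matching can be shown on a toy linear example: reduce a two-state system to one state while preserving the zeroth moment (the DC gain). This illustrates the general idea only, not the LSS algorithm of the paper:

```python
def dc_gain_2x2(a, b, c):
    """Zeroth moment (DC gain) G(0) = -c A^{-1} b of x' = A x + B u, y = C x."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    # x = A^{-1} b via Cramer's rule
    x0 = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    x1 = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return -(c[0] * x0 + c[1] * x1)

# Original 2-state system (hypothetical example).
A = [[-2.0, 1.0], [0.0, -1.0]]
B = [1.0, 1.0]
C = [1.0, 1.0]
g0 = dc_gain_2x2(A, B, C)

# First-order reduced model: keep the dominant pole (here -1) and choose
# b_r, c_r so the zeroth moment matches: G_r(0) = -c_r * b_r / a_r = g0.
a_r = -1.0
b_r = 1.0
c_r = g0 * (-a_r) / b_r
g0_reduced = -(c_r * b_r) / a_r
```

Krylov methods generalize this: projecting onto a Krylov subspace matches several leading moments at once without computing them explicitly.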

  18. Anticipated growth and business cycles in matching models

    NARCIS (Netherlands)

    den Haan, W.J.; Kaltenbrunner, G.

    2009-01-01

    In a business cycle model that incorporates a standard matching framework, employment increases in response to news shocks, even though the wealth effect associated with the increase in expected productivity reduces labor force participation. The reason is that the matching friction induces

  19. Using visual analytics model for pattern matching in surveillance data

    Science.gov (United States)

    Habibi, Mohammad S.

    2013-03-01

    In a persistent surveillance system, a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper, a method is explained for summarizing video data by identifying events from this tagged information, leading to a concise description of behavior within a section of extended recordings. Efficient retrieval of various events thus becomes the foundation for determining a pattern in surveillance observations, in both their extended and fragmented versions. The patterns, consisting of spatiotemporal semantic content, are extracted and classified by applying video data mining to a generated ontology, and can be matched based on analysts' interests and rules set forth for decision making. The proposed extraction and classification method uses query-by-example to retrieve similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this visual analytics model employs a KD-Tree approach to group patterns across space and time, making it convenient to identify and match any abnormal burst of patterns detected in a surveillance video. Several experimental videos were presented to viewers for independent analysis, and their observations were compared with the results obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
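    The KD-Tree grouping mentioned here relies on nearest-neighbour search with branch pruning. A compact two-dimensional sketch (generic implementation, not the authors' system):

```python
import math

def build(points, depth=0):
    """Build a 2-d KD-tree over (x, y) event coordinates."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Nearest-neighbour search; prune a branch only if the splitting plane
    is farther away than the best distance found so far."""
    if node is None:
        return best
    p, axis = node["point"], node["axis"]
    d = math.dist(p, query)
    if best is None or d < best[1]:
        best = (p, d)
    diff = query[axis] - p[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[1]:          # the far half-space may still hold a closer point
        best = nearest(far, query, best)
    return best
```

Grouping events this way makes "find the pattern closest to this one in space and time" a logarithmic-time query instead of a linear scan.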

  20. Money creation in a random matching model

    OpenAIRE

    Alexei Deviatov

    2004-01-01

    I study money creation in versions of the Trejos-Wright (1995) and Shi (1995) models with indivisible money and individual holdings bounded at two units. I work with the same class of policies as in Deviatov and Wallace (2001), who study money creation in that model. However, I consider an alternative notion of implementability–the ex ante pairwise core. I compute a set of numerical examples to determine whether money creation is beneficial. I find beneficial effects of money creation if indiv...

  1. Enhanced Stereo Matching Technique using Image Gradient for Improved Search Time

    Directory of Open Access Journals (Sweden)

    Pratibha Vellanki

    2011-05-01

    Full Text Available Stereo matching algorithms developed from local area-based correspondence matching involve extensive search. This paper presents a stereo matching technique that computes a dense disparity map while trimming down the search time of local area-based correspondence matching. We use constraints such as the epipolar line, limited disparity, uniqueness and continuity to obtain an initial dense disparity map. We attempt to improve this map by using color information for matching. A new approach is discussed, based on an extension of the continuity constraint, for reducing the search time. We use correspondence between rows and the gradient of the image to compute the disparity. Thus we achieve a good trade-off between accuracy and search time.

  2. Automated curve matching techniques for reproducible, high-resolution palaeomagnetic dating

    Science.gov (United States)

    Lurcock, Pontus; Channell, James

    2016-04-01

    High-resolution relative palaeointensity (RPI) and palaeosecular variation (PSV) data are increasingly important for accurate dating of sedimentary sequences, often in combination with oxygen isotope (δ18O) measurements. A chronology is established by matching a measured downcore signal to a dated reference curve, but there is no standard methodology for performing this correlation. Traditionally, matching is done by eye, but this becomes difficult when two parameters (e.g. RPI and δ18O) are being matched simultaneously, and cannot be done entirely objectively or repeatably. More recently, various automated techniques have appeared for matching one or more signals. We present Scoter, a user-friendly program for dating by signal matching and for comparing different matching techniques. Scoter is a cross-platform application implemented in Python, and consists of a general-purpose signal processing and correlation library linked to a graphical desktop front-end. RPI, PSV, and other records can be opened, pre-processed, and automatically matched with reference curves. A Scoter project can be exported as a self-contained bundle, encapsulating the input data, pre-processing steps, and correlation parameters, as well as the program itself. The analysis can be automatically replicated by anyone using only the resources in the bundle, ensuring full reproducibility. The current version of Scoter incorporates an experimental signal-matching algorithm based on simulated annealing, as well as an interface to the well-established Match program of Lisiecki and Lisiecki (2002), enabling results of the two approaches to be compared directly.
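    A toy version of signal matching by simulated annealing (here restricted to a single integer shift; real tools such as Match optimise a full warping path) can be sketched as:

```python
import math, random

def misfit(signal, reference, shift):
    """Sum of squared differences between the shifted signal and the reference."""
    return sum((signal[i] - reference[i + shift]) ** 2 for i in range(len(signal)))

def anneal(signal, reference, max_shift, steps=3000, seed=1):
    """Search for the best integer shift by simulated annealing: always accept
    downhill moves, accept uphill moves with probability exp(-delta/temp)."""
    rng = random.Random(seed)
    shift = rng.randrange(max_shift + 1)
    cost = misfit(signal, reference, shift)
    best = (shift, cost)
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)        # linear cooling schedule
        cand = min(max_shift, max(0, shift + rng.choice((-1, 1))))
        c = misfit(signal, reference, cand)
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            shift, cost = cand, c
            if cost < best[1]:
                best = (shift, cost)
    return best

# Demo: the "measured" record is a window cut out of the dated reference curve.
reference = [math.sin(i / 5) for i in range(40)]
signal = reference[7:27]
best_shift, best_cost = anneal(signal, reference, max_shift=20)
```

Replacing the single shift with a piecewise warping path, and the misfit with a two-parameter (e.g. RPI plus δ18O) cost, gives the kind of objective an automated correlation tool optimises.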

  3. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Energy Technology Data Exchange (ETDEWEB)

    Stubberud, Peter A., E-mail: stubber@ee.unlv.edu [Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, Las Vegas, NV 89154 (United States); Stubberud, Stephen C., E-mail: scstubberud@ieee.org [Oakridge Technology, San Diego, CA 92121 (United States); Stubberud, Allen R., E-mail: stubberud@att.net [Department of Electrical Engineering and Computer Science, University of California, Irvine, Irvine, CA 92697 (United States)

    2014-12-10

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design

  4. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Science.gov (United States)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-12-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT
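    The random-switching idea can be sketched as follows: each sample is drawn from a randomly chosen gyro, so the fixed gyro-to-gyro bias spread turns into wide-band noise that a filter (here, a plain average, with the ensemble-mean bias assumed zero for illustration) can suppress:

```python
import random

def dsmt_output(gyro_biases, true_rate, n_samples, seed=0):
    """Randomly switch between nominally identical gyros at every sample.
    The fixed per-gyro bias errors are thereby spread into wide-band noise;
    a simple average stands in for the low-pass filter described in the paper."""
    rng = random.Random(seed)
    samples = [true_rate + rng.choice(gyro_biases) for _ in range(n_samples)]
    return sum(samples) / n_samples

# Hypothetical manufacturing biases of three gyros (ensemble mean assumed zero).
biases = [-0.3, 0.1, 0.2]
switched = dsmt_output(biases, true_rate=1.0, n_samples=20000)
```

After filtering, the switched output tracks the true rate far more closely than the worst individual gyro, which is the qualitative claim the paper's analytic noise expressions make precise.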

  5. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment in which highly parallel complex systems can be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  6. An analysis of matching cognitive-behavior therapy techniques to learning styles.

    Science.gov (United States)

    van Doorn, Karlijn; McManus, Freda; Yiend, Jenny

    2012-12-01

    To optimize the effectiveness of cognitive-behavior therapy (CBT) for each individual patient, it is important to discern whether different intervention techniques may be differentially effective. One factor influencing the differential effectiveness of CBT intervention techniques may be the patient's preferred learning style, and whether this is 'matched' to the intervention. The current study uses a retrospective analysis to examine whether the impact of two common CBT interventions (thought records and behavioral experiments) is greater when the intervention is either matched or mismatched to the individual's learning style. Results from this study give some indication that greater belief change is achieved when the intervention technique is matched to participants' learning style, than when intervention techniques are mismatched to learning style. Conclusions are limited by the retrospective nature of the analysis and the limited dose of the intervention in non-clinical participants. Results suggest that further investigation of the impact of matching the patient's learning style to CBT intervention techniques is warranted, using clinical samples with higher dose interventions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. "Living in sin" and marriage : A matching model

    NARCIS (Netherlands)

    Sahib, PR; Gu, XH

    2002-01-01

    This paper develops a two sided matching model of premarital cohabitation and marriage in which premarital cohabitation serves as a period of learning. We solve for the optimal policy to be followed by individuals by treating the model as a three stage dynamic programming problem. We find that coupl

  8. "Living in sin" and marriage : a matching model

    NARCIS (Netherlands)

    Rao Sahib, P. Padma; Gu, X. Xinhua

    1999-01-01

    This paper develops a two sided matching model of premarital cohabitation and marriage in which premarital cohabitation serves as a period of learning. We solve for the optimal policy to be followed by individuals by treating the model as a three stage dynamic programming problem. We find that coupl

  9. "Living in sin" and marriage : A matching model

    NARCIS (Netherlands)

    Sahib, PR; Gu, XH

    This paper develops a two sided matching model of premarital cohabitation and marriage in which premarital cohabitation serves as a period of learning. We solve for the optimal policy to be followed by individuals by treating the model as a three stage dynamic programming problem. We find that

  10. "Living in sin" and marriage : a matching model

    NARCIS (Netherlands)

    Rao Sahib, P. Padma; Gu, X. Xinhua

    1999-01-01

    This paper develops a two sided matching model of premarital cohabitation and marriage in which premarital cohabitation serves as a period of learning. We solve for the optimal policy to be followed by individuals by treating the model as a three stage dynamic programming problem. We find that

  11. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.; Hanea, R.G.; Heemink, A.W.; Jansen, J.D.

    2010-01-01

    Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to g

  12. Keefektifan Model Kooperatif Tipe Make A Match dan Model CPS Terhadap Kemampuan Pemecahan Masalah dan Motivasi Belajar

    Directory of Open Access Journals (Sweden)

    Nur Fitri Amalia

    2013-12-01

    Full Text Available The purpose of this study was to determine the effectiveness of the cooperative Make a Match model and the CPS model on the problem-solving ability and learning motivation of grade X students for the topic of quadratic equations and functions. The population of this study was the tenth grade students of State Senior High School 1 Subah in the 2013/2014 academic year. The samples were taken by random sampling. Class X8 was selected as experimental class I, taught with the cooperative Make a Match model, and class X7 was selected as experimental class II, taught with the CPS model. The data were obtained with tests and questionnaires and then analyzed using proportion tests and t-tests. The results are: (1) the implementation of the cooperative Make a Match model is effective for problem-solving ability; (2) the implementation of the CPS model is effective for problem-solving ability; (3) the implementation of the cooperative Make a Match model is better than the CPS model for problem-solving ability; (4) the implementation of the CPS model is better than the cooperative Make a Match model for learning motivation. Keywords: Make a Match; CPS; problem solving; motivation.

  13. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    Science.gov (United States)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing
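    Regularizing a delta-function kernel solution amounts to ridge-regularized least squares: solving (AᵀA + λI)k = Aᵀb, where a larger λ shrinks the kernel at the cost of larger residuals in the difference image. A two-parameter sketch with hypothetical normal-equation values:

```python
def ridge_solve_2(ata, atb, lam):
    """Solve (AtA + lam*I) k = Atb for a 2-parameter kernel by Cramer's rule.
    lam trades variance in the difference image for variance in the kernel,
    as in the delta-function-basis discussion above."""
    a = [[ata[0][0] + lam, ata[0][1]],
         [ata[1][0], ata[1][1] + lam]]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    k0 = (atb[0] * a[1][1] - atb[1] * a[0][1]) / det
    k1 = (a[0][0] * atb[1] - a[1][0] * atb[0]) / det
    return k0, k1
```

The unregularized solution (λ = 0) reproduces ordinary least squares; increasing λ shrinks the kernel coefficients toward zero, which is the variance exchange the record describes.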

  14. Equilibrium Price Dispersion in a Matching Model with Divisible Money

    NARCIS (Netherlands)

    Kamiya, K.; Sato, T.

    2002-01-01

    The main purpose of this paper is to show that, for any given parameter values, an equilibrium with dispersed prices (two-price equilibrium) exists in a simple matching model with divisible money presented by Green and Zhou (1998). We also show that our two-price equilibrium is unique in certain environments.

  15. Simultaneous exact model matching with stability by output feedback

    Science.gov (United States)

    Kiritsis, Konstadinos H.

    2017-03-01

    In this paper, the problem of simultaneous exact model matching by dynamic output feedback is studied for square and invertible linear time-invariant systems. In particular, explicit necessary and sufficient conditions are established which guarantee the solvability of the problem with stability, and a procedure is given for the computation of the dynamic controller which solves the problem.

  16. The Robust Control Mixer Method for Reconfigurable Control Design By Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, Mogens; Verhagen, M.

    2001-01-01

    This paper proposes a robust reconfigurable control synthesis method based on the combination of the control mixer method and robust H1 control techniques through the model-matching strategy. The control mixer modules are extended from the conventional matrix form into the LTI system form. By...

  18. Performance Evaluation of Fingerprint Identification Based on DCT and DWT using Multiple Matching Techniques

    Directory of Open Access Journals (Sweden)

    Lavanya B. N.

    2011-11-01

    Full Text Available The fingerprint is a physiological trait used to identify a person. In this paper, Performance Evaluation of Fingerprint Identification based on DCT and DWT using Multiple Matching Techniques (FDDMM) is proposed. The fingerprint is segmented into four cells, each of size 150*240. The DCT is applied on each cell. The Haar wavelet is applied on the DCT coefficients of each cell. The directional information features and centre area features are computed on the LL sub-band. The final feature vector is obtained by concatenating the directional information and centre area features. The matching techniques, viz., ED, SVM, and RF, are used to compare test image features with database image features. It is observed that the values of TSR and FRR are better in the case of the proposed algorithm compared to the existing algorithm.

  19. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
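    The greedy support-selection loop described above can be sketched, in its plain linear form, as classical orthogonal matching pursuit. The paper's NOMP replaces exact gradients with ensemble approximations (ISEM) and draws atoms from a K-SVD dictionary; the random dictionary and sparse signal below are toy stand-ins.

```python
import numpy as np

# Plain linear OMP sketch of the greedy support-selection loop: at each
# iteration, pick the dictionary column most correlated with the residual,
# augment the support, and re-fit by least squares on the support.

def omp(D, y, n_nonzero):
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        correlations = np.abs(D.T @ residual)
        correlations[support] = 0.0  # do not re-pick chosen atoms
        support.append(int(np.argmax(correlations)))
        # Least-squares re-fit restricted to the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x, sorted(support)

# Toy demo: a 3-sparse signal in a random normalized dictionary.
rng = np.random.default_rng(1)
D = rng.normal(size=(60, 30))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [2.0, -1.5, 1.0]
y = D @ x_true

x_hat, found = omp(D, y, n_nonzero=3)
```

    In the paper, the least-squares re-fit step is replaced by Tikhonov regularization over the selected basis functions.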

  20. Analysis of terrain map matching using multisensing techniques for applications to autonomous vehicle navigation

    Science.gov (United States)

    Page, Lance; Shen, C. N.

    1991-01-01

    This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.
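    A minimal one-dimensional caricature of the skyline-matching idea: treat each skyline as elevation angle versus azimuth, and grid-search candidate vantage-point parameters for the best least-squares fit between the measured skyline and the map-predicted one. The synthetic skyline function and the single shift parameter below are hypothetical stand-ins for the three translational parameters determined in the paper.

```python
import numpy as np

# Toy skyline matching: elevation-vs-azimuth curves compared in a
# least-squares sense over a grid of candidate vantage-point parameters.

azimuth = np.linspace(0, 2 * np.pi, 360, endpoint=False)

def model_skyline(offset):
    """Hypothetical map-derived skyline for a vantage point at `offset`."""
    return 0.3 * np.sin(azimuth + offset) + 0.1 * np.sin(3 * azimuth + 2 * offset)

# "Measured" skyline: the true vantage point at 0.7 plus sensor noise.
measured = model_skyline(0.7) + 0.01 * np.random.default_rng(4).normal(size=360)

candidates = np.linspace(0, 2 * np.pi, 721)
errors = [np.sum((model_skyline(c) - measured) ** 2) for c in candidates]
best = candidates[int(np.argmin(errors))]
```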

  1. Modelling interfacial cracking with non-matching cohesive interface elements

    Science.gov (United States)

    Nguyen, Vinh Phu; Nguyen, Chi Thanh; Bordas, Stéphane; Heidarpour, Amin

    2016-11-01

    Interfacial cracking occurs in many engineering problems such as delamination in composite laminates, matrix/interface debonding in fibre reinforced composites, etc. Computational modelling of these interfacial cracks usually employs compatible or matching cohesive interface elements. In this paper, incompatible or non-matching cohesive interface elements are proposed for interfacial fracture mechanics problems. They allow non-matching finite element discretisations of the opposite crack faces, thus lifting the constraint of compatible discretisation of the domains sharing the interface. The formulation is based on a discontinuous Galerkin method and works with both initially elastic and rigid cohesive laws. The proposed formulation has the following advantages compared to classical interface elements: (i) non-matching discretisations of the domains and (ii) no high dummy stiffness. Two- and three-dimensional quasi-static fracture simulations are conducted to demonstrate the method. Our method not only simplifies the meshing process but also requires less computational effort than standard interface elements for problems that involve materials/solids having a large mismatch in stiffness.

  2. Estimating a marriage matching model with spillover effects.

    Science.gov (United States)

    Choo, Eugene; Siow, Aloysius

    2006-08-01

    We use marriage matching functions to study how marital patterns change when population supplies change. Specifically, we use a behavioral marriage matching function with spillover effects to rationalize marriage and cohabitation behavior in contemporary Canada. The model can estimate a couple's systematic gains to marriage and cohabitation relative to remaining single. These gains are invariant to changes in population supplies. Instead, changes in population supplies redistribute these gains between a couple. Although the model is behavioral, it is nonparametric. It can fit any observed cross-sectional marriage matching distribution. We use the estimated model to quantify the impacts of gender differences in mortality rates and the baby boom on observed marital behavior in Canada. The higher mortality rate of men makes men scarcer than women. We show that the scarceness of men modestly reduced the welfare of women and increased the welfare of men in the marriage market. On the other hand, the baby boom increased older men's net gains to entering the marriage market and lowered middle-aged women's net gains.

  3. Conversion efficiency enhancement technique for a quasiphase matched second-harmonic generation device

    Science.gov (United States)

    Shinozaki, Keisuke; Takamori, Takeshi; Watanabe, Kenji; Fukunaga, Toshiaki; Kamijoh, Takeshi

    1992-07-01

    Conversion efficiency enhancement techniques have been demonstrated for quasi-phase-matched (QPM) second-harmonic generation (SHG). First, a technique is described for confining a high fundamental optical power density in a waveguide with a domain-inverted grating (SHG waveguide), i.e., a technique for monolithic integration of the SHG waveguide and a distributed Bragg reflector (DBR). A 40-percent increase in conversion compared with a conventional device without a DBR was achieved under QPM conditions. Also described is a method of automatically satisfying QPM conditions, using a laser diode (LD) with antireflection-coated facets. InP/InGaAsP LDs were used, and it was confirmed that the LD oscillated at a wavelength satisfying the QPM conditions. The normalized conversion efficiency was 4.1 percent/W per sq cm.

  4. Model-based segmentation of medical imagery by matching distributions.

    Science.gov (United States)

    Freedman, Daniel; Radke, Richard J; Zhang, Tao; Jeong, Yongwon; Lovelock, D Michael; Chen, George T Y

    2005-03-01

    The segmentation of deformable objects from three-dimensional (3-D) images is an important and challenging problem, especially in the context of medical imagery. We present a new segmentation algorithm based on matching probability distributions of photometric variables that incorporates learned shape and appearance models for the objects of interest. The main innovation over similar approaches is that there is no need to compute a pixelwise correspondence between the model and the image. This allows for a fast, principled algorithm. We present promising results on difficult imagery for 3-D computed tomography images of the male pelvis for the purpose of image-guided radiotherapy of the prostate.
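    The core idea above, scoring a candidate region by how well the distribution of a photometric variable inside it matches a learned model distribution, with no pixelwise correspondence, can be sketched with intensity histograms and a Bhattacharyya similarity. Both choices are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

# Sketch: score a candidate region by comparing its intensity histogram to
# a learned model distribution; no pixel-to-pixel correspondence is needed.

def intensity_histogram(pixels, bins=16, lo=0.0, hi=1.0):
    hist, _ = np.histogram(pixels, bins=bins, range=(lo, hi))
    return hist / max(hist.sum(), 1)

def bhattacharyya(p, q):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.sum(np.sqrt(p * q)))

# Learned "organ" appearance model: bright pixels; two candidate regions.
rng = np.random.default_rng(2)
model = intensity_histogram(np.clip(rng.normal(0.7, 0.05, 5000), 0, 1))
good_region = np.clip(rng.normal(0.7, 0.05, 800), 0, 1)  # organ-like appearance
bad_region = np.clip(rng.normal(0.3, 0.05, 800), 0, 1)   # background-like

score_good = bhattacharyya(model, intensity_histogram(good_region))
score_bad = bhattacharyya(model, intensity_histogram(bad_region))
```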

  5. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  6. Cross-matching: A modified cross-correlation underlying threshold energy model and match-based depth perception

    Directory of Open Access Journals (Sweden)

    Takahiro Doi

    2014-10-01

    Full Text Available Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye’s image can be matched to multiple parts of the other eye’s image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs, because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs, in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term ‘cross-matching’, represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth, and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in
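    A one-dimensional sketch of the proposed computation: plain cross-correlation responds to anticorrelated patterns with a strong inverted signal, whereas half-wave rectifying the two eyes' signals into like-signed channels before correlating (a loose rendering of the cross-matching idea above) attenuates that response. The 1-D random pattern and the exact channel arrangement are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Ordinary cross-correlation vs a half-wave-rectified "cross-matching" score.

def cross_correlation(left, right):
    return float(np.sum(left * right))

def cross_matching(left, right):
    relu = lambda x: np.maximum(x, 0.0)
    # Correlate only like-signed (matched) half-wave rectified channels.
    return float(np.sum(relu(left) * relu(right)) + np.sum(relu(-left) * relu(-right)))

rng = np.random.default_rng(3)
pattern = rng.normal(size=200)   # random-dot-like 1-D luminance profile
correlated = pattern.copy()      # same pattern in both "eyes"
anticorrelated = -pattern        # contrast reversed in one eye

cc_corr = cross_correlation(pattern, correlated)      # strongly positive
cc_anti = cross_correlation(pattern, anticorrelated)  # equally strong, inverted
cm_corr = cross_matching(pattern, correlated)
cm_anti = cross_matching(pattern, anticorrelated)     # attenuated to zero
```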

  7. Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases.

    Science.gov (United States)

    Kelley, Brendan P; Klochko, Chad; Halabi, Safwan; Siegal, Daniel

    2016-06-01

    Retrospective data mining has tremendous potential in research but is time and labor intensive. Current data mining software contains many advanced search features but is limited in its ability to identify patients who meet multiple complex independent search criteria. Simple keyword and Boolean search techniques are ineffective when more complex searches are required, or when a search for multiple mutually inclusive variables becomes important. This is particularly true when trying to identify patients with a set of specific radiologic findings or proximity in time across multiple different imaging modalities. Another challenge that arises in retrospective data mining is that much variation still exists in how image findings are described in radiology reports. We present an algorithmic approach to solve this problem and describe a specific use case scenario in which we applied our technique to a real-world data set in order to identify patients who matched several independent variables in our institution's picture archiving and communication systems (PACS) database.

  8. A New Model for a Carpool Matching Service.

    Science.gov (United States)

    Xia, Jizhe; Curtin, Kevin M; Li, Weihong; Zhao, Yonglong

    2015-01-01

    Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.
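    As a toy counterpart to the heuristic approaches mentioned above, the sketch below pairs commuters greedily by home-location distance. It is an invented illustration of why heuristics are attractive for this combinatorially complex matching problem, not the paper's model, which also optimizes routes and team composition.

```python
import itertools
import math

# Greedy pairing heuristic: repeatedly match the two closest unmatched commuters.

def greedy_pairs(commuters):
    """commuters: dict name -> (x, y). Returns a list of matched pairs."""
    unmatched = dict(commuters)
    pairs = []
    while len(unmatched) >= 2:
        a, b = min(
            itertools.combinations(unmatched, 2),
            key=lambda ab: math.dist(unmatched[ab[0]], unmatched[ab[1]]),
        )
        pairs.append((a, b))
        del unmatched[a], unmatched[b]
    return pairs

# Hypothetical commuter home locations (arbitrary units).
commuters = {
    "ann": (0.0, 0.0),
    "bob": (0.1, 0.0),   # very close to ann
    "cho": (5.0, 5.0),
    "dee": (5.2, 5.1),   # very close to cho
}
teams = greedy_pairs(commuters)
```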

  9. A New Model for a Carpool Matching Service.

    Directory of Open Access Journals (Sweden)

    Jizhe Xia

    Full Text Available Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush hour periods. Carpooling is officially sanctioned by most governments, and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed in order to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.

  10. Matching occupation and self: does matching theory adequately model children's thinking?

    Science.gov (United States)

    Watson, Mark; McMahon, Mary

    2004-10-01

    The present exploratory-descriptive cross-national study focused on the career development of 11- to 14-yr.-old children, in particular whether they can match their personal characteristics with their occupational aspirations. Further, the study explored whether their matching may be explained in terms of a fit between person and environment using Holland's theory as an example. Participants included 511 South African and 372 Australian children. Findings relate to two items of the Revised Career Awareness Survey that require children to relate personal-social knowledge to their favorite occupation. Data were analyzed in three stages using descriptive statistics, i.e., mean scores, frequencies, and percentage agreement. The study indicated that children perceived their personal characteristics to be related to their occupational aspirations. However, how this matching takes place is not adequately accounted for in terms of a career theory such as that of Holland.

  11. Design of InP DHBT power amplifiers at millimeter-wave frequencies using interstage matched cascode technique

    DEFF Research Database (Denmark)

    Yan, Lei; Johansen, Tom Keinicke

    2013-01-01

    In this paper, the design of InP DHBT based millimeter-wave (mm-wave) power amplifiers (PAs) using an interstage matched cascode technique is presented. The output power of a traditional cascode is limited by the early saturation of the common-base (CB) device. The interstage matched cascode can be ...

  12. Accuracy of pitch matching significantly improved by live voice model.

    Science.gov (United States)

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
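    The deviations above are reported in cents, i.e., hundredths of a semitone on a logarithmic frequency scale. A sketch of the computation (the 440 Hz reference and the example frequencies are arbitrary illustrations, not the study's stimuli):

```python
import math

# Pitch deviation in cents between a produced frequency f and a target f_ref:
# cents = 1200 * log2(f / f_ref); 100 cents is one equal-tempered semitone.

def cents(f, f_ref):
    return 1200.0 * math.log2(f / f_ref)

perfect = cents(440.0, 440.0)         # 0 cents: exact match
semitone = cents(466.16, 440.0)       # ~100 cents: a full semitone sharp
slightly_sharp = cents(452.0, 440.0)  # ~46.6 cents: near the 46-cent mean above
```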

  13. Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques

    Science.gov (United States)

    Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.

    1990-01-01

    Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.

  14. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    Science.gov (United States)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools and the correlation of the patient's virtual models with the patient himself are some examples, taken from the biomedical field, of a single underlying problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method gives the patient minimum discomfort, with errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.

  15. Automatic Whole-Spectrum Matching Techniques for Identification of Pure and Mixed Minerals using Raman Spectroscopy

    Science.gov (United States)

    Dyar, M. D.; Carey, C. J.; Breitenfeld, L.; Tague, T.; Wang, P.

    2015-12-01

    In situ use of Raman spectroscopy on Mars is planned for three different instruments in the next decade. Although implementations differ, they share the potential to identify surface minerals and organics and inform Martian geology and geochemistry. Their success depends on the availability of appropriate databases and software for phase identification. For this project, we have consolidated all known publicly-accessible Raman data on minerals for which independent confirmation of phase identity is available, and added hundreds of additional spectra acquired using varying instruments and laser energies. Using these data, we have developed software tools to improve mineral identification accuracy. For pure minerals, whole-spectrum matching algorithms far outperform existing tools based on diagnostic peaks in individual phases. Optimal matching accuracy does depend on subjective end-user choices for data processing (such as baseline removal, intensity normalization, and intensity squashing), as well as specific dataset characteristics. So, to make this tuning process amenable to automated optimization methods, we developed a machine learning-based generalization of these choices within a preprocessing and matching framework. Our novel method dramatically reduces the burden on the user and results in improved matching accuracy. Moving beyond identifying pure phases into quantification of relative abundances is a complex problem because relationships between peak intensity and mineral abundance are obscured by complicating factors: exciting laser frequency, the Raman cross section of the mineral, crystal orientation, and long-range chemical and structural ordering in the crystal lattices. Solving this un-mixing problem requires adaptation of our whole-spectrum algorithms and a large number of test spectra of minerals in known volume proportions, which we are creating for this project. Key to this effort is acquisition of spectra from mixtures of pure minerals paired
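    A bare-bones sketch of whole-spectrum matching as described above: preprocess each spectrum (baseline removal and intensity normalization, two of the user choices mentioned), then score a query against every library spectrum. The toy Gaussian "spectra", peak positions, and cosine-similarity score below are illustrative stand-ins for the tuned, machine-learned pipeline.

```python
import numpy as np

# Whole-spectrum matching sketch: preprocess, then score the full spectrum
# against every library entry rather than comparing individual peaks.

def preprocess(spectrum):
    s = spectrum - np.min(spectrum)  # crude constant-baseline removal
    norm = np.linalg.norm(s)
    return s / norm if norm > 0 else s

def best_match(query, library):
    q = preprocess(query)
    scores = {name: float(preprocess(s) @ q) for name, s in library.items()}
    return max(scores, key=scores.get), scores

# Toy "Raman spectra": Gaussian peaks at mineral-typical shifts (cm^-1-like axis).
x = np.linspace(0, 1200, 600)
peak = lambda center, width=12.0: np.exp(-0.5 * ((x - center) / width) ** 2)
library = {
    "quartz": peak(464),
    "calcite": peak(1086),
    "olivine": peak(824) + peak(856),
}
# Query: quartz-like peak plus an offset and slow drift (baseline artifacts).
query = 0.9 * peak(464) + 0.05 + 0.02 * np.cos(x / 200)

name, scores = best_match(query, library)
```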

  16. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programing languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state-of-the-art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  18. Improved Topographic Models via Concurrent Airborne LIDAR and Dense Image Matching

    Science.gov (United States)

    Mandlburger, G.; Wenzel, K.; Spitzer, A.; Haala, N.; Glira, P.; Pfeifer, N.

    2017-09-01

    Modern airborne sensors integrate laser scanners and digital cameras for capturing topographic data at high spatial resolution. The capability of penetrating vegetation through small openings in the foliage and the high ranging precision in the cm range have made airborne LiDAR the prime terrain acquisition technique. In recent years dense image matching has evolved rapidly and now outperforms laser scanning in terms of the achievable spatial resolution of the derived surface models. In our contribution we analyze the inherent properties and review the typical processing chains of both acquisition techniques. In addition, we present potential synergies of jointly processing image and laser data with emphasis on sensor orientation and point cloud fusion for digital surface model derivation. Test data were concurrently acquired with the RIEGL LMS-Q1560 sensor over the city of Melk, Austria, in January 2016 and served as basis for testing innovative processing strategies. We demonstrate that (i) systematic effects in the resulting scanned and matched 3D point clouds can be minimized based on a hybrid orientation procedure, (ii) systematic differences of the individual point clouds are observable at penetrable, vegetated surfaces due to the different measurement principles, and (iii) improved digital surface models can be derived by combining the higher density of the matching point cloud and the higher reliability of LiDAR point clouds, especially in the narrow alleys and courtyards of the study site, a medieval city.

  19. Text Character Extraction Implementation from Captured Handwritten Image to Text Conversion using Template Matching Technique

    Directory of Open Access Journals (Sweden)

    Barate Seema

    2016-01-01

    Full Text Available Images contain various types of useful information that should be extracted whenever required. Various algorithms and methods have been proposed to extract text from a given image, so that the user can access the text in any image. Variations in text due to differences in size, style, orientation, and alignment of text, as well as low image contrast and composite backgrounds, make text extraction difficult. An application that extracts and recognizes text accurately in real time can be applied to many important tasks such as document analysis, vehicle license plate extraction, and text-based image indexing, and many such applications have become realities in recent years. To overcome the above problems we develop an application that converts the image into text by using algorithms such as bounding box, HSV model, blob analysis, template matching, and template generation.
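    The template-matching step can be sketched as scoring each segmented glyph against stored templates and taking the best match. The 5x5 binary glyphs and the normalized cross-correlation score below are invented stand-ins for real segmented characters and generated templates.

```python
import numpy as np

# Tiny template-matching classifier: label an unknown glyph with the
# template that maximizes normalized cross-correlation.

TEMPLATES = {
    "I": np.array([[0, 0, 1, 0, 0]] * 5, dtype=float),
    "L": np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]], dtype=float),
    "T": np.array([[1, 1, 1, 1, 1]] + [[0, 0, 1, 0, 0]] * 4, dtype=float),
}

def ncc(a, b):
    """Normalized cross-correlation of two equal-size glyphs."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def classify(glyph):
    return max(TEMPLATES, key=lambda k: ncc(glyph, TEMPLATES[k]))

# A noisy "T": one stem pixel flipped off, as if poorly segmented.
noisy_t = TEMPLATES["T"].copy()
noisy_t[3, 2] = 0.0
label = classify(noisy_t)
```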

  20. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    Science.gov (United States)

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  1. Fingerprint matching by thin-plate spline modelling of elastic deformations

    NARCIS (Netherlands)

    Bazen, Asker M.; Gerez, Sabih H.

    2003-01-01

    This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching
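    The deformation model named above can be sketched directly: given corresponding minutiae in two fingerprints, fit the thin-plate spline warp that maps one set onto the other. This numpy sketch illustrates only the TPS model itself (the standard r^2 log r kernel), not the paper's local and global matching stages; the minutiae coordinates are invented.

```python
import numpy as np

# Thin-plate spline fit: solve for radial-basis weights w and affine part a
# so that the warp interpolates the given point correspondences exactly.

def tps_fit(src, dst):
    """Fit a TPS mapping src (n,2) -> dst (n,2); returns a warp function."""
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = np.where(d > 0, d**2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    w, a = params[:n], params[n:]

    def warp(pts):
        dd = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
        U = np.where(dd > 0, dd**2 * np.log(np.where(dd > 0, dd, 1.0)), 0.0)
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp

# Minutiae in print A and their (elastically displaced) mates in print B.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = src + np.array([[0, 0], [0.05, 0], [0, 0.05], [0.05, 0.05], [0.1, 0.02]])
warp = tps_fit(src, dst)
mapped = warp(src)  # should reproduce dst: TPS interpolates exactly
```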

  2. Optical Cluster-Finding with An Adaptive Matched-Filter Technique: Algorithm and Comparison with Simulations

    CERN Document Server

    Dong, Feng; Gunn, James E; Wechsler, Risa H

    2007-01-01

    We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ~85% complete and over 90% pure for clusters with masses above 1.0*10^{14} h^{-1} M_solar and redshifts up to z=0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensi...

  3. Optical Cluster-Finding with an Adaptive Matched-Filter Technique: Algorithm and Comparison with Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.

    2007-10-29

    We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ~85% complete and over 90% pure for clusters with masses above 1.0 × 10^14 h^-1 M_solar and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum-likelihood method are shown to be small (typically less than 0.01) over the whole redshift range with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 a roughly linear indicator of its virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.

  4. Data Matching Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection

    CERN Document Server

    Christen, Peter

    2012-01-01

    Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of da
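
    The pairwise comparison step at the heart of data matching can be sketched with a toy linker. The records, field names, bigram-Jaccard similarity, and 0.7 threshold below are illustrative assumptions, not taken from the book:

```python
# Toy record linkage: bigram-Jaccard field similarity + greedy one-to-one pairing.
db_a = [{"id": 1, "name": "Jon Smith",  "city": "Sydney"},
        {"id": 2, "name": "Mary Jones", "city": "Perth"}]
db_b = [{"id": 7, "name": "John Smith", "city": "Sydney"},
        {"id": 8, "name": "Marie Jone", "city": "Perth"}]

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard(a, b):
    """Character-bigram Jaccard similarity, robust to small typos."""
    x, y = bigrams(a), bigrams(b)
    return len(x & y) / len(x | y) if x | y else 1.0

def similarity(r1, r2, fields=("name", "city")):
    """Record-level score: average field similarity."""
    return sum(jaccard(r1[f], r2[f]) for f in fields) / len(fields)

def match_records(a, b, threshold=0.7):
    """Greedy one-to-one linkage: pair each record with its best unused match."""
    pairs, used = [], set()
    for r1 in a:
        candidates = [(similarity(r1, r2), r2["id"]) for r2 in b
                      if r2["id"] not in used]
        score, rid = max(candidates, default=(0.0, None))
        if rid is not None and score >= threshold:
            pairs.append((r1["id"], rid, round(score, 2)))
            used.add(rid)
    return pairs

print(match_records(db_a, db_b))  # [(1, 7, 0.85), (2, 8, 0.75)]
```

    Real systems replace the exhaustive comparison with blocking/indexing and use calibrated classifiers rather than a fixed threshold, but the match/score/decide structure is the same.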

  5. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions; the goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record-version count. The second variant of the model was used, in which the time a client needs to process record versions depends explicitly on the number of updates performed by the other users between sequential updates by the current client. To prove the model's adequacy, a real experiment was conducted in a cloud cluster of 10 virtual nodes provided by DigitalOcean, with Ubuntu Server 14.04 as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Its use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating on a record in parallel, which is very important while conducting experiments. The application was developed with the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by the clients, and RZ, a service record containing record-update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the processed record in the database while old versions are deleted from the DB. The client then rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and reports the results of processing. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that

  6. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that, in essence, govern the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of creating the ASM from the DSM.

  7. A numerical/empirical technique for history matching and predicting cyclic steam performance in Canadian oil sands reservoirs

    Science.gov (United States)

    Leshchyshyn, Theodore Henry

    The oil sands of Alberta contain some one trillion barrels of bitumen-in-place, most contained in the McMurray, Wabiskaw, Clearwater, and Grand Rapids formations. Depth of burial is 0--550 m, 10% of which is surface mineable, the rest recoverable by in-situ technology-driven enhanced oil recovery schemes. To date, significant commercial recovery has been attributed to Cyclic Steam Stimulation (CSS) using vertical wellbores. Other techniques, such as Steam Assisted Gravity Drainage (SAGD), are proving superior to other recovery methods for increasing early oil production, but at higher initial development and/or operating costs. Successful optimization of bitumen production rates from the entire reservoir is ultimately decided by the operator's understanding of the reservoir in its original state and/or the positive and negative changes which occur in oil sands and heavy oil deposits upon heat stimulation. Reservoir description is the single most important factor in attaining satisfactory history matches and forecasts for optimized production of the commercially-operated processes. Reservoir characterization which lacks understanding can destroy a project. For example, incorrect assumptions in the geological model for the Wolf Lake Project in northeast Alberta resulted in only about one-half of the predicted recovery by the original field process. It will be shown here why the presence of thin calcite streaks within oil sands can determine the success or failure of a commercial cyclic steam project. A vast amount of field data, mostly from the Primrose Heavy Oil Project (PHOP) near Cold Lake, Alberta, enabled the development of a simple set of correlation curves for predicting bitumen production using CSS. A previously calibrated thermal numerical simulation model was used in its simplest form, that is, a single layer, radial grid blocks, "fingering"- or "dilation"-adjusted permeability curves, and no simulated fracture, to generate the first cycle production

  8. Faculty Development: A Stage Model Matched to Blended Learning Maturation

    Science.gov (United States)

    Fetters, Michael L.; Duby, Tova Garcia

    2011-01-01

    Faculty development programs are critical to the implementation and support of curriculum innovation. In this case study, the authors present lessons learned from ten years of experience in faculty development programs created to support innovation in technology enhanced learning. Stages of curriculum innovation are matched to stages of faculty…

  9. Perfectly matched layer for an elastic parabolic equation model in ocean acoustics

    Science.gov (United States)

    Xu, Chuanxiu; Zhang, Haigang; Piao, Shengchun; Yang, Shi'e.; Sun, Sipeng; Tang, Jun

    2017-02-01

    The perfectly matched layer (PML) is an effective technique for truncating unbounded domains with minimal spurious reflections. A fluid parabolic equation (PE) model applying the PML technique was previously used to analyze the sound propagation problem in a range-dependent waveguide (Lu and Zhu, 2007). However, Lu and Zhu only considered a standard fluid PE to demonstrate the capability of the PML and did not take improved one-way models into consideration; they applied a [1/1] Padé approximant to the parabolic equation. Higher-order PEs are more accurate than standard ones when very-large-angle propagation is considered. As for range-dependent problems, the techniques to handle the vertical interface between adjacent regions are mainly energy-conserving and single-scattering. In this paper, the PML technique is generalized to the higher-order elastic PE, as well as to the higher-order fluid PE. The energy-conserving correction is used in range-dependent waveguides. Simulations are made for both acoustic and seismo-acoustic cases. Range-independent and range-dependent waveguides are both adopted to test the accuracy and efficiency of this method. The numerical results illustrate that a PML is much more effective than an artificial absorbing layer (ABL) in both acoustic and seismo-acoustic sound propagation modeling.

  10. Object Visual Tracking Using Window-Matching Techniques and Kalman Filtering

    OpenAIRE

    2010-01-01

    This work presents an algorithm for tracking objects in a sequence of images. The algorithm is based on a window-matching approach that uses the sum of squared differences (SSD) as a similarity measure. In order to improve the tracking performance under
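
    The window-matching step described in this record can be sketched as an exhaustive SSD search over a local window. The frame contents, window size, and search radius below are illustrative assumptions:

```python
import numpy as np

def ssd_match(frame, template, search_center, radius):
    """Find the template position in `frame` that minimizes the sum of
    squared differences (SSD), searching a window around `search_center`."""
    th, tw = template.shape
    cy, cx = search_center
    best, best_pos = np.inf, None
    for y in range(max(0, cy - radius), min(frame.shape[0] - th, cy + radius) + 1):
        for x in range(max(0, cx - radius), min(frame.shape[1] - tw, cx + radius) + 1):
            patch = frame[y:y + th, x:x + tw]
            ssd = np.sum((patch.astype(float) - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos, best

# Toy demo: a bright 3x3 blob embedded in a 20x20 frame.
frame = np.zeros((20, 20))
frame[11:14, 7:10] = 1.0
template = np.ones((3, 3))
pos, score = ssd_match(frame, template, search_center=(10, 8), radius=4)
print(pos, score)  # (11, 7) 0.0
```

    In a tracker, `search_center` would come from the Kalman filter's predicted state, and the SSD minimum would feed back as the measurement update.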

  11. Practical guidance for the use of a pattern-matching technique in case-study research: a case presentation.

    Science.gov (United States)

    Almutairi, Adel F; Gardner, Glenn E; McCarthy, Alexandra

    2014-06-01

    This paper reports on a study that demonstrates how to apply pattern matching as an analytical method in case-study research. Case-study design is appropriate for the investigation of highly-contextualized phenomena that occur within the social world. Case-study design is considered a pragmatic approach that permits employment of multiple methods and data sources in order to attain a rich understanding of the phenomenon under investigation. The findings from such multiple methods can be reconciled in case-study analysis, specifically through a pattern-matching technique. Although this technique is theoretically explained in the literature, there is scant guidance on how to apply the method practically when analyzing data. This paper demonstrates the steps taken during pattern matching in a completed case-study project that investigated the influence of cultural diversity in a multicultural nursing workforce on the quality and safety of patient care. The example highlighted in this paper contributes to the practical understanding of the pattern-matching process, and can also make a substantial contribution to case-study methods.

  12. Scientist Role Models in the Classroom: How Important Is Gender Matching?

    Science.gov (United States)

    Conner, Laura D. Carsten; Danielson, Jennifer

    2016-01-01

    Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used…

  13. A match-mismatch test of a stage model of behaviour change in tobacco smoking

    NARCIS (Netherlands)

    Dijkstra, A; Conijn, B; De Vries, H

    2006-01-01

    Aims An innovation offered by stage models of behaviour change is that of stage-matched interventions. Match-mismatch studies are the primary test of this idea but also the primary test of the validity of stage models. This study aimed at conducting such a test among tobacco smokers using the Social

  15. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo, E-mail: kes7741@snu.ac.kr

    2015-04-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for the fundamental nuclear thermal-hydraulic researches. By this method, distortions of the optical measurements such as PIV and LDV have been successfully minimized using various combinations of the model materials and the working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective layer sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, the SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on the mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range 1.453–1.555 and 2.37–6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that the MIR can be successfully achieved at the refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation for the nuclear thermal-hydraulic researches.

  16. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    Many business process modelling techniques exist today. This article investigates the differences among them: for each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques, based on two criteria: the notation, and how the technique works when applied to Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  17. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    OpenAIRE

    2015-01-01

    The present article relates in general to digital demodulation of Binary Frequency Shift Keying (BFSK). The objective of the present research is to obtain a new processing method for demodulating BFSK-signals in order to reduce hardware complexity in comparison with other methods reported. The solution proposed here makes use of the matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a Sampler Correlator and curve segmentation blo...
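
    The matched-filter half of a BFSK demodulator can be sketched by correlating each bit interval against both tone templates and picking the stronger response. The sample rate, bit rate, and tone frequencies below are assumed values, and this sketch does not reproduce the authors' Sampler Correlator or curve-segmentation design:

```python
import numpy as np

fs, bit_rate = 8000, 100      # sample rate (Hz) and bit rate (assumed values)
f0, f1 = 1000, 2000           # "0" and "1" tone frequencies (assumed values)
spb = fs // bit_rate          # samples per bit
t = np.arange(spb) / fs

def modulate(bits):
    """Emit one constant-frequency tone per bit."""
    return np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

def demodulate(signal):
    """Matched-filter detection: correlate each bit interval with both tone
    templates and decide in favour of the stronger response."""
    tmpl0 = np.sin(2 * np.pi * f0 * t)
    tmpl1 = np.sin(2 * np.pi * f1 * t)
    return [1 if abs(chunk @ tmpl1) > abs(chunk @ tmpl0) else 0
            for chunk in np.split(signal, len(signal) // spb)]

bits = [1, 0, 1, 1, 0, 0, 1]
print(demodulate(modulate(bits)) == bits)  # True
```

    Choosing tones with an integer number of cycles per bit keeps the two templates orthogonal over each interval, which is what makes the correlation decision clean.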

  18. Improved colour matching technique for fused nighttime imagery with daytime colours

    Science.gov (United States)

    Hogervorst, Maarten A.; Toet, Alexander

    2016-10-01

    Previously, we presented a method for applying daytime colours to fused nighttime (e.g., intensified and LWIR) imagery (Toet and Hogervorst, Opt. Eng. 51(1), 2012). Our colour mapping not only imparts a natural daylight appearance to multiband nighttime images but also enhances the contrast and visibility of otherwise obscured details. As a result, this colourizing method leads to increased ease of interpretation, better discrimination and identification of materials, faster reaction times and, ultimately, improved situational awareness (Toet et al., Opt. Eng. 53(4), 2014). A crucial step in this colouring process is the choice of a suitable colour mapping scheme. When daytime colour images and multiband sensor images of the same scene are available, the colour mapping can be derived from matching image samples (i.e., by relating colour values to sensor signal intensities). When no exact matching reference images are available, the colour transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image (Toet, Info. Fus. 4(3), 2003). In the current study we investigated new colour fusion schemes that combine the advantages of both methods, using the correspondence between multiband sensor values and daytime colours (1st method) in a smooth transformation (2nd method). We designed and evaluated three new fusion schemes that focus on: i) a closer match with the daytime luminances, ii) improved saliency of hot targets and iii) improved discriminability of materials
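
    The statistics-based mapping mentioned above (deriving the transformation from first-order statistics) can be sketched as a per-channel mean/std transfer. The array shapes and values are illustrative stand-ins, not the authors' sensor data:

```python
import numpy as np

def match_first_order_stats(source, reference):
    """Shift and scale each source channel so its mean and standard deviation
    match the reference channel (a first-order-statistics colour transfer)."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s, r = source[..., c], reference[..., c]
        s_std = s.std() if s.std() > 0 else 1.0
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out

# Toy stand-ins for a nighttime sensor image and a daytime reference image.
rng = np.random.default_rng(1)
night = rng.normal(0.3, 0.1, size=(32, 32, 3))
day = rng.normal(0.5, 0.2, size=(32, 32, 3))
mapped = match_first_order_stats(night, day)
print(np.allclose(mapped.mean(axis=(0, 1)), day.mean(axis=(0, 1))))  # True
```

    Sample-based mappings instead build a lookup from matched sensor/colour pairs; the schemes evaluated in this record combine the two ideas.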

  19. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    Science.gov (United States)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Wang, Jason J.; Pueyo, Laurent; Nielsen, Eric L.; De Rosa, Robert J.; Czekala, Ian; Marley, Mark S.; Arriaga, Pauline; Bailey, Vanessa P.; Barman, Travis; Bulger, Joanna; Chilcote, Jeffrey; Cotten, Tara; Doyon, Rene; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Gerard, Benjamin L.; Goodsell, Stephen J.; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Marois, Christian; Metchev, Stanimir; Millar-Blanchaer, Maxwell A.; Morzinski, Katie M.; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall; Poyneer, Lisa; Rajan, Abhijith; Rameau, Julien; Rantakyrö, Fredrik T.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane; Wolff, Schuyler

    2017-06-01

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.

  20. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on co-linearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  1. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    Science.gov (United States)

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on co-linearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable by the proposed registration approach as an alternative to the labour-intensive manual registration process.
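
    The core build-table/vote cycle of geometric hashing can be sketched on 2D points. The model points, quantization step, and rigid transform below are illustrative assumptions, not the paper's building-corner features:

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def basis_coords(points, b0, b1):
    """Express all points in the frame of the ordered basis pair (b0, b1):
    origin at the midpoint, x-axis along b0->b1, scaled by the basis length."""
    origin = (b0 + b1) / 2.0
    ex = b1 - b0
    scale = np.linalg.norm(ex)
    ex = ex / scale
    ey = np.array([-ex[1], ex[0]])          # perpendicular axis
    rel = (points - origin) / scale
    return np.stack([rel @ ex, rel @ ey], axis=1)

def build_table(model, q=0.25):
    """Hash every point's quantized basis-frame coordinates for every basis pair."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model)), 2):
        for coord in basis_coords(model, model[i], model[j]):
            table[tuple(np.round(coord / q).astype(int))].append((i, j))
    return table

def vote(table, scene, i, j, q=0.25):
    """Pick a scene basis (i, j) and vote for the model basis that explains it."""
    votes = defaultdict(int)
    for coord in basis_coords(scene, scene[i], scene[j]):
        for basis in table.get(tuple(np.round(coord / q).astype(int)), ()):
            votes[basis] += 1
    return max(votes.items(), key=lambda kv: kv[1])

# Toy model and a rotated + translated copy of it as the "image" features.
model = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [1.0, 2.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = model @ R.T + np.array([10.0, -5.0])

best_basis, n_votes = vote(build_table(model), scene, 0, 1)
print(best_basis, n_votes)  # (0, 1) 4
```

    Because coordinates are expressed relative to a basis pair, the hash keys are invariant to rotation, translation, and scale; a basis correspondence with many votes then seeds the least-squares adjustment step.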

  2. A Perceptual Matching Technique for Depth Judgements in Optical, See-Through Augmented Reality

    Science.gov (United States)

    2006-03-01

    a perceptual matching task. Figure 1 shows the experimental setting. We seated observers on a tall stool 3.4 meters from one end of a 50.1-meter… center of each lens was 147.3 cm above the floor, and we adjusted the height of the stool so that observers could comfortably look through the display…

  3. Approximate Matching as a Key Technique in Organization of Natural and Artificial Intelligence

    Science.gov (United States)

    Mack, Marilyn; Lapir, Gennadi M.; Berkovich, Simon

    2000-01-01

    The basic property of an intelligent system, natural or artificial, is "understanding". We consider the following formalization of the idea of "understanding" among information systems. When system 1 issues a request to system 2, it expects a certain kind of desirable reaction. If such a reaction occurs, system 1 assumes that its request was "understood". In application to simple, "push-button" systems the situation is trivial, because in a small system the required relationship between input requests and desired outputs can be specified exactly. As systems grow, the situation becomes more complex and matching between requests and actions becomes approximate.
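
    A minimal illustration of approximate matching between requests and known actions, using Python's standard-library fuzzy matcher (the command list and misspelled request are invented for the example):

```python
from difflib import get_close_matches

# A small system's known requests; an exact lookup would fail on typos.
commands = ["open file", "close file", "save file", "print document"]

request = "opne file"  # misspelled request
print(get_close_matches(request, commands, n=1, cutoff=0.6))  # ['open file']
```

    The `cutoff` parameter sets how approximate a match may be before the system declines to "understand" the request at all.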

  4. Matching Heterogenous Open Innovation Strategies with Business Model Dimensions

    OpenAIRE

    Saebi, Tina; Foss, Nicolai Juul

    2014-01-01

    Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies' business models are not attuned to open strategies. Accordingly, we propose a contingency model of open business models by systematically linking open innovation strategies to core business model dimensions, notably the content, structure, and governance of transactions. We further illustrate a c...

  5. Model-based shape matching of orthopaedic implants in RSA and fluoroscopy

    NARCIS (Netherlands)

    Prins, Anne Hendrik

    2015-01-01

    Model-based shape matching is commonly used, for example to measure the migration of an implant with Roentgen stereophotogrammetric analysis (RSA) or to measure implant kinematics with fluoroscopy. The aim of this thesis was to investigate the general usability of shape matching and to improve the r

  6. The Effectiveness of Contingency Model Training: A Review of the Validation of LEADER MATCH.

    Science.gov (United States)

    Fiedler, Fred E.; Mahar, Linda

    1979-01-01

    Twelve studies are reviewed which tested the effectiveness of LEADER MATCH, a new leadership training method based on Fiedler's Contingency Model. The performance evaluations of 423 trained leaders were compared to those of 484 controls. All studies yielded statistically significant results supporting LEADER MATCH training. (Editor/SJL)

  7. A Two-Sided Matching Decision Model Based on Uncertain Preference Sequences

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    2015-01-01

    Full Text Available Two-sided matching is a hot issue in the field of operations research and decision analysis. This paper reviews the typical two-sided matching models and their limitations in some specific contexts, and then puts forward a new decision model based on uncertain preference sequences. In this model, we first design a data-processing method to obtain the preference ordinal values in an uncertain preference sequence, and then compute the preference distance of each matching pair based on these ordinal values. The objectives are to maximize the number of matches and to minimize the total sum of preference distances over all matching pairs, under a lowest-threshold constraint on matching effect; the model is solved with a branch-and-bound algorithm. Meanwhile, we take two numerical cases as examples and analyze the different matching solutions under the one-norm, two-norm, and positive-infinity-norm distances, respectively. We also compare our decision model with two other approaches and summarize their characteristics for two-sided matching.
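
    The objective of minimizing the total preference distance over one-to-one matchings can be sketched as follows. The preference ordinals and the p-norm distance form are illustrative assumptions, and brute force stands in for the paper's branch-and-bound search:

```python
from itertools import permutations

# Hypothetical preference ordinals (1 = most preferred): pref_a[i][j] is the
# rank agent i on side A assigns to agent j on side B, and pref_b[j][i] is the
# rank agent j on side B assigns to agent i on side A.
pref_a = [[1, 2, 3],
          [2, 1, 3],
          [3, 1, 2]]
pref_b = [[1, 3, 2],
          [2, 1, 3],
          [1, 2, 3]]

def best_matching(p=1):
    """Find the one-to-one matching minimizing the total p-norm preference
    distance (brute force is fine at this toy size)."""
    n = len(pref_a)
    def dist(i, j):
        # p-norm over the two sides' ordinal ranks for the pair (i, j)
        return (pref_a[i][j] ** p + pref_b[j][i] ** p) ** (1 / p)
    return min(permutations(range(n)),
               key=lambda perm: sum(dist(i, perm[i]) for i in range(n)))

print(best_matching(p=1))  # (0, 1, 2): A0-B0, A1-B1, A2-B2
```

    Varying `p` reproduces the paper's comparison between one-norm, two-norm, and infinity-norm distances, which can change which matching is optimal.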

  8. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data a

  10. A General Epipolar-Line Model between Optical and SAR Images and Used in Image Matching

    Directory of Open Access Journals (Sweden)

    Shuai Xing

    2014-02-01

    Full Text Available The search space and search strategy are important for optical and SAR image matching. In this paper, a general epipolar-line model between linear-array push-broom optical and SAR images is proposed. A dynamic approximate epipolar-line constraint model (DAELCM) is then constructed and used to build a new image matching algorithm with the Harris operator and CRA. Experimental results show that the general epipolar-line model is valid and can be successfully used in optical and SAR image matching, effectively limiting the search space and decreasing computation.

  11. A Matched Filter Technique for Slow Radio Transient Detection and First Demonstration with the Murchison Widefield Array

    Science.gov (United States)

    Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.

    2017-03-01

    Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies is still relatively unexplored. We present a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ∼20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
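
    The idea of a temporal matched filter on a per-pixel light curve can be sketched with a boxcar template. The noise level, transient amplitude, and template width are illustrative assumptions, not MWA parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-pixel light curve: unit-variance noise plus a boxcar transient.
n_samples = 200
series = rng.normal(0.0, 1.0, n_samples)
series[80:100] += 3.0             # transient lasting 20 samples

def matched_filter_snr(series, width):
    """Correlate the mean-subtracted time series with a unit-norm boxcar
    template; return the start index and height of the peak response."""
    x = series - series.mean()
    template = np.ones(width) / np.sqrt(width)     # unit-norm boxcar
    response = np.correlate(x, template, mode="valid")
    k = int(np.argmax(response))
    return k, float(response[k])

start, snr = matched_filter_snr(series, width=20)
print(start, round(snr, 1))
```

    In the imaging case the same filter is applied to every pixel's time series, and a bank of template widths covers the range of transient durations searched (2 minutes to 3 months in this record).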

  12. Fast Adaptation in Generative Models with Generative Matching Networks

    OpenAIRE

    Bartunov, Sergey; Vetrov, Dmitry P.

    2016-01-01

    Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea has been explored only under certain limitations, such as restricting the input data to be a single object or multiple objects representing the same con...

  13. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    Science.gov (United States)

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    Ontology is the process of growth and elucidation of the concepts of an information domain common to a group of users. Introducing ontology into information retrieval is a natural way to improve the retrieval of the relevant information users require. Matching keywords against a historical or domain corpus is significant in recent approaches for finding the best match for a given input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to use a semantic model of the query and the conditions for semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and the information domain are focused on semantic matching, to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the relevant documents when compared to standard ontology.

  14. Hybrid Ontology for Semantic Information Retrieval Model Using Keyword Matching Indexing System

    Directory of Open Access Journals (Sweden)

    K. R. Uthayan

    2015-01-01

    Full Text Available Ontology is the process of growth and elucidation of the concepts of an information domain common to a group of users. Introducing ontology into information retrieval is a natural way to improve the retrieval of the relevant information users require. Matching keywords against a historical or domain corpus is significant in recent approaches for finding the best match for a given input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to use a semantic model of the query and the conditions for semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and the information domain are focused on semantic matching, to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the relevant documents when compared to standard ontology.

  15. A Fast Block-Matching Algorithm Using Smooth Motion Vector Field Adaptive Search Technique

    Institute of Scientific and Technical Information of China (English)

    LI Bo(李波); LI Wei(李炜); TU YaMing(涂亚明)

    2003-01-01

    In many video standards based on inter-frame compression, such as H.26x and MPEG, the block-matching algorithm has been widely adopted for motion estimation because of its simplicity and effectiveness. Nevertheless, since motion estimation is computationally complex, fast algorithms for motion estimation have always been an important and attractive topic in video compression. From the viewpoint of making the motion vector field smoother, this paper proposes a new algorithm, SMVFAST. On the basis of motion correlation, it predicts the starting point from neighboring motion vectors according to their SADs. Adaptive search modes are used in the search process by simply classifying motion activity. After discovering the ubiquitous ratio between the SADs of collocated blocks in consecutive frames, the paper proposes an effective half-stop criterion that can quickly stop the search process with good enough results. Experiments show that SMVFAST obtains almost the same results as full search at very low computational cost, and outperforms MVFAST and PMVFAST, which are adopted by MPEG-4, in both speed and quality.
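    For reference, the exhaustive full-search baseline that fast algorithms such as SMVFAST approximate can be sketched as a generic SAD block matcher. Names and the search geometry are illustrative; the predictive start point, adaptive modes, and half-stop criterion of SMVFAST are not modeled here:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def full_search(ref, cur, bx, by, n, radius):
    """Exhaustively search a (2*radius+1)^2 window in the reference
    frame for the block best matching the n-by-n current-frame block
    whose top-left corner is (bx, by); returns (motion vector, SAD)."""
    target = [row[bx:bx + n] for row in cur[by:by + n]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate block falls outside the frame
            cand = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(target, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

    A fast algorithm aims to visit only a small fraction of these candidate positions while finding (almost) the same minimum-SAD vector.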

  16. Radar seeker based autonomous navigation update system using topography feature matching techniques

    Science.gov (United States)

    Lerche, H. D.; Tumbreagel, F.

    1992-11-01

    The discussed navigation update system was designed for an unmanned platform with fire-and-forget capability; it meets this requirement through fully autonomous operation. The system concept is characterized by complementary use of the radar seeker for target identification as well as for the navigation function. The system works in navigation mode during preprogrammable phases in which the primary target-identification function is not active, or in parallel processing. The dual-function radar seeker navigates the drone during the midcourse and terminal phases of the mission. Its high resolution, due to range measurement and Doppler beam sharpening, together with its radar-reflectivity sensing capability, is the basis for topography-referenced navigation computation. The detected height jumps (from terrain elevation and cultural objects) and radar reflectivity features are matched against topography-referenced features. The database comprises elevation data and selected radar-reflectivity features that are robust against seasonal influences. The operational benefits of the discussed system are: (1) improved navigation performance with a high probability of position fixing, even over flat terrain; (2) operation at higher altitudes; and (3) bad-weather capability. The developed software modules were verified with captive flight-test data running in a hardware-in-the-loop simulation.

  17. Profile-Matching Techniques for On-Demand Software Management in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Falko Dressler

    2007-03-01

    Full Text Available The heterogeneity and dynamics of hardware and software configurations are steadily increasing in wireless sensor networks (WSNs). Therefore, software management is becoming one of the most prominent challenges in this domain. This applies especially to on-demand updates for improved redundancy or adaptive task allocation. Methodologies for efficient software management in WSNs need to be investigated for operating and maintaining large-scale sensor networks. We developed a profile-based software management scheme that consists of a dynamic profile-matching algorithm to identify current hardware and software configurations, an on-demand code generation module, and mechanisms for dynamic network-centric reprogramming of sensor nodes. We exploit the advantages of robot-based reconfiguration and reprogramming methods for efficient and secure software management. The mobile robot system is employed for decision processes and to store the source code repository. The developed methods are depicted in detail. Additionally, we demonstrate the applicability and advantages based on a scenario that we implemented in our lab.

  18. Profile-Matching Techniques for On-Demand Software Management in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dressler Falko

    2007-01-01

    Full Text Available The heterogeneity and dynamics of hardware and software configurations are steadily increasing in wireless sensor networks (WSNs). Therefore, software management is becoming one of the most prominent challenges in this domain. This applies especially to on-demand updates for improved redundancy or adaptive task allocation. Methodologies for efficient software management in WSNs need to be investigated for operating and maintaining large-scale sensor networks. We developed a profile-based software management scheme that consists of a dynamic profile-matching algorithm to identify current hardware and software configurations, an on-demand code generation module, and mechanisms for dynamic network-centric reprogramming of sensor nodes. We exploit the advantages of robot-based reconfiguration and reprogramming methods for efficient and secure software management. The mobile robot system is employed for decision processes and to store the source code repository. The developed methods are depicted in detail. Additionally, we demonstrate the applicability and advantages based on a scenario that we implemented in our lab.
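    The dynamic profile-matching step, comparing a node's current hardware/software configuration against the profile an update requires, can be illustrated with a minimal sketch. The attribute names and the exact-match rule are assumptions for illustration, not the authors' scheme:

```python
def profile_matches(node_profile, required):
    """True if the node's hardware/software profile satisfies every
    attribute that the requested software image needs."""
    return all(node_profile.get(k) == v for k, v in required.items())

def nodes_to_reprogram(nodes, required):
    """Select the nodes whose current profile does NOT match, i.e. the
    candidates for on-demand code generation and reprogramming."""
    return [nid for nid, prof in nodes.items()
            if not profile_matches(prof, required)]
```

    In the paper's setting the non-matching set would then drive on-demand code generation, with the mobile robot holding the source repository.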

  19. Matching Index-of-Refraction for 3D Printing Model Using Mixture of Herb Essential Oil and Light Mineral Oil

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2013-10-15

    This study has extensively investigated emerging 3-D printing technologies for use with MIR-based flow-field visualization methods such as PIV and LDV. As a result, a mixture of herb essential oil and light mineral oil was found to be an excellent working fluid due to its adequate properties. Using this combination, refractive indices between 1.45 and 1.55 can be accurately matched, a range that covers most transparent materials. The proposed MIR method is therefore expected to provide great flexibility in model materials and geometries for laser-based optical measurements. Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV) are the two major optical technologies used for flow-field visualization in recent fundamental thermal-hydraulics research. These techniques require minimal optical distortion to yield high-quality data; therefore, matching the index of refraction (MIR) between model materials and working fluids is essential to minimizing measurement uncertainty. This paper proposes using 3-D printing technology to manufacture models for MIR-based optical measurements. Because of the large flexibility in the geometries and materials of 3-D printing, its application is expected to provide substantial advantages over traditional MIR-based optical measurements. This study focuses on 3-D printed models and investigates their optical properties, transparent printing techniques, and index-matching fluids.
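    Assuming a simple volume-weighted (linear) mixing rule for the refractive index of a two-liquid blend, which is only a first-order approximation and not the calibration the authors performed, the required mixing fraction can be computed as:

```python
def mixture_fraction(n_target, n_a, n_b):
    """Volume fraction of component A needed so a two-liquid mixture
    reaches n_target under an assumed linear mixing rule:
        n_mix = f * n_a + (1 - f) * n_b
    Raises if the target index is outside the achievable range."""
    if not min(n_a, n_b) <= n_target <= max(n_a, n_b):
        raise ValueError("target index outside achievable range")
    return (n_target - n_b) / (n_a - n_b)

# Illustrative indices: match a printed model of n = 1.49 with a blend
# of a higher-index oil (1.55) and a lower-index oil (1.45).
f = mixture_fraction(1.49, 1.55, 1.45)
```

    In practice the blend ratio would be fine-tuned experimentally (e.g., by minimizing distortion of a test pattern seen through the model), since real mixtures deviate from linearity.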

  20. On a two-dimensional mode-matching technique for sound generation and transmission in axial-flow outlet guide vanes

    Science.gov (United States)

    Bouley, Simon; François, Benjamin; Roger, Michel; Posson, Hélène; Moreau, Stéphane

    2017-09-01

    The present work deals with the analytical modeling of two aspects of outlet guide vane aeroacoustics in axial-flow fan and compressor rotor-stator stages. The first addressed mechanism is the downstream transmission of rotor noise through the outlet guide vanes, the second one is the sound generation by the impingement of the rotor wakes on the vanes. The elementary prescribed excitation of the stator is an acoustic wave in the first case and a hydrodynamic gust in the second case. The solution for the response of the stator is derived using the same unified approach in both cases, within the scope of a linearized and compressible inviscid theory. It is provided by a mode-matching technique: modal expressions are written in the various sub-domains upstream and downstream of the stator as well as inside the inter-vane channels, and matched according to the conservation laws of fluid dynamics. This quite simple approach is uniformly valid in the whole range of subsonic Mach numbers and frequencies. It is presented for a two-dimensional rectilinear-cascade of zero-staggered flat-plate vanes and completed by the implementation of a Kutta condition. It is then validated in sound generation and transmission test cases by comparing with a previously reported model based on the Wiener-Hopf technique and with reference numerical simulations. Finally it is used to analyze the tonal rotor-stator interaction noise in a typical low-speed fan architecture. The interest of the mode-matching technique is that it could be easily transposed to a three-dimensional annular cascade in cylindrical coordinates in a future work. This makes it an attractive alternative to the classical strip-theory approach.

  1. Refractive-index-matched hydrogel materials for modeling flow-structure interactions

    CERN Document Server

    Byron, Margaret L

    2012-01-01

    In imaging-based studies of flow around solid objects, it is useful to have materials that are refractive-index-matched to the surrounding fluid. However, materials currently in use are usually rigid and matched to liquids that are either expensive or highly viscous. This does not allow for measurements at high Reynolds number, nor accurate modeling of flexible structures. This work explores the use of two hydrogels (agarose and polyacrylamide) as refractive-index-matched models in water. These hydrogels are inexpensive, can be cast into desired shapes, and have flexibility that can be tuned to match biological materials. The use of water as the fluid phase allows this method to be implemented immediately in many experimental facilities and permits investigation of high Reynolds number phenomena. We explain fabrication methods and present a summary of the physical and optical properties of both gels, and then show measurements demonstrating the use of hydrogel models in quantitative imaging.

  2. Production Efficiency and Market Orientation in Food Crops in North West Ethiopia: Application of Matching Technique for Impact Assessment.

    Directory of Open Access Journals (Sweden)

    Habtamu Yesigat Ayenew

    Full Text Available Agricultural technologies developed by national and international research institutions were not benefiting the rural population of Ethiopia to the extent desired. In response, integrated agricultural extension approaches were proposed as a key strategy to transform the smallholder farming sector. The Improving Productivity and Market Success (IPMS) of Ethiopian Farmers project is one of the development projects initiated by integrating productivity-enhancing technological schemes with a market development model. This paper explores the impact of the project intervention on smallholder farmers' wellbeing. To test the research hypothesis of whether the project brought a significant change in the input use, marketed surplus, efficiency and income of farm households, we use cross-section data from 200 smallholder farmers in Northwest Ethiopia, collected through a multi-stage sampling procedure. To control for self-selection on observable characteristics of the farm households, we employ Propensity Score Matching (PSM). We then use Data Envelopment Analysis (DEA) techniques to estimate the technical efficiency of farm households. The outcome of the research is in line with the premise that participation in the IPMS project improves purchased input use, marketed surplus, farm efficiency and the overall gain from farming. The participant households on average employ more purchased agricultural inputs and gain a higher gross margin from production activities than the non-participant households. The non-participant households on average supply less output (measured both in monetary terms and as a proportion of total produce) to the market than their participant counterparts. Except for the technical efficiency of potato production, project-participant households are better off in production efficiency than their non-participant counterparts. We verified the idea that Improving Productivity and Market

  3. Production Efficiency and Market Orientation in Food Crops in North West Ethiopia: Application of Matching Technique for Impact Assessment

    Science.gov (United States)

    Ayenew, Habtamu Yesigat

    2016-01-01

    Introduction Agricultural technologies developed by national and international research institutions were not benefiting the rural population of Ethiopia to the extent desired. As a response, integrated agricultural extension approaches are proposed as a key strategy to transform the smallholder farming sector. Improving Productivity and Market Success (IPMS) of Ethiopian Farmers project is one of the development projects initiated by integrating productivity enhancement technological schemes with market development model. This paper explores the impact of the project intervention in the smallholder farmers’ wellbeing. Methods To test the research hypothesis of whether the project brought a significant change in the input use, marketed surplus, efficiency and income of farm households, we use a cross-section data from 200 smallholder farmers in Northwest Ethiopia, collected through multi-stage sampling procedure. To control for self-selection from observable characteristics of the farm households, we employ Propensity Score Matching (PSM). We finally use Data Envelopment Analysis (DEA) techniques to estimate technical efficiency of farm households. Results The outcome of the research is in line with the premises that the participation of the household in the IPMS project improves purchased input use, marketed surplus, efficiency of farms and the overall gain from farming. The participant households on average employ more purchased agricultural inputs and gain higher gross margin from the production activities as compared to the non-participant households. The non-participant households on average supply less output (measured both in monetary terms and proportion of total produce) to the market as compared to their participant counterparts. Except for the technical efficiency of production in potato, project participant households are better-off in production efficiency compared with the non-participant counterparts. Conclusion We verified the idea that Improving
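    The PSM step above can be illustrated with a minimal nearest-neighbor sketch: given propensity scores already fitted (e.g., by a logistic regression of participation on household characteristics), each participant is matched to the closest-scoring non-participant, and the average outcome difference estimates the average treatment effect on the treated (ATT). The data and function name below are illustrative:

```python
def nearest_neighbor_att(treated, control):
    """ATT via 1-nearest-neighbor propensity score matching with
    replacement. Each unit is a (propensity_score, outcome) pair;
    the scores are assumed to come from a previously fitted model."""
    effects = []
    for score, outcome in treated:
        # Find the control unit with the closest propensity score.
        _, matched_outcome = min(control, key=lambda c: abs(c[0] - score))
        effects.append(outcome - matched_outcome)
    return sum(effects) / len(effects)
```

    A full application would also enforce common support (dropping treated units with no nearby control) and check covariate balance after matching.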

  4. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  5. A cascaded charge-sharing technique for an EDP-efficient match-line design in CAMs

    Institute of Scientific and Technical Information of China (English)

    Zhang Jianwei; Ye Yizheng; Liu Binda; Lan Jinbao

    2009-01-01

    A novel cascaded charge-sharing technique is presented for content-addressable memories (CAMs); it not only effectively reduces match-line (ML) power by using a pre-select circuit, but also achieves a high search speed. Pre-layout simulation results show a 75.9% energy-delay-product (EDP) reduction for the MLs over the traditional precharge-high ML scheme and 41.3% over the segmented ML method. Based on this technique, a test chip of a 64-word × 144-bit ternary CAM (TCAM) was implemented in a 0.18-μm 1.8-V CMOS process, achieving a 1.0 ns search delay and 4.81 fJ/bit/search for the MLs.

  6. Modelling of a novel high-impedance matching layer for high frequency (>30 MHz) ultrasonic transducers.

    Science.gov (United States)

    Qian, Y; Harris, N R

    2014-02-01

    This work describes a new approach to impedance matching for ultrasonic transducers. A single matching layer with a high acoustic impedance of 16 MRayl is demonstrated to give a bandwidth of around 70%, compared with around 50% for conventional single-matching-layer designs. Although this improvement in bandwidth comes with a loss in sensitivity, the loss is found to be similar to that of an equivalent double-matching-layer design. Designs are calculated using the KLM model and then verified by FEA simulation, with very good agreement. Considering the fabrication difficulties encountered in creating a high-frequency double-matched design, due to the requirement for materials with specific acoustic impedances, the need to accurately control layer thickness, and the relatively narrow bandwidths available from conventional single-matched designs, the new approach shows clear advantages: alternative (and perhaps more practical) materials become available, and it offers a bandwidth close to that of a double-layer design with the simplicity of a single-layer design. The disadvantage is a trade-off in sensitivity. A typical example of a piezoceramic transducer matched to water gives a 70% fractional bandwidth (comparable to 72% for an ideal double-matched design) with a 3 dB penalty in insertion loss.
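    The trade-off discussed above can be seen from the standard quarter-wave-layer result used in KLM-style calculations: at the center frequency, a lossless quarter-wave layer of impedance Z_m transforms a load Z_load into Z_m²/Z_load. A sketch, with impedance values in MRayl chosen for illustration:

```python
def quarter_wave_transmission(z_source, z_layer, z_load):
    """Power transmission at the center frequency through a single
    lossless quarter-wave matching layer at normal incidence: the
    layer transforms the load to z_layer**2 / z_load, and T is one
    minus the reflected power at that transformed interface."""
    z_in = z_layer ** 2 / z_load
    r = (z_in - z_source) / (z_in + z_source)
    return 1.0 - r * r

# Illustrative values: piezoceramic (~34 MRayl) into water (1.5 MRayl).
t_classic = quarter_wave_transmission(34.0, (34.0 * 1.5) ** 0.5, 1.5)
t_high_z = quarter_wave_transmission(34.0, 16.0, 1.5)
```

    The classic geometric-mean layer gives unity transmission at the center frequency but a narrow band; the high-impedance (16 MRayl) layer sacrifices center-frequency transmission, which is the sensitivity penalty, in exchange for the broader bandwidth reported in the paper.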

  7. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing.

    Science.gov (United States)

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-06-22

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image is achievable using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  8. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image is achievable using the proposed registration approach as an alternative to a labor-intensive manual registration process.
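    The plain geometric hashing that CGH builds on can be sketched as follows: for every ordered basis pair of model points, the similarity-invariant coordinates of all other points are stored, and a scene then votes for the model basis that explains it best. This is a generic 2D sketch, not the paper's edged-corner, context-based variant:

```python
def basis_coords(a, b, p):
    """Coordinates of p in the frame with origin a and unit x-axis
    along (b - a); invariant to translation, rotation and scale."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    d2 = ax * ax + ay * ay
    px, py = p[0] - a[0], p[1] - a[1]
    return ((px * ax + py * ay) / d2, (py * ax - px * ay) / d2)

def geometric_hash(points, q=0.25):
    """For every ordered basis pair (i, j), store the quantised
    invariant coordinates of all remaining points."""
    return {(i, j): {(round(u / q), round(v / q))
                     for k, p in enumerate(points) if k not in (i, j)
                     for u, v in [basis_coords(a, b, p)]}
            for i, a in enumerate(points)
            for j, b in enumerate(points) if i != j}

def best_match(model_table, scene_points, q=0.25):
    """Return the model basis pair with the most votes over all
    choices of scene basis, plus its vote count."""
    best, best_votes = None, -1
    for i, a in enumerate(scene_points):
        for j, b in enumerate(scene_points):
            if i == j:
                continue
            keys = {(round(u / q), round(v / q))
                    for k, p in enumerate(scene_points) if k not in (i, j)
                    for u, v in [basis_coords(a, b, p)]}
            for basis, entries in model_table.items():
                votes = len(keys & entries)
                if votes > best_votes:
                    best_votes, best = votes, basis
    return best, best_votes
```

    The "context-based" refinement in the paper replaces raw point coordinates with edged-corner features and roof-level contextual relations, which prunes spurious votes.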

  9. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  10. Modeling the growth of fingerprints improves matching for adolescents

    CERN Document Server

    Gottschlich, Carsten; Lorenz, Robert; Bernhardt, Stefanie; Hantschel, Michael; Munk, Axel

    2010-01-01

    We study the effect of growth on the fingerprints of adolescents, based on which we suggest a simple method to adjust for growth when trying to recover a juvenile's fingerprint in a database years later. Based on longitudinal data sets in juveniles' criminal records, we show that growth essentially leads to an isotropic rescaling, so that we can use the strong correlation between growth in stature and limbs to model the growth of fingerprints proportional to stature growth as documented in growth charts. The proposed rescaling leads to a 72% reduction of the distances between corresponding minutiae for the data set analyzed. These findings were corroborated by several verification tests. In an identification test on a database containing 3.25 million right index fingers at the Federal Criminal Police Office of Germany, the identification error rate of 20.8% was reduced to 2.1% by rescaling. The presented method is of striking simplicity and can easily be integrated into existing automated fingerprint identifi...
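    The proposed growth adjustment amounts to an isotropic rescaling of minutiae coordinates by the stature ratio taken from growth charts. A minimal sketch follows; rescaling about the minutiae centroid is an assumption of this sketch, and minutiae angles are left unchanged because the scaling is isotropic:

```python
def rescale_minutiae(minutiae, stature_then, stature_now):
    """Isotropically rescale juvenile minutiae (x, y, angle) triples
    by the stature growth ratio, about the minutiae centroid."""
    s = stature_now / stature_then
    cx = sum(x for x, y, theta in minutiae) / len(minutiae)
    cy = sum(y for x, y, theta in minutiae) / len(minutiae)
    return [(cx + s * (x - cx), cy + s * (y - cy), theta)
            for x, y, theta in minutiae]
```

    In an identification pipeline this rescaled template, rather than the raw juvenile print, would be fed to the standard matcher, which is what produced the reported drop in error rate.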

  11. Fabrication of polystyrene hollow microspheres as laser fusion targets by optimized density-matched emulsion technique and characterization

    Indian Academy of Sciences (India)

    K K Mishra; R K Khardekar; Rashmi Singh; H C Pant

    2002-07-01

    Inertial confinement fusion, frequently referred to as ICF, inertial fusion, or laser fusion, is a means of producing energy by imploding small hollow microspheres containing thermonuclear fusion fuel. Polymer microspheres, which are used as fuel containers, can be produced by a solution-based micro-encapsulation technique better known as the density-matched emulsion technique. The specifications of these microspheres are very rigorous, and various aspects of the emulsion hydrodynamics associated with their production are important in controlling the final product. This paper describes the optimization of various parameters of the density-matched emulsion method in order to improve the surface smoothness, wall-thickness uniformity and sphericity of hollow polymer microspheres. These polymer microshells have been successfully fabricated in our lab, with 3–30 μm wall thickness and 50–1600 μm diameters. The sphericity and wall-thickness uniformity are better than 99%. Elimination of vacuoles and a high yield rate have been achieved by adopting step-wise heating of the W1/O/W2 emulsion for solvent removal.

  12. Minutia Cylinder-Code: a new representation and matching technique for fingerprint recognition.

    Science.gov (United States)

    Cappelli, Raffaele; Ferrara, Matteo; Maltoni, Davide

    2010-12-01

    In this paper, we introduce the Minutia Cylinder-Code (MCC): a novel representation based on 3D data structures (called cylinders), built from minutiae distances and angles. The cylinders can be created starting from a subset of the mandatory features (minutiae position and direction) defined by standards like ISO/IEC 19794-2 (2005). Thanks to the cylinder invariance, fixed-length, and bit-oriented coding, some simple but very effective metrics can be defined to compute local similarities and to consolidate them into a global score. Extensive experiments over FVC2006 databases prove the superiority of MCC with respect to three well-known techniques and demonstrate the feasibility of obtaining a very effective (and interoperable) fingerprint recognition implementation for light architectures.
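    The flavor of MCC's bit-oriented local similarity can be conveyed by a normalized Hamming-style measure between two fixed-length cylinder bit codes. This follows the spirit of the MCC metric; the exact formula and cylinder layout from the paper are not reproduced here:

```python
import math

def cylinder_similarity(a, b):
    """Similarity between two equal-length bit codes:
    1 - ||a XOR b|| / (||a|| + ||b||), where ||.|| is the square root
    of the popcount. Identical codes score 1.0."""
    na = math.sqrt(sum(a))
    nb = math.sqrt(sum(b))
    if na + nb == 0:
        return 1.0  # two all-zero codes are trivially identical
    nx = math.sqrt(sum(x != y for x, y in zip(a, b)))
    return 1.0 - nx / (na + nb)
```

    Because the codes are fixed-length and bit-oriented, such local scores can be computed with XOR/popcount instructions and then consolidated into a global score, which is what makes the representation attractive for light architectures.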

  13. Adaptive technique for matching the spectral response in skin lesions' images

    Science.gov (United States)

    Pavlova, P.; Borisova, E.; Pavlova, E.; Avramov, L.

    2015-03-01

    The suggested technique is a subsequent stage of data extraction from diffuse reflectance spectra and images of diseased tissue, with the final aim of skin-cancer diagnostics. Our previous work allowed us to extract patterns for some types of skin cancer as a ratio between spectra obtained from healthy and diseased tissue in the 380–780 nm region. The authenticity of the patterns depends on the tested point within the area of the lesion, so the resulting diagnosis can only be fixed with some probability. In this work, two adaptations are implemented to localize the pixels of the lesion image whose reflectance spectrum corresponds to a pattern. The first adapts the standard to the individual patient; the second translates the white-point basis of the spectrum to the relative white point of the image. Since the reflectance spectra and the image pixels refer to different white points, a correction of the compared colours is needed; this is done using a standard method for chromatic adaptation. The technique follows these steps: (1) calculate the colorimetric XYZ parameters for the initial white point, fixed by the reflectance spectrum of healthy tissue; (2) calculate the XYZ parameters for the destination white point on the basis of an image of non-diseased tissue; (3) transform the XYZ parameters of the test spectrum by the obtained matrix; (4) find the RGB values of the XYZ parameters for the test spectrum according to sRGB. Finally, the pixels of the lesion image whose colour corresponds to the test spectrum and a particular diagnostic pattern are marked with a specific colour.
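    The white-point translation in these steps is a chromatic adaptation; the minimal diagonal (von Kries-style) version applied directly in XYZ can be sketched as follows. Standard methods such as Bradford first transform to a sharpened cone space before scaling; that refinement is omitted here:

```python
def adapt_xyz(xyz, white_src, white_dst):
    """Simplified von Kries chromatic adaptation: scale each XYZ
    channel by the ratio of the destination to the source white
    point, mapping a colour seen under one white point to its
    correlate under the other."""
    return tuple(c * wd / ws
                 for c, ws, wd in zip(xyz, white_src, white_dst))

# A neutral grey under the source white shifts with the new white.
adapted = adapt_xyz((0.5, 0.5, 0.5), (1.0, 1.0, 1.0), (0.9, 1.0, 1.1))
```

    With this in place, the spectrum-derived XYZ values and the image pixels can be compared under a common white point before the sRGB conversion in step (4).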

  14. A matched-pair comparison between bilateral intrafascial and interfascial nerve-sparing techniques in extraperitoneal laparoscopic radical prostatectomy

    Institute of Scientific and Technical Information of China (English)

    Tao Zheng; Xu Zhang; Xin Ma; Hong-Zhao Li; Jiang-Pin Gao; Wei Cai; Jun Dong

    2013-01-01

The aim of this study was to validate the advantages of the intrafascial nerve-sparing technique compared with the interfascial nerve-sparing technique in extraperitoneal laparoscopic radical prostatectomy. From March 2010 to August 2011, 65 patients with localized prostate cancer (PCa) underwent bilateral intrafascial nerve-sparing extraperitoneal laparoscopic radical prostatectomy. These patients were matched in a 1:2 ratio to 130 patients with localized PCa who had undergone bilateral interfascial nerve-sparing extraperitoneal laparoscopic radical prostatectomy between January 2008 and August 2011. Operative data and oncological and functional results of both groups were compared. There was no difference in operative data, pathological stages and overall rates of positive surgical margins between the groups. There were 9 and 13 patients lost to follow-up in the intrafascial group and interfascial group, respectively. The intrafascial technique provided earlier recovery of continence at both 3 and 6 months than the interfascial technique. Equal results in terms of continence were found in both groups at 12 months. Better rates of potency at 6 months and 12 months were found in younger patients (age ≤65 years) and overall patients who had undergone the intrafascial nerve-sparing extraperitoneal laparoscopic radical prostatectomy. Biochemical progression-free survival rates 1 year postoperatively were similar in both groups. Using strict indications, compared with the interfascial nerve-sparing technique, the intrafascial technique provided similar operative outcomes and short-term oncological results, quicker recovery of continence and better potency. The intrafascial nerve-sparing technique is recommended as a preferred approach for young PCa patients who are clinical stages cT1 to cT2a and have normal preoperative potency.

  15. Color Matching for Fiber Blends Based on Stearns-Noechel Model

    Institute of Scientific and Technical Information of China (English)

    LI Rong; SONG Yang; GU Feng

    2006-01-01

Prediction of the formula for matching a given color standard by blending pre-dyed fibers is of considerable importance to the textile industry. This kind of formulation suffers from a lack of computer-aided tools to assist the colorist in finding a good recipe to reproduce a target color. In this article a tristimulus color matching algorithm based on the Stearns-Noechel model is proposed. The algorithm was run to predict recipes for 36 viscose blends. The maximum color difference is 0.97 CIELAB units. It is demonstrated that the algorithm can be used in color matching of fiber blends.
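
The blend-additivity idea behind the Stearns-Noechel model can be sketched as follows. The form of the transform f and the constant M are the commonly quoted ones and should be treated as assumptions, since the abstract does not state them; M is normally fitted per fiber type:

```python
import numpy as np

# Sketch of recipe prediction for pre-dyed fiber blends in the spirit of the
# Stearns-Noechel model: the transformed reflectance of a blend is assumed to
# be the concentration-weighted mean of the transformed component
# reflectances,  f(R_blend) = sum_i x_i f(R_i),  with sum_i x_i = 1.

M_SN = 0.15  # empirical constant (assumed value)

def f_sn(R):
    return (1.0 - R) / (M_SN * (R - 0.01) + 0.01)

def f_sn_inv(v):
    # algebraic inverse of f_sn, used here only to synthesise a test blend
    return (1.0 + 0.01 * v * (M_SN - 1.0)) / (1.0 + v * M_SN)

def predict_recipe(R_components, R_target):
    """Least-squares blend fractions, clipped and renormalised to a recipe."""
    F = np.stack([f_sn(R) for R in R_components], axis=1)  # wavelengths x fibers
    x, *_ = np.linalg.lstsq(F, f_sn(R_target), rcond=None)
    x = np.clip(x, 0.0, None)
    return x / x.sum()

# Toy check: a blend synthesised from known fractions is recovered exactly.
rng = np.random.default_rng(0)
R_comp = rng.uniform(0.1, 0.9, size=(3, 31))    # 3 fibers, 31 wavelengths
x_true = np.array([0.5, 0.3, 0.2])
R_blend = f_sn_inv(x_true @ f_sn(R_comp))       # synthetic target blend
x_hat = predict_recipe(R_comp, R_blend)
```

In practice the recipe would be solved under a colour-difference objective (e.g. CIELAB) rather than plain spectral least squares, but the additive structure is the same.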

  16. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

…upper-secondary school students from university-distant backgrounds during and after their participation in an 18-month-long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students’ meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of role models…

  17. Wages, Training, and Job Turnover in a Search-Matching Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Nielsen, Michael Svarer

    1999-01-01

In this paper we extend a job search-matching model with firm-specific investments in training, developed by Mortensen (1998), to allow for different offer arrival rates in employment and unemployment. The model by Mortensen changes the original wage posting model (Burdett and Mortensen, 1998) in two aspects. First, it provides a link between the wage posting framework and the search-matching framework (e.g. Pissarides, 1990). Second, it improves the correspondence between the theoretical characterization of the endogenously derived earnings density and the empirically observed earnings density. We…

  18. A high-resolution processing technique for improving the energy of weak signal based on matching pursuit

    Directory of Open Access Journals (Sweden)

    Shuyan Wang

    2016-05-01

This paper proposes a new method to improve the resolution of seismic signals and to compensate the energy of weak seismic signals based on matching pursuit. With a dictionary of Morlet wavelets, the matching pursuit algorithm can decompose a seismic trace into a series of wavelets. We abstract complex-trace attributes from analytical expressions to shrink the search range of amplitude, frequency and phase. In addition, considering the level of correlation between constituent wavelets and the average wavelet abstracted from well-seismic calibration, we can obtain the search range of scale, an important adaptive parameter that controls the width of the wavelet in time and its bandwidth in frequency. Hence, the efficiency of selecting proper wavelets is improved by first making a preliminary estimate and then refining a local search range. After removal of noise wavelets, the remaining useful wavelets are first processed with an adaptive spectral whitening technique and then integrated. This approach improves the resolution of the seismic signal and enhances the energy of weak wavelets simultaneously. Application to real seismic data shows that this method has good prospects for practical use.
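
The core matching-pursuit loop the abstract builds on can be sketched as follows. The attribute-guided shrinking of the search range is omitted, and the dictionary and atom parameters are illustrative only:

```python
import numpy as np

# Minimal matching-pursuit sketch: greedily decompose a 1-D trace into
# unit-norm atoms from a small dictionary (real Morlet-like wavelets here).
# At each step the atom best correlated with the residual is selected and
# its contribution is subtracted.

def morlet_atom(n, center, freq, scale):
    t = np.arange(n) - center
    a = np.exp(-0.5 * (t / scale) ** 2) * np.cos(2 * np.pi * freq * t)
    return a / np.linalg.norm(a)

def matching_pursuit(signal, atoms, n_iter=10, tol=1e-8):
    residual = signal.astype(float).copy()
    coeffs = np.zeros(len(atoms))
    D = np.array(atoms)                    # dictionary, rows are unit atoms
    for _ in range(n_iter):
        c = D @ residual                   # correlations with the residual
        k = int(np.argmax(np.abs(c)))      # best-matching atom
        if abs(c[k]) < tol:
            break
        coeffs[k] += c[k]
        residual -= c[k] * D[k]            # peel the atom off
    return coeffs, residual

n = 128
atoms = [morlet_atom(n, c, f, 8.0) for c in (32, 64, 96) for f in (0.05, 0.1)]
signal = 2.0 * atoms[1] + 0.5 * atoms[4]   # synthetic two-wavelet "trace"
coeffs, residual = matching_pursuit(signal, atoms, n_iter=20)
```

For a 2-sparse signal built from nearly non-overlapping atoms, the residual collapses in a few iterations; the paper's contribution is making the atom search efficient and geology-aware, not the loop itself.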

  19. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables.

  20. Modeling and analysis of local comprehensive minutia relation for fingerprint matching.

    Science.gov (United States)

    He, Xiaoguang; Tian, Jie; Li, Liang; He, Yuliang; Yang, Xin

    2007-10-01

    This paper introduces a robust fingerprint matching scheme based on the comprehensive minutia and the binary relation between minutiae. In the method, a fingerprint is represented as a graph, of which the comprehensive minutiae act as the vertex set and the local binary minutia relations provide the edge set. Then, the transformation-invariant and transformation-variant features are extracted from the binary relation. The transformation-invariant features are suitable to estimate the local matching probability, whereas the transformation-variant features are used to model the fingerprint rotation transformation with the adaptive Parzen window. Finally, the fingerprint matching is conducted with the variable bounded box method and iterative strategy. The experiments demonstrate that the proposed scheme is effective and robust in fingerprint alignment and matching.

  1. A 12-bit 40-MS/s SHA-less pipelined ADC using a front-end RC matching technique

    Energy Technology Data Exchange (ETDEWEB)

Fan Mingjun; Ren Junyan; Shu Guanghua; Li Ning; Ye Fan; Xu Jun [State Key Laboratory of ASIC and Systems, Fudan University, Shanghai 201203 (China); Guo Yao, E-mail: 052052003@fudan.edu.cn [The Media Tek Inc., Beijing 100080 (China)

    2011-01-15

A 12-Bit 40-MS/s pipelined analog-to-digital converter (ADC) incorporates a front-end RC constant matching technique and a set of front-end timing with different duty cycles that are beneficial for enhancing linearity in a SHA-less architecture without tedious verification in back-end layout simulation. Employing SHA-less, opamp-sharing and low-power opamps for low dissipation and low cost, designed in 0.13-{mu}m CMOS technology, the prototype digitizes a 10.2-MHz input with 78.2-dB of spurious free dynamic range, 60.5-dB of signal-to-noise-and-distortion ratio, and -75.5-dB of total harmonic distortion (the first 5 harmonics included) while consuming 15.6-mW from a 1.2-V supply. (semiconductor integrated circuits)

  2. A 12-bit 40-MS/s SHA-less pipelined ADC using a front-end RC matching technique*

    Institute of Scientific and Technical Information of China (English)

    Fan Mingjun; Ren Junyan; Shu Guanghua; Guo Yao; Li Ning; Ye Fan; Xu Jun

    2011-01-01

A 12-Bit 40-MS/s pipelined analog-to-digital converter (ADC) incorporates a front-end RC constant matching technique and a set of front-end timing with different duty cycles that are beneficial for enhancing linearity in a SHA-less architecture without tedious verification in back-end layout simulation. Employing SHA-less, opamp-sharing and low-power opamps for low dissipation and low cost, designed in 0.13-μm CMOS technology, the prototype digitizes a 10.2-MHz input with 78.2-dB of spurious free dynamic range, 60.5-dB of signal-to-noise-and-distortion ratio, and -75.5-dB of total harmonic distortion (the first 5 harmonics included) while consuming 15.6-mW from a 1.2-V supply.

  3. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    Science.gov (United States)

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching overpredicted peak and cumulative extension moment (p posture matching or EMG-assisted approaches (p = 0.7987). Posture matching overpredicted cumulative (p posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Multicollinearity in associations between multiple environmental features and body weight and abdominal fat: using matching techniques to assess whether the associations are separable.

    Science.gov (United States)

    Leal, Cinira; Bean, Kathy; Thomas, Frédérique; Chaix, Basile

    2012-06-01

    Because of the strong correlations among neighborhoods' characteristics, it is not clear whether the associations of specific environmental exposures (e.g., densities of physical features and services) with obesity can be disentangled. Using data from the RECORD (Residential Environment and Coronary Heart Disease) Cohort Study (Paris, France, 2007-2008), the authors investigated whether neighborhood characteristics related to the sociodemographic, physical, service-related, and social-interactional environments were associated with body mass index and waist circumference. The authors developed an original neighborhood characteristic-matching technique (analyses within pairs of participants similarly exposed to an environmental variable) to assess whether or not these associations could be disentangled. After adjustment for individual/neighborhood socioeconomic variables, body mass index/waist circumference was negatively associated with characteristics of the physical/service environments reflecting higher densities (e.g., proportion of built surface, densities of shops selling fruits/vegetables, and restaurants). Multiple adjustment models and the neighborhood characteristic-matching technique were unable to identify which of these neighborhood variables were driving the associations because of high correlations between the environmental variables. Overall, beyond the socioeconomic environment, the physical and service environments may be associated with weight status, but it is difficult to disentangle the effects of strongly correlated environmental dimensions, even if they imply different causal mechanisms and interventions.

  5. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given against choosing a wrong mathematical structure for a model.
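
The reaction-invariant decomposition mentioned above can be sketched numerically: any vector in the left null space of the stoichiometric matrix gives a combination of concentrations whose balance contains no kinetic term. A minimal numpy sketch, with a hypothetical example network:

```python
import numpy as np

# Reaction invariants for dc/dt = N r(c) + transport terms, where N is the
# stoichiometric matrix (species x reactions).  Any row vector p with
# p @ N = 0 yields an invariant p @ c that is unaffected by the kinetics,
# which is the orthogonal decomposition the survey describes.

def reaction_invariants(N):
    """Orthonormal basis (as rows) of the left null space of N, via SVD."""
    U, s, Vt = np.linalg.svd(N)
    rank = int(np.sum(s > 1e-10))
    return U[:, rank:].T

# Hypothetical example network: A -> B and B -> C (3 species, 2 reactions)
N = np.array([[-1.0,  0.0],
              [ 1.0, -1.0],
              [ 0.0,  1.0]])
P = reaction_invariants(N)     # one invariant here: the total A + B + C
rates = np.array([0.7, 0.3])   # arbitrary reaction rates r(c)
kinetic = N @ rates            # reaction contribution to dc/dt
```

For this network the single invariant is proportional to (1, 1, 1): total moles are conserved by the reactions, so the invariant's balance contains only transport terms.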

  6. The analysis of applicability of the refractive-index-matching method for flow investigation by LDA method in models of the fire chambers of complex geometry

    Science.gov (United States)

    Rakhmanov, Vitaly V.; Kulikov, Dmitry V.

    2014-08-01

The possibility of using a refractive-index-matching method for flow investigation by the LDA method in models of fire chambers of complex geometry is shown. A technique for flow investigation by the LDA method is developed. This technique can be successfully applied in leading branches of thermal and hydropower engineering whenever flow diagnostics are needed in models of devices with complex geometry.

  7. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
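
A toy version of the frequency-matching loop can be sketched as follows. The real FMM operates on much richer pattern statistics; this only illustrates the "update voxels until pattern frequencies match" idea, on 2x2 binary patterns:

```python
import numpy as np

# Toy frequency matching: greedily flip pixels of a binary image whenever the
# flip brings its 2x2-pattern histogram closer to that of a training image.

def pattern_hist(img):
    """Normalised histogram of the 16 possible 2x2 binary patterns."""
    h = np.zeros(16)
    for i in range(img.shape[0] - 1):
        for j in range(img.shape[1] - 1):
            patch = img[i:i+2, j:j+2].ravel()
            h[int(patch @ [8, 4, 2, 1])] += 1
    return h / h.sum()

def frequency_match(img, train, sweeps=5):
    img = img.copy()
    target = pattern_hist(train)
    for _ in range(sweeps):
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                mis0 = np.abs(pattern_hist(img) - target).sum()
                img[i, j] ^= 1                     # trial flip
                mis1 = np.abs(pattern_hist(img) - target).sum()
                if mis1 >= mis0:
                    img[i, j] ^= 1                 # revert if no improvement
    return img

rng = np.random.default_rng(1)
train = (rng.random((12, 12)) < 0.3).astype(int)   # training image
start = (rng.random((12, 12)) < 0.8).astype(int)   # poor initial image
result = frequency_match(start, train, sweeps=3)
mismatch_before = np.abs(pattern_hist(start) - pattern_hist(train)).sum()
mismatch_after = np.abs(pattern_hist(result) - pattern_hist(train)).sum()
```

Because flips are accepted only when they strictly reduce the histogram mismatch, the loop is monotone by construction; the published method replaces this brute-force greedy scan with a proper sampling scheme.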

  8. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images because they are incapable of dealing with the distortion. In order to solve this problem, a new voting algorithm is proposed based on the spherical model and the D-Nets algorithm. Because the spherical-based keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, and comparison is made among the proposed algorithm, SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  9. Revisiting Hartle's model using perturbed matching theory to second order: amending the change in mass

    CERN Document Server

    Reina, Borja

    2014-01-01

    Hartle's model describes the equilibrium configuration of a rotating isolated compact body in perturbation theory up to second order in General Relativity. The interior of the body is a perfect fluid with a barotropic equation of state, no convective motions and rigid rotation. That interior is matched across its surface to an asymptotically flat vacuum exterior. Perturbations are taken to second order around a static and spherically symmetric background configuration. Apart from the explicit assumptions, the perturbed configuration is constructed upon some implicit premises, in particular the continuity of the functions describing the perturbation in terms of some background radial coordinate. In this work we revisit the model within a modern general and consistent theory of perturbative matchings to second order, which is independent of the coordinates and gauges used to describe the two regions to be joined. We explore the matching conditions up to second order in full. The main particular result we presen...

  10. Infrared fixed point of the 12-fermion SU(3) gauge model based on 2-lattice MCRG matching

    CERN Document Server

    Hasenfratz, Anna

    2011-01-01

    I investigate an SU(3) gauge model with 12 fundamental fermions. The physically interesting region of this strongly coupled system can be influenced by an ultraviolet fixed point due to lattice artifacts. I suggest to use a gauge action with an additional negative adjoint plaquette term that lessens this problem. I also introduce a new analysis method for the 2-lattice matching Monte Carlo renormalization group technique that significantly reduces finite volume effects. The combination of these two improvements allows me to measure the bare step scaling function in a region of the gauge coupling where it is clearly negative, indicating a positive renormalization group $\\beta$ function and infrared conformality.

  11. An Evaluation of Latent Growth Models for Propensity Score Matched Groups

    Science.gov (United States)

    Leite, Walter L.; Sandbach, Robert; Jin, Rong; MacInnes, Jann W.; Jackman, M. Grace-Anne

    2012-01-01

    Because random assignment is not possible in observational studies, estimates of treatment effects might be biased due to selection on observable and unobservable variables. To strengthen causal inference in longitudinal observational studies of multiple treatments, we present 4 latent growth models for propensity score matched groups, and…
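
The propensity score matching step that precedes such latent growth models can be sketched as follows. The Newton-Raphson logistic fit and the greedy one-to-one matching with a caliper are standard choices, not the specific procedure used by the authors; all data here are synthetic:

```python
import numpy as np

# Sketch of nearest-neighbour propensity score matching: fit a propensity
# model P(treated | X) with a small numpy-only logistic regression, then pair
# each treated unit with the closest unused control within a caliper.

def fit_logistic(X, y, iters=25):
    """Return fitted propensities via Newton-Raphson logistic regression."""
    Xb = np.hstack([np.ones((len(X), 1)), X])      # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        W = p * (1 - p)
        H = Xb.T @ (Xb * W[:, None]) + 1e-6 * np.eye(Xb.shape[1])
        w += np.linalg.solve(H, Xb.T @ (y - p))    # Newton step
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def match_groups(ps, treated, caliper=0.1):
    """Greedy 1-NN matching of treated units to distinct controls."""
    controls = list(np.flatnonzero(~treated))
    pairs = []
    for t in np.flatnonzero(treated):
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
        if abs(ps[j] - ps[t]) <= caliper:
            pairs.append((t, j))
            controls.remove(j)                     # each control used once
    return pairs

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
treated = rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 0]))  # selection on X0
ps = fit_logistic(X, treated.astype(float))
pairs = match_groups(ps, treated)
```

The matched pairs would then feed the growth models; in applied work one would also check covariate balance after matching rather than relying on the caliper alone.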

  12. Towards an integrated workflow for structural reservoir model updating and history matching

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Peters, E.; Wilschut, F.

    2011-01-01

    A history matching workflow, as typically used for updating of petrophysical reservoir model properties, is modified to include structural parameters including the top reservoir and several fault properties: position, slope, throw and transmissibility. A simple 2D synthetic oil reservoir produced by

  13. The match-mismatch model of emotion processing styles and emotion regulation strategies in fibromyalgia.

    NARCIS (Netherlands)

    Geenen, R.; Ooijen-van der Linden, L. van; Lumley, M.A.; Bijlsma, J.W.J.; Middendorp, H. van

    2012-01-01

OBJECTIVE: Individuals differ in their style of processing emotions (e.g., experiencing affects intensely or being alexithymic) and their strategy of regulating emotions (e.g., expressing or reappraising). A match-mismatch model of emotion processing styles and emotion regulation strategies is proposed

  15. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, emb

  16. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  17. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian K.; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which typically allow only manual exploration of the models as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  18. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  19. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  20. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    Science.gov (United States)

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  1. Can pair-instability supernova models match the observations of superluminous supernovae?

    CERN Document Server

    Kozyreva, Alexandra

    2015-01-01

An increasing number of so-called superluminous supernovae (SLSNe) are being discovered. It is believed that at least some of them with slowly fading light curves originate in stellar explosions induced by the pair instability mechanism. Recent stellar evolution models naturally predict pair instability supernovae (PISNe) from very massive stars at a wide range of metallicities (up to Z=0.006, Yusof et al. 2013). In the scope of this study we analyse whether PISN models can match the observational properties of SLSNe with various light curve shapes. Specifically, we explore the influence of different degrees of macroscopic chemical mixing in PISN explosive products on the resulting observational properties. We artificially apply mixing to the 250 Msun PISN evolutionary model from Kozyreva et al. (2014) and explore its supernova evolution with the one-dimensional radiation hydrodynamics code STELLA. The greatest success in matching SLSN observations is achieved in the case of an extreme macroscopic mixing, where all r...

  2. Output feedback model matching in linear impulsive systems with control feedthrough: a structural approach

    Science.gov (United States)

    Zattoni, Elena

    2017-01-01

    This paper investigates the problem of structural model matching by output feedback in linear impulsive systems with control feedthrough. Namely, given a linear impulsive plant, possibly featuring an algebraic link from the control input to the output, and given a linear impulsive model, the problem consists in finding a linear impulsive regulator that achieves exact matching between the respective forced responses of the linear impulsive plant and of the linear impulsive model, for all the admissible input functions and all the admissible sequences of jump times, by means of a dynamic feedback of the plant output. The problem solvability is characterized by a necessary and sufficient condition. The regulator synthesis is outlined through the proof of sufficiency, which is constructive.

  3. Application of nonlinear color matching model to four-color ink-jet printing

    Institute of Scientific and Technical Information of China (English)

    苏小红; 张田文; 郭茂祖; 王亚东

    2002-01-01

Through a discussion of color-matching technology and its application in the printing industry, the conventional approaches commonly used in color matching, and the difficulties in color matching, a nonlinear color-matching model based on two-step learning is established: a linear model is first found by learning pure-color data, and a nonlinear modification model is then found by learning mixed-color data. Nonlinear multiple regression is used to fit the parameters of the modification model. The nonlinear modification function is discovered by the BACON system by learning the mixture data. Experiment results indicate that nonlinear color conversion by two-step learning can further improve the accuracy when it is used for straightforward conversion from RGB to CMYK. An improved separation model based on the GCR concept is proposed to solve the problem of gray balance, and it can be used for three- to four-color conversion as well. The proposed method has better learning ability and faster printing speed than other historical approaches when applied to four-color ink-jet printing.
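
The two-step learning scheme (a linear model fitted on pure-color data, plus a nonlinear correction learned from mixed-color data) can be sketched as follows. The quadratic correction stands in for the regression terms discovered by BACON, and the synthetic "device" model is hypothetical:

```python
import numpy as np

# Two-step colour-model sketch: step 1 fits a linear map on pure-colour
# samples; step 2 fits a nonlinear (here quadratic) correction to the
# linear model's residual on mixed-colour samples.

def quad_features(rgb):
    r, g, b = rgb.T
    return np.column_stack([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b,
                            np.ones(len(rgb))])

rng = np.random.default_rng(3)
# synthetic "ground truth" device behaviour: linear part + mild nonlinearity
A_true = rng.uniform(0.5, 1.0, size=(3, 3))
def device(rgb):
    return rgb @ A_true.T + 0.1 * rgb**2

rgb_pure = np.eye(3)                        # pure-colour training data
rgb_mix = rng.uniform(0, 1, size=(100, 3))  # mixed-colour training data

# step 1: linear model from pure colours
A = np.linalg.lstsq(rgb_pure, device(rgb_pure), rcond=None)[0].T
# step 2: nonlinear correction of the linear model's residual on mixtures
resid = device(rgb_mix) - rgb_mix @ A.T
C = np.linalg.lstsq(quad_features(rgb_mix), resid, rcond=None)[0]

def predict(rgb):
    return rgb @ A.T + quad_features(rgb) @ C

err_linear = np.abs(device(rgb_mix) - rgb_mix @ A.T).max()
err_two_step = np.abs(device(rgb_mix) - predict(rgb_mix)).max()
```

On this synthetic device the correction removes the residual essentially exactly; on a real press the gain is smaller but the structure of the two-step fit is the same.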

  4. Multiview road sign detection via self-adaptive color model and shape context matching

    Science.gov (United States)

    Liu, Chunsheng; Chang, Faliang; Liu, Chengyun

    2016-09-01

The multiview appearance of road signs in uncontrolled environments has made the detection of road signs a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect the frontal road signs in video sequences and obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines the two-dimensional Gaussian model and the normalized red channel, which greatly enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is utilized to detect road signs in different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method achieves a higher detection rate on signs viewed from different directions.

  5. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
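
    The core loop of additive flux minimization can be illustrated on a toy 1-D diffusion problem: an "experimental" profile is generated with the true transport, the physics model alone underpredicts it, and a scan over the additional diffusivity finds the value giving the best profile match. The profile model, source term, and grid-search optimizer are illustrative stand-ins for FACETS::Core and the DAKOTA toolkit.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
S, n_edge = 1.0, 0.5

def profile(D):
    # Steady 1-D diffusion with uniform source S and edge value n_edge:
    # D n'' = -S, n'(0) = 0, n(1) = n_edge  ->  n(x) = n_edge + S (1 - x^2) / (2 D)
    return n_edge + S * (1.0 - x ** 2) / (2.0 * D)

n_exp = profile(1.0)     # "experimental" profile (true transport)
D_model = 0.6            # transport predicted by the physics model alone

# Additive flux minimization: scan the extra diffusivity and keep the value
# that minimizes the mismatch between computed and experimental profiles.
candidates = np.linspace(0.0, 1.0, 201)
errors = [np.sum((profile(D_model + d) - n_exp) ** 2) for d in candidates]
D_add = candidates[int(np.argmin(errors))]
```

The recovered additional diffusivity (here 0.4) is exactly the transport missing from the physics model, which is the quantity the technique uses to judge whether a model such as the paleoclassical one accounts for the observed profiles.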

  6. Dynamic force matching: Construction of dynamic coarse-grained models with realistic short time dynamics and accurate long time dynamics

    Science.gov (United States)

    Davtyan, Aram; Voth, Gregory A.; Andersen, Hans C.

    2016-12-01

    We recently developed a dynamic force matching technique for converting a coarse-grained (CG) model of a molecular system, with a CG potential energy function, into a dynamic CG model with realistic dynamics [A. Davtyan et al., J. Chem. Phys. 142, 154104 (2015)]. This is done by supplementing the model with additional degrees of freedom, called "fictitious particles." In that paper, we tested the method on CG models in which each molecule is coarse-grained into one CG point particle, with very satisfactory results. When the method was applied to a CG model of methanol that has two CG point particles per molecule, the results were encouraging but clearly required improvement. In this paper, we introduce a new type (called type-3) of fictitious particle that exerts forces on the center of mass of two CG sites. A CG model constructed using type-3 fictitious particles (as well as type-2 particles previously used) gives a much more satisfactory dynamic model for liquid methanol. In particular, we were able to construct a CG model that has the same self-diffusion coefficient and the same rotational relaxation time as an all-atom model of liquid methanol. Type-3 particles and generalizations of it are likely to be useful in converting more complicated CG models into dynamic CG models.

  7. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
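
    A stripped-down version of the EM inlier/outlier estimation can be sketched as follows, with a single pure translation standing in for the paper's non-parametric RKHS transformation; the synthetic data, uniform outlier support, and the large initial variance (mimicking the annealing-like initialization) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic putative matches: inlier displacements follow one coherent motion
# (here a translation) plus noise; outlier displacements are scattered uniformly.
t_true = np.array([2.0, -1.0])
disp = np.vstack([t_true + 0.05 * rng.standard_normal((80, 2)),
                  rng.uniform(-5.0, 5.0, (20, 2))])

# EM for a Gaussian (inlier) + uniform (outlier) mixture over displacements.
t, var, gamma = disp.mean(axis=0), 10.0, 0.5   # variance starts large
area = 100.0                                   # support of the uniform outlier model
for _ in range(50):
    # E-step: posterior probability that each putative match is an inlier.
    r2 = np.sum((disp - t) ** 2, axis=1)
    p_in = gamma * np.exp(-r2 / (2.0 * var)) / (2.0 * np.pi * var)
    w = p_in / (p_in + (1.0 - gamma) / area)
    # M-step: re-estimate the motion, the noise variance, and the mixing weight.
    t = (w[:, None] * disp).sum(axis=0) / w.sum()
    var = max((w * np.sum((disp - t) ** 2, axis=1)).sum() / (2.0 * w.sum()), 1e-6)
    gamma = w.mean()

inlier_count = int((w > 0.5).sum())
```

The posterior weights w play the role of the hidden inlier/outlier variables, and the shrinking variance lets the estimate lock onto the consensus motion while outliers are progressively ignored.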

  8. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
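
    The surrogate-data idea can be illustrated with a deliberately crude degree-day building model standing in for the full simulation program; the model form, parameter values, noise level, and the least-squares "calibration technique" under test are all assumptions made for the sketch, not the paper's actual tools.

```python
import numpy as np

rng = np.random.default_rng(2)

# Surrogate "truth": monthly energy = baseload + UA * heating degree days.
hdd = np.array([600, 500, 400, 200, 80, 10, 0, 0, 60, 250, 450, 580], float)
base_true, ua_true = 300.0, 0.9
bills = base_true + ua_true * hdd + rng.normal(0, 10, 12)  # surrogate utility data

# Candidate calibration technique under test: ordinary least squares on bills.
X = np.column_stack([np.ones_like(hdd), hdd])
(base_cal, ua_cal), *_ = np.linalg.lstsq(X, bills, rcond=None)

# Retrofit scenario: insulation cuts the heat-loss coefficient UA by 30%.
true_savings = 0.3 * ua_true * hdd.sum()
pred_savings = 0.3 * ua_cal * hdd.sum()

# The paper's three figures of merit:
savings_error = abs(pred_savings - true_savings) / true_savings      # 1) savings accuracy
param_error = abs(ua_cal - ua_true) / ua_true                        # 2) closure on true inputs
fit_rmse = np.sqrt(np.mean((X @ np.array([base_cal, ua_cal]) - bills) ** 2))  # 3) goodness of fit
```

Because the surrogate truth is known, the test can score the calibration technique on all three criteria at once, something impossible with real buildings where the true parameters and true savings are unobservable.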

  9. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  10. Surface layer independent model fitting by phase matching: theory and application to HD 49933 and HD 177153 (aka Perky)

    Science.gov (United States)

    Roxburgh, Ian W.

    2015-01-01

    Aims: Our aim is to describe the theory of surface layer independent model fitting by phase matching and to apply this to the stars HD 49933 observed by CoRoT, and HD 177153 (aka Perky) observed by Kepler. Methods: We use theoretical analysis, phase shifts, and model fitting. Results: We define the inner and outer phase shifts of a frequency set of a model star and show that the outer phase shifts are (almost) independent of degree ℓ, and that a function of the inner phase shifts (the phase function) collapses to an ℓ-independent function of frequency in the outer layers. We then show how to use this result in a model fitting technique to find a best-fit model to an observed frequency set by calculating the inner phase shifts of a model using the observed frequencies and determining the extent to which the phase function collapses to a single function of frequency in the outer layers. This technique does not depend on the radial order n assigned to the observed frequencies. We give two examples applying this technique to the frequency sets of HD 49933 observed by CoRoT and HD 177153 (aka Perky) observed by Kepler, for which measurements of angular diameters and bolometric fluxes are available. For HD 49933 we find a very wide range of models to be consistent with the data (all with convective core overshooting), and conclude that the data are not precise enough to place any useful restrictions on the structure of this star. For HD 177153 our best-fit models have no convective cores, masses in the range 1.15-1.17 M⊙, ages of 4.45-4.70 × 10^9 yr, Z in the range 0.021-0.024, XH = 0.71-0.72, Y = 0.256-0.266, and mixing length parameter α = 1.8. We compare our results to those of previous studies. We contrast the phase matching technique to that using the ratios of small to large separations, showing that it avoids the problem of correlated errors in separation ratio fitting and of assigning radial order n to the modes.

  11. Predictions from a model of global psychophysics about differences between perceptual and physical matches.

    Science.gov (United States)

    Steingrimsson, Ragnar; Luce, R Duncan

    2012-11-01

    A well-known phenomenon is that "matched" successive signals do not result in physical identity. This phenomenon has mostly been studied in terms of how much the second of two signals varies from the first, which is called the time-order error (TOE). Here, theoretical predictions led us to study the more general question of how much the matching signal differs from the standard signal, independent of the position of the matching signal as the first or second in a presentation. This we call non-equal matches (NEM). Using Luce's (Psychological Review, 109, 520-532, 2002, Psychological Review, 111, 446-454, 2004, Psychological Review, 115, 601, 2008, Psychological Review, 119, 373-387, 2012) global psychophysical theory, we predicted NEM when an intensity z is perceived to be "1 times a standard signal x." The theory predicts two different types of individual behaviors for the NEM, and these predictions were evaluated and confirmed in an experiment. We showed that the traditional definition of TOE precludes the observation, and thus the study, of the NEM phenomenon, and that the NEM effect is substantial enough to alter conclusions based on data that it affects. Furthermore, we demonstrated that the custom of averaging data over individuals clearly leads to quite misleading results. An important parameter in this modeling is a reference point that plays a central role in creating variability in the data, so that the key to obtaining regular data from respondents is to stabilize the reference point.

  12. Matching Matters!

    CERN Document Server

    Freitas, Ayres; Plehn, Tilman

    2016-01-01

    Effective Lagrangians are a useful tool for a data-driven approach to physics beyond the Standard Model at the LHC. However, for the new physics scales accessible at the LHC, the effective operator expansion is only relatively slowly converging at best. For tree-level processes, it has been found that the agreement between the effective Lagrangian and a range of UV-complete models depends sensitively on the appropriate definition of the matching. We extend this analysis to the one-loop level, which is relevant for electroweak precision data and Higgs decay to photons. We show that near the scale of electroweak symmetry breaking the validity of the effective theory description can be systematically improved through an appropriate matching procedure. In particular, we find a significant increase in accuracy when including suitable terms suppressed by the Higgs vacuum expectation value in the matching.

  13. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures, such as ligament strains, are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  14. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    Directory of Open Access Journals (Sweden)

    Eva Lykkegaard

    2016-04-01

    Full Text Available Previous research has found that young people's prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-month-long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students' meetings with the role models affected their thoughts concerning STEM students and attending university. In real-life role-model meetings, the regular self-to-prototype matching process was shown to be extended to a more complex three-way matching process between students' self-perceptions, prototype images, and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact and the importance of using several role models, and found that traditional school subjects gave rise to more resistant prototype images than unfamiliar ones did.

  15. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    Science.gov (United States)

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations, as well as improved performance of signal processing algorithms using sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large data set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken when directly transferring findings of optimality from technical to biological systems.
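
    The greedy matching pursuit step at the heart of such a coder can be sketched as follows. The decaying-sinusoid atoms below only loosely mimic gammatones, and the dictionary parameters, signal, and atom count are all illustrative; the paper's accelerated variant is not reproduced here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse decomposition: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    code = []
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        code.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]
    return code, residual

# Toy dictionary of gammatone-like atoms: decaying sinusoids, normalized.
t = np.arange(256) / 256.0
atoms = np.array([t ** 3 * np.exp(-20 * t) * np.cos(2 * np.pi * f * t)
                  for f in range(5, 50, 5)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

# A signal built from two dictionary atoms is recovered in two greedy steps.
signal = 2.0 * atoms[1] - 0.5 * atoms[6]
code, residual = matching_pursuit(signal, atoms, n_atoms=2)
```

Each pursuit iteration costs one dictionary-residual product, which is the operation the paper's accelerated implementation optimizes to make real-time coding with gammatone dictionaries feasible.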

  16. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are the most economical and accurate choices for river and floodplain flood analysis, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found that can be improved through a 3D model. The 3D approach was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all of the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the identification of the causes and effects of flooding.

  17. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  18. Using multi-matching system based on a simplified deformable model of the human iris for iris recognition

    Institute of Scientific and Technical Information of China (English)

    MING Xing; XU Tao; WANG Zheng-xuan

    2004-01-01

    A new method for iris recognition using a multi-matching system based on a simplified deformable model of the human iris was proposed. The method defined iris feature points and formed the feature space based on a wavelet transform. The matching stage first operates in a coarse manner; driven by a simplified deformable iris model, the coarse matching is then refined. By means of such a multi-matching system, the task of iris recognition is accomplished. This process can preserve the elastic deformation between an input iris image and a template and improve the precision of iris recognition. The experimental results indicate the validity of this method.

  19. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop the network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which is reached based on IETF reference model and the authors' proposal. Then the proposed information flows for benchmark services are presented with new or updated flows identified. Finally a brief description is given to implementation techniques.

  20. A mode matching method for modeling dissipative silencers lined with poroelastic materials and containing mean flow.

    Science.gov (United States)

    Nennig, Benoit; Perrey-Debain, Emmanuel; Ben Tahar, Mabrouk

    2010-12-01

    A mode matching method for predicting the transmission loss of a cylindrical shaped dissipative silencer partially filled with a poroelastic foam is developed. The model takes into account the solid phase elasticity of the sound-absorbing material, the mounting conditions of the foam, and the presence of a uniform mean flow in the central airway. The novelty of the proposed approach lies in the fact that guided modes of the silencer have a composite nature containing both compressional and shear waves as opposed to classical mode matching methods in which only acoustic pressure waves are present. Results presented demonstrate good agreement with finite element calculations provided a sufficient number of modes are retained. In practice, it is found that the time for computing the transmission loss over a large frequency range takes a few minutes on a personal computer. This makes the present method a reliable tool for tackling dissipative silencers lined with poroelastic materials.

  1. Neural-Based Pattern Matching for Selection of Biophysical Model Meteorological Forcings

    Science.gov (United States)

    Coleman, A. M.; Wigmosta, M. S.; Li, H.; Venteris, E. R.; Skaggs, R. J.

    2011-12-01

    matching method using neural-network based Self-Organizing Maps (SOM) and GIS-based spatial modeling. This method pattern matches long-term mean monthly meteorology at an individual site to a series of CLIGEN stations within a user-defined proximal distance. The time-series data signatures of the selected stations are competed against one another using a SOM-generated similarity metric to determine the closest pattern match to the spatially distributed PRISM meteorology at the site of interest. This method overcomes issues with topographic dispersion of meteorology stations and existence of microclimates where the nearest meteorology station may not be the most representative.

  2. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling, and it allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  3. Fast history matching of time-lapse seismic and production data for high resolution models

    Science.gov (United States)

    Jimenez Arismendi, Eduardo Antonio

    Integrated reservoir modeling has become an important part of day-to-day decision analysis in oil and gas management practices. A very attractive and promising technology is the use of time-lapse or 4D seismic as an essential component in subsurface modeling. Today, 4D seismic is enabling oil companies to optimize production and increase recovery through monitoring fluid movements throughout the reservoir. 4D seismic advances are also being driven by an increased need by the petroleum engineering community to become more quantitative and accurate in our ability to monitor reservoir processes. Qualitative interpretations of time-lapse anomalies are being replaced by quantitative inversions of 4D seismic data to produce accurate maps of fluid saturations, pore pressure, temperature, among others. Within all steps involved in this subsurface modeling process, the most demanding one is integrating the geologic model with dynamic field data, including 4D seismic when available. The validation of the geologic model with observed dynamic data is accomplished through a "history matching" (HM) process typically carried out with well-based measurements. Due to the low resolution of production data, the validation process is severely limited in its reservoir areal coverage, compromising the quality of the model and any subsequent predictive exercise. This research will aim to provide a novel history matching approach that can use information from high-resolution seismic data to supplement the areally sparse production data. The proposed approach will utilize streamline-derived sensitivities as means of relating the forward model performance with the prior geologic model. The essential ideas underlying this approach are similar to those used for high-frequency approximations in seismic wave propagation. In both cases, this leads to solutions that are defined along "streamlines" (fluid flow), or "rays" (seismic wave propagation). Synthetic and field data examples will be used

  4. Frequency-domain Model Matching PID Controller Design for Aero-engine

    Science.gov (United States)

    Liu, Nan; Huang, Jinquan; Lu, Feng

    2014-12-01

    The nonlinear model of an aero-engine was linearized at multiple operation points by using the frequency response method. The validation results indicate high accuracy of the static and dynamic characteristics of the linear models. An improved PID tuning method based on frequency-domain model matching was proposed, with the system stability condition considered. The proposed method was applied to the design of the PID controller for high-pressure rotor speed control in the flight envelope, and the control effects were evaluated on the nonlinear model. Simulation results show that the system had a quick dynamic response with zero overshoot and zero steady-state error. Furthermore, a PID-fuzzy switching control scheme for the aero-engine was designed, and the stability of the fuzzy switching system was proved. Simulation studies were performed to validate the applicability of the multiple-PID fuzzy switching controller for an aero-engine with wide-range dynamics.

  5. Truecluster matching

    CERN Document Server

    Oehlschlägel, Jens

    2007-01-01

    Cluster matching by permuting cluster labels is important in many clustering contexts such as cluster validation and cluster ensemble techniques. The classic approach is to minimize the Euclidean distance between two cluster solutions, which induces inappropriate stability in certain settings. Therefore, we present the truematch algorithm, which introduces two improvements best explained in the crisp case. First, instead of maximizing the trace of the cluster crosstable, we propose to maximize a chi-square transformation of this crosstable. Thus, the trace will not be dominated by the cells with the largest counts but by the cells with the most non-random observations, taking into account the marginals. Second, we suggest a probabilistic component in order to break ties and to make the matching algorithm truly random on random data. The truematch algorithm is designed as a building block of the truecluster framework and scales in polynomial time. First simulation results confirm that the truematch algorithm give...
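
    A minimal sketch of the crosstable idea might look like the following, with signed chi-square residuals in place of raw counts; the probabilistic tie-breaking component is omitted, and brute-force permutation search stands in for a polynomial-time assignment step.

```python
import numpy as np
from itertools import permutations

def truematch_like(labels_a, labels_b, k):
    """Relabel clustering B to match A by maximizing the trace of a
    chi-square-transformed crosstable, so the assignment is driven by the
    most non-random cells rather than simply the largest counts."""
    table = np.zeros((k, k))
    for a, b in zip(labels_a, labels_b):
        table[a, b] += 1
    # Signed chi-square residuals (O - E) / sqrt(E) account for the marginals.
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    resid = (table - expected) / np.sqrt(np.maximum(expected, 1e-12))
    # Exhaustive search is fine for small k; an assignment solver would
    # replace this loop to keep the polynomial-time guarantee.
    best = max(permutations(range(k)),
               key=lambda p: sum(resid[i, p[i]] for i in range(k)))
    inv = {b: a for a, b in enumerate(best)}
    return [inv[b] for b in labels_b]

labels_a = [0, 0, 0, 1, 1, 1, 2, 2, 2]
labels_b = [1, 1, 2, 2, 2, 2, 0, 0, 0]   # same clusters, permuted labels, one error
relabeled = truematch_like(labels_a, labels_b, k=3)
```

After relabeling, B agrees with A everywhere except the one genuinely misassigned point, which is exactly what a label-matching step should preserve.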

  6. mr: A C++ library for the matching and running of the Standard Model parameters

    Science.gov (United States)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
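
    As a much-simplified illustration of renormalization-group running (one loop only, with none of the library's multi-loop matching and running), the strong coupling can be evolved analytically; the reference values below are standard PDG-style inputs used purely for illustration, not mr's interface.

```python
import math

def alpha_s(mu, alpha_mz=0.1181, mz=91.1876, nf=5):
    """One-loop QCD running: d a / d ln(mu^2) = -b0 a^2 with
    b0 = (33 - 2 nf) / (12 pi), solved in closed form."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1.0 + b0 * alpha_mz * math.log(mu ** 2 / mz ** 2))

# The coupling decreases toward higher scales (asymptotic freedom).
a_1tev = alpha_s(1000.0)
```

The full library replaces this closed-form one-loop step with three-loop (four-loop in pure QCD) numerical evolution and performs the scheme matching to observables at low scales that fixes the initial conditions.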

  7. mr: a C++ library for the matching and running of the Standard Model parameters

    CERN Document Server

    Kniehl, Bernd A; Veretin, Oleg L

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the $\\overline{\\mathrm{MS}}$ renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  8. SAR Automatic Target Recognition Based on Numerical Scattering Simulation and Model-based Matching

    Directory of Open Access Journals (Sweden)

    Zhou Yu

    2015-12-01

    Full Text Available This study proposes a model-based Synthetic Aperture Radar (SAR automatic target recognition algorithm. Scattering is computed offline using the laboratory-developed Bidirectional Analytic Ray Tracing software and the same system parameter settings as the Moving and Stationary Target Acquisition and Recognition (MSTAR datasets. SAR images are then created by simulated electromagnetic scattering data. Shape features are extracted from the measured and simulated images, and then, matches are searched. The algorithm is verified using three types of targets from MSTAR data and simulated SAR images, and it is shown that the proposed approach is fast and easy to implement with high accuracy.

  9. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Nsiri Benayad

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, means vectors, and covariance matrices are estimated by the Expectation Maximization algorithm (EM which maximizes the log-likelihood criterion. The similarity between a block in the current image and the more resembling one in a search window on the reference image is measured by the minimization of Extended Mahalanobis distance between the clusters of mixture. Performed experiments on sequences of real images have given good results, and PSNR reached 3 dB.
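
The block-matching step can be sketched as a full search over a window that minimizes a squared Mahalanobis distance between blocks. For simplicity the sketch below uses a fixed inverse covariance (identity by default, i.e. Euclidean distance) rather than the paper's EM-fitted Gaussian mixture clusters and extended Mahalanobis distance:

```python
import numpy as np

def block_match(cur, ref, y, x, bs=8, sw=4, cov_inv=None):
    """Full-search block matching: find the displacement of the block at
    (y, x) in `cur` within a +/-sw window of `ref`, minimizing a squared
    Mahalanobis distance between the flattened blocks."""
    block = cur[y:y + bs, x:x + bs].reshape(-1).astype(float)
    if cov_inv is None:
        cov_inv = np.eye(block.size)  # identity -> Euclidean distance
    best, best_d = (0, 0), np.inf
    for dy in range(-sw, sw + 1):
        for dx in range(-sw, sw + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
                continue
            cand = ref[yy:yy + bs, xx:xx + bs].reshape(-1).astype(float)
            diff = block - cand
            d = diff @ cov_inv @ diff  # squared Mahalanobis distance
            if d < best_d:
                best, best_d = (dy, dx), d
    return best  # motion vector (dy, dx)
```

On a reference image and a shifted copy of it, the search recovers the shift exactly; supplying a non-identity `cov_inv` weights pixel differences as in a Mahalanobis criterion.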

  10. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling: First Interim Report. Prepared for the US Army European Research Office, USARDSG, Edison House. Contains sedimentation analysis sheets, and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station. (The remainder of the record consists of unrecoverable form fields for bank material types and toe/mid-bank protection status.)

  11. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in Medical Visualization allow the analysis of anatomical structures with the use of 3D models reconstructed from a stack of intensity-based images acquired through different techniques, with the Computerized Tomography (CT) modality being one of the most common. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process where the anatomical structures of interest are first identified. ...

  12. Is There a Purchase Limit on Regional Growth? A Quasi-experimental Evaluation of Investment Grants Using Matching Techniques

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel; Paloyo, Alfredo R.; Alecke, Björn

    2015-01-01

    growth associated with a maximum subsidy level beyond which financial support does not generate further labor-productivity growth. In other words, there is a “purchase limit” on regional growth. Although the matching approach is very appealing due to its methodological rigor and didactical clarity...

  13. On improving analytical models of cosmic reionization for matching numerical simulation

    CERN Document Server

    Kaurov, Alexander A

    2015-01-01

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large scale statistical properties. These mock catalogs are...

  14. On the transferability of three water models developed by adaptive force matching

    CERN Document Server

    Hu, Hongyi; Wang, Feng

    2015-01-01

    Water is perhaps the most simulated liquid. Recently three water models have been developed following the adaptive force matching (AFM) method that provides excellent predictions of water properties with only electronic structure information as a reference. Compared to many other electronic-structure-based force fields that rely on fairly sophisticated energy expressions, the AFM water models use point-charge based energy expressions that are supported by most popular molecular dynamics packages. An outstanding question regarding simple force fields is whether such force fields provide reasonable transferability outside of their conditions of parameterization. A survey of three AFM water models, B3LYPD-4F, BLYPSP-4F, and WAIL, is provided for simulations under conditions ranging from the melting point up to the critical point. By including ice-Ih configurations in the training set, the WAIL potential predicts the melting temperature, TM, of ice-Ih correctly. Without training for ice, BLYPSP-4F underestimates TM...

  15. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
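
The role of the autoregressive error model can be seen in a small sketch: if residuals follow an AR(1) process, the likelihood is evaluated on the whitened innovations rather than on the raw (correlated) residuals. A minimal conditional log-likelihood assuming AR(1) errors (the paper's hierarchical model samples such error parameters within the full trans-dimensional inversion):

```python
import numpy as np

def ar1_loglik(residuals, sigma, a):
    """Conditional log-likelihood of residuals under an AR(1) error model:
        e_t = a * e_{t-1} + w_t,  w_t ~ N(0, sigma^2),
    conditioning on the first residual (its stationary term is omitted).
    """
    e = np.asarray(residuals, dtype=float)
    innov = e[1:] - a * e[:-1]  # whitened innovations
    n = innov.size
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(innov**2) / sigma**2)
```

Evaluated on residuals that really are autocorrelated, this likelihood peaks near the true AR coefficient, whereas an independent-error model (a = 0) is penalized.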

  16. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Directory of Open Access Journals (Sweden)

    Kelly W Jones

    Full Text Available Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. 
The Ecuador case

  17. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. 
The Ecuador case illustrates that
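
The estimator comparison above can be reproduced on synthetic data: when treatment selection depends on a time-invariant unobservable, matching on the observed covariate remains biased, while a first-difference (fixed effects) estimator recovers the true effect. A hypothetical two-period example (all variable names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                       # observed covariate
u = rng.normal(size=n)                       # time-invariant unobservable
treated = (x + u + rng.normal(size=n)) > 0   # selection on x AND u
effect = 2.0
y0 = x + u + rng.normal(scale=0.1, size=n)                     # pre-period
y1 = x + u + effect * treated + rng.normal(scale=0.1, size=n)  # post-period

# (1) Nearest-neighbor matching on x, then difference in post-period means.
# Biased here, because selection also depends on the unobservable u.
controls = np.where(~treated)[0]
matches = [controls[np.argmin(np.abs(x[controls] - x[i]))]
           for i in np.where(treated)[0]]
att_matching = (y1[treated] - y1[matches]).mean()

# (2) Fixed effects via first differences: u cancels out of (y1 - y0),
# so the estimator is consistent despite the unobserved confounder.
dy = y1 - y0
att_fe = dy[treated].mean() - dy[~treated].mean()

print(f"matching: {att_matching:.2f}, fixed effects: {att_fe:.2f}")
```

With these settings the matching estimate is biased upward while the first-difference estimate sits close to the true effect of 2.0, mirroring the Russia result where estimator choice changed the conclusion.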

  18. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation have led to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and material of a large group of plants is a heavy burden even for high-performance computers, and they often overwhelm the numerical calculation power and graphic rendering power. Thus, in order to accelerate the rendering speed of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and inter-nodes. Our approach is a simplification process of all sparse organs at the same time, i.e., Level of Detail (LOD) and multi-resolution models for plants. We explain here the principle and construction of plant simplification. They are used to construct LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally we prove the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.

  19. Matching theory

    CERN Document Server

    Plummer, MD

    1986-01-01

    This study of matching theory deals with bipartite matching, network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.

  20. Pattern matching

    NARCIS (Netherlands)

    A. Hak (Tony); J. Dul (Jan)

    2009-01-01

    textabstractPattern matching is comparing two patterns in order to determine whether they match (i.e., that they are the same) or do not match (i.e., that they differ). Pattern matching is the core procedure of theory-testing with cases. Testing consists of matching an “observed pattern” (a pattern

  1. IMC-PID design based on model matching approach and closed-loop shaping.

    Science.gov (United States)

    Jin, Qi B; Liu, Q

    2014-03-01

    Motivated by the limitations of the conventional internal model control (IMC), this communication addresses the design of IMC-based PID in terms of the robust performance of the control system. The IMC controller form is obtained by solving an H-infinity problem based on the model matching approach, and the parameters are determined by closed-loop shaping. The shaping of the closed-loop transfer function is considered both for the set-point tracking and for the load disturbance rejection. The design procedure is formulated as a multi-objective optimization problem which is solved by a specific optimization algorithm. A nice feature of this design method is that it permits a clear tradeoff between robustness and performance. Simulation examples show that the proposed method is effective and has a wide applicability.
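
As a concrete instance of IMC-derived PID design, the sketch below implements Skogestad's SIMC rules for a first-order-plus-dead-time model (a well-known IMC-based tuning, not the paper's H-infinity model-matching procedure); the closed-loop time constant tau_c is the single robustness/performance tuning knob:

```python
def simc_pi(K, tau, theta, tau_c=None):
    """SIMC (IMC-derived) PI tuning for a FOPDT model
        G(s) = K * exp(-theta*s) / (tau*s + 1).

    tau_c is the desired closed-loop time constant; the default tau_c =
    theta is Skogestad's "tight control" recommendation. Returns the
    controller gain Kc and integral time tau_i."""
    if tau_c is None:
        tau_c = theta
    Kc = tau / (K * (tau_c + theta))
    tau_i = min(tau, 4.0 * (tau_c + theta))
    return Kc, tau_i
```

For example, a plant with K = 1, tau = 10, theta = 1 and the default tau_c gives Kc = 5 and tau_i = 8; increasing tau_c detunes the loop for more robustness, the same tradeoff the paper exposes through closed-loop shaping.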

  2. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    Science.gov (United States)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and orientation parameters of cameras. However, precise orientation parameters of light amateur cameras are not always available due to the cost and weight of precision GPS and IMU. To automate data updating, the correspondence of object vector data and image may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. Then, the fourth part utilized line features in orientation modeling. The line-based orientation modeling was performed by the integration of line parametric equations into collinearity condition equations. The experiment data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.

  3. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained auto

  4. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.

  5. The influence of geological data on the reservoir modelling and history matching process

    NARCIS (Netherlands)

    De Jager, G.

    2012-01-01

    For efficient production of hydrocarbons from subsurface reservoirs it is important to understand the spatial properties of the reservoir. As there is almost always too little information on the reservoir to build a representative model directly, other techniques have been developed for generating r

  6. Hard Copy to Digital Transfer: 3D Models that Match 2D Maps

    Science.gov (United States)

    Kellie, Andrew C.

    2011-01-01

    This research describes technical drawing techniques applied in a project involving digitizing of existing hard copy subsurface mapping for the preparation of three dimensional graphic and mathematical models. The intent of this research was to identify work flows that would support the project, ensure the accuracy of the digital data obtained,…

  8. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Jianbing, E-mail: yijianbing8@163.com [College of Information Engineering, Shenzhen University, Shenzhen, Guangdong 518000, China and College of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, Jiangxi 341000 (China); Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn [College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518000 (China); Chen, Guoliang, E-mail: glchen@szu.edu.cn [National High Performance Computing Center at Shenzhen, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518000 (China)

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors’ method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  9. One-Match and All-Match Categories for Keywords Matching in Chatbot

    Directory of Open Access Journals (Sweden)

    Abbas S. Lokman

    2010-01-01

    Full Text Available Problem statement: An artificial intelligence chatbot is a technology that makes interactions between man and machine using natural language possible. From the literature on chatbot keyword/pattern-matching techniques, potential issues for improvement were discovered. The discovered issues concern keyword arrangement for matching precedence and keyword variety for matching flexibility. Approach: Combining previous techniques/mechanisms with some additional adjustments, a new technique for the keyword-matching process is proposed. Using a newly developed chatbot named ViDi (abbreviation for Virtual Diabetes physician), a chatbot for diabetes education, as a testing medium, the proposed technique named One-Match and All-Match Categories (OMAMC) is used to test the creation of possible keywords surrounding one sample input sentence. The possible keywords created by this technique are then compared to the possible keywords created by previous chatbot techniques surrounding the same sample sentence, in the context of matching precedence and matching flexibility. Results: The OMAMC technique is found to improve on previous matching techniques in the matching precedence and flexibility context. This improvement is seen to be useful for shortening matching time and widening matching flexibility within the chatbot keyword-matching process. Conclusion: OMAMC for keyword matching in chatbots is shown to be an improvement over previous techniques in the context of keyword arrangement for matching precedence and keyword variety for matching flexibility.
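
The two categories can be sketched as follows: "all-match" rules, where every keyword must appear in the input, take precedence over "one-match" rules, where any single keyword suffices. This is an illustrative reconstruction of the idea, not ViDi's actual rule base:

```python
def match_response(sentence, rules):
    """Keyword matching with two categories (a sketch inspired by OMAMC,
    not the exact published algorithm)."""
    words = set(sentence.lower().split())
    # All-match rules are checked first: they are more specific, so they
    # take precedence over one-match rules.
    for keywords, response in rules.get("all", []):
        if all(k in words for k in keywords):
            return response
    for keywords, response in rules.get("one", []):
        if any(k in words for k in keywords):
            return response
    return None

# Hypothetical diabetes-education rules, in the spirit of ViDi.
rules = {
    "all": [(["insulin", "dose"], "Insulin dosing depends on ...")],
    "one": [(["insulin"], "Insulin is a hormone that ...")],
}
```

An input containing both "insulin" and "dose" triggers the specific all-match response, while an input containing only "insulin" falls through to the broader one-match response.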

  10. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  11. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated from combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters.  Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.
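
The combination step rests on weighted least squares: heterogeneous observations (geometric, dynamical, gravimetric) enter one set of normal equations with weights reflecting their accuracies. A minimal sketch of the normal-equation solution:

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min ||W^(1/2) (A x - b)||^2 via the normal equations
        (A^T W A) x = A^T W b,
    where w holds per-observation weights (e.g. inverse variances).
    This is the statistical core of combining observation types of
    different accuracy into one parameter solution."""
    W = np.diag(np.asarray(w, dtype=float))
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

In practice the full problem is first reduced (eliminating satellite and instrumental parameters from the normal equations) before solving for the geodetic parameters; the sketch shows only the final solve.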

  12. Sulphur simulations for East Asia using the MATCH model with meteorological data from ECMWF

    Energy Technology Data Exchange (ETDEWEB)

    Engardt, Magnuz

    2000-03-01

    As part of a model intercomparison exercise, with participants from a number of Asian, European and American institutes, sulphur transport and conversion calculations were conducted over an East Asian domain for 2 different months in 1993. All participants used the same emission inventory and simulated concentration and deposition at a number of prescribed geographic locations. The participants were asked to run their respective model both with standard parameters, and with a set of given parameters, in order to examine the different behaviour of the models. The study included comparison with measured data and model-to-model intercomparisons, notably source-receptor relationships. We hereby describe the MATCH model, used in the study, and report some typical results. We find that although the standard and the prescribed set of model parameters differed significantly in terms of sulphur conversion and wet scavenging rate, the resulting changes in atmospheric concentrations and surface depositions are only marginal. We show that it is often more critical to choose a representative gridbox value than to select a parameter from the suite available. The modelled, near-surface, atmospheric concentration of sulphur in eastern China is typically 5-10 µg S m⁻³, with large areas exceeding 20 µg S m⁻³. In southern Japan the values range from 2-5 µg S m⁻³. Atmospheric SO₂ dominates over sulphate near the emission regions, while sulphate concentrations are higher over e.g. the western Pacific. The sulphur deposition exceeds several g sulphur m⁻² year⁻¹ in large areas of China. Southern Japan receives 0.3-1 g S m⁻² year⁻¹. In January, the total wet deposition roughly equals the dry deposition; in May, when it rains more in the domain, total wet deposition is ca. 50% larger than total dry deposition.

  13. Hartle's model within the general theory of perturbative matchings: the change in mass

    CERN Document Server

    Reina, Borja

    2014-01-01

    Hartle's model provides the most widely used analytic framework to describe isolated compact bodies rotating slowly in equilibrium up to second order in perturbations in the context of General Relativity. Apart from some explicit assumptions, there are some implicit ones, such as the "continuity" of the functions in the perturbed metric across the surface of the body. In this work we sketch the basics for the analysis of the second order problem using the modern theory of perturbed matchings. In particular, the result we present is that when the energy density of the fluid in the static configuration does not vanish at the boundary, one of the functions of the second order perturbation in the setting of the original work by Hartle is not continuous. This discrepancy affects the calculation of the change in mass of the rotating star with respect to the static configuration needed to keep the central energy density unchanged.

  14. Development of a voxel-matching technique for substantial reduction of subtraction artifacts in temporal subtraction images obtained from thoracic MDCT.

    Science.gov (United States)

    Itai, Yoshinori; Kim, Hyoungseop; Ishikawa, Seiji; Katsuragawa, Shigehiko; Doi, Kunio

    2010-02-01

    A temporal subtraction image, which is obtained by subtraction of a previous image from a current one, can be used for enhancing interval changes (such as formation of new lesions and changes in existing abnormalities) on medical images by removing most of the normal structures. However, subtraction artifacts are commonly included in temporal subtraction images obtained from thoracic computed tomography and thus tend to reduce their effectiveness in the detection of pulmonary nodules. In this study, we developed a new method for substantially removing the artifacts on temporal subtraction images of lungs obtained from multiple-detector computed tomography (MDCT) by using a voxel-matching technique. Our new method was examined on 20 clinical cases with MDCT images. With this technique, the voxel value in a warped (or nonwarped) previous image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, which would be closest (identical or nearly equal) to the voxel value in the corresponding location in the current image. With the voxel-matching technique, the correspondence not only between the structures but also between the voxel values in the current and the previous images is determined. To evaluate the usefulness of the voxel-matching technique for removal of subtraction artifacts, the magnitude of artifacts remaining in the temporal subtraction images was examined by use of the full width at half maximum and the sum of a histogram of voxel values, which may indicate the average contrast and the total amount, respectively, of subtraction artifacts. With our new method, subtraction artifacts due to normal structures such as blood vessels were substantially removed on temporal subtraction images. This computerized method can enhance lung nodules on chest MDCT images without the disturbance of misregistration artifacts.
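The voxel-matching step described above lends itself to a compact sketch. The following is a minimal, illustrative NumPy implementation (kernel size, array shapes and values are assumptions, not taken from the study): for each voxel, the warped previous image's value is replaced by the value inside a small cubic kernel that is closest to the current image's value, so small residual misregistrations no longer survive subtraction.

```python
import numpy as np

def voxel_match(previous, current, radius=1):
    """Replace each previous-image voxel with the kernel value closest
    to the current image's value at that location."""
    out = np.empty_like(previous)
    nz, ny, nx = previous.shape
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                z0, z1 = max(z - radius, 0), min(z + radius + 1, nz)
                y0, y1 = max(y - radius, 0), min(y + radius + 1, ny)
                x0, x1 = max(x - radius, 0), min(x + radius + 1, nx)
                kernel = previous[z0:z1, y0:y1, x0:x1]
                # pick the kernel voxel whose value best matches the current image
                idx = np.abs(kernel - current[z, y, x]).argmin()
                out[z, y, x] = kernel.flat[idx]
    return out

# Toy "vessel" voxel shifted by one position between scans
prev = np.zeros((4, 4, 4)); prev[1, 1, 1] = 5.0
curr = np.zeros((4, 4, 4)); curr[1, 1, 2] = 5.0
subtraction = curr - voxel_match(prev, curr)
```

Plain subtraction of these toy volumes would leave a ±5 misregistration artifact, whereas the voxel-matched subtraction is zero everywhere.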

  15. Determination of Sub-Trace Sc, Y and Ln in Carbonate by ICP-MS with Inter-Element Matrix-Matched Technique

    Institute of Scientific and Technical Information of China (English)

    胡圣虹; 胡兆初; 刘勇胜; 林守麟; 高山

    2003-01-01

    A simple method for the determination of Sc, Y and Ln in carbonate at sub-μg*g-1 levels by ICP-MS with an inter-element matrix-matched technique was developed. A series of matrix-matched standard solutions was prepared by adopting normalized concentration values, which were calculated from the statistical average compositions of the reference values of REEs in carbonate standard reference materials. The matrix effects of Ca and Mg on REEs were studied in detail, and the results show that the matrix effects of Ca and Mg can be ignored when the dilution factors are more than 1000. The combination of 115In and 103Rh as internal standards was selected to compensate for the drift of analytical signals. The proposed method was applied to the analysis of ultra-trace REEs in carbonate reference materials GSR-6, GSR-12 and real samples.
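The internal-standard drift correction described above can be sketched numerically. All counts, sensitivities and concentrations below are invented for illustration; the point is only that taking the analyte/internal-standard signal ratio cancels a common multiplicative drift before the calibration line is fitted.

```python
import numpy as np

# Hypothetical matrix-matched standards: concentration (ng/g) vs raw counts,
# with a different instrument sensitivity (drift) simulated for each run.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
drift = np.array([1.00, 0.95, 1.10, 0.90, 1.05])   # run-to-run sensitivity
analyte_counts = 200.0 * conc * drift              # e.g. a REE isotope
istd_counts = 1000.0 * drift                       # e.g. 115In at a fixed level

# The ratio cancels the common drift term, giving a clean calibration line
ratio = analyte_counts / istd_counts
slope, intercept = np.polyfit(conc, ratio, 1)

# Quantify an unknown measured under yet another drift state
unknown_drift = 0.93
unknown_ratio = (200.0 * 3.0 * unknown_drift) / (1000.0 * unknown_drift)
found = (unknown_ratio - intercept) / slope        # recovers ~3.0 ng/g
```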

  16. Pattern matching

    OpenAIRE

    Hak, Tony; Dul, Jan

    2009-01-01

    Pattern matching is comparing two patterns in order to determine whether they match (i.e., that they are the same) or do not match (i.e., that they differ). Pattern matching is the core procedure of theory-testing with cases. Testing consists of matching an "observed pattern" (a pattern of measured values) with an "expected pattern" (a hypothesis), and deciding whether these patterns match (resulting in a confirmation of the hypothesis) or do not match (resulting in a disconfirmation of the hypothesis).
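The decision rule described above is simple enough to state in code. The variables, predicates and thresholds here are invented for illustration: the expected pattern is a set of propositions over measured values, and the hypothesis is confirmed only if every observation matches its expectation.

```python
# Expected pattern: the hypothesis, expressed as predicates on measured values
expected = {
    "satisfaction": lambda v: v >= 7,       # theory predicts high satisfaction
    "turnover":     lambda v: v <= 0.10,    # ...and low staff turnover
}
# Observed pattern: the measured values from one case
observed = {"satisfaction": 8.2, "turnover": 0.06}

match = all(pred(observed[k]) for k, pred in expected.items())
verdict = "confirmed" if match else "disconfirmed"
```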

  17. On Improving Analytical Models of Cosmic Reionization for Matching Numerical Simulation

    Science.gov (United States)

    Kaurov, Alexander A.

    2016-11-01

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  18. On Improving Analytical Models of Cosmic Reionization for Matching Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kaurov, Alexander A. [Univ. of Chicago, IL (United States)

    2016-01-01

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for CMB polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.
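The excursion-set framework named in both records can be caricatured in a few lines: a pixel is flagged ionized if its density contrast, smoothed on any scale from large to small, crosses a barrier. The random field, Gaussian smoothing window and barrier value below are toy assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64))          # toy 2-D density contrast field

def smooth(field, scale):
    """Gaussian smoothing in Fourier space at a given pixel scale."""
    k = np.fft.fftfreq(field.shape[0])
    kx, ky = np.meshgrid(k, k, indexing="ij")
    window = np.exp(-0.5 * (kx**2 + ky**2) * (2.0 * np.pi * scale) ** 2)
    return np.fft.ifft2(np.fft.fft2(field) * window).real

# Excursion-set style criterion: ionized if the smoothed overdensity
# exceeds the barrier on ANY scale, scanned from large to small.
barrier = 0.5
ionized = np.zeros(delta.shape, dtype=bool)
for scale in (16, 8, 4, 2, 1):
    ionized |= smooth(delta, scale) > barrier

filling_fraction = ionized.mean()
```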

  19. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time consuming nature of empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values grows as a power law, and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
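Latin Hypercube sampling, the first of the two techniques named above, is easy to sketch (a generic implementation, not the authors' code): each dimension is divided into equal-probability strata and exactly one sample falls in each stratum, so a small design still covers every 1-D margin of the parameter space.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One random point in each of n_samples equal-probability strata per dimension."""
    # row i gets a point in stratum [i/n, (i+1)/n) for every dimension...
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # ...then each dimension's strata are shuffled independently
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

rng = np.random.default_rng(1)
X = latin_hypercube(20, 3, rng)       # 20 design points in 3 parameters

# Stratification check: every 1-D stratum contains exactly one sample
strata = np.floor(X * 20).astype(int)
```

A surrogate model (e.g. Multivariate Adaptive Regression Splines) would then be fitted to the simulation outputs at these design points.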

  20. Reduction of the ambiguity of karst aquifer modeling through pattern matching of groundwater flow and transport

    Science.gov (United States)

    Oehlmann, Sandra; Geyer, Tobias; Licha, Tobias; Sauter, Martin

    2014-05-01

    Distributive numerical simulations are an effective, process-based method for predicting groundwater resources and quality. They are based on conceptual hydrogeological models that characterize the properties of the catchment area and aquifer. Karst systems play an important role in water supply worldwide. Conceptual models are however difficult to build because of the highly developed heterogeneity of the systems. The geometry and properties of highly conductive karst conduits are generally unknown and difficult to characterize with field experiments. Due to these uncertainties, numerical models of karst areas usually cannot simulate the hydraulic head distribution in the area, spring discharge and tracer breakthrough curves simultaneously on the catchment scale. Matching all of these observations simultaneously, especially in complex hydrogeological systems, would reduce model ambiguity, which is a prerequisite for predicting groundwater resources and pollution risks. In this work, a distributive numerical groundwater flow and transport model was built for a highly heterogeneous karst aquifer in south-western Germany. For this aim, a solute transport interface for one-dimensional pipes was implemented in the software Comsol Multiphysics® and coupled to the standard three-dimensional solute transport interface for domains. The model was calibrated and hydraulic parameters could be obtained. The simulation was matched to the steady-state hydraulic head distribution in the model area, the spring discharge of several springs and the transport velocities of two tracer tests. Furthermore, other measured parameters such as hydraulic conductivity of the fissured matrix and the maximal karst conduit volume were available for model calibration. Parameter studies were performed for several karst conduit geometries to analyze their influence in a large-scale heterogeneous karst system. Results show that it is not only possible to derive a consistent flow and transport model for a 150 km2 karst area to be employed as a

  1. A knowledge based approach to matching human neurodegenerative disease and animal models

    Directory of Open Access Journals (Sweden)

    Maryann E Martone

    2013-05-01

    Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology and an associated Phenotype Knowledge Base using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework and qualities are drawn from the Phenotype and Trait Ontology. We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal

  2. A knowledge based approach to matching human neurodegenerative disease and animal models

    Science.gov (United States)

    Maynard, Sarah M.; Mungall, Christopher J.; Lewis, Suzanna E.; Imam, Fahim T.; Martone, Maryann E.

    2013-01-01

    Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology (NDPO) and an associated Phenotype Knowledge Base (PKB) using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework (NIF) and qualities are drawn from the Phenotype and Trait Ontology (PATO). We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. 
This proof of concept suggests that expressing complex phenotypes using formal
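The "common subsumer" idea that drives this kind of matching can be sketched with a toy is-a hierarchy. The terms and links below are invented for illustration and are not drawn from the NDPO or PKB; the point is that two different phenotype terms match non-trivially through their shared ancestors.

```python
# Toy ontology: each term maps to its is-a parents (a small DAG)
parents = {
    "Purkinje cell atrophy": ["neuron atrophy"],
    "motor neuron atrophy":  ["neuron atrophy"],
    "neuron atrophy":        ["cell atrophy"],
    "cell atrophy":          ["structural phenotype"],
    "structural phenotype":  [],
}

def ancestors(term):
    """All terms reachable by following is-a links upward."""
    seen, stack = set(), [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def common_subsumers(a, b):
    """Shared ancestor terms let phenotypes match across scales and species."""
    return (ancestors(a) | {a}) & (ancestors(b) | {b})

shared = common_subsumers("Purkinje cell atrophy", "motor neuron atrophy")
# both are kinds of "neuron atrophy" -> a non-trivial match
```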

  3. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. The program tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  4. Parameter estimation and accuracy matching strategies for 2-D reactor models

    Science.gov (United States)

    Nowak, U.; Grah, A.; Schreier, M.

    2005-11-01

    The mathematical modelling of a special modular catalytic reactor kit leads to a system of partial differential equations in two space dimensions. As customary, this model contains uncertain physical parameters, which may be adapted to fit experimental data. To solve this nonlinear least-squares problem we apply a damped Gauss-Newton method. A method of lines approach is used to evaluate the associated model equations. By an a priori spatial discretization, a large DAE system is derived and integrated with an adaptive, linearly implicit extrapolation method. For sensitivity evaluation we apply an internal numerical differentiation technique, which reuses linear algebra information from the model integration. In order not to interfere with the control of the Gauss-Newton iteration, these computations are usually done very accurately and, therefore, at substantial cost. To overcome this difficulty, we discuss several accuracy adaptation strategies, e.g., a master-slave mode. Finally, we present some numerical experiments.
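The damped Gauss-Newton iteration at the heart of such a least-squares fit can be sketched as follows. This is a generic textbook version with simple step halving, not the authors' implementation, and it fits a synthetic exponential model in place of the reactor equations.

```python
import numpy as np

def damped_gauss_newton(residual, jac, p0, tol=1e-10, max_iter=50):
    """Minimize ||residual(p)||^2: Gauss-Newton steps with step-halving damping."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        J = jac(p)
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
        lam, f0 = 1.0, r @ r
        # damping: halve the step until the squared residual decreases
        while lam > 1e-8 and residual(p + lam * dp) @ residual(p + lam * dp) >= f0:
            lam /= 2.0
        p = p + lam * dp
        if np.linalg.norm(lam * dp) < tol:
            break
    return p

# Synthetic data: y = a * exp(b * t) with a = 2, b = -1.5
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jacf = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
p_hat = damped_gauss_newton(res, jacf, [1.0, 0.0])
```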

  5. General solution to diagonal model matching control of multiple-output-delay systems and its applications in adaptive scheme

    Institute of Scientific and Technical Information of China (English)

    Yingmin Jia

    2009-01-01

    This paper mainly studies the model matching problem of multiple-output-delay systems in which the reference model is assigned to a diagonal transfer function matrix. A new model matching controller structure is first developed, and then, it is shown that the controller is feasible if and only if the sets of Diophantine equations have common solutions. The obtained controller allows a parametric representation, which shows that an adaptive scheme can be used to tolerate parameter variations in the plants. The resulting adaptive law can guarantee the global stability of the closed-loop systems and the convergence of the output error.
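Each polynomial Diophantine equation A(s)X(s) + B(s)Y(s) = C(s) reduces to a linear system in the unknown coefficients, which makes the feasibility check concrete. The sketch below is a generic construction, not the paper's algorithm; polynomials are coefficient lists, lowest power first.

```python
import numpy as np

def solve_diophantine(a, b, c, deg_x, deg_y):
    """Solve A(s)X(s) + B(s)Y(s) = C(s) for the coefficients of X and Y
    by stacking a Sylvester-type convolution matrix and using least squares."""
    rows = max(len(a) + deg_x, len(b) + deg_y, len(c))
    def shifts(p, cols):
        # column j holds p shifted down by j, i.e. multiplication by s**j
        M = np.zeros((rows, cols))
        for j in range(cols):
            M[j:j + len(p), j] = p
        return M
    M = np.hstack([shifts(a, deg_x + 1), shifts(b, deg_y + 1)])
    rhs = np.zeros(rows)
    rhs[:len(c)] = c
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return sol[:deg_x + 1], sol[deg_x + 1:]

# Toy example: A(s) = 1 + s, B(s) = 2 + s (coprime), C(s) = 3 + 3s + s^2
a, b, c = [1, 1], [2, 1], [3, 3, 1]
x, y = solve_diophantine(a, b, c, deg_x=1, deg_y=1)
```

A solution exists (and the least-squares residual vanishes) exactly when the equation is feasible, e.g. whenever A and B are coprime.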

  6. Elastic Minutiae Matching by Means of Thin-Plate Spline Models

    NARCIS (Netherlands)

    Bazen, Asker M.; Gerez, Sabih H.

    2002-01-01

    This paper presents a novel minutiae matching method that deals with elastic distortions by normalizing the shape of the test fingerprint with respect to the template. The method first determines possible matching minutiae pairs by means of comparing local neighborhoods of the minutiae. Next a thin-
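The first stage — comparing local neighbourhoods of minutiae — can be sketched as follows. This is a simplified, distance-only descriptor invented for illustration (the actual method also uses minutia orientations), and the resulting candidate pairs would subsequently feed the thin-plate spline normalization.

```python
import numpy as np

def local_descriptor(points, i, k=2):
    """Rotation/translation-invariant descriptor: sorted distances from
    minutia i to its k nearest neighbours."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.sort(d)[1:k + 1]          # skip the zero self-distance

def candidate_pairs(template, test, tol=2.0):
    """Minutiae whose local neighbourhoods look alike are possible matches."""
    pairs = []
    for i in range(len(template)):
        for j in range(len(test)):
            if np.linalg.norm(local_descriptor(template, i)
                              - local_descriptor(test, j)) < tol:
                pairs.append((i, j))
    return pairs

template = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0], [40.0, 40.0]])
theta = 0.3                              # test print is rotated and shifted
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
test = template @ R.T + np.array([5.0, -3.0])

pairs = candidate_pairs(template, test)   # recovers the identity pairing
```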

  7. Impedance Matching for Discrete, Periodic Media and Application to Two-Scale Wave Propagation Models

    Science.gov (United States)

    Thirunavukkarasu, Senganal

    This dissertation introduces the idea of an equivalent continuous medium (ECM) that has the same impedance as that of an unbounded discrete periodic medium. Contrary to existing knowledge, we constructively show that it is indeed possible to achieve perfect matching for periodic and discrete media. We present analytical results relating the propagation characteristics of periodic media and the corresponding ECM, leading to the development of numerical methods for wave propagation in these media. In this dissertation, we present the main idea of ECM and apply it, with mixed results, to seemingly different problems requiring effective numerical methods for modeling wave propagation in unbounded media. An immediate application of ECM is in developing absorbing boundary conditions (ABCs) for wave propagation in unbounded discrete media. Using the idea of ECM, and building on a class of continuous ABCs called perfectly matched discrete layers (PMDL), we propose a new class of discrete ABCs called discrete PMDL and develop frequency domain formulations that are shown to be superior to continuous ABCs. Another application that is explored in this dissertation is the design of interface conditions for concurrent coupling of two-scale wave propagation models, e.g. Atomistic-to-Continuum (AtC) coupling. We propose a domain-decomposition (DD) approach and develop accurate interface conditions that are critical for the concurrent coupling of the two-scale models. It turns out that time-domain discrete ABCs are key to the accuracy of these interface conditions. Since discrete PMDL is well-posed and accurate for the model problem, we build on it to propose an efficient and accurate interface condition for two-scale wave propagation models. Although many open problems remain with respect to implementation, we believe that the proposed DD based approach is a good first step towards achieving efficient coupling of two-scale wave propagation models.
Time-domain discrete PMDL can
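Why matching a continuum to a discrete medium is nontrivial is already visible in the 1-D mass-spring chain, whose dispersion relation departs from any equivalent continuum at short wavelengths. The toy computation below illustrates that mismatch (constants are arbitrary; this is not the ECM construction itself).

```python
import numpy as np

# 1-D mass-spring lattice: masses m, spring stiffness k, spacing a
m, k, a = 1.0, 1.0, 1.0
q = np.linspace(1e-6, np.pi / a, 200)        # wavenumbers in the first Brillouin zone

# Discrete lattice dispersion vs. the long-wavelength continuum rod
omega_discrete = 2.0 * np.sqrt(k / m) * np.abs(np.sin(q * a / 2.0))
omega_continuum = a * np.sqrt(k / m) * q

# The two media agree at long wavelengths; the mismatch grows toward q = pi/a
relative_mismatch = (omega_continuum - omega_discrete) / omega_discrete
```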

  8. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis

    Science.gov (United States)

    Jacobson, Seth A.; Marzari, Francesco; Rossi, Alessandro; Scheeres, Daniel J.

    2016-10-01

    From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis is consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. These conclusions rest on the asteroid rotation model of Marzari et al. ([2011]Icarus, 214, 622-631), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis, described in detail within, and the binary evolution model of Jacobson et al. ([2011a] Icarus, 214, 161-178) and Jacobson et al. ([2011b] The Astrophysical Journal Letters, 736, L19). Our complete asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, steady-state mass ratio fraction and the contact binary fraction. We find that in order for the model to best match observations, rotational fission produces high mass ratio (> 0.2) binary components with four to eight times the frequency as low mass ratio (<0.2) components, where the mass ratio is the mass of the secondary component divided by the mass of the primary component. This is consistent with post-rotational fission binary system mass ratio being drawn from either a flat or a positive and shallow distribution, since the high mass ratio bin is four times the size of the low mass ratio bin; this is in contrast to the observed steady-state binary mass ratio, which has a negative and steep distribution. This can be understood in the context of the BYORP-tidal equilibrium hypothesis, which predicts that low mass ratio binaries survive for a significantly

  9. A Nonlinear Multi-Scale Interaction Model for Atmospheric Blocking: The Eddy-Blocking Matching Mechanism

    Science.gov (United States)

    Luo, Dehai; Cha, Jing; Zhong, Linhao; Dai, Aiguo

    2014-05-01

    In this paper, a nonlinear multi-scale interaction (NMI) model is used to propose an eddy-blocking matching (EBM) mechanism to account for how synoptic eddies reinforce or suppress a blocking flow. It is shown that the spatial structure of the eddy vorticity forcing (EVF) arising from upstream synoptic eddies determines whether an incipient block can grow into a meandering blocking flow through its interaction with the transient synoptic eddies from the west. Under certain conditions, the EVF exhibits a low-frequency oscillation on timescales of 2-3 weeks. During the EVF phase with a negative-over-positive dipole structure, a blocking event can be resonantly excited through the transport of eddy energy into the incipient block by the EVF. As the EVF changes into an opposite phase, the blocking decays. The NMI model produces life cycles of blocking events that resemble observations. Moreover, it is shown that the eddy north-south straining is a response of the eddies to a dipole- or Ω-type block. In our model, as in observations, two synoptic anticyclones (cyclones) can attract and merge with one another as the blocking intensifies, but only when the feedback of the blocking on the eddies is included. Thus, we attribute the eddy straining and associated vortex interaction to the feedback of the intensified blocking on synoptic eddies. The results illustrate the concomitant nature of the eddy deformation, whose role as a PV source for the blocking flow becomes important only during the mature stage of a block. Our EBM mechanism suggests that an incipient block flow is amplified (or suppressed) under certain conditions by the EVF coming from the upstream of the blocking region.

  10. Modeling the insect mushroom bodies: application to a delayed match-to-sample task.

    Science.gov (United States)

    Arena, Paolo; Patané, Luca; Stornanti, Vincenzo; Termini, Pietro Savio; Zäpf, Bianca; Strauss, Roland

    2013-05-01

    Despite their small brains, insects show advanced capabilities in learning and task solving. Flies, honeybees and ants are becoming a reference point in neuroscience and a main source of inspiration for autonomous robot design issues and control algorithms. In particular, honeybees demonstrate to be able to autonomously abstract complex associations and apply them in tasks involving different sensory modalities within the insect brain. Mushroom Bodies (MBs) are worthy of primary attention for understanding memory and learning functions in insects. In fact, even if their main role regards olfactory conditioning, they are involved in many behavioral achievements and learning capabilities, as has been shown in honeybees and flies. Owing to the many neurogenetic tools, the fruit fly Drosophila became a source of information for the neuroarchitecture and biochemistry of the MBs, although the MBs of flies are by far simpler in organization than their honeybee orthologs. Electrophysiological studies, in turn, became available on the MBs of locusts and honeybees. In this paper a novel bio-inspired neural architecture is presented, which represents a generalized insect MB with the basic features taken from fruit fly neuroanatomy. By mimicking a number of different MB functions and architecture, we can replace and improve formerly used artificial neural networks. The model is a multi-layer spiking neural network where key elements of the insect brain, the antennal lobes, the lateral horn region, the MBs, and their mutual interactions are modeled. In particular, the model is based on the role of parts of the MBs named MB-lobes, where interesting processing mechanisms arise on the basis of spatio-temporal pattern formation. The introduced network is able to model learning mechanisms like olfactory conditioning seen in honeybees and flies and was found able also to perform more complex and abstract associations, like the delayed matching-to-sample tasks known only from
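A single leaky integrate-and-fire unit — the elementary building block of multi-layer spiking networks like the MB model described here — can be sketched in a few lines. All constants below are illustrative, not taken from the model.

```python
# Leaky integrate-and-fire neuron, forward-Euler integration
dt, tau = 0.1, 10.0                  # time step and membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
i_input = 0.15                       # constant driving current (arbitrary units)

v = v_rest
spikes = []                          # recorded spike times
for step in range(1000):             # 100 time units of simulation
    # leak toward rest plus input drive
    v += dt * (-(v - v_rest) / tau + i_input)
    if v >= v_thresh:                # threshold crossing: emit spike, reset
        spikes.append(step * dt)
        v = v_reset
```

With these constants the steady-state voltage (tau * i_input = 1.5) exceeds threshold, so the unit fires regularly; a full MB model wires many such units with plastic synapses.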

  11. Detection and alignment of 3D domain swapping proteins using angle-distance image-based secondary structural matching techniques.

    Directory of Open Access Journals (Sweden)

    Chia-Han Chu

    This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had "opened" their "closed" structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities.
In addition, the quality of its

  12. Detection and alignment of 3D domain swapping proteins using angle-distance image-based secondary structural matching techniques.

    Science.gov (United States)

    Chu, Chia-Han; Lo, Wei-Cheng; Wang, Hsin-Wei; Hsu, Yen-Chu; Hwang, Jenn-Kang; Lyu, Ping-Chiang; Pai, Tun-Wen; Tang, Chuan Yi

    2010-10-14

    This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had "opened" their "closed" structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. In addition, the quality of its hinge loop
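The angle-distance transformation itself is straightforward to sketch: every pair of SSEs, idealized as line segments, maps to one (angle, distance) point, and these points are what get matched between structures. The toy version below (coordinates invented; real SSE axes would come from a secondary-structure assignment) shows that the A-D representation is invariant under rigid motion.

```python
import numpy as np

def ad_features(segments):
    """(angle, distance) feature for every pair of SSE segments,
    each segment given as (start, end) 3-D coordinates."""
    dirs = [(e - s) / np.linalg.norm(e - s) for s, e in segments]
    mids = [(s + e) / 2.0 for s, e in segments]
    feats = {}
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            # unsigned inter-axis angle in degrees, and mid-point distance
            angle = np.degrees(np.arccos(np.clip(abs(dirs[i] @ dirs[j]), 0.0, 1.0)))
            dist = np.linalg.norm(mids[i] - mids[j])
            feats[(i, j)] = (angle, dist)
    return feats

a = [(np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])),
     (np.array([0.0, 5.0, 0.0]), np.array([10.0, 5.0, 0.0])),
     (np.array([0.0, 0.0, 8.0]), np.array([0.0, 10.0, 8.0]))]

# a rigidly rotated copy of the same fold: pairwise angles/distances unchanged
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
b = [(R @ s, R @ e) for s, e in a]

fa, fb = ad_features(a), ad_features(b)
matched = [p for p in fa
           if abs(fa[p][0] - fb[p][0]) < 1.0 and abs(fa[p][1] - fb[p][1]) < 0.5]
```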

  13. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
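
The transient response such simulations must capture can be illustrated with a far simpler model. The sketch below is a first-order lumped-capacitance approximation with an assumed effective time constant; it is not the coupled conduction-convection CFD model the authors validated, only an illustration of the exponential approach to ambient that a passive shipper exhibits:

```python
import math

def lumped_temp(t_hours, t0, t_amb, tau_hours):
    """First-order lumped-capacitance response T(t) = T_amb + (T0 - T_amb) * exp(-t/tau).
    Far cruder than the coupled conduction-convection model in the study; tau is an
    assumed effective time constant, not a fitted shipper parameter."""
    return t_amb + (t0 - t_amb) * math.exp(-t_hours / tau_hours)

# Product box starting at 5 C inside a shipper exposed to a 35 C summer
# ambient, with an assumed 48-hour effective time constant:
temp_96h = lumped_temp(96.0, 5.0, 35.0, 48.0)
```

Even this crude model makes the design trade-off visible: a thicker insulating wall raises the effective time constant and flattens the early part of the curve, which is where the 2 to 8 degree window is lost.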

  14. Efficiency of perfectly matched layers for seismic wave modeling in second-order viscoelastic equations

    Science.gov (United States)

    Ping, Ping; Zhang, Yu; Xu, Yixian; Chu, Risheng

    2016-12-01

    In order to improve the perfectly matched layer (PML) efficiency in viscoelastic media, we first propose a split multi-axial PML (M-PML) and an unsplit convolutional PML (C-PML) in the second-order viscoelastic wave equations with the displacement as the only unknown. The advantage of these formulations is that it is easy and efficient to revise the existing codes of the second-order spectral element method (SEM) or finite-element method (FEM) with absorbing boundaries in a uniform equation, and they are more economical than the auxiliary differential equation PML. Three models that easily suffer from late-time instabilities are considered to validate our approaches. By comparing the absorption efficiency and stability of the M-PML and the C-PML in long-time simulations, it can be concluded that: (1) for an isotropic viscoelastic medium with a high Poisson's ratio, the C-PML is a sufficient choice for long-time simulation because of its weak reflections and superior stability; (2) unlike the M-PML with a high-order damping profile, the M-PML with a second-order damping profile loses its stability in long-time simulation for an isotropic viscoelastic medium; (3) in an anisotropic viscoelastic medium, the C-PML suffers from instabilities, while the M-PML with a second-order damping profile is a better choice owing to its superior stability and more acceptable weak reflections than the M-PML with a high-order damping profile. The comparative analysis of the developed methods offers meaningful guidance for long-time seismic wave modeling with second-order viscoelastic wave equations.
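
For readers unfamiliar with PML damping profiles, the following sketch shows the standard polynomial construction that terms like "second-order damping profile" refer to. The constant in d0 is chosen from a target theoretical reflection coefficient; the exact prefactor varies between formulations, so the values here are illustrative:

```python
import math

def pml_damping(x, L, vp, n=2, R=1e-3):
    """Polynomial PML damping profile d(x) = d0 * (x/L)**n inside a layer of
    thickness L, with d0 set from a target theoretical reflection coefficient
    R (a standard recipe; exact constants differ between formulations)."""
    d0 = -(n + 1) * vp * math.log(R) / (2.0 * L)
    return d0 * (x / L) ** n

# Damping grows from 0 at the inner PML boundary to d0 at the outer edge
# (illustrative values: 200 m thick layer, P-wave velocity 2000 m/s).
profile = [pml_damping(x, 200.0, 2000.0) for x in (0.0, 100.0, 200.0)]
```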

  15. Efficiency of perfectly matched layers for seismic wave modeling in second-order viscoelastic equations

    Science.gov (United States)

    Ping, Ping; Zhang, Yu; Xu, Yixian; Chu, Risheng

    2016-09-01

    In order to improve the perfectly matched layer (PML) efficiency in viscoelastic media, we first propose a split multi-axial PML (M-PML) and an unsplit convolutional PML (C-PML) in the second-order viscoelastic wave equations with the displacement as the only unknown. The advantage of these formulations is that it is easy and efficient to revise the existing codes of the second-order spectral element method (SEM) or finite element method (FEM) with absorbing boundaries in a uniform equation, and they are more economical than the auxiliary differential equations PML (ADEPML). Three models that easily suffer from late-time instabilities are considered to validate our approaches. By comparing the absorption efficiency and stability of the M-PML and the C-PML in long-time simulations, it can be concluded that: 1) for an isotropic viscoelastic medium with a high Poisson's ratio, the C-PML is a sufficient choice for long-time simulation because of its weak reflections and superior stability; 2) unlike the M-PML with a high-order damping profile, the M-PML with a second-order damping profile loses its stability in long-time simulation for an isotropic viscoelastic medium; 3) in an anisotropic viscoelastic medium, the C-PML suffers from instabilities, while the M-PML with a second-order damping profile is a better choice owing to its superior stability and more acceptable weak reflections than the M-PML with a high-order damping profile. The comparative analysis of the developed methods offers meaningful guidance for long-time seismic wave modeling with second-order viscoelastic wave equations.

  16. Appraisal, coping, emotion, and performance during elite fencing matches: a random coefficient regression model approach.

    Science.gov (United States)

    Doron, J; Martinent, G

    2016-06-23

    Understanding more about the stress process is important for the performance of athletes during stressful situations. Grounded in Lazarus's (1991, 1999, 2000) cognitive-motivational-relational theory (CMRT) of emotion, this study tracked longitudinally the relationships between cognitive appraisal, coping, emotions, and performance in nine elite fencers across 14 international matches (representing 619 momentary assessments) using a naturalistic, video-assisted methodology. A series of hierarchical linear modeling analyses were conducted to: (a) explore the relationships between cognitive appraisals (challenge and threat), coping strategies (task-oriented and disengagement-oriented coping), emotions (positive and negative), and objective performance; (b) ascertain whether the relationship between appraisal and emotion was mediated by coping; and (c) examine whether the relationship between appraisal and objective performance was mediated by emotion and coping. The results of the random coefficient regression models showed: (a) positive relationships between challenge appraisal, task-oriented coping, positive emotions, and performance, as well as between threat appraisal, disengagement-oriented coping, and negative emotions; (b) that disengagement-oriented coping partially mediated the relationship between threat and negative emotions, whereas task-oriented coping partially mediated the relationship between challenge and positive emotions; and (c) that disengagement-oriented coping mediated the relationship between threat and performance, whereas task-oriented coping and positive emotions partially mediated the relationship between challenge and performance. As a whole, this study furthered knowledge, in sport performance situations, of Lazarus's (1999) claim that these psychological constructs exist within a conceptual unit. Specifically, our findings indicated that the ways these constructs are inter-related influence objective performance within competitive settings.

  17. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides a detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is placed on the unified approach (the variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book gives a qualitative summary of the various reported modeling techniques and approaches, and offers researchers and graduate students deeper insights into interconnect models in particular and interconnects in general. Time-domain and frequency-domain measurement techniques and simulation methodology are also explained in this book.

  18. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine;

    proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori... information in probabilistic inverse problems. Unfortunately, when this strategy is applied with the multiple-point-based simulation algorithm SNESIM the reproducibility of training image patterns is violated. In this study we suggest combining sequential simulation with the frequency matching method...

  19. ICT as technical change in the matching and production functions of a Pissarides-Dixit-Stiglitz model

    OpenAIRE

    Ziesemer, T.H.W.

    2001-01-01

    In this paper we integrate two workhorse models in economics: the monopolistic competition model of Dixit and Stiglitz and the search unemployment model of Pissarides. Information and communication technology (ICT) is interpreted as i) technical progress in the matching function of the Pissarides labour market search model, where it is increasing the probability of filling a vacancy, and ii) as technical change in the production function of the Dixit-Stiglitz goods market model, where it is increasing ...

  20. Infrared fixed point of the 12-fermion SU(3) gauge model based on 2-lattice Monte Carlo renormalization-group matching.

    Science.gov (United States)

    Hasenfratz, Anna

    2012-02-10

    I investigate an SU(3) gauge model with 12 fundamental fermions. The physically interesting region of this strongly coupled system can be influenced by an ultraviolet fixed point due to lattice artifacts. I suggest using a gauge action with an additional negative adjoint plaquette term that lessens this problem. I also introduce a new analysis method for the 2-lattice matching Monte Carlo renormalization group technique that significantly reduces finite volume effects. The combination of these two improvements allows me to measure the bare step scaling function in a region of the gauge coupling where it is clearly negative, indicating a positive renormalization group β function and infrared conformality.

  1. Comparison of Matching Pursuit Algorithm with Other Signal Processing Techniques for Computation of the Time-Frequency Power Spectrum of Brain Signals.

    Science.gov (United States)

    Chandran K S, Subhash; Mishra, Ashutosh; Shirhatti, Vinay; Ray, Supratim

    2016-03-23

    Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or the sudden onset of a stimulus, with durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, the multitaper method, the wavelet transform, or the Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in the local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided.
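
The core MP iteration is easy to state: repeatedly project the residual onto every dictionary atom, keep the best one, and subtract its contribution. A minimal pure-Python sketch with a toy orthonormal dictionary (real applications use large over-complete Gabor dictionaries, not three atoms):

```python
def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy matching pursuit over unit-norm atoms: at each step, pick the
    atom with the largest inner product with the residual and subtract its
    contribution. Returns (coefficients, final residual)."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        best_k, best_dot = 0, 0.0
        for k, atom in enumerate(atoms):
            dot = sum(r * a for r, a in zip(residual, atom))
            if abs(dot) > abs(best_dot):
                best_k, best_dot = k, dot
        if best_dot == 0.0:
            break  # residual is orthogonal to every atom
        coeffs[best_k] += best_dot
        residual = [r - best_dot * a for r, a in zip(residual, atoms[best_k])]
    return coeffs, residual

# Toy orthonormal "dictionary"; MP recovers the mixing coefficients exactly.
atoms = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]]
coeffs, residual = matching_pursuit([3.0, -2.0, 0.0, 0.0], atoms)  # coeffs → [3.0, -2.0, 0.0]
```

With an over-complete dictionary the atoms are not orthogonal, so MP generally needs many iterations and only approximates the signal; the time-frequency power spectrum is then built from the selected atoms' energies.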

  2. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

    Background: About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods: Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrollees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results: We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80), and crowns (OR = 0.84) (p < 0.005 for all), and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38), and removable prosthetics (OR = 1.36) (p …). Conclusions: Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
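
A hurdle model, as used above, factors utilization into a binary part (any visit at all) and a positive-count part. A minimal sketch of the implied expected utilization, assuming for illustration a zero-truncated Poisson count part; the odds ratios in the abstract come from the fitted model, not from this toy:

```python
import math

def hurdle_poisson_mean(p_any_visit, lam):
    """Expected utilization under a hurdle model: a binary (e.g. logit) part
    gives the probability of clearing the hurdle (any dental visit), and a
    zero-truncated Poisson(lam) part models the number of services among
    users. E[Y] = P(Y > 0) * lam / (1 - exp(-lam))."""
    truncated_mean = lam / (1.0 - math.exp(-lam))  # E[Y | Y > 0]
    return p_any_visit * truncated_mean

# Illustrative (not estimated) values: 80% chance of any visit, rate 2.
expected_services = hurdle_poisson_mean(0.8, 2.0)
```

Separating the two parts is what lets the study report distinct effects of diabetes on whether patients visit at all versus how much care they receive once they do.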

  3. Missing exposure data in stereotype regression model: application to matched case-control study with disease subclassification.

    Science.gov (United States)

    Ahn, Jaeil; Mukherjee, Bhramar; Gruber, Stephen B; Sinha, Samiran

    2011-06-01

    With advances in modern medicine and clinical diagnosis, case-control data with characterization of finer subtypes of cases are often available. In matched case-control studies, missingness in exposure values often leads to deletion of the entire stratum, and thus entails a significant loss of information. When subtypes of cases are treated as categorical outcomes, the data are further stratified and deletion of observations becomes even more expensive in terms of precision of the category-specific odds-ratio parameters, especially under the multinomial logit model. The stereotype regression model for categorical responses lies intermediate between the proportional odds model and the multinomial or baseline-category logit model. The use of this class of models has been limited, as the structure of the model implies certain inferential challenges with nonidentifiability and nonlinearity in the parameters. We illustrate how to handle missing data in matched case-control studies with finer disease subclassification within the cases under a stereotype regression model. We present both a Monte Carlo-based full Bayesian approach and an expectation/conditional maximization algorithm for the estimation of model parameters in the presence of a completely general missingness mechanism. We illustrate our methods by using data from an ongoing matched case-control study of colorectal cancer. Simulation results are presented under various missing data mechanisms and departures from modeling assumptions.
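
A common parameterization of the stereotype model constrains the multinomial logit slopes to be scalar multiples phi_k of a single linear predictor, which is the source of both its parsimony and its nonlinearity in the parameters. A small sketch of the induced category probabilities (all parameter values are illustrative, not estimates from the colorectal cancer study):

```python
import math

def stereotype_probs(eta, alphas, phis):
    """Category probabilities under a stereotype regression model with linear
    predictor eta = beta'x: log[P(Y=k)/P(Y=0)] = alpha_k + phi_k * eta, with
    alpha_0 = phi_0 = 0 for the baseline category."""
    scores = [a + p * eta for a, p in zip(alphas, phis)]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

# Three categories (baseline + two case subtypes); the ordered phi values
# are what place the model between proportional odds and multinomial logit.
probs = stereotype_probs(1.5, alphas=[0.0, -0.5, -1.0], phis=[0.0, 0.4, 1.0])
```

The nonlinearity is visible here: phi_k and beta enter only through the product phi_k * eta, so they are not separately identified without a scale constraint such as phi for the last category fixed at 1.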

  4. Early outcome in renal transplantation from large donors to small and size-matched recipients - a porcine experimental model

    DEFF Research Database (Denmark)

    Ravlo, Kristian; Chhoden, Tashi; Søndergaard, Peter

    2012-01-01

    Kidney transplantation from a large donor to a small recipient, as in pediatric transplantation, is associated with an increased risk of thrombosis and DGF. We established a porcine model for renal transplantation from an adult donor to a small or size-matched recipient with a high risk of DGF...... and studied GFR, RPP using MRI, and markers of kidney injury within 10 h after transplantation. After induction of BD, kidneys were removed from ∼63-kg donors and kept in cold storage for ∼22 h until transplanted into small (∼15 kg, n = 8) or size-matched (n = 8) recipients. A reduction in GFR was observed...

  5. A foundation for flow-based matching: using temporal logic and model checking

    DEFF Research Database (Denmark)

    Brunel, Julien Pierre Manuel; Doligez, Damien; Hansen, René Rydhof

    2009-01-01

    Reasoning about program control-flow paths is an important functionality of a number of recent program matching languages and associated searching and transformation tools. Temporal logic provides a well-defined means of expressing properties of control-flow paths in programs, and indeed...... an extension of the temporal logic CTL has been applied to the problem of specifying and verifying the transformations commonly performed by optimizing compilers. Nevertheless, in developing the Coccinelle program transformation tool for performing Linux collateral evolutions in systems code, we have found...... with variables and witnesses) that is a suitable basis for the semantics and implementation of Coccinelle's program matching language. Our extension to CTL includes existential quantification over program fragments, which allows metavariables in the program matching language to range over different values...

  6. A Foundation for Flow-Based Program Matching Using Temporal Logic and Model Checking

    DEFF Research Database (Denmark)

    Brunel, Julien Pierre Manuel; Doligez, Damien; Hansen, Rene Rydhof

    2008-01-01

    Reasoning about program control-flow paths is an important functionality of a number of recent program matching languages and associated searching and transformation tools. Temporal logic provides a well-defined means of expressing properties of control-flow paths in programs, and indeed...... an extension of the temporal logic CTL has been applied to the problem of specifying and verifying the transformations commonly performed by optimizing compilers. Nevertheless, in developing the Coccinelle program transformation tool for performing Linux collateral evolutions in systems code, we have found......-VW (CTL with variables and witnesses) that is a suitable basis for the semantics and implementation of Coccinelle's program matching language. Our extension to CTL includes existential quantification over program fragments, which allows metavariables in the program matching language to range over...

  7. Effectiveness of MIS technique as a treatment modality for open intra-articular calcaneal fractures: A prospective evaluation with matched closed fractures treated by conventional technique.

    Science.gov (United States)

    Dhillon, Mandeep Singh; Gahlot, Nitesh; Satyaprakash, Sambit; Kanojia, Rajendra Kumar

    2015-09-01

    Twenty-five displaced intra-articular calcaneal fractures in 21 patients, aged 15-55 years, were included in this study. Sanders type I fractures and fractures with severe crushing or partial amputation were excluded from the study. Patients were divided into group 1 (open fractures treated by MIS) and group 2 (closed fractures treated by ORIF). Group 1 had 16 and group 2 had 9 cases. Seven of 25 fractures (28%) developed wound-related issues postoperatively. One patient (11.1%) in group 2 had wound margin necrosis, while 6 patients (37.5%) in group 1 developed pin tract and/or wound infection. At 1-year follow-up, the mean MFS for group 1 was 79 and the mean MFS for group 2 was 84.4 (66.67% were good). The AOFAS score for group 1 was 77.37 and for group 2 was 86.1. Böhler's angle was restored in 81.16% of cases in group 1 and 88.8% in group 2, while the Gissane angle was restored in 68.75% of group 1 cases and 77.79% of group 2 cases. This study shows that acceptable fracture reduction can be obtained and maintained by the MIS technique, and it can be used as the primary definitive treatment option in open calcaneal fractures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Matching seed to site by climate similarity: techniques to prioritize plant materials development and use in restoration

    Science.gov (United States)

    Doherty, Kyle; Butterfield, Bradley J.; Wood, Troy E.

    2017-01-01

    Land management agencies are increasing the use of native plant materials for vegetation treatments to restore ecosystem function and maintain natural ecological integrity. This shift toward the use of natives has highlighted a need to increase the diversity of materials available. A key problem is agreeing on how many, and which, new accessions should be developed. Here we describe new methods that address this problem. Our methods use climate data to calculate a climate similarity index between two points in a defined extent. This index can be used to predict relative performance of available accessions at a target site. In addition, the index can be used in combination with standard cluster analysis algorithms to quantify and maximize climate coverage (mean climate similarity), given a modeled range extent and a specified number of accessions. We demonstrate the utility of this latter feature by applying it to the extents of 11 western North American species with proven or potential use in restoration. First, a species-specific seed transfer map can be readily generated for a species by predicting performance for accessions currently available; this map can be readily updated to accommodate new accessions. Next, the increase in climate coverage achieved by adding successive accessions can be explored, yielding information that managers can use to balance ecological and economic considerations in determining how many accessions to develop. This approach identifies sampling sites, referred to as climate centers, which contribute unique, complementary, climate coverage to accessions on hand, thus providing explicit sampling guidance for both germplasm preservation and research. We examine how these and other features of our approach add to existing methods used to guide plant materials development and use. Finally, we discuss how these new methods provide a framework that could be used to coordinate native plant materials development, evaluation, and use across
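
The index-plus-clustering idea can be sketched in a few lines. The similarity form (inverse of one plus Euclidean distance in standardized climate space) and the greedy selection, rather than a full cluster analysis, are simplifying assumptions for illustration, not the authors' exact procedure:

```python
def similarity(a, b):
    """Climate similarity index between two sites: here simply the inverse of
    (1 + Euclidean distance) in standardized climate space (an assumed form)."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + d)

def greedy_accessions(sites, k):
    """Greedily pick k climate centers maximizing mean coverage, where each
    site is covered by its most similar chosen center."""
    chosen = []
    for _ in range(k):
        def mean_coverage(candidate):
            return sum(max(similarity(s, c) for c in chosen + [candidate])
                       for s in sites) / len(sites)
        best = max((s for s in sites if s not in chosen), key=mean_coverage)
        chosen.append(best)
    return chosen

# Toy species range with two climate clusters (standardized temperature,
# aridity); two accessions suffice to cover both clusters.
sites = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (3.0, 3.0), (3.1, 3.0)]
centers = greedy_accessions(sites, 2)
```

Plotting mean coverage against k for a real range gives exactly the diminishing-returns curve managers can use to trade ecological coverage against the cost of developing additional accessions.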

  9. Super-resolution reconstruction and higher-degree function deformation model based matching for Chang’E-1 lunar images

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This article intends to solve the matching problem of 2C-level lunar images from the Chang'E-1 (CE-1) lunar probe satellite. A line-scanner image matching method is proposed which represents deformation by a quadric function along the camera motion direction and is based on the deformation model for the imaging of relief terrain on the sensors of the satellite-borne three-line scanner camera. A precise matching is carried out for the normal-view, frontward-view, and backward-view images of CE-1 by combining the proposed method with the standard correlation method. A super-resolution (SR) reconstruction algorithm based on wavelet interpolation of non-uniformly sampled data is also adopted to realize SR reconstruction of CE-1 lunar images, which adds recognizable targets and exploits the CE-1 lunar images to the full.

  10. Super-resolution reconstruction and higher-degree function deformation model based matching for Chang'E-1 lunar images

    Institute of Scientific and Technical Information of China (English)

    LI LiChun; YU QiFeng; YUAN Yun; SHANG Yang; LU HongWei; SUN XiangYi

    2009-01-01

    This article intends to solve the matching problem of 2C-level lunar images from the Chang'E-1 (CE-1) lunar probe satellite. A line-scanner image matching method is proposed which represents deformation by a quadric function along the camera motion direction and is based on the deformation model for the imaging of relief terrain on the sensors of the satellite-borne three-line scanner camera. A precise matching is carried out for the normal-view, frontward-view, and backward-view images of CE-1 by combining the proposed method with the standard correlation method. A super-resolution (SR) reconstruction algorithm based on wavelet interpolation of non-uniformly sampled data is also adopted to realize SR reconstruction of CE-1 lunar images, which adds recognizable targets and exploits the CE-1 lunar images to the full.

  11. Modeled hydraulic redistribution by Helianthus annuus L. matches observed data only after model modification to include nighttime transpiration

    Science.gov (United States)

    Neumann, R. B.; Cardon, Z. G.; Rockwell, F. E.; Teshera-Levye, J.; Zwieniecki, M.; Holbrook, N. M.

    2013-12-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical, and ecological consequences of HR depend on the amount of redistributed water, while the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two eco-types of Helianthus annuus L. in split-pot experiments, we examined how well the widely used HR modeling formulation developed by Ryel et al. (2002) could match experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive nighttime transpiration, and though over the last decade it has become more widely recognized that nighttime transpiration occurs in multiple species and many ecosystems, the original Ryel et al. (2002) formulation does not include the effect of nighttime transpiration on HR. We developed and added a representation of nighttime transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and nighttime stomatal behavior changed, both influencing HR.

  12. Design of LNA at 5.8GHz with Cascode and Cascaded Techniques Using T-Matching Network for Wireless Applications

    Directory of Open Access Journals (Sweden)

    Abu Bakar Ibrahim

    2012-01-01

    This paper presents the design of a low-noise amplifier with cascode and cascaded techniques using a T-matching network, applicable for IEEE 802.16 standards. The amplifier uses the FHX76LP Low Noise SuperHEMT FET. The design simulation process uses Advanced Design System (ADS) software. The cascode and cascaded low-noise amplifier (LNA) produced a gain of 53.4 dB and a noise figure (NF) of 1.2 dB. The input reflection (S11) and output return loss (S22) are -24.3 dB and -23.9 dB, respectively. The input sensitivity is compliant with the IEEE 802.16 standards.
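
For cascaded LNA stages, the overall noise figure is governed by the Friis formula, under which the first stage's noise contribution dominates whenever its gain is high. A sketch with illustrative stage values (these are assumed numbers, not the measured figures of the design above):

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10.0)

def friis_noise_factor(stages):
    """Total noise factor of cascaded stages (Friis formula):
    F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    stages: list of (noise_factor, gain), both as linear ratios, not dB."""
    total_f, cum_gain = 0.0, 1.0
    for i, (f, g) in enumerate(stages):
        total_f += f if i == 0 else (f - 1.0) / cum_gain
        cum_gain *= g
    return total_f

# Two identical hypothetical stages: NF = 1.2 dB, gain = 26 dB each.
stage = (db_to_lin(1.2), db_to_lin(26.0))
nf_total_db = 10 * math.log10(friis_noise_factor([stage, stage]))
```

With 26 dB of first-stage gain, the cascade's noise figure stays within a few hundredths of a dB of the first stage's 1.2 dB, which is why cascaded designs concentrate on the input stage's noise match.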

  13. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  14. Analytical modelling of waveguide mode launchers for matched feed reflector systems

    DEFF Research Database (Denmark)

    Palvig, Michael Forum; Breinbjerg, Olav; Meincke, Peter

    2016-01-01

    Matched feed horns aim to cancel cross polarization generated in offset reflector systems. An analytical method for predicting the mode spectrum generated by inclusions in such horns, e.g. stubs and pins, is presented. The theory is based on the reciprocity theorem with the inclusions represented...

  15. Mesopic models : from brightness matching to visual performance in night-time driving: a review

    NARCIS (Netherlands)

    Eloholma, M.; Liesiö, M.; Halonen, L.; Walkey, H.; Goodman, T.; Alferdinck, J.W.A.M.; Frieding, A.; Schanda, J.; Bodrogi, P.; Várady, G.

    2005-01-01

    At present, suitable methods to evaluate the visual effectiveness of lighting products in the mesopic region are not available. The majority of spectral luminous efficiency functions obtained to date in the mesopic range have been acquired by heterochromatic brightness matching. However, the most re

  16. Matching with Commitments

    CERN Document Server

    Costello, Kevin; Tripathi, Pushkar

    2012-01-01

    We consider the following stochastic optimization problem first introduced by Chen et al. We are given the vertex set of a random graph where each possible edge is present with probability p_e. We do not know which edges are actually present unless we scan/probe an edge. However, whenever we probe an edge and find it to be present, we are constrained to picking the edge, and both its endpoints are deleted from the graph. We wish to find the maximum matching in this model. We compare our results against the optimal omniscient algorithm that knows the edges of the graph, and present a 0.573-factor algorithm using a novel sampling technique. We also prove that no algorithm can attain a factor better than 0.898 in this model.
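
The commitment constraint can be made concrete with a simple greedy probing strategy; this is the naive baseline, not the paper's 0.573-factor sampling algorithm:

```python
import random

def probe_matching(edges, probs, rng):
    """Probe edges in the given order; whenever a probed edge turns out to be
    present we must commit to it, removing both endpoints from the graph."""
    matched_vertices = set()
    matching = []
    for (u, v) in edges:
        if u in matched_vertices or v in matched_vertices:
            continue  # probing this edge could force an infeasible commitment
        if rng.random() < probs[(u, v)]:  # probe reveals the edge is present
            matching.append((u, v))
            matched_vertices.update((u, v))
    return matching

# Toy 4-vertex graph; every potential edge exists with probability 0.5.
rng = random.Random(0)
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
probs = {e: 0.5 for e in edges}
m = probe_matching(edges, probs, rng)
```

The interesting question, which the paper's sampling technique addresses, is the order in which to probe: committing early to a low-value edge can block two endpoints that a smarter order would have saved.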

  17. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
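
Of the four method families compared above, the reliability-based metric admits a particularly compact reading: the probability that model and experiment agree within a tolerance. The sketch below is a simplified empirical stand-in for that idea, not the paper's exact formulation:

```python
def reliability_metric(model_pred, observations, eps):
    """Reliability-based validation metric, read here simply as the fraction
    of experimental observations falling within a tolerance eps of the model
    prediction (a simplified stand-in for the full probabilistic version)."""
    hits = sum(1 for y in observations if abs(y - model_pred) < eps)
    return hits / len(observations)

# Hypothetical data: the model predicts 1.0; four measurements, tolerance 0.1.
r = reliability_metric(1.0, [0.95, 1.05, 1.5, 0.98], 0.1)  # → 0.75
```

Against this, hypothesis-testing approaches return an accept/reject decision at a chosen threshold, and area metrics measure the distance between the predicted and observed distributions rather than a coverage probability.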

  18. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

In the present paper, the scientific approach starts from the theoretical framework of the simulation concept and then moves to practical reality, providing simulation models for a broad range of inherent risks specific to any organization, and simulating those models using the @Risk (Palisade) software tool. The motivation behind this research lies in the need for simulation models that will allow the person in charge of decision taking i...

  19. Matching achievement contexts with implicit theories to maximize motivation after failure: a congruence model.

    Science.gov (United States)

    El-Alayli, Amani

    2006-12-01

    Previous research has shown that matching person variables with achievement contexts can produce the best motivational outcomes. The current study examines whether this is also true when matching entity and incremental beliefs with the appropriate motivational climate. Participants were led to believe that a personal attribute was fixed (entity belief) or malleable (incremental belief). After thinking that they failed a test that assessed the attribute, participants performed a second (related) task in a context that facilitated the pursuit of either performance or learning goals. Participants were expected to exhibit greater effort on the second task in the congruent conditions (entity belief plus performance goal climate and incremental belief plus learning goal climate) than in the incongruent conditions. These results were obtained, but only for participants who either valued competence on the attribute or had high achievement motivation. Results are discussed in terms of developing strategies for optimizing motivation in achievement settings.

  20. A holistic model for matching high-tech hearing aid features to elderly patients.

    Science.gov (United States)

    Johnson, C E; Danhauer, J L; Krishnamurti, S

    2000-12-01

    Successful hearing aid fittings using high-technology features for elderly patients require consideration of factors beyond results obtained from routine audiologic evaluations. A holistic hearing aid selection, fitting, and evaluation approach that considers patient characteristics from communication, physical, psychological, and social assessment domains is presented here along with a checklist and flowcharts for matching high-tech hearing aid features to older persons who are hearing aid candidates.

  1. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

It is usually a time-consuming process to set up, in real time, a 3D digital surface model (DSM) of an object with a complex surface. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, that is, applying digital close-range photogrammetry and CAD techniques on the project site to establish the DSM for simulating ancient architectures with complex surfaces. The method has proved very effective in practice.

  2. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    Science.gov (United States)

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

Biomechanical models are sensitive to the choice of model parameters; therefore, determination of accurate subject-specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements, the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
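The non-identifiability the authors report is easy to reproduce in miniature. The sketch below fits an invented lumped torque-angle curve to noisy synthetic strength data by naive random search (standing in for the genetic search); all function names, parameters, and values are illustrative, not the study's muscle model:

```python
import math, random

def torque(theta, fmax, theta0, width):
    # invented lumped torque-angle curve (stand-in for a muscle model)
    return fmax * math.exp(-((theta - theta0) / width) ** 2)

random.seed(1)
angles = [i * 0.1 for i in range(16)]              # knee angles, rad
true = (300.0, 0.8, 0.6)                           # "true" parameters
measured = [torque(a, *true) + random.gauss(0, 5) for a in angles]

# naive random search to "optimize to match isometric strength"
best, best_err = None, float("inf")
for _ in range(20000):
    cand = (random.uniform(100, 500), random.uniform(0.0, 1.6),
            random.uniform(0.2, 1.2))
    err = sum((torque(a, *cand) - m) ** 2 for a, m in zip(angles, measured))
    if err < best_err:
        best, best_err = cand, err

rmse = (best_err / len(angles)) ** 0.5
```

The fitted curve tracks the noisy measurements closely even when the recovered parameters differ from the true ones, mirroring the paper's observation that different parameter sets can reproduce similar strength curves.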

  3. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  4. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources

    Science.gov (United States)

    Sudhir, Dass; Bandyopadhyay, M.; Chakraborty, A.

    2016-02-01

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly in high power operation where plasma load may vary due to different reasons (e.g. pressure and power), online tuning of impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio activation and, due to maintenance issues, power probes are not allowed to be incorporated in the ion source design for plasma characterization. Therefore, for characterization and impedance matching, more remote schemes are envisaged. Two such schemes by the same authors are suggested in these regards, which are based on air core transformer model of inductive coupled plasma (ICP) [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. However, the influence of the RF field interaction with the plasma to determine its impedance, a physics code HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)] is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.
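The air-core transformer picture reduces to a one-line impedance calculation: the plasma acts as a lossy one-turn secondary reflected into the antenna primary. A sketch with invented component values follows (the actual model couples this to the HELIC physics code, which is not reproduced here):

```python
import math

def icp_input_impedance(f, L1, L2, M, Rp):
    """Input impedance of an ICP antenna modelled as an air-core
    transformer: primary inductance L1, plasma loop with resistance Rp
    and inductance L2, coupled through mutual inductance M."""
    w = 2 * math.pi * f
    Zs = complex(Rp, w * L2)                 # secondary (plasma) loop
    return complex(0.0, w * L1) + (w * M) ** 2 / Zs

# illustrative numbers only: 1 MHz drive, microhenry-scale coils
Z = icp_input_impedance(f=1e6, L1=5e-6, L2=1e-6, M=1e-6, Rp=2.0)
```

The real part of Z is the plasma loading seen by the generator; an online matching network would be tuned against exactly this quantity.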

  5. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources

    Energy Technology Data Exchange (ETDEWEB)

    Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Bandyopadhyay, M.; Chakraborty, A. [ITER-India, Institute for Plasma Research, A-29 GIDC, Sec-25, Gandhinagar, 382016 Gujarat (India)

    2016-02-15

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly in high power operation where plasma load may vary due to different reasons (e.g. pressure and power), online tuning of impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio activation and, due to maintenance issues, power probes are not allowed to be incorporated in the ion source design for plasma characterization. Therefore, for characterization and impedance matching, more remote schemes are envisaged. Two such schemes by the same authors are suggested in these regards, which are based on air core transformer model of inductive coupled plasma (ICP) [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. However, the influence of the RF field interaction with the plasma to determine its impedance, a physics code HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)] is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.

  7. Application of Convolution Perfectly Matched Layer in MRTD scattering model for non-spherical aerosol particles and its performance analysis

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Jiang, Zidong; Liu, Lei; Chen, Ming

    2017-10-01

The performance of the absorbing boundary condition (ABC) is an important factor influencing the simulation accuracy of the MRTD (Multi-Resolution Time-Domain) scattering model for non-spherical aerosol particles. To this end, the Convolution Perfectly Matched Layer (CPML), an excellent ABC in the FDTD scheme, is generalized and applied to the MRTD scattering model developed by our team. In this model, the time domain is discretized by an exponential differential scheme, and the discretization of the space domain is implemented by the Galerkin principle. To evaluate the performance of CPML, its simulation results are compared with those of BPML (Berenger's Perfectly Matched Layer) and ADE-PML (Perfectly Matched Layer with Auxiliary Differential Equation) for spherical and non-spherical particles, and their simulation errors are analyzed as well. The simulation results show that, for scattering phase matrices, the performance of CPML is better than that of BPML; the computational accuracy of CPML is comparable to that of ADE-PML on the whole, but at scattering angles where phase matrix elements fluctuate sharply, the performance of CPML is slightly better than that of ADE-PML. After the orientation averaging process, the differences among the results of different ABCs are reduced to some extent. It can also be found that ABCs have a much weaker influence on integral scattering parameters (such as extinction and absorption efficiencies) than on scattering phase matrices; this phenomenon can be explained by the error averaging process in the numerical volume integration.

  8. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…
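The macroscopic models demonstrate Bragg diffraction, whose condition n·λ = 2d·sin θ is simple to evaluate; a sketch with illustrative microwave numbers (the wavelength and plane spacing are invented, not the article's):

```python
import math

def bragg_angles(wavelength_cm, d_cm):
    """Diffraction angles (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    angles = []
    n = 1
    while n * wavelength_cm <= 2 * d_cm:
        angles.append(math.degrees(math.asin(n * wavelength_cm / (2 * d_cm))))
        n += 1
    return angles

# e.g. 3 cm microwaves on a styrofoam "crystal" with 5 cm plane spacing
orders = bragg_angles(3.0, 5.0)
```

Each returned angle corresponds to one diffraction order observable on the diffractometer table.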

  9. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

To validate rigorously the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure quantitatively the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using an average and a variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique becomes more efficient and accurate than the cross-validation technique, because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
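A kriging model indeed yields both a predictive mean and variance in closed form, which is what such criteria exploit. A minimal simple-kriging sketch follows (the kernel, length scale, and test function are invented; this is not the authors' sequential-sampling criterion itself):

```python
import numpy as np

def kriging(x_train, y_train, x_test, length=0.2, nugget=1e-8):
    """Simple kriging with a Gaussian covariance (unit process variance);
    returns the predictive mean and variance at x_test."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    K = k(x_train, x_train) + nugget * np.eye(len(x_train))
    Ks = k(x_train, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
    return mean, var

x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x)
m, v = kriging(x, y, np.array([0.5, x[3]]))
```

The variance collapses to (almost) zero at sampled points and grows between them, which is exactly the information a sequential sampling stop criterion feeds on.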

  10. Influence of Surgical Technique, Performance Status, and Peritonitis Exposure on Surgical Site Infection in Acute Complicated Diverticulitis: A Matched Case-Control Study.

    Science.gov (United States)

    Zonta, Sandro; De Martino, Michela; Podetta, Michele; Viganò, Jacopo; Dominioni, Tommaso; Picheo, Roberto; Cobianchi, Lorenzo; Alessiani, Mario; Dionigi, Paolo

    2015-10-01

Acute generalized peritonitis secondary to complicated diverticulitis is a life-threatening condition; the standard treatment is surgery. Despite advances in peri-operative care, this condition is accompanied by a high peri-operative complication rate (22%-25%). No definitive evidence is available to recommend a preferred surgical technique in patients with Hinchey stage III/IV disease. A matched case-control study enrolling patients from four surgical units at Italian university hospitals was planned to assess the most appropriate surgical treatment on the basis of patient performance status and peritonitis exposure, with the aim of minimizing surgical site infection (SSI). A series of 1,175 patients undergoing surgery for Hinchey III/IV peritonitis in 2003-2013 were analyzed. Cases (n=145) were selected from among those patients who developed an SSI; the case-to-control ratio was 1:3. Cases and controls were matched by age, gender, body mass index, and Hinchey grade. We considered three surgical techniques: T1=Hartmann's procedure; T2=sigmoid resection, anastomosis, and ileostomy; and T3=sigmoid resection and anastomosis. Six scoring systems were analyzed to assess performance status; subsequently, patients were divided into low, mild, and high risk (LR, MR, HR) according to the system producing the highest area under the curve. We classified peritonitis exposure as P1=24 h. Univariable and multivariable analyses were performed. The Apgar scoring system defined the risk groups according to performance status. The lowest SSI risk was expected when applying T3 in P1 (OR=0.22) and P2 (OR=0.5) for LR and in P1 (OR=0.63) for MR; T2 in P2 (OR=0.5) in LR and in P1 (OR=0.61) in MR; T1 in P3 (OR=0.56) in LR, in P2 (OR=0.63) and P3 (OR=0.54) in MR patients, and in each P subgroup (OR=0.93; 0.97; 1.01) in HR. Pre-operative assessment based on the Apgar scoring system integrated with peritonitis exposure in complicated diverticulitis may offer a ready-to-use tool for reducing SSI.

  11. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we point out the workings of various vehicle make and model recognition techniques and compare these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we conclude that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.
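The K-NN classification step mentioned above is straightforward; a toy sketch over invented two-dimensional feature vectors follows (real VMMR descriptors such as corner-strength histograms are far higher-dimensional):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """k-NN classifier over feature vectors (a stand-in for the
    corner-strength descriptors used in VMMR)."""
    dists = sorted((math.dist(f, query), label) for f, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# invented training descriptors: two makes/models, two features each
train = [((0.9, 0.1), "sedan-A"), ((0.8, 0.2), "sedan-A"),
         ((0.2, 0.9), "suv-B"), ((0.1, 0.8), "suv-B")]
pred = knn_predict(train, (0.85, 0.15))
```

The query descriptor falls near the "sedan-A" cluster, so the majority vote among its three nearest neighbours returns that label.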

  12. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making the comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach which is calculating model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters which we hence call dynamical leg parameters.
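The SLIP stance phase referred to above takes only a few lines to integrate. A hedged sketch with illustrative parameter values follows (the paper's ESLIP extension and its multiple-step fitting procedure are not reproduced):

```python
import math

def slip_stance(m=80.0, k=20000.0, L0=1.0, v0=(3.0, 0.0), y0=0.97, dt=1e-4):
    """Integrate one stance phase of the spring-loaded inverted pendulum
    (foot fixed at the origin); returns the trajectory of the point mass."""
    g = 9.81
    x = -math.sqrt(max(L0 ** 2 - y0 ** 2, 0.0))   # touchdown geometry
    y = y0
    vx, vy = v0
    traj = [(x, y)]
    for _ in range(200000):                       # safety bound on steps
        L = math.hypot(x, y)
        F = k * (L0 - L)                          # spring force along the leg
        ax = F * (x / L) / m
        ay = F * (y / L) / m - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        traj.append((x, y))
        if math.hypot(x, y) >= L0:                # leg back to rest length
            break                                 # -> takeoff
    return traj

traj = slip_stance()
```

The mass vaults over the foot while the leg compresses and recoils; fitting k, L0, and touchdown conditions of such a model to measured center-of-mass data is exactly the inverse problem the paper solves.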

  13. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  14. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  15. Testing the transtheoretical model for fruit intake: comparing web-based tailored stage-matched and stage-mismatched feedback.

    Science.gov (United States)

    de Vet, Emely; de Nooijer, Jascha; de Vries, Nanne K; Brug, Johannes

    2008-04-01

    A match-mismatch test was conducted to test the transtheoretical model applied to fruit intake. Precontemplators and contemplators were randomly assigned to receive a web-based individualized precontemplation feedback (PCF), contemplation feedback (CF) or action feedback (AF) letter promoting fruit intake. Immediately and 1 week after reading this letter, post-test measures were obtained. Fruit intake increased significantly between pre- and post-test in contemplators, but not in precontemplators. No differences between the feedback conditions were found in fruit intake, stage progression, use or credibility of the feedback in precontemplators and contemplators. In precontemplators, also no differences between the conditions were found in personal relevance of the feedback. Contemplators, however, rated AF as more personally relevant than PCF or CF. To conclude, the present study failed to show superiority of stage-matched information in the promotion of fruit intake.

  16. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
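Static (Guyan) condensation, the baseline against which the dynamic condensation techniques in the book are compared, can be sketched directly. The 4-DOF spring-chain stiffness matrix below is invented for illustration:

```python
import numpy as np

def guyan_condense(K, master):
    """Static (Guyan) condensation: reduce stiffness K to the master DOFs
    by eliminating the slave DOFs, exact for static loads on the masters."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kmm = K[np.ix_(master, master)]
    Kms = K[np.ix_(master, slave)]
    Kss = K[np.ix_(slave, slave)]
    return Kmm - Kms @ np.linalg.solve(Kss, Kms.T)

# 4-DOF chain of unit springs, fixed at one end
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
Kr = guyan_condense(K, master=[0, 3])
```

For loads applied only at the master DOFs, the reduced model reproduces the full static solution exactly; dynamic condensation methods (SEREP, iterative-dynamic, etc.) extend this idea to inertia effects.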

  17. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures... We also develop a technique for range reporting problems in the pointer machine and the I/O-model. With this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  18. A Phenomenological Model for the Intracluster Medium that matches X-ray and Sunyaev-Zel'dovich observations

    CERN Document Server

    Zandanel, Fabio

    2014-01-01

Cosmological hydrodynamical simulations of galaxy clusters are still challenged to produce a model for the intracluster medium that matches all aspects of current X-ray and Sunyaev-Zel'dovich observations. To facilitate such comparisons with future simulations and to enable realistic cluster population studies for modeling, e.g., non-thermal emission processes, we construct a phenomenological model for the intracluster medium that is based on a representative sample of observed X-ray clusters. We create a mock galaxy cluster catalog based on the large collisionless N-body simulation MultiDark, by assigning our gas density model to each dark matter cluster halo. Our clusters are classified as cool-core and non cool-core according to a dynamical disturbance parameter. We demonstrate that our gas model matches the various observed Sunyaev-Zel'dovich and X-ray scaling relations as well as the X-ray luminosity function, thus enabling us to build a reliable mock catalog for present surveys and forecasts for future expe...

  19. Integration of the history matching process with the geostatistical modeling in petroleum reservoirs; Integracao do processo de ajuste de historico com a modelagem geoestatistica em reservatorios de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Maschio, Celio; Schiozer, Denis Jose [Universidade Estadual de Campinas (FEM/UNICAMP), SP (Brazil). Faculdade de Engenharia Mecanica], Emails: celio@dep.fem.unicamp.br, denis@dep.fem.unicamp.br; Vidal, Alexandre Campane [Universidade Estadual de Campinas (IG/UNICAMP), SP (Brazil). Inst. de Geociencias. Dept. de Geologia e Recursos Naturais], E-mail: vidal@ige.unicamp.br

    2008-03-15

The production history matching process, by which the numerical model is calibrated to reproduce the observed field production, is normally carried out separately from the geological modeling. Generally, the construction of the geological model and the history matching are performed by different teams, so there is commonly no coupling, or only a weak coupling, between the two areas. This can lead, in the history matching step, to inadequate changes in the geological model, sometimes resulting in geologically inconsistent models. This work proposes an integration between geostatistical modeling and history matching through the incorporation of geostatistical realizations into the assisted process. In this way, reservoir parameters such as rock-fluid interaction properties, as well as the images resulting from the realizations, are considered in the history matching. In order to find the best parameter combination that adjusts the model to the observed data, an optimization routine based on a genetic algorithm is used. The proposed methodology is applied to a synthetic realistic reservoir model. The history matching is carried out in the conventional manner and also considering the geostatistical images as history parameters, and the two processes are then compared. The results show the feasibility and the advantages of this integration between history matching and geostatistical modeling. (author)
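The genetic-algorithm loop at the heart of assisted history matching can be sketched against a toy proxy. Here an invented decline curve stands in for the reservoir simulator, and the GA operators are generic, not the authors'; all names and values are illustrative:

```python
import math, random

random.seed(7)

def simulate(q0, d, times):
    # toy decline-curve proxy standing in for the reservoir simulator
    return [q0 * math.exp(-d * t) for t in times]

times = list(range(20))
observed = simulate(500.0, 0.15, times)        # synthetic "history"

def misfit(ind):
    sim = simulate(*ind, times)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

# minimal GA: elitism, blend crossover, multiplicative Gaussian mutation
pop = [(random.uniform(100, 1000), random.uniform(0.01, 0.5))
       for _ in range(40)]
for gen in range(60):
    elite = sorted(pop, key=misfit)[:10]
    children = []
    while len(children) < 30:
        a, b = random.sample(elite, 2)
        w = random.random()
        child = tuple(w * p + (1 - w) * q for p, q in zip(a, b))
        child = tuple(g * random.gauss(1, 0.05) for g in child)
        children.append(child)
    pop = elite + children
best = min(pop, key=misfit)
```

In the paper's workflow, the chromosome would also encode which geostatistical realization (image) to use, so the geological model is varied jointly with the flow parameters.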

  20. An HLA matched donor! An HLA matched donor? What do you mean by: HLA matched donor?

    Science.gov (United States)

    van Rood, J J; Oudshoorn, M

    1998-07-01

    The term 'an HLA matched donor' is in general used without giving exact information on the level of resolution of the HLA typing. This can lead to misunderstandings. A proposal is formulated to agree on using six match categories according to the HLA typing technique used to indicate the level of confidence of the matching.

  1. Towards optimal packed string matching

    DEFF Research Database (Denmark)

    Ben-Kiki, Oren; Bille, Philip; Breslauer, Dany

    2014-01-01

In the packed string matching problem, it is assumed that each machine word can accommodate up to α characters, thus an n-character string occupies n/α memory words. (a) We extend the Crochemore–Perrin constant-space O(n)-time string-matching algorithm to run in optimal O(n/α) time, and even in real-time, achieving a factor α speedup over traditional algorithms that examine each character individually. Our macro-level algorithm only uses the standard AC0 instructions of the word-RAM model (i.e. no integer multiplication) plus two specialized micro-level AC0 word-size packed-string instructions. The main word ... matching work. (b) We also consider the complexity of the packed string matching problem in the classical word-RAM model in the absence of the specialized micro-level instructions wssm and wslm. We propose micro-level algorithms for the theoretically efficient emulation using parallel algorithms techniques...
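Packing α characters into one machine word turns α character comparisons into a single word comparison. A hedged Python sketch follows, using integers as stand-ins for words; this illustrates the packing idea only, not the Crochemore-Perrin algorithm or the wssm/wslm instructions:

```python
def packed_find(text: bytes, pat: bytes, alpha=8):
    """Naive packed matching: compare up to `alpha` characters per step
    by packing byte chunks into integers (stand-ins for machine words)."""
    n, m = len(text), len(pat)
    pieces = [pat[i:i + alpha] for i in range(0, m, alpha)]
    pwords = [(int.from_bytes(p, 'little'), len(p)) for p in pieces]
    hits = []
    for s in range(n - m + 1):
        off = s
        for w, length in pwords:
            if int.from_bytes(text[off:off + length], 'little') != w:
                break                  # word mismatch: reject this shift
            off += length
        else:
            hits.append(s)             # every packed word matched
    return hits

hits = packed_find(b"abracadabra abracadabra", b"cadabra")
```

Each inner-loop iteration verifies up to eight characters with one integer comparison, which is the source of the factor-α speedup the abstract describes.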

  2. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Matsunobu, Y; Shiotsuki, K [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka (Japan); Morishita, J [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, JP (Japan)

    2015-06-15

Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater compared to those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body.
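The NCC similarity used above is a short computation; a sketch on synthetic arrays standing in for the bone images (the data here are random, purely illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped images."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                      # "ante-mortem" scan
same = img + rng.normal(0, 0.02, img.shape)     # re-scan of same "body"
other = rng.random((32, 32))                    # a different "body"
```

A re-scan of the same structure scores near 1 while an unrelated image scores near 0, which is the separation the phantom study measures.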

  3. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  4. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

Background: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo
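Pseudo values are leave-one-out (jackknife) transforms of the Kaplan-Meier estimate, turning censored survival data into per-patient outcomes that standard regression and machine-learning techniques can consume. A minimal sketch (the data are invented; with no censoring, the pseudo value provably reduces to the indicator 1{T_i > t}, which the example exploits):

```python
def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t); events: 1 = death, 0 = censored."""
    s, at_risk = 1.0, len(times)
    for time, event in sorted(zip(times, events)):
        if time > t:
            break
        if event:
            s *= 1 - 1 / at_risk
        at_risk -= 1
    return s

def pseudo_values(times, events, t):
    """Jackknife pseudo-observations for S(t)."""
    n = len(times)
    full = km_survival(times, events, t)
    return [n * full
            - (n - 1) * km_survival(times[:i] + times[i+1:],
                                    events[:i] + events[i+1:], t)
            for i in range(n)]

times = [2.0, 5.0, 3.0, 8.0, 1.0]
events = [1, 1, 1, 1, 1]        # no censoring in this toy example
pv = pseudo_values(times, events, 4.0)
```

With censoring present, the pseudo values depart from 0/1 indicators while keeping the same expectation, which is what makes them usable as regression targets.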

  5. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix form of the conventional control mixer concept into an LTI dynamic system form. The H_inf control technique is employed for these dynamic module designs after an augmented control sy...

  6. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  7. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, many researchers use terrestrial images with close range photogrammetry, DSMs, and texture mapping. This paper begins with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and the other based on data input techniques (photogrammetry and laser techniques). After a detailed study of these, we present the conclusions of this research, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  8. A Pilot Participant Observation Study of the Environment in a Program for Young Offenders from a Conceptual Level Matching Model Perspective.

    Science.gov (United States)

    Reitsma-Street, Marge

    1988-01-01

    Discusses conceptual clarification of dimensions in the treatment environment of a residential unit for young offenders. Examines ideas of structure, control, contemporaneous, and developmental matching from the Conceptual Level Matching Model. Describes specification of types, quality, and settings of staff-youth interactions. Addresses the…

  9. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis

    CERN Document Server

    Jacobson, Seth Andrew; Rossi, Alessandro; Scheeres, Daniel J

    2016-01-01

    From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis can be consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. The foundation of this model is the asteroid rotation model of Marzari et al. (2011), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis and the binary evolution model of Jacobson & Scheeres (2011). The asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, steady-state mass ratio fraction and the...

  10. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM), and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
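The pseudo-value idea the study relies on can be sketched directly: the pseudo value for observation i at time t is n*S(t) - (n-1)*S_(-i)(t), where S is the Kaplan-Meier estimator and S_(-i) is the same estimator with observation i left out. A minimal illustration (not the authors' code; toy uncensored data):

```python
import numpy as np

def km_surv(times, events, t):
    """Kaplan-Meier survival estimate S(t); events: 1 = death, 0 = censored.
    At tied times, deaths are processed before censorings."""
    order = np.lexsort((1 - events, times))   # time ascending, deaths first
    times, events = times[order], events[order]
    s, n = 1.0, len(times)
    for i in range(n):
        if times[i] > t:
            break
        if events[i]:
            s *= 1.0 - 1.0 / (n - i)          # at-risk set shrinks by one each obs
    return s

def pseudo_values(times, events, t):
    """Jackknife pseudo values for S(t): n*S(t) - (n-1)*S_leave-one-out(t)."""
    n = len(times)
    full = km_surv(times, events, t)
    keep = np.ones(n, dtype=bool)
    pv = np.empty(n)
    for i in range(n):
        keep[i] = False
        pv[i] = n * full - (n - 1) * km_surv(times[keep], events[keep], t)
        keep[i] = True
    return pv

t_obs = np.array([1., 2., 3., 4., 5.])
d = np.ones(5, dtype=int)                     # no censoring in this toy example
print(pseudo_values(t_obs, d, 2.5))           # -> indicators [0. 0. 1. 1. 1.]
```

With no censoring the pseudo values reduce to the survival indicators I(T_i > t); with censoring they become continuous surrogates that any regression or machine-learning technique can consume, which is the point of the paper.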

  11. Learning Graph Matching

    CERN Document Server

    Caetano, Tiberio S; Cheng, Li; Le, Quoc V; Smola, Alex J

    2008-01-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the `labels' are ma...
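The quadratic assignment objective described here can be written down compactly: a linear node-compatibility term plus a quadratic edge-compatibility term, maximized over permutations. A toy sketch (illustrative only; the affinity values are made up, and brute force is feasible only for tiny graphs, which is exactly why the NP-hardness noted above motivates approximate solvers):

```python
import itertools
import numpy as np

def qap_score(A, B, node_aff, perm):
    """Graph-matching objective: linear node-compatibility term plus
    quadratic edge-compatibility term, as in a quadratic assignment problem."""
    n = len(perm)
    lin = sum(node_aff[i, perm[i]] for i in range(n))
    quad = sum(A[i, j] * B[perm[i], perm[j]]
               for i in range(n) for j in range(n))
    return lin + quad

def best_match(A, B, node_aff):
    """Exhaustive search over all permutations (tiny graphs only)."""
    n = A.shape[0]
    return max(itertools.permutations(range(n)),
               key=lambda p: qap_score(A, B, node_aff, p))

# two copies of a 3-node path graph; node affinities favour the identity map
A = B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
aff = 2 * np.eye(3)
print(best_match(A, B, aff))   # -> (0, 1, 2)
```

Learning graph matching, in the paper's sense, means estimating `node_aff` (and the edge weights) from example matchings rather than fixing them by hand as done here.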

  12. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  13. Do projections from bioclimatic envelope models and climate change metrics match?

    DEFF Research Database (Denmark)

    Garcia, Raquel A.; Cabeza, Mar; Altwegg, Res

    2016-01-01

    for sub-Saharan Africa with ensembles of bioclimatic envelope models for 2723 species of amphibians, snakes, mammals and birds. For each taxonomic group, we performed three comparisons between the two approaches: (1) is projected change in local climatic suitability (models) greater in grid cells...

  14. Dynamic force matching: A method for constructing dynamical coarse-grained models with realistic time dependence

    Energy Technology Data Exchange (ETDEWEB)

    Davtyan, Aram; Dama, James F.; Voth, Gregory A. [Department of Chemistry, The James Franck Institute, Institute for Biophysical Dynamics, and Computation Institute, The University of Chicago, Chicago, Illinois 60637 (United States); Andersen, Hans C., E-mail: hca@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States)

    2015-04-21

    Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. 
We present tests of this method on a series of simple examples that demonstrate that

  15. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade, many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well provided that one has a model or a valid representation of activity. In reality, however, this is usually not the case. Models that do exist were in many cases hand-crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, theory, and the techniques which we have developed, as well as the results of a set of experiments.

  16. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman. Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  17. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    DEFF Research Database (Denmark)

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan

    2002-01-01

    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM, and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales and market...

  18. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    Energy Technology Data Exchange (ETDEWEB)

    Sardanyes, Josep [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain)], E-mail: josep.sardanes@upf.edu; Sole, Ricard V. [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain); Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)

    2008-01-21

    A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities, with a strength of interaction influenced by a pair of haploid diallelic loci, is studied with a deterministic, time-continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations.
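For orientation, the classic two-species Lotka-Volterra system that such genotype-structured models build on can be integrated numerically. The sketch below uses an RK4 step and illustrative parameter values; it is not the diallelic model of the paper:

```python
import numpy as np

def rk4_step(f, s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# classic Lotka-Volterra vector field; a, b, c, d are illustrative values
a, b, c, d = 1.0, 0.5, 0.8, 0.3
f = lambda s: np.array([s[0] * (a - b * s[1]),    # prey growth minus predation
                        s[1] * (d * s[0] - c)])   # predator growth minus death

state = np.array([2.0, 1.0])          # initial prey, predator densities
traj = [state]
for _ in range(5000):
    traj.append(rk4_step(f, traj[-1], 0.01))
traj = np.array(traj)                 # closed orbits: sustained oscillations
```

The system conserves V = d*x - c*ln(x) + b*y - a*ln(y) along trajectories, so the drift of V is a convenient check on the integrator; the Hopf bifurcation in the paper's richer model replaces these neutral cycles with a genuine limit cycle.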

  19. Crystallography of Zr poisoning of Al-Ti-B grain refinement using edge-to-edge matching model

    Institute of Scientific and Technical Information of China (English)

    黄元春; 肖政兵; 刘宇

    2013-01-01

    The mechanism of zirconium poisoning on the grain-refining efficiency of an Al-Ti-B based grain refiner was studied. The experiment was conducted by melting Al-5Ti-1B and Al-3Zr master alloys together. The edge-to-edge matching model was used to investigate and compare the orientation relationships between the binary intermetallic compounds present in the Al-Ti-B-Zr system. The results show that the poisoning effect probably results from the combination of Al3Zr with Al3Ti and the decreased amount of Ti solute, since Al3Ti particles have good crystallographic relationships with Al3Zr. In total, six orientation relationships may be present between them, and these play vital roles in grain refinement. TiB2 particles appear to remain unchanged because of a relatively large misfit. Only one orientation relationship may be present between them, preventing the Al3Zr phase from forming on the surface of TiB2, though TiB2 is agglomerated. The theoretical calculation agrees well with the experimental results. The edge-to-edge matching model proves to be a useful tool for discovering the orientation relationships between phases.

  20. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    . In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  1. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves)

  2. Modeling of inter-sample variation in flow cytometric data with the joint clustering and matching procedure.

    Science.gov (United States)

    Lee, Sharon X; McLachlan, Geoffrey J; Pyne, Saumyadipta

    2016-01-01

    We present an algorithm for modeling flow cytometry data in the presence of large inter-sample variation. Large-scale cytometry datasets often exhibit some within-class variation due to technical effects such as instrumental differences and variations in data acquisition, as well as subtle biological heterogeneity within the class of samples. Failure to account for such variations in the model may lead to inaccurate matching of populations across a batch of samples and poor performance in classification of unlabeled samples. In this paper, we describe the Joint Clustering and Matching (JCM) procedure for simultaneous segmentation and alignment of cell populations across multiple samples. Under the JCM framework, a multivariate mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample such that the components in the mixture model may correspond to the various populations of cells, which have similar expressions of markers (that is, clusters), in the composition of the sample. For each class of samples, an overall class template is formed by the adoption of random-effects terms to model the inter-sample variation within a class. The construction of a parametric template for each class allows for direct quantification of the differences between the template and each sample, and also between each pair of samples, both within and between classes. The classification of a new unclassified sample is then undertaken by assigning the unclassified sample to the class that minimizes the distance between its fitted mixture density and each class density as provided by the class templates. For illustration, we use a symmetric form of the Kullback-Leibler divergence as a distance measure between two densities, but other distance measures can also be applied. We show and demonstrate on four real datasets how the JCM procedure can be used to carry out the tasks of automated clustering and alignment of cell
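For two Gaussian class templates, the symmetric Kullback-Leibler distance mentioned above has a closed form. A sketch (single Gaussians rather than the full mixtures of the JCM framework, with made-up template parameters):

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL(N(m0,S0) || N(m1,S1)) for multivariate Gaussians, in nats."""
    k = len(m0)
    S1inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def sym_kl(m0, S0, m1, S1):
    """Symmetric KL divergence, usable as a distance between two templates."""
    return kl_gauss(m0, S0, m1, S1) + kl_gauss(m1, S1, m0, S0)

m_a, S_a = np.zeros(2), np.eye(2)                    # hypothetical template A
m_b, S_b = np.array([1.0, 0.0]), np.diag([2.0, 0.5]) # hypothetical template B
print(sym_kl(m_a, S_a, m_a, S_a))   # identical templates -> 0.0
print(sym_kl(m_a, S_a, m_b, S_b))   # positive for distinct templates
```

For mixture densities the divergence has no closed form and is typically estimated by sampling; classifying a new sample then amounts to picking the class template at minimum distance, as described in the abstract.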

  3. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
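The POD step that this "nonlinear extension" builds on can be sketched in a few lines: collect solution snapshots, take an SVD, and keep the leading left singular vectors as the reduced basis. The snapshot data below is synthetic (two decaying spatial modes standing in for PDE solutions), so two POD modes suffice by construction:

```python
import numpy as np

# synthetic snapshot matrix: each column is the "PDE state" at one time instant,
# built from two decaying spatial modes, so its numerical rank is exactly 2
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
snaps = np.array([np.sin(np.pi * x) * np.exp(-ti)
                  + 0.1 * np.sin(3 * np.pi * x) * np.exp(-9 * ti)
                  for ti in t]).T

# POD: left singular vectors, ordered by energy content
U, sv, _ = np.linalg.svd(snaps, full_matrices=False)
modes = U[:, :2]                 # truncated POD basis
coeffs = modes.T @ snaps         # reduced (2-dimensional) coordinates
recon = modes @ coeffs
rel_err = np.linalg.norm(recon - snaps) / np.linalg.norm(snaps)
print(rel_err)                   # ~machine precision: two modes capture everything here
```

A POD-Galerkin reduced model would then project the PDE's right-hand side onto `modes`; the paper's point is to replace this linear subspace with a nonlinear manifold learned from the data.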

  4. Matching a surface complexation model with ab initio molecular dynamics: montmorillonite case

    Energy Technology Data Exchange (ETDEWEB)

    Kulik, D.A.; Churakov, S.V. [Paul Scherrer Institute, Nuclear Energy and Safety Dpt., Lab. for Waste Management, CH-5232 Villigen PSI (Switzerland)

    2005-07-01

    Speciation modelling of sorption on mineral-water interfaces is performed with the help of surface complexation models (SCM), suitable for diluted suspensions that seem to reach adsorption equilibrium within laboratory times. Electrostatic SCMs need several input parameters even for a relatively simple oxide mineral surface. Moreover, the electrolyte ion adsorption constants in triple layer (TL) or basic Stern (BS) models depend on the inner layer capacitance density Cl, but a clear physical understanding of this parameter is missing so far. SCMs can fit acidimetric or metal titration data well at quite different combinations of input parameters, and this fact casts doubt on any interpretation of fitted parameter values in terms of microscopic physicochemical mechanisms. The problem is even deeper in SCMs for clay minerals like montmorillonite, which have at least two surface types: the edges exposing different (aluminol and silanol) functional groups, and the basal siloxane planes with permanent charge and ion exchange. A feasible way to overcome the caveats of SCMs is seen nowadays in relying on crystallographic data and ab initio calculations to restrict the EDL setup, species stoichiometries, and input parameter values when constructing the adsorption model. The aim of this contribution is to discuss how recent advances in sample surface characterization and in quantum-chemistry calculations for pyrophyllite can help in putting together a multi-site-surface electrostatic SCM for montmorillonite implemented in the GEM approach. The quality of macroscopic model fits is checked against the titration data. (authors)

  5. Optimal Packed String Matching

    DEFF Research Database (Denmark)

    Ben-Kiki, Oren; Bille, Philip; Breslauer, Dany

    2011-01-01

    In the packed string matching problem, each machine word accommodates α characters, thus an n-character text occupies n/α memory words. We extend the Crochemore-Perrin constant-space O(n)-time string matching algorithm to run in optimal O(n/α) time and even in real-time, achieving a factor-α speedup over traditional algorithms that examine each character individually. Our solution can be efficiently implemented, unlike prior theoretical packed string matching work. We adapt the standard RAM model and only use its AC0 instructions (i.e., no multiplication) plus two specialized AC0 packed string...
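The packing idea can be illustrated with a toy sketch of word-at-a-time comparison (this is only the packing concept in Python, not the Crochemore-Perrin-based algorithm of the paper; the `word` parameter plays the role of α):

```python
def packed_find(text: bytes, pattern: bytes, word: int = 8) -> int:
    """Toy packed search: compare up to `word` bytes in one bulk operation
    (standing in for a single machine-word comparison), then verify any
    remaining tail. Returns the index of the first occurrence, or -1."""
    m = len(pattern)
    if m == 0:
        return 0
    k = min(word, m)
    head = pattern[:k]
    for i in range(len(text) - m + 1):
        # one bulk k-byte comparison replaces up to k per-character tests
        if text[i:i + k] == head and text[i + k:i + m] == pattern[k:]:
            return i
    return -1

print(packed_find(b"model matching techniques", b"matching"))   # -> 6
```

A real word-RAM implementation would load α characters into one register and compare with a single instruction; the factor-α speedup in the abstract comes from also advancing the search α positions per word rather than one position at a time as this sketch does.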

  6. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
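The contrast between the two surrogate types is easy to reproduce: a least-squares quadratic cannot follow a response with multiple local extrema, while an interpolating model passes through every sample. The sketch below uses a Gaussian RBF interpolant as a simple stand-in for kriging (the test function and parameters are made up, not from the paper):

```python
import numpy as np

def quad_fit(x, y):
    """Least-squares quadratic surrogate y ~ c0 + c1*x + c2*x^2."""
    c, *_ = np.linalg.lstsq(np.vander(x, 3, increasing=True), y, rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), 3, increasing=True) @ c

def rbf_interp(x, y, eps=4.0):
    """Gaussian RBF interpolant, a simple stand-in for a kriging predictor."""
    K = np.exp(-eps * (x[:, None] - x[None, :]) ** 2)
    w = np.linalg.solve(K, y)
    return lambda t: np.exp(-eps * (np.atleast_1d(t)[:, None] - x[None, :]) ** 2) @ w

x = np.linspace(0.0, 4.0, 9)
y = np.sin(2.0 * x)                    # response with multiple local extrema
poly, krig = quad_fit(x, y), rbf_interp(x, y)
print(np.max(np.abs(poly(x) - y)))     # quadratic cannot follow the wiggles
print(np.max(np.abs(krig(x) - y)))     # interpolant reproduces the data
```

The trade-off in the abstract shows up directly: the interpolant needs an O(n^3) linear solve and a kernel-width choice, while the quadratic fit is cheap and robust but biased whenever the response is multimodal.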

  7. L-Tree Match: A New Data Extraction Model and Algorithm for Huge Text Stream with Noises

    Institute of Scientific and Technical Information of China (English)

    Xu-Bin Deng; Yang-Yong Zhu

    2005-01-01

    In this paper, a new method, named L-tree match, is presented for extracting data from complex data sources. Firstly, based on the data extraction logic presented in this work, a new data extraction model is constructed in which model components are structurally correlated via a generalized template. Secondly, a database-populating mechanism is built, along with some object-manipulating operations needed for flexible database design, to support data extraction from huge text streams. Thirdly, top-down and bottom-up strategies are combined to design a new extraction algorithm that can extract data from data sources with optional, unordered, nested, and/or noisy components. Lastly, this method is applied to extract accurate data from biological documents amounting to 100 GB for the first online integrated biological data warehouse of China.

  8. Model and Simulation of a Tunable Birefringent Fiber Using Capillaries Filled with Liquid Ethanol for Magnetic Quasiphase Matching In-Fiber Isolator

    Directory of Open Access Journals (Sweden)

    Clint Zeringue

    2010-01-01

    Full Text Available A technique to tune a magnetic quasi-phase matching in-fiber isolator through the application of stress induced by two mutually orthogonal capillary tubes filled with liquid ethanol is investigated numerically. The results show that it is possible to “tune” the birefringence in these fibers over a limited range depending on the temperature at which the ethanol is loaded into the capillaries. Over this tuning range, the thermal sensitivity of the birefringence is an order-of-magnitude lower than conventional fibers, making this technique well suited for magnetic quasi-phase matching.

  9. Calibration of transient groundwater models using time series analysis and moment matching

    NARCIS (Netherlands)

    Bakker, M.; Maas, K.; Von Asmuth, J.R.

    2008-01-01

    A comprehensive and efficient approach is presented for the calibration of transient groundwater models. The approach starts with the time series analysis of the measured heads in observation wells using all active stresses as input series, which may include rainfall, evaporation, surface water leve

  10. Covariate balance assessment, model selection and bias in propensity score matching: A simulation study

    NARCIS (Netherlands)

    Ali, Sanni; Groenwold, Rolf H.H.; Belitser, S.; Roes, Kit C.B.; Hoes, Arno W.; De Boer, Anthonius; Klungel, Olaf H.

    2015-01-01

    Background: In building a propensity score (PS) model, the inclusion of interaction/square terms in addition to the main terms and the use of balance measures have been suggested. However, the impact of assessing the balance of several sets of covariates and their interactions/squares on bias/precision is not

  11. A perfect match of MSSM-like orbifold and resolution models via anomalies

    CERN Document Server

    Blaszczyk, Michael; Nilles, Hans Peter; Ruehle, Fabian

    2011-01-01

    Compactification of the heterotic string on toroidal orbifolds is a promising set-up for the construction of realistic unified models of particle physics. The target space dynamics of such models, however, drives them slightly away from the orbifold point in moduli space. This resolves curvature singularities, but makes the string computations very difficult. On these smooth manifolds we have to rely on an effective supergravity approximation in the large volume limit. By comparing an orbifold example with its blow-up version, we try to transfer the computational power of the orbifold to the smooth manifold. Using local properties, we establish a perfect map of the chiral spectra as well as the (local) anomalies of these models. A key element in this discussion is the Green-Schwarz anomaly polynomial. It allows us to identify those redefinitions of chiral fields and localized axions in the blow-up process which are relevant for the interactions (such as Yukawa-couplings) in the model on the smooth space.

  12. Benign childhood epilepsy with centrotemporal spikes and the multicomponent model of attention : A matched control study

    NARCIS (Netherlands)

    Cerminara, Caterina; D'Agati, Elisa; Lange, Klaus W.; Kaunzinger, Ivo; Tucha, Oliver; Parisi, Pasquale; Spalice, Alberto; Curatolo, Paolo

    2010-01-01

    Although the high risk of cognitive impairments in benign childhood epilepsy with centrotemporal spikes (BCECTS) is now well established, there is no clear definition of a uniform neurocognitive profile. This study was based on a neuropsychological model of attention that assessed various components

  13. Addressing diverse learner preferences and intelligences with emerging technologies: Matching models to online opportunities

    Directory of Open Access Journals (Sweden)

    Ke Zhang

    2009-03-01

    Full Text Available This paper critically reviews various learning preferences and human intelligence theories and models with a particular focus on the implications for online learning. It highlights a few key models, Gardner's multiple intelligences, Fleming and Mills' VARK model, Honey and Mumford's Learning Styles, and Kolb's Experiential Learning Model, and attempts to link them to trends and opportunities in online learning with emerging technologies. By intersecting such models with online technologies, it offers instructors and instructional designers across educational sectors and situations new ways to think about addressing diverse learner needs, backgrounds, and expectations. Learning technologies are important for effective teaching, as are theories and models of learning. We argue that even greater power can be derived from connections between the theories, models and learning technologies.

  14. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I sought to gain skills in using ZBrush to create 3D models, in 3D scanning, and in 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models of other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for this project, but may be of use for later projects. I was also able to 3D print using two different techniques.

  15. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method for aircraft wing structures is proposed to address the time-consuming pre-processing stage of finite element analysis. The work targets the preliminary design phase of aircraft structures. A knowledge-driven system for fast finite element modeling is built. Based on this method, and employing a template parametric technique, knowledge in the modeling process (design methods, rules, and expert experience) is encapsulated and a finite element model is established automatically, which greatly improves the speed, accuracy, and degree of standardization of modeling. The skeleton model, geometric mesh model, and finite element model, including the finite element mesh and property data, are established through parametric description and automatic update. The results show that the method resolves a series of parameter-association and model-update problems in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  16. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
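    The 'goodness of fit' criteria referenced by such guidelines are typically computed as the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE (CV(RMSE)) over monthly utility bills. A minimal sketch of these standard formulas (the bill and model values below are invented for illustration; they are not data from the paper):

```python
from math import sqrt

def nmbe(measured, simulated, p=1):
    """Normalized mean bias error (%), following common M&V conventions."""
    n = len(measured)
    mean_m = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / ((n - p) * mean_m)

def cv_rmse(measured, simulated, p=1):
    """Coefficient of variation of the RMSE (%)."""
    n = len(measured)
    mean_m = sum(measured) / n
    return 100.0 * sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / (n - p)) / mean_m

# Hypothetical monthly utility bills (kWh) vs. calibrated-model predictions.
bills = [900, 850, 700, 600, 500, 450, 480, 520, 600, 700, 800, 880]
model = [880, 860, 690, 610, 510, 440, 470, 530, 590, 710, 790, 870]
print(round(nmbe(bills, model), 2), round(cv_rmse(bills, model), 2))
```

Commonly cited monthly acceptance thresholds (e.g. in ASHRAE Guideline 14) are on the order of |NMBE| ≤ 5% and CV(RMSE) ≤ 15%; the point of the paper is that meeting such thresholds does not by itself guarantee accurate post-retrofit savings predictions.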

  17. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
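    The linear interpolation family of smoothers discussed above can be illustrated with a tiny Jelinek-Mercer bigram sketch: the bigram maximum-likelihood estimate is interpolated with the unigram estimate using a fixed weight. This is illustrative only (the toy corpus and the fixed weight `lam` are invented; real implementations tune the interpolation weights on held-out data):

```python
from collections import Counter

def jelinek_mercer(corpus, lam=0.7):
    """Bigram LM with Jelinek-Mercer smoothing: interpolate the bigram ML
    estimate with the unigram ML estimate using a fixed weight `lam`."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    total = len(corpus)

    def prob(w, prev):
        p_uni = unigrams[w] / total
        p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
        return lam * p_bi + (1 - lam) * p_uni

    return prob

corpus = "the cat sat on the mat the cat ran".split()
p = jelinek_mercer(corpus)
print(p("cat", "the"))
# The unigram back-off gives non-zero mass to unseen bigrams, e.g. p("ran", "sat") > 0.
```

The smoothing matters because the raw bigram estimate assigns zero probability to any bigram absent from training data, which makes the cross-entropy of test data infinite.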

  18. The Role of Serotype Interactions and Seasonality in Dengue Model Selection and Control: Insights from a Pattern Matching Approach.

    Science.gov (United States)

    Ten Bosch, Quirine A; Singh, Brajendra K; Hassan, Muhammad R A; Chadee, Dave D; Michael, Edwin

    2016-05-01

    The epidemiology of dengue fever is characterized by highly seasonal, multi-annual fluctuations, and the irregular circulation of its four serotypes. It is believed that this behaviour arises from the interplay between environmental drivers and serotype interactions. The exact mechanism, however, is uncertain. Constraining mathematical models to patterns characteristic of dengue epidemiology offers a means for detecting such mechanisms. Here, we used a pattern-oriented modelling (POM) strategy to fit and assess a range of dengue models, driven by combinations of temporary cross-protective immunity, cross-enhancement, and seasonal forcing, on their ability to capture the main characteristics of dengue dynamics. We show that all proposed models reproduce the observed dengue patterns across some part of the parameter space. Which model best supports the dengue dynamics is determined by the level of seasonal forcing. Further, when tertiary and quaternary infections are allowed, the inclusion of temporary cross-immunity alone is strongly supported, but the addition of cross-enhancement markedly reduces the parameter range at which dengue dynamics are produced, irrespective of the strength of seasonal forcing. The implication of these structural uncertainties on predicted vulnerability to control is also discussed. With the ever-expanding spread of dengue, greater understanding of dengue dynamics and control efforts (e.g. a near-future vaccine introduction) has become critically important. This study highlights the capacity of multi-level pattern-matching modelling approaches to offer an analytic tool for deeper insights into dengue epidemiology and control.

  19. The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies

    DEFF Research Database (Denmark)

    Scheike, Thomas; Martinussen, T; Zhang, MJ

    2011-01-01

    leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach where one would use the expectation-maximization (EM...... be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.

  20. 3D Model of Al Zubarah Fortress in Qatar - Terrestrial Laser Scanning vs. Dense Image Matching

    Science.gov (United States)

    Kersten, T.; Mechelke, K.; Maziull, L.

    2015-02-01

    In September 2011 the fortress Al Zubarah, built in 1938 as a typical Arabic fortress and restored in 1987 as a museum, was recorded by the HafenCity University Hamburg using terrestrial laser scanning with the IMAGER 5006h and digital photogrammetry for the Qatar Museum Authority within the framework of the Qatar Islamic Archaeology and Heritage Project. One goal of the object recording was to provide detailed 2D/3D documentation of the fortress. This was used to complete specific detailed restoration work in the recent years. From the registered laser scanning point clouds several cuttings and 2D plans were generated as well as a 3D surface model by triangle meshing. Additionally, point clouds and surface models were automatically generated from digital imagery from a Nikon D70 using the open-source software Bundler/PMVS2, free software VisualSFM, Autodesk Web Service 123D Catch beta, and low-cost software Agisoft PhotoScan. These outputs were compared with the results from terrestrial laser scanning. The point clouds and surface models derived from imagery could not achieve the same quality of geometrical accuracy as laser scanning (i.e. 1-2 cm).
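    Quality comparisons like the one above are often summarized as cloud-to-cloud deviations, e.g. the mean distance from each photogrammetric point to its nearest laser-scan point. A hypothetical brute-force sketch (not the authors' workflow; production tools use spatial indexes such as KD-trees, and the toy coordinates below are invented):

```python
from math import dist  # Python 3.8+

def mean_nn_distance(cloud_a, cloud_b):
    """Mean distance from each point in cloud_a to its nearest neighbour in
    cloud_b: a crude cloud-to-cloud deviation measure (brute force, O(n*m))."""
    return sum(min(dist(p, q) for q in cloud_b) for p in cloud_a) / len(cloud_a)

# Toy example: an image-matching cloud offset by ~1 cm from the laser-scan cloud.
scan  = [(0.00, 0.0, 0.0), (1.00, 0.0, 0.0), (0.0, 1.00, 0.0)]
photo = [(0.01, 0.0, 0.0), (1.01, 0.0, 0.0), (0.0, 1.01, 0.0)]
print(mean_nn_distance(photo, scan))  # ~0.01 m, i.e. the 1-2 cm regime cited above
```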

  1. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    DEFF Research Database (Denmark)

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan;

    2002-01-01

    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM, and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales and marketing, global engineering, and customer relationship management. The reference models are the basis for the development of ICT infrastructure requirements. These in turn can be used for ICT infrastructure specification (sometimes referred to as 'ICT architecture'). Part of the ICT architecture is industry-wide, part of it is industry-specific, and a part is specific to the domains of the joint activity that characterises the given Virtual Enterprise Network at hand. The article advocates a step-by-step approach to building virtual enterprise capability.

  2. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    Science.gov (United States)

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, is an important measure for determining the quantity of food that is produced, where it is produced, and when it is produced (e.g. seasonality). Furthermore, croplands are known as water guzzlers, consuming anywhere between 70% and 90% of all human water use globally. Given these facts and the increase in global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping year-after-year and/or season-after-season is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year-after-year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at the continental level and (b) a rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands, rich history of agriculture, and yet nonexistent routine yearly generated cropland products using multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014) produced using QSMT was used as a knowledge base to train and develop the ACCA algorithm that was then applied to the MODIS time-series data for the years 2000–2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) versus RCP2014 provided an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer’s accuracies varying
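    A common building block of quantitative spectral matching is to score how well a pixel's NDVI time series correlates with an "ideal" class spectrum, then assign the best-scoring class. A hypothetical sketch (the class spectra and pixel values below are invented for illustration; operational SMTs also use other similarity measures and rule sets):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_match(pixel_ndvi, class_spectra):
    """Label a pixel time series with the reference class whose ideal NDVI
    spectrum correlates best with it (a simple spectral matching rule)."""
    return max(class_spectra, key=lambda c: pearson(pixel_ndvi, class_spectra[c]))

# Invented 6-step NDVI "ideal spectra" for three cover classes.
classes = {
    "single_crop": [0.2, 0.3, 0.6, 0.8, 0.5, 0.2],
    "double_crop": [0.2, 0.6, 0.3, 0.6, 0.3, 0.2],
    "fallow":      [0.2, 0.2, 0.25, 0.2, 0.2, 0.2],
}
pixel = [0.25, 0.35, 0.55, 0.75, 0.45, 0.25]
print(best_match(pixel, classes))  # → single_crop
```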

  3. Three-dimensional analysis of accuracy of patient-matched instrumentation in total knee arthroplasty: Evaluation of intraoperative techniques and postoperative alignment.

    Science.gov (United States)

    Kuwashima, Umito; Mizu-Uchi, Hideki; Okazaki, Ken; Hamai, Satoshi; Akasaki, Yukio; Murakami, Koji; Nakashima, Yasuharu

    2017-09-06

    The accuracy of patient-matched instrumentation (PMI) remains controversial, even though many surgeons follow manufacturers' recommendations. The purpose of this study was to evaluate the accuracy of intraoperative procedures and the postoperative alignment of the femoral side using PMI with 3-dimensional (3D) analysis. Eighteen knees that underwent total knee arthroplasty using MRI-based PMI were assessed. Intraoperative alignment and bone resection errors of the femoral side were evaluated with a CT-based navigation system. A conventional adjustable guide was used to compare cartilage data with that derived by PMI intraoperatively. Postoperative alignment was assessed using a 3D coordinate system with computer-assisted design software. We also measured the postoperative alignments using conventional alignment guides with the 3D evaluation. Intraoperative coronal alignment with PMI was 90.9° ± 1.6°. Seventeen knees (94.4%) were within 3° of the optimal alignment. Intraoperative rotational alignment of the femoral guide position of PMI was 0.2° ± 1.6° compared with the adjustable guide, with 17 knees (94.4%) differing by 3° or less between the two methods. Maximum differences in coronal and rotational alignment before and after bone cutting were 2.0° and 2.8°, respectively. Postoperative coronal and rotational alignments were 89.4° ± 1.8° and -1.1° ± 1.3°, respectively. In both alignments, 94.4% of cases were within 3° of the optimal value. The PMI group had fewer outliers than the conventional group in rotational alignment (p = 0.018). Our 3D analysis provided evidence that the PMI system resulted in reasonably satisfactory alignments both intraoperatively and postoperatively. Surgeons should be aware that certain surgical techniques, including bone cutting, and the associated errors may affect postoperative alignment despite accurate PMI positioning. Copyright © 2017 The Japanese Orthopaedic Association. Published by

  4. Review of pattern matching approaches

    DEFF Research Database (Denmark)

    Manfaat, D.; Duffy, Alex; Lee, B. S.

    1996-01-01

    This paper presents a review of pattern matching techniques. The application areas for pattern matching are extensive, ranging from CAD systems to chemical analysis and from manufacturing to image processing. Published techniques and methods are classified and assessed within the context of three key issues: pattern classes, similarity types and matching methods. It has been shown that the techniques and approaches are as diverse and varied as the applications.

  5. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three … conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted.

  7. Gradient Matching Methods for Computational Inference in Mechanistic Models for Systems Biology: A Review and Comparative Analysis.

    Science.gov (United States)

    Macdonald, Benn; Husmeier, Dirk

    2015-01-01

    Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimized, so as to minimize some metric measuring the difference between the slopes of the tangents to the interpolants, and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
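    The two-step idea (smooth the data, then match slopes to the ODE right-hand side) can be shown in one dimension. In this sketch, central differences stand in for the derivative of a fitted interpolant, and for the linear ODE dx/dt = θx the mismatch minimization has a closed-form least-squares solution (the ODE, parameter value, and grid are invented for illustration; real gradient matching uses spline or GP smoothers and noisy data):

```python
from math import exp

# Synthetic data from dx/dt = theta * x with theta = 0.5.
ts = [i * 0.1 for i in range(21)]
xs = [exp(0.5 * t) for t in ts]

# Step 1 (smoothing/interpolation): central-difference slopes stand in for
# the derivatives of a fitted interpolant.
slopes = [(xs[i + 1] - xs[i - 1]) / (ts[i + 1] - ts[i - 1])
          for i in range(1, len(ts) - 1)]
states = xs[1:-1]

# Step 2 (gradient matching): choose theta minimizing
# sum_i (slope_i - theta * x_i)^2, which has the closed-form solution below.
theta_hat = sum(s * x for s, x in zip(slopes, states)) / sum(x * x for x in states)
print(theta_hat)  # close to 0.5, and the ODE was never numerically integrated
```

The key property advertised in the abstract holds here: the estimate is obtained without a single call to a numerical ODE solver.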

  8. Decision Support Model for User Submission Approval Energy Partners Candidate Using Profile Matching Method and Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Moedjiono Moedjiono

    2016-11-01

    Full Text Available In the service sector, customer satisfaction is a critical factor that determines the success of an enterprise. In outsourcing, the indicators of customer satisfaction are timely delivery of the required labor and a level of quality that meets the terms set by the customer. To provide the best talent to customers, the recruitment and selection team must perform a series of tests, using a variety of methods, to match the job criteria given by the user against the criteria of the candidates, and thereby support a higher pass rate for partner candidates at the user-approval stage. For this purpose, the authors conducted a study using observation, interviews, and document reviews of the candidate recruitment process, so as to recommend the highest-quality candidates for delivery to the user at the approval stage. The authors put forward a decision support model, supported by the profile matching method and the Analytical Hierarchy Process (AHP), for problem solving. The final results of this study can be used to support decisions that improve the effectiveness of delivering quality candidates, increase customer satisfaction, lower costs, and improve the company's gross operational margin.
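    The core of the profile matching method is gap scoring: each candidate criterion is compared with the job-profile target, the gap is mapped to a score, and core and secondary factors are combined with weights. A minimal sketch (the gap-to-score table is a convention commonly used in profile-matching tutorials, and the criteria, values, and 60/40 weighting below are invented, not taken from this paper):

```python
# Gap-to-score table: a zero gap scores highest, surpluses score slightly
# better than equal-sized deficits (illustrative convention).
GAP_SCORE = {0: 5.0, 1: 4.5, -1: 4.0, 2: 3.5, -2: 3.0, 3: 2.5, -3: 2.0, 4: 1.5, -4: 1.0}

def profile_match(candidate, target, core, core_weight=0.6):
    """Score a candidate against a job profile: map each criterion gap to a
    score, then combine the core-factor and secondary-factor averages."""
    scores = {k: GAP_SCORE[candidate[k] - target[k]] for k in target}
    core_avg = sum(scores[k] for k in core) / len(core)
    secondary = [k for k in target if k not in core]
    sec_avg = sum(scores[k] for k in secondary) / len(secondary)
    return core_weight * core_avg + (1 - core_weight) * sec_avg

target    = {"skill": 4, "attitude": 3, "experience": 3, "teamwork": 4}
candidate = {"skill": 4, "attitude": 4, "experience": 2, "teamwork": 3}
print(profile_match(candidate, target, core=["skill", "experience"]))
```

In the paper's model, such profile-matching scores are then combined with AHP-derived criterion weights; the fixed `core_weight` here is a stand-in for that weighting step.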

  10. An INSPIRE-Conform 3D Building Model of Bavaria Using Cadastre Information, LiDAR and Image Matching

    Science.gov (United States)

    Roschlaub, R.; Batscheider, J.

    2016-06-01

    The federal governments of Germany endeavour to create a harmonized 3D building data set based on a common application schema (the AdV-CityGML-Profile). The Bavarian Agency for Digitisation, High-Speed Internet and Surveying has launched a statewide 3D Building Model with standardized roof shapes for all 8.1 million buildings in Bavaria. For the acquisition of the 3D Building Model, LiDAR data or data from image matching are used as a basis, together with the building ground plans of the official cadastral map. The data management of the 3D Building Model is carried out by a central database using the nationwide standardized CityGML profile of the AdV. The update of the 3D Building Model for new buildings is done by terrestrial building measurements within the maintenance process of the cadastre and from image matching. In a joint research project, the Bavarian State Agency for Surveying and Geoinformation and the TUM, Chair of Geoinformatics, transformed an AdV-CityGML-Profile-based test data set of Bavarian LoD2 building models into an INSPIRE-compliant schema. For the purpose of such a transformation, the AdV provides a data specification, a test plan for 3D Building Models, and a mapping table. The research project examined whether the transformation rules defined in the mapping table were unambiguous and sufficient for implementing a transformation of LoD2 data based on the AdV-CityGML-Profile into the INSPIRE schema. The proof of concept was carried out by transforming production data of the Bavarian 3D Building Model in LoD2 into the INSPIRE BU schema. In order to assure the quality of the data to be transformed, the test specifications according to the test plan for 3D Building Models of the AdV were carried out. The AdV mapping table was checked for completeness and correctness and amendments were made accordingly.

  11. Learning graph matching.

    Science.gov (United States)

    Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J

    2009-06-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
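    The "simple linear assignment" baseline mentioned above matches nodes by maximizing summed unary compatibilities. A toy sketch of that assignment step (the compatibility values are invented, the learning step that estimates them is omitted, and brute force over permutations is only viable at toy sizes; the Hungarian algorithm solves the same problem in polynomial time):

```python
from itertools import permutations

def linear_assignment_match(compat):
    """Node-to-node matching maximizing summed compatibility, by brute force
    over permutations. compat[i][j] is the compatibility of node i of graph A
    with node j of graph B (in the paper, these values would be learned)."""
    n = len(compat)
    best = max(permutations(range(n)),
               key=lambda p: sum(compat[i][p[i]] for i in range(n)))
    return list(best)

# Toy compatibility matrix, e.g. derived from unary node features.
compat = [
    [0.9, 0.1, 0.2],
    [0.3, 0.8, 0.1],
    [0.2, 0.4, 0.7],
]
print(linear_assignment_match(compat))  # → [0, 1, 2]
```

The paper's result is that once the compatibility function is learned from example matches, even this linear model can outperform unlearned quadratic assignment relaxations.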

  12. Multistate models for comparing trends in hospitalizations among young adult survivors of colorectal cancer and matched controls

    Directory of Open Access Journals (Sweden)

    Sutradhar Rinku

    2012-10-01

    Full Text Available Abstract Background Over the past years, the incidence of colorectal cancer has been increasing among young adults. A large percentage of these patients live at least 5 years after diagnosis, but it is unknown whether their rate of hospitalizations after this 5-year mark is comparable to the general population. Methods This is a population-based cohort consisting of 917 young adult survivors (YAS) diagnosed with colorectal cancer in Ontario from 1992–1999 and 4585 matched cancer-free controls. A multistate model is presented to reflect and compare trends in the hospitalization process among survivors and their matched controls. Results Analyses under a multistate model indicate that the risk of a subsequent hospital admission increases as the number of prior hospitalizations increases. Among patients who are yet to experience a hospitalization, the rate of admission is 3.47 times higher for YAS than controls (95% CI: 2.79, 4.31). However, among patients that have experienced one and two hospitalizations, the relative rate of a subsequent admission decreases to 3.03 (95% CI: 2.01, 4.56) and 1.90 (95% CI: 1.19, 3.03), respectively. Conclusions Young adult survivors of colorectal cancer have an increased risk of experiencing hospitalizations compared to cancer-free controls. However, this relative risk decreases as the number of prior hospitalizations increases. The multistate approach is able to use information on the timing of hospitalizations and answer questions that standard Poisson and negative binomial models are unable to address.

  13. Using crosswell data to enhance history matching

    KAUST Repository

    Ravanelli, Fabio M.

    2014-01-01

    One of the most challenging tasks in the oil industry is the production of reliable reservoir forecast models. Due to different sources of uncertainties in the numerical models and inputs, reservoir simulations are often only crude approximations of reality. This problem is mitigated by conditioning the model with data through data assimilation, a process known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use of time-lapse data and advanced data assimilation techniques. One of the most promising data assimilation techniques employed in the industry is the ensemble Kalman filter (EnKF) because of its ability to deal with non-linear models at reasonable computational cost. In this paper we study the use of crosswell seismic data as an alternative to 4D seismic surveys in areas where it is not possible to re-shoot seismic. A synthetic reservoir model is used in a history matching study designed to better estimate porosity and permeability distributions and improve the quality of the model for predicting future field performance. This study is divided into three parts: first, the use of production data only is evaluated (the baseline for benchmarking); second, the benefits of using production and 4D seismic data are assessed; finally, a new conceptual idea is proposed to obtain time-lapse information for history matching. The use of crosswell time-lapse seismic tomography to map velocities in the interwell region is demonstrated as a potential tool to ensure survey reproducibility and low acquisition cost when compared with full-scale surface surveys. Our numerical simulations show that the proposed method provides promising history matching results, leading to similar estimation error reductions when compared with conventionally history matched surface seismic data.
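    The EnKF analysis step at the heart of such history matching can be sketched for a scalar parameter: each ensemble member is nudged toward a perturbed observation, weighted by a Kalman gain estimated from ensemble covariances. This is a deliberately simplified one-dimensional illustration (the prior, observation, and noise level are invented; real history matching updates large state vectors through a reservoir simulator):

```python
import random

def enkf_update(ensemble, obs, obs_err_var, h=lambda x: x):
    """One EnKF analysis step for a scalar state: nudge each member toward a
    perturbed observation, weighted by the ensemble-estimated Kalman gain."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_mean = sum(ensemble) / n
    hx_mean = sum(hx) / n
    cov_xh = sum((x - x_mean) * (y - hx_mean) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - hx_mean) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_err_var)
    return [x + gain * (obs + random.gauss(0, obs_err_var ** 0.5) - y)
            for x, y in zip(ensemble, hx)]

random.seed(0)
prior = [random.gauss(0.3, 0.1) for _ in range(200)]     # e.g. porosity guesses
posterior = enkf_update(prior, obs=0.22, obs_err_var=1e-4)
print(sum(posterior) / len(posterior))  # ensemble mean pulled toward the observation
```

With a nonlinear forward model, `h` would be the (expensive) simulator output at the observation times, which is why the EnKF's derivative-free, ensemble-based gain is attractive in this setting.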

  14. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  15. Matching radiative transfer models and radiosonde data from the EPS/MetOp Sodankylä campaign to IASI measurements

    Directory of Open Access Journals (Sweden)

    X. Calbet

    2010-10-01

    Full Text Available Radiances observed from IASI are compared to calculated ones. Calculated radiances are obtained by applying several radiative transfer models (OSS, LBLRTM v11.3 and v11.6) to best estimates of the atmospheric state vectors. The atmospheric state vectors are derived from cryogenic frost point hygrometer and humidity dry-bias-corrected RS92 measurements flown on sondes launched 1 h and 5 min before IASI overpass time. The temperature and humidity profiles are finally obtained by interpolating or extrapolating these measurements to IASI overpass time. The IASI observed and calculated radiances match to within one sigma of the IASI instrument noise in the wavenumber ranges 1500 ≤ ν ≤ 1570 cm−1 and 1615 ≤ ν ≤ 1800 cm−1.

  16. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
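    The classical (linear) OMP that NOMP generalizes alternates two moves: pick the dictionary atom most correlated with the current residual, then refit the weights of all selected atoms by least squares. A self-contained sketch of that linear core (the toy dictionary and signal are invented; it omits NOMP's stochastic gradient approximation and Tikhonov regularization):

```python
def solve(A, b):
    """Gauss-Jordan elimination for a small square system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def omp(dictionary, y, n_nonzero):
    """Orthogonal matching pursuit: greedily add the atom most correlated
    with the residual, then refit all selected weights by least squares.
    `dictionary` is a list of atoms (columns), each a list of samples."""
    support, residual = [], y[:]
    for _ in range(n_nonzero):
        k = max((i for i in range(len(dictionary)) if i not in support),
                key=lambda i: abs(dot(dictionary[i], residual)))
        support.append(k)
        # Least-squares refit on the selected atoms via the normal equations.
        G = [[dot(dictionary[i], dictionary[j]) for j in support] for i in support]
        rhs = [dot(dictionary[i], y) for i in support]
        w = solve(G, rhs)
        fit = [sum(w[s] * dictionary[i][t] for s, i in enumerate(support))
               for t in range(len(y))]
        residual = [a - b for a, b in zip(y, fit)]
    return dict(zip(support, w))

atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 1, 1, 1]]
y = [5, 2, 2, 2]            # equals 3*atoms[0] + 2*atoms[3]
coeffs = omp(atoms, y, 2)
print(coeffs)               # recovers the two-atom support and its weights
```

The orthogonal refit is what distinguishes OMP from plain matching pursuit: after each selection, the residual is made orthogonal to the whole current support, so previously chosen weights are corrected.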

  17. Transgastric endoscopic gastrojejunostomy using holing followed by interrupted suture technique in a porcine model

    Institute of Scientific and Technical Information of China (English)

    Su-Yu; Chen; Hong; Shi; Sheng-Jun; Jiang; Yong-Guang; Wang; Kai; Lin; Zhao-Fei; Xie; Xiao-Jing; Liu

    2015-01-01

    AIM: To demonstrate the feasibility and reproducibility of a pure natural orifice transluminal endoscopic surgery (NOTES) gastrojejunostomy using a holing followed by interrupted suture technique, with a single endoloop matched with a pair of clips, in a non-survival porcine model. METHODS: NOTES gastrojejunostomy was performed on three female domestic pigs as follows: gastrostomy; selection and retrieval of a free-floating loop of the small bowel into the stomach pouch; holding and exposure of the loop in the gastric cavity using a submucosal inflation technique; execution of a gastro-jejunal mucosal-seromuscular layer approximation using the holing followed by interrupted suture technique with endoloop/clips; and full-thickness incision of the loop with a Dual knife. RESULTS: Pure NOTES side-to-side gastrojejunostomy was successfully performed in all three animals. No leakage was identified via methylene blue evaluation following surgery. CONCLUSION: This novel technique for performing a gastrointestinal anastomosis exclusively by NOTES is technically feasible and reproducible in an animal model but warrants further improvement.

  18. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, with productivity assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and that the approach can be readily applied in both manufacturing and service industries.
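
A heavily simplified sketch of this kind of selection problem, assuming hypothetical gains and integer costs for a handful of techniques: with a single budget constraint and additive gains, the mixed-integer program reduces to a 0/1 knapsack solvable exactly by dynamic programming:

```python
def select_techniques(gains, costs, budget):
    """0/1 knapsack dynamic program: choose the subset of improvement
    techniques that maximizes total productivity gain within budget.
    Returns (best_gain, chosen_indices)."""
    n = len(gains)
    # best[b] = (gain, chosen) achievable with total cost <= b
    best = [(0.0, [])] * (budget + 1)
    for i in range(n):
        new = best[:]
        for b in range(costs[i], budget + 1):
            gain, chosen = best[b - costs[i]]   # without technique i yet
            if gain + gains[i] > new[b][0]:
                new[b] = (gain + gains[i], chosen + [i])
        best = new
    return best[budget]

# Hypothetical data: gains and integer costs for five techniques.
gains = [4.0, 2.5, 6.0, 3.0, 1.0]
costs = [3, 2, 5, 3, 1]
best_gain, chosen = select_techniques(gains, costs, budget=8)
```

The paper's model has fifty-four binary decision variables and richer constraints, so a general MIP solver is used there; the sketch only shows the shape of the underlying combinatorial choice.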

  19. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using (a) traditional spreadsheet modelling and (b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...

  20. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in nonlinear microwave devices must be supported by a proven combination of the following: - Electrical modeling of low-frequency noise of semiconductor devices, oriented to circuit CAD. The local noise sources will be either cyclostationary noise sources or quasistationary noise sources. - Theoretic...

  1. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back up the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, design more accurate compact device models with faster extraction routines, and create cost-effective and reliable circuits.

  2. Reducing the ambiguity of karst aquifer models by pattern matching of flow and transport on catchment scale

    Directory of Open Access Journals (Sweden)

    S. Oehlmann

    2014-08-01

    Assessing the hydraulic parameters of karst aquifers is a challenge due to their high degree of heterogeneity. The unknown parameter field generally leads to a high ambiguity for flow and transport calibration in numerical models of karst aquifers. In this study, a distributive numerical model was built for the simulation of groundwater flow and solute transport in a highly heterogeneous karst aquifer in south-western Germany. To this end, an interface for the simulation of solute transport in one-dimensional pipes was implemented in the software Comsol Multiphysics® and coupled to the three-dimensional solute transport interface for continuum domains. To reduce model ambiguity, the simulation was matched, for steady-state conditions, to the hydraulic head distribution in the model area, the spring discharge of several springs, and the transport velocities of two tracer tests. Furthermore, other measured parameters, such as the hydraulic conductivity of the fissured matrix and the maximal karst conduit volume, were available for model calibration. Parameter studies were performed for several karst conduit geometries to analyse the influence of the respective geometric and hydraulic parameters and to develop a calibration approach for a large-scale heterogeneous karst system. Results show that it is not only possible to derive a consistent flow and transport model for a 150 km² karst area, but also that the combined use of groundwater flow and transport parameters greatly reduces model ambiguity. The approach provides basic information about the conduit network that is not accessible to direct geometric measurements. The conduit network volume for the main karst spring in the study area could be narrowed down to approximately 100 000 m³.
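
As a rough illustration of how tracer transport constrains conduit geometry, the conduit volume upstream of a spring can be estimated as discharge times tracer transit time under a plug-flow assumption. The numbers below are hypothetical, chosen only to land near the order of magnitude reported in the study:

```python
# Back-of-envelope: conduit volume ~ spring discharge x tracer transit
# time (plug-flow assumption; real calibration uses a full flow and
# transport model, as in the study).
discharge_m3_s = 0.5            # assumed mean spring discharge
transit_time_s = 2.3 * 86400    # assumed tracer travel time (2.3 days)
conduit_volume_m3 = discharge_m3_s * transit_time_s   # ~99 360 m^3
```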

  3. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  4. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  5. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  6. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactors' primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that, even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this purpose we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
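
The surrogate idea can be illustrated with a toy stand-in for an expensive simulation code: sample it offline at a few points, fit a cheap approximation, and answer the many online queries with the approximation instead. Everything below (the "simulation", the sampling plan, the polynomial form) is illustrative:

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a long-running physics code (hypothetical)."""
    return np.sin(x) + 0.1 * x**2

# Offline: run the costly code at a handful of training points...
x_train = np.linspace(0.0, 3.0, 12)
y_train = expensive_simulation(x_train)

# ...and fit a cheap surrogate (here, a cubic polynomial).
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

# Online: thousands of surrogate evaluations replace simulation runs.
x_new = np.linspace(0.0, 3.0, 1000)
max_err = np.max(np.abs(surrogate(x_new) - expensive_simulation(x_new)))
```

Real RISMC surrogates are built over high-dimensional input spaces with more capable regressors, but the offline-training/online-query split is the same.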

  7. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  8. The Design and Implementation of the Matching Method of Chimes Dance and Music for Growth of Edible Mushrooms Based on Hevner Music Emotion Model

    Directory of Open Access Journals (Sweden)

    Qi Zhou

    2014-10-01

    This study investigates the synchronization mechanism for music and dance in the production encouragement system for edible fungus. The synchronization between music and dance may significantly influence the stimulating effect on the growth of edible fungus; however, very limited work has been done to address this issue. To deal with the synchronization problem, the Hevner music emotion model based on an adjective circle is proposed in this study to achieve a matching mechanism between audio streams and video. By doing so, the proposed algorithm improves the synchronization between music and dance. In the emotion-driven model, with the Hevner emotion ring as its theoretical basis, the music is successfully matched to the dance, which not only provides a matching method for the chimes-driven dance editing system but also provides a simple and feasible matching pattern for other applications.
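
A minimal sketch of ring-based emotion matching, assuming Hevner's eight adjective clusters arranged on a circle; the cluster labels, indices, and matching rule here are illustrative, not the paper's exact algorithm:

```python
# Hevner's eight adjective clusters, arranged on a circle (indices 0-7).
HEVNER = ["dignified", "sad", "dreamy", "serene",
          "playful", "happy", "exciting", "vigorous"]

def ring_distance(a, b, size=8):
    """Distance between two clusters on the circular emotion ring."""
    d = abs(a - b) % size
    return min(d, size - d)

def match_dance(music_cluster, dance_clusters):
    """Pick the dance segment whose emotion cluster is closest
    (on the ring) to the music segment's cluster."""
    return min(range(len(dance_clusters)),
               key=lambda i: ring_distance(music_cluster, dance_clusters[i]))

# Hypothetical: music tagged "happy" (5); candidate dances tagged 1, 6, 3.
best = match_dance(5, [1, 6, 3])   # cluster 6 is one ring step away
```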

  9. Optimal affine-invariant matching: performance characterization

    Science.gov (United States)

    Costa, Mauro S.; Haralick, Robert M.; Shapiro, Linda G.

    1992-04-01

    The geometric hashing scheme proposed by Lamdan and Wolfson can be very efficient in a model-based matching system, not only in terms of the computational complexity involved, but also in terms of the simplicity of the method. In a recent paper, we discussed errors that can occur with this method due to quantization, stability, symmetry, and noise problems. These errors make the original geometric hashing technique unsuitable for use on the factory floor. Beginning with an explicit noise model, which the original Lamdan and Wolfson technique lacks, we derived an optimal approach that overcomes these problems. We showed that the results obtained with the new algorithm are clearly better than the results from the original method. This paper addresses the performance characterization of the geometric hashing technique, more specifically affine-invariant point matching, applied to the problem of recognizing and determining the pose of sheet metal parts. The experiments indicate that with a model having 10 to 14 points, with 2 points of the model undetected and 10 extraneous points detected, and with the model points perturbed by Gaussian noise of standard deviation 3 (0.58 of range), the average amount of computation required to obtain an answer is equivalent to trying 11 of the possible three-point bases. The misdetection rate, measured by the percentage of correct basis matches that fail to verify, is 0.9%. The percentage of incorrect bases that produced a match that did verify (the false alarm rate) is 13%. Finally, 2 of the experiments failed to find a correct match and verify it. Results for experiments with real images are also presented.
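
The affine invariance underlying the hashing scheme is easy to demonstrate: the coordinates of the remaining points, expressed in a three-point basis, are unchanged by any affine map, so model and scene entries hash to the same location. A small sketch (the points and transform below are illustrative):

```python
import numpy as np

def affine_coords(points, basis):
    """Express points in the affine frame of three basis points
    (p0 is the origin; p1-p0 and p2-p0 are the axes). The resulting
    (u, v) coordinates are invariant under any affine transform."""
    p0, p1, p2 = basis
    A = np.column_stack([p1 - p0, p2 - p0])          # 2x2 frame matrix
    return np.linalg.solve(A, (points - p0).T).T     # (u, v) per point

model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.4, 0.7]])
inv_model = affine_coords(model, model[:3])

# Apply an arbitrary affine map (shear + scale + translation)...
M = np.array([[2.0, 0.5], [-0.3, 1.5]])
scene = model @ M.T + np.array([5.0, -2.0])
inv_scene = affine_coords(scene, scene[:3])
# ...and the invariant coordinates match, so hashed entries collide.
```

Quantization of these (u, v) coordinates into hash bins is exactly where the noise and stability problems analyzed in the paper arise.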

  10. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Problem statement: The issue of achieving both robust and fragile watermarking remains a main focus for various researchers worldwide. The performance of a watermarking technique depends on its complexity as well as its feasibility of implementation. These issues are tested using various kinds of attacks, including geometric and transformation attacks. Watermarking techniques for color images are more challenging than those for gray images in terms of complexity and information handling. In this study, we focused on the implementation of a watermarking technique for color images using the biological color model. Approach: We proposed a novel method for watermarking using the spatial and Discrete Cosine Transform (DCT) domains. The proposed method dealt with colored images in the biological color model: Hue, Saturation, and Intensity (HSI). The technique was implemented and tested against various colored images, including standard ones such as the pepper image. The experiments were conducted using various attacks such as cropping, transformation, and geometry. Results: The method showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.
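
One common formulation of the RGB-to-HSI conversion used by such biologically motivated schemes (not necessarily the exact variant of this study) is:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (0..1) to the HSI colour model:
    hue H in degrees, saturation S, and intensity I."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:                      # hue lives on a 0..360 degree circle
        h = 360.0 - h
    return h, s, i

# Pure red: hue 0, fully saturated, intensity 1/3.
h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)
```

Embedding in HSI rather than RGB lets a scheme perturb intensity or saturation while leaving perceived hue nearly untouched.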

  11. Modelled hydraulic redistribution by sunflower (Helianthus annuus L.) matches observed data only after including night-time transpiration.

    Science.gov (United States)

    Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele

    2014-04-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR.

  12. Color-Matching and Blending-Effect of Universal Shade Bulk-Fill-Resin-Composite in Resin-Composite-Models and Natural Teeth.

    Science.gov (United States)

    Abdelraouf, Rasha M; Habib, Nour A

    2016-01-01

    Objectives. To visually assess the color-matching and blending-effect (BE) of a universal-shade bulk-fill resin composite placed in resin-composite models with different shades and cavity sizes and in natural teeth (extracted and patients' teeth). Materials and Methods. Resin-composite discs (10 mm × 1 mm) were prepared from the universal-shade composite and from resin composites of shades A1, A2, A3, A3.5, and A4. Spectrophotometric color measurement was performed to calculate the color difference (ΔE) between the universal-shade and shaded resin-composite discs and to determine their translucency parameter (TP). Visual assessment was performed by seven normal-color-vision observers to determine the color-matching between the universal shade and each shade under Illuminant D65. Color-matching visual scores (VS) were expressed numerically (1-5): 1: mismatch/totally unacceptable; 2: poor match/hardly acceptable; 3: good match/acceptable; 4: close match/small difference; 5: exact match/no color difference. Occlusal cavities of different sizes were prepared in teeth-like resin-composite models with shades A1, A2, A3, A3.5, and A4. The cavities were filled with the universal-shade composite. The same scale was used to score the color-matching between the fillings and the composite models. BE was calculated as the difference between the mean visual scores in the models and those of the discs. Extracted teeth with two different class I cavity sizes, as well as ten patients' lower posterior molars with occlusal caries, were prepared, filled with the universal-shade composite, and assessed similarly. Results. In the models, the universal-shade composite showed close matching across the different cavity sizes and surrounding shades (4 ≤ VS); in natural teeth, the composite showed good matching (VS = 3-3.3, BE = -0.9-2.1). Conclusions. The color-matching of the universal-shade resin composite was satisfactory rather than perfect in patients' teeth.
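
The spectrophotometric quantities in the study can be sketched with the CIE76 color-difference formula; the CIELAB readings below are hypothetical, not measurements from the paper:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference between two CIELAB colours (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings: universal-shade disc vs. an A2-shade disc.
universal_vs_a2 = delta_e((72.0, 1.5, 18.0), (70.5, 2.0, 20.0))

# Translucency parameter: ΔE of the same disc measured over black
# vs. white backings (a larger TP means a more translucent material).
tp = delta_e((68.0, 1.0, 15.0), (74.0, 1.2, 19.5))
```

Small ΔE between shades, combined with a high TP, is the usual instrumental explanation for the blending effect scored visually in the study.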

  13. Numerical modelling of GPR ground-matching enhancement by a chirped multilayer structure - output of cooperation within COST Action TU1208

    Science.gov (United States)

    Baghdasaryan, Hovik V.; Knyazyan, Tamara M.; Hovhannisyan, Tamara. T.; Marciniak, Marian; Pajewski, Lara

    2016-04-01

    As is well known, Ground Penetrating Radar (GPR) is an electromagnetic technique for the detection and imaging of buried objects, with resolution ranging from centimeters to a few meters [1, 2]. Though this technique is mature enough, and different types of GPR devices are already in use, some problems are still waiting for a solution [3]. One of them is to achieve a better matching of the transmitting GPR antenna to the ground, which would increase the signal penetration depth and the signal/noise ratio at the receiving end. In the current work, full-wave electromagnetic modelling of the interaction of a plane wave with a chirped multilayered structure on the ground is performed via numerical simulation. The method of single expression is used, which is a suitable technique for the solution of multi-boundary problems [4, 5]. The considered multilayer consists of two different dielectric slabs of low and high permittivity, where the highest value of permittivity does not exceed the permittivity of the ground. Losses in the ground are suitably taken into account. Two types of multilayers are analysed. Numerical results are obtained for the reflectance of the structure, as well as for the distributions of the electric field components and power flow density in both the considered structures and the ground. The obtained results indicate that, for a better matching with the ground, the layer closer to the ground should be the high-permittivity one. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). Part of this work was developed during the Short-Term Scientific Mission COST-STSM-TU1208-25016, carried out by Prof. Baghdasaryan in the National Institute of Telecommunications in Warsaw, Poland. References [1] H. M. Jol. Ground Penetrating Radar: Theory and Applications. Elsevier, 2009. 509 pp. [2] R. Persico. Introduction to

  14. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories that are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness, and the effort required for its design. The results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations, and attributes in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort compared with the category list technique. There is no statistically significant difference between the two techniques in the case of correctness.

  16. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability, because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
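
The step from deterministic to probabilistic analysis amounts to sampling uncertain inputs and propagating them through the model. A toy sketch of the idea (the power model, distributions, and numbers below are hypothetical, not the SPACE model):

```python
import random
import statistics

def power_capability(efficiency, area_m2, irradiance_w_m2, losses):
    """Toy power model (hypothetical): array output after system losses."""
    return efficiency * area_m2 * irradiance_w_m2 * (1.0 - losses)

random.seed(42)
samples = []
for _ in range(10_000):
    # Sample uncertain inputs instead of fixing them at nominal values.
    eff = random.gauss(0.29, 0.01)        # cell efficiency
    losses = random.gauss(0.12, 0.02)     # harness + conversion losses
    samples.append(power_capability(eff, 100.0, 1361.0, losses))

mean_kw = statistics.mean(samples) / 1000.0
p5_kw = sorted(samples)[500] / 1000.0     # ~5th-percentile capability
```

A deterministic run would report only the nominal value; the sampled distribution additionally exposes how much margin survives the input uncertainty (the fast probabilistic techniques in the article avoid the brute-force sampling shown here).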

  17. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  18. Dynamic analysis of structures with elastomers using substructuring with non-matched interfaces and improved modeling of elastomer properties

    Science.gov (United States)

    Lin, Hejie

    A variety of engineering structures are composed of linear structural components connected by elastomers. The components are commonly analyzed using large-scale finite element models. Examples include engine crankshafts with torsional dampers, engine structures with an elastomeric gasket between the head and the block, engine-vehicle structures using elastomeric engine mounts, etc. An analytical method is presented in this research for the dynamic analysis of large-scale structures with elastomers. The dissertation has two major parts. In the first part, a computationally efficient substructuring method is developed for substructures with non-matched interface meshes. The method is based on the conventional fixed-interface, Craig-Bampton component mode synthesis (CMS) method. However, its computational efficiency is greatly enhanced with the introduction of interface modes. Kriging interpolation at the interfaces between substructures ensures compatibility of deformation. In the second part, a series of dynamic measurements of the mechanical properties of elastomers is presented. Dynamic stiffness as a function of frequency under controlled temperature and vibrational amplitude is measured. The strain and stress relaxation behavior is also tested to investigate the linearity and hysteresis of an elastomer. The linearity of dynamic stiffness is studied and discussed in detail through the strain and stress relaxation test. The dynamic stiffness of elastomers is measured at different conditions of temperature, frequency, and amplitude, and the relationships between dynamic stiffness and temperature, frequency, and amplitude are discussed. After the dynamic properties of an elastomer are measured, a mathematical model is presented for characterizing the frequency- and temperature-dependent properties of elastomers from the fundamental features of the molecular chains forming them. Experimental observations are used in the model development to greatly enhance the
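
The fixed-interface Craig-Bampton reduction named above keeps the boundary DOFs physically and condenses the interior onto static constraint modes plus a few fixed-interface normal modes. A minimal numpy sketch on a spring-mass chain (the example system is illustrative, not one of the dissertation's models):

```python
import numpy as np

def craig_bampton(K, M, boundary, n_modes):
    """Fixed-interface (Craig-Bampton) reduction: retain boundary DOFs
    physically and represent the interior by n_modes normal modes."""
    dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(dofs, boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    # Static constraint modes: interior response to unit boundary motion.
    psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition.
    evals, vecs = np.linalg.eig(np.linalg.solve(Mii, Kii))
    idx = np.argsort(evals.real)[:n_modes]
    phi = vecs.real[:, idx]
    # Assemble the transformation T: (boundary, modal) -> full DOFs.
    nb, nm = len(boundary), n_modes
    T = np.zeros((K.shape[0], nb + nm))
    T[np.ix_(boundary, range(nb))] = np.eye(nb)
    T[np.ix_(interior, range(nb))] = psi
    T[np.ix_(interior, range(nb, nb + nm))] = phi
    return T.T @ K @ T, T.T @ M @ T

# 5-mass spring chain; the first DOF is the substructure boundary.
n = 5
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr = craig_bampton(K, M, boundary=np.array([0]), n_modes=2)
```

The interface modes and Kriging interpolation of the dissertation extend this basic reduction to non-matched interface meshes.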

  19. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
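
The flavour of the method can be sketched for a simple linear model: form the cumulative sum of residuals over a covariate, then compare its supremum against realizations obtained by perturbing the residuals with standard normal multipliers. This is a simplified version; the paper's construction also accounts for parameter-estimation effects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data generated from a linear model; we then check that model's fit.
n = 200
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

# Fit y ~ a + b*x by least squares and form raw residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Observed cumulative-sum process over the covariate axis.
order = np.argsort(x)
W_obs = np.max(np.abs(np.cumsum(resid[order])))

# Reference realizations: multiply residuals by N(0,1) variables to
# mimic draws from the approximating zero-mean Gaussian process.
sup_stats = []
for _ in range(500):
    g = rng.normal(0.0, 1.0, n)
    sup_stats.append(np.max(np.abs(np.cumsum((resid * g)[order]))))
p_value = np.mean(np.array(sup_stats) >= W_obs)
```

A small p-value would indicate that the observed cumulative residual process wanders further than natural variation allows, i.e. the functional form is suspect.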

  20. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  1. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary among kayaking literature, with inconsistencies not conducive for the advancement of biomechanics applied service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, therefore were suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  2. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included.
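
    The harmonic-mean step described above is a one-line computation; the function name and the example targets below are illustrative.

```python
def total_new_marriages(target_men: float, target_women: float) -> float:
    """Harmonic mean of the one-sex target totals for new marriages."""
    if target_men <= 0 or target_women <= 0:
        return 0.0
    return 2.0 * target_men * target_women / (target_men + target_women)

# Example: targets of 900 (men) and 1100 (women) give a total below the
# arithmetic mean, reflecting that the scarcer sex constrains the market.
print(total_new_marriages(900, 1100))  # 990.0
```

    The harmonic mean is a common consistency device in two-sex demographic models: it is symmetric in the sexes and never exceeds either one-sex target.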

  3. Effect of Prophylactic Antifungal Protocols on the Prognosis of Liver Transplantation: A Propensity Score Matching and Multistate Model Approach

    Science.gov (United States)

    Chen, Yi-Chan; Wang, Yu-Chao; Lee, Chen-Fang; Wu, Ting-Jun; Chou, Hong-Shiue; Chan, Kun-Ming; Lee, Wei-Chen

    2016-01-01

    Background. Whether routine antifungal prophylaxis decreases posttransplantation fungal infections in patients receiving orthotopic liver transplantation (OLT) remains unclear. This study aimed to determine the effectiveness of antifungal prophylaxis for patients receiving OLT. Patients and Methods. This is a retrospective analysis of a database at Chang Gung Memorial Hospital. We have been administering routine antibiotic and prophylactic antifungal regimens to recipients with high model for end-stage liver disease scores (>20) since 2009. After propensity score matching, 402 patients were enrolled. We conducted a multistate model to analyze the cumulative hazards, probability of fungal infections, and risk factors. Results. The cumulative hazards and transition probability of “transplantation to fungal infection” were lower in the prophylaxis group. The incidence rate of fungal infection after OLT decreased from 18.9% to 11.4% (p = 0.052); overall mortality improved from 40.8% to 23.4% (p < 0.001). In the “transplantation to fungal infection” transition, prophylaxis was significantly associated with reduced hazards for fungal infection (hazard ratio: 0.57, 95% confidence interval: 0.34–0.96, p = 0.033). Massive ascites, cadaver transplantation, and older age were significantly associated with higher risks for mortality. Conclusion. Prophylactic antifungal regimens in high-risk recipients might decrease the incidence of posttransplant fungal infections.

  4. Effect of Prophylactic Antifungal Protocols on the Prognosis of Liver Transplantation: A Propensity Score Matching and Multistate Model Approach

    Directory of Open Access Journals (Sweden)

    Yi-Chan Chen

    2016-01-01

    Full Text Available Background. Whether routine antifungal prophylaxis decreases posttransplantation fungal infections in patients receiving orthotopic liver transplantation (OLT) remains unclear. This study aimed to determine the effectiveness of antifungal prophylaxis for patients receiving OLT. Patients and Methods. This is a retrospective analysis of a database at Chang Gung Memorial Hospital. We have been administering routine antibiotic and prophylactic antifungal regimens to recipients with high model for end-stage liver disease scores (>20) since 2009. After propensity score matching, 402 patients were enrolled. We conducted a multistate model to analyze the cumulative hazards, probability of fungal infections, and risk factors. Results. The cumulative hazards and transition probability of “transplantation to fungal infection” were lower in the prophylaxis group. The incidence rate of fungal infection after OLT decreased from 18.9% to 11.4% (p=0.052); overall mortality improved from 40.8% to 23.4% (p<0.001). In the “transplantation to fungal infection” transition, prophylaxis was significantly associated with reduced hazards for fungal infection (hazard ratio: 0.57, 95% confidence interval: 0.34–0.96, p=0.033). Massive ascites, cadaver transplantation, and older age were significantly associated with higher risks for mortality. Conclusion. Prophylactic antifungal regimens in high-risk recipients might decrease the incidence of posttransplant fungal infections.

  5. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well known feature selection approaches and the estimation parameters for this selection are mentioned. The difficulties in identifying correspondent locations in the two images are explained. Methods for effectively constraining the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.

  6. Fractured reservoir history matching improved based on artificial intelligent

    Directory of Open Access Journals (Sweden)

    Sayyed Hadi Riazi

    2016-12-01

    Full Text Available In this paper, a new robust approach based on the Least Square Support Vector Machine (LSSVM) as a proxy model is used for automatic fractured-reservoir history matching. The proxy model is built to model the history-match objective function (mismatch values) based on the history data of the field. This model is then used to minimize the objective function through Particle Swarm Optimization (PSO) and the Imperialist Competitive Algorithm (ICA). In automatic history matching, sensitivity analysis is often performed on the full simulation model. In this work, to get a new range of the uncertain parameters (matching parameters) in which the objective function has a minimum value, sensitivity analysis is also performed on the proxy model. By applying the modified ranges to the optimization methods, optimization of the objective function is faster and the outputs of the optimization methods (matching parameters) are produced in less time and with high precision. This procedure leads to matching the history of the field with a given set of reservoir parameters. The final sets of parameters are then applied to the full simulation model to validate the technique. The obtained results show that the present procedure is effective for the history matching process due to its robust dependability and fast convergence speed. Due to its high speed and need for small data sets, LSSVM is the best tool to build a proxy model. Also, the comparison of PSO and ICA shows that PSO is less time-consuming and more effective.
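
    A minimal PSO loop of the kind used to minimize the proxy objective can be sketched as follows. The quadratic stand-in below replaces the trained LSSVM proxy, and all parameter values (swarm size, inertia, acceleration weights) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def proxy_mismatch(x):
    # Stand-in for the LSSVM proxy of the history-match objective;
    # the real proxy would be trained on full-simulator runs.
    return np.sum((x - np.array([2.0, -1.0])) ** 2, axis=-1)

# Minimal particle swarm optimisation over two matching parameters
n_particles, n_iter, dim = 30, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = proxy_mismatch(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = proxy_mismatch(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters:", np.round(gbest, 3))   # close to [2, -1]
```

    Because each evaluation hits the cheap proxy rather than the simulator, thousands of such evaluations are affordable, which is the point of the proxy-model approach described in the record.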

  7. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of the theoretical and experimental research on the technique for refining global Earth geopotential models such as EGM2008 in the continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with the regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  8. Automated 3D modelling of buildings from aerial and space imagery using image understanding techniques

    Science.gov (United States)

    Kim, Taejung

    The development of a fully automated mapping system is one of the fundamental goals in photogrammetry and remote sensing. As an approach towards this goal, this thesis describes the work carried out in the automated 3D modelling of buildings in urban scenes. The whole work is divided into three parts: the development of an automated height extraction system, the development of an automated building detection system, and the combination of these two systems. After an analysis of the key problems of urban-area imagery for stereo matching, buildings were found to create isolated regions and blunders. From these findings, an automated building height extraction system was developed. This stereoscopic system is based on a pyramidal (area-based) matching algorithm with automatic seed points and a tile-based control strategy. To remove possible blunders and extract buildings from other background objects, a series of "smart" operations using linear elements from buildings were also applied. A new monoscopic building detection system was developed based on a graph constructed from extracted lines and their relations. After extracting lines from a single image using low-level image processing techniques, line relations are searched for and a graph constructed. By finding closed loops in the graph, building hypotheses are generated. These are then merged and verified using shadow analysis and perspective geometry. After verification, each building hypothesis indicates either a building or a part of a building. By combining results from these two systems, 3D building roofs can be modelled automatically. The modelling is performed using height information obtained from the height extraction system and interpolation boundaries obtained from the building detection system. Other fusion techniques and the potential improvements due to these are also discussed. Quantitative analysis was performed for each algorithm presented in this thesis and the results support the newly

  9. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A sequence of nested corresponding robustly positive invariant sets, which are either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is variable and determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results showed that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
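
    The set-membership test and gain interpolation can be sketched as below. The nested ellipsoids, the gains, and in particular the simple radius-based blending rule are illustrative assumptions; the cited work derives the interpolation weight from the underlying optimization problem.

```python
import numpy as np

# Hypothetical off-line data: two nested ellipsoidal invariant sets
# {x : x^T Q_i^{-1} x <= 1} (index 0 = outer, 1 = inner) with their
# pre-computed state-feedback gains K_i.
Q = [np.diag([4.0, 4.0]), np.diag([1.0, 1.0])]
K = [np.array([[-0.4, -0.8]]), np.array([[-0.9, -1.5]])]

def interpolated_gain(x):
    """Fixed gain inside the innermost set; otherwise a linear blend."""
    s = [float(np.sqrt(x @ np.linalg.inv(Qi) @ x)) for Qi in Q]  # ellipsoidal "radii"
    if s[1] <= 1.0:
        return K[1]                       # innermost set: use its own gain
    # lam = 0 at the inner-set boundary, 1 at the outer-set boundary
    lam = min(max((s[1] - 1.0) / (s[1] - s[0]), 0.0), 1.0)
    return lam * K[0] + (1.0 - lam) * K[1]

print(interpolated_gain(np.array([0.5, 0.0])))   # inside inner set -> K[1]
print(interpolated_gain(np.array([2.0, 0.0])))   # on outer boundary -> K[0]
```

    The appeal of this scheme is that all expensive computation (the gains and invariant sets) happens off-line; on-line work reduces to a few quadratic-form evaluations per sample.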

  10. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.

  11. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator mainly allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as...

  12. Derivatives of Matching.

    Science.gov (United States)

    Herrnstein, R. J.

    1979-01-01

    The matching law for reinforced behavior solves a differential equation relating infinitesimal changes in behavior to infinitesimal changes in reinforcement. The equation expresses plausible conceptions of behavior and reinforcement, yields a simple nonlinear operator model for acquisition, and suggests an alternative to the economic law of…
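
    For reference, the matching law discussed in this record is, in its classic two-alternative form (with B_i the response rates and R_i the reinforcement rates), accompanied by Herrnstein's hyperbolic single-alternative version:

```latex
% Two-alternative matching law: relative response rate matches
% relative reinforcement rate
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}

% Hyperbolic single-alternative form, with k the asymptotic response
% rate and R_e the rate of background (extraneous) reinforcement
B = \frac{k R}{R + R_e}
```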

  13. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  14. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  16. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted attention in the fields of science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent FIR (finite impulse response) models since the former yield more accurate models of physical plants for real-world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents the comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
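
    The cost surface that such evolutionary techniques search can be made concrete with a small sketch: a direct-form IIR filter and the mean-squared output error between a hypothetical plant and a candidate model. All coefficients and signal lengths below are illustrative.

```python
import numpy as np

def iir_filter(b, a, x):
    """Direct-form IIR: y[n] = sum_k b[k] x[n-k] - sum_k a[k] y[n-k], a[0] = 1."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

rng = np.random.default_rng(2)
x = rng.normal(size=500)                       # excitation signal
b_true, a_true = [0.3, 0.2], [1.0, -0.5]       # hypothetical "unknown" plant
d = iir_filter(b_true, a_true, x)              # plant output

def mse(params):
    """Cost an evolutionary algorithm would minimize (multimodal in general)."""
    b, a = [params[0], params[1]], [1.0, params[2]]
    return float(np.mean((d - iir_filter(b, a, x)) ** 2))

print(mse([0.3, 0.2, -0.5]))                          # 0.0 at the true parameters
print(mse([0.1, 0.1, 0.1]) > mse([0.3, 0.2, -0.5]))   # True
```

    Because the feedback coefficient a[1] appears in the denominator of the transfer function, the error surface over (b, a) is generally non-convex, which is what motivates population-based search here.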

  17. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
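
    The distinction drawn above between traditional methods and global methods that consider interactions can be illustrated on a toy model: a binned estimate of the first-order (main-effect) variance indices leaves a large remainder when inputs interact strongly. The model and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x1, x2):
    # Toy nonlinear "behavioral" model with a strong interaction term
    return x1 + 0.1 * x2 + 2.0 * x1 * x2

N = 100_000
x1, x2 = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
y = model(x1, x2)

def first_order(xi, y, bins=50):
    """Main-effect index S_i = Var(E[y|x_i]) / Var(y), via binned conditional means."""
    idx = np.digitize(xi, np.linspace(-1, 1, bins))
    labels = np.unique(idx)
    cond_means = np.array([y[idx == b].mean() for b in labels])
    counts = np.array([np.sum(idx == b) for b in labels])
    return float(np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var())

s1, s2 = first_order(x1, y), first_order(x2, y)
print(f"S1 ~ {s1:.2f}, S2 ~ {s2:.2f}, interaction share ~ {1 - s1 - s2:.2f}")
```

    A one-at-a-time perturbation around x2 = 0 would report x2 as nearly irrelevant, while the variance decomposition shows that most of the output variance sits in the x1-x2 interaction: the kind of insight the record attributes to global methods.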

  18. Genetic Programming Framework for Fingerprint Matching

    CERN Document Server

    Ismail, Ismail A; Abd-ElWahid, Mohammed A; ElKafrawy, Passent M; Nasef, Mohammed M

    2009-01-01

    Fingerprint matching is a very difficult problem. Minutiae-based matching is the most popular and widely used technique for fingerprint matching. The minutiae points considered in automatic identification systems are normally based on termination and bifurcation points. In this paper we propose a new technique for fingerprint matching using minutiae points and genetic programming. The goal of this paper is extracting the mathematical formula that defines the minutiae points.

  19. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
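
    Two of the simpler combination schemes named above, SMA and WAM, can be sketched as follows. The predictions and observations are made-up numbers, and the weighted scheme here fits weights by least squares over the whole record for brevity; the bias-correcting MMSE/M3SE variants would add an intercept and a proper training period.

```python
import numpy as np

# Hypothetical streamflow predictions from three models, plus observations
preds = np.array([[10.0, 12.0,  9.0],
                  [20.0, 25.0, 18.0],
                  [15.0, 14.0, 16.0],
                  [30.0, 33.0, 28.0]])          # rows: time steps, cols: models
obs = np.array([11.0, 21.0, 15.0, 31.0])

# Simple Multi-model Average (SMA): equal weights
sma = preds.mean(axis=1)

# Weighted Average Method (WAM): weights from a least-squares fit of the
# observations on the individual model predictions
w, *_ = np.linalg.lstsq(preds, obs, rcond=None)
wam = preds @ w

rmse = lambda y: float(np.sqrt(np.mean((y - obs) ** 2)))
print(f"SMA RMSE = {rmse(sma):.3f}, WAM RMSE = {rmse(wam):.3f}")
```

    On the fitting period the least-squares weights can do no worse than equal weights (the equal-weight combination is one feasible weight vector), which mirrors the record's finding that weighted and bias-corrected combinations outperform the simple average.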

  20. Using medical history embedded in biometrics medical card for user identity authentication: privacy preserving authentication model by features matching.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan

    2012-01-01

    Many forms of biometrics have been proposed and studied for biometric authentication. Recently researchers have been looking into longitudinal pattern matching that is based on more than just a single biometric; data from a user's activities are used to characterise the identity of the user. In this paper we advocate a novel type of authentication by using a user's medical history, which can be electronically stored in a biometric security card. This is a sequel paper to our previous work on defining an abstract format of medical data to be queried and tested upon authentication. The challenge to overcome is preserving the user's privacy by choosing only the useful features from the medical data for use in authentication. The features should contain less sensitive elements and they are implicitly related to the target illness. Therefore exchanging questions and answers about a few carefully chosen features in an open channel would not easily or directly expose the illness, yet it can verify by inference whether the user has a record of it stored in his smart card. The design of a privacy-preserving model by backward inference is introduced in this paper. Some live medical data are used in experiments for validation and demonstration.

  1. A second-order, perfectly matched layer formulation to model 3D transient wave propagation in anisotropic elastic media

    CERN Document Server

    Assi, Hisham

    2016-01-01

    Numerical simulation of wave propagation in an infinite medium is made possible by surrounding a finite region by a perfectly matched layer (PML). Using this approach, a generalized three-dimensional (3D) formulation is proposed for time-domain modeling of elastic wave propagation in an unbounded lossless anisotropic medium. The formulation is based on a second-order approach that has the advantages of a physical relationship to the underlying equations and amenability to implementation in common numerical schemes. Specifically, our formulation uses three second-order equations of the displacement field and nine auxiliary equations, along with the three time histories of the displacement field. The properties of the PML, which are controlled by a complex two-parameter stretch function, are such that it acts as a near-perfect absorber. Using the finite element method (FEM), 3D numerical results are presented for a highly anisotropic medium. An extension of the formulation to the particular case of a Kelvin-Voigt visco...

  2. Detection and Counting of Orchard Trees from Vhr Images Using a Geometrical-Optical Model and Marked Template Matching

    Science.gov (United States)

    Maillard, Philippe; Gomes, Marília F.

    2016-06-01

    This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types. These images all have < 1 meter spatial resolution and were downloaded from the GoogleEarth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
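
    A bare-bones version of the template-matching core (normalized cross-correlation of a synthetic crown template against an image) might look like this. The geometrical-optical template generation from illumination angles and tree-size parameters is not reproduced; the image, template, and threshold are illustrative.

```python
import numpy as np

def ncc_match(image, template, threshold=0.9):
    """Return (row, col) of template-sized windows whose normalized
    cross-correlation with the template exceeds the threshold."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    hits = []
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom > 0 and (w * t).sum() / denom >= threshold:
                hits.append((r, c))
    return hits

# Synthetic scene: a dark background with two bright "tree crowns"
img = np.zeros((20, 20))
crown = np.array([[0, 1, 0],
                  [1, 2, 1],
                  [0, 1, 0]], dtype=float)
img[2:5, 3:6] += crown
img[10:13, 12:15] += crown

print(ncc_match(img, crown))   # [(2, 3), (10, 12)]
```

    Counting trees then reduces to counting the detections (after suppressing overlapping hits); normalization makes the score insensitive to overall brightness, which matters across scenes with different illumination.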

  3. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    Full Text Available The Kanban Control System is widely used to control the release of parts in multistage manufacturing systems operating under a pull production control system. Most of the work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variant stages of the Kanban Control System model, combining mathematical and Simulink models for the production coordination of parts in an assembly manufacturing system. In both variants, the production of a new subassembly is authorized only when an assembly kanban is available. Assembly kanbans become available when finished product is consumed. A simulation environment for the product line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of each resulting control system for the modeled product line system.

  4. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Full Text Available Service-oriented architecture (SOA) is a software design and architecture pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in the IT infrastructure is to allow parallel use and data exchange between programs which are services to the enterprise. Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering. UML includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language) is an open source specification project from the Object Management Group (OMG), describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA, but with a few pitfalls. There is a need for a modelling framework dedicated to SOA. Michael Bell authored such a framework, called the Service Oriented Modelling Framework (SOMF), which is dedicated to SOA.

  5. Modeling of guided circumferential SH and Lamb-type waves in open waveguides with semi-analytical finite element and Perfectly Matched Layer method

    Science.gov (United States)

    Matuszyk, Paweł J.

    2017-01-01

    Circumferential guided waves (CGW) are of increasing interest for the non-destructive inspection of pipes and other cylindrical structures. If such structures are buried underground, these modes can also deliver valuable information about the surrounding medium or the quality of the contact between the pipe and the embedding medium. Toward this goal, detailed knowledge of the dispersive characteristics of CGW is required; hence, a robust numerical method has to be established that allows for extensive study of the propagation of these modes under different loading conditions. Mathematically, this is the problem of the propagation of guided waves in an open waveguide. This problem differs significantly from the corresponding problem of a closed waveguide in both physical and numerical aspects. The paper presents a combination of the semi-analytical finite element method with the Perfectly Matched Layer technique for a class of coupled acoustics/elasticity problems with application to the modeling of CGW. We discuss different aspects of our algorithm and validate the proposed approach against other established methods available in the literature. The presented numerical examples positively verify the robustness of the proposed method.

  6. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
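As a minimal sketch of the two simplest combination schemes named above (SMA and WAM), the snippet below combines three hypothetical model forecasts; the predictions, observations, and the least-squares weighting are illustrative stand-ins, not DMIP data or the paper's exact WAM formulation.

```python
import numpy as np

# Hypothetical streamflow predictions from three models (rows) over four
# time steps (columns), plus the corresponding observations.
preds = np.array([
    [2.1, 2.5, 3.0, 2.8],   # model A
    [1.8, 2.2, 3.4, 2.6],   # model B
    [2.4, 2.9, 2.7, 3.1],   # model C
])
obs = np.array([2.0, 2.4, 3.1, 2.9])

# Simple Multimodel Average (SMA): equal weights for every member.
sma = preds.mean(axis=0)

# Weighted Average Method (WAM): weights fitted by least squares against
# the observations over a training period (here, the whole record).
w, *_ = np.linalg.lstsq(preds.T, obs, rcond=None)
wam = preds.T @ w

print("SMA RMSE:", np.sqrt(np.mean((sma - obs) ** 2)))
print("WAM RMSE:", np.sqrt(np.mean((wam - obs) ** 2)))
```

In-sample, the fitted weights can never do worse than the equal-weight average, which is one reason weighted schemes must be validated out of sample.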

  7. Model Calibration of a Groundwater Flow Analysis for an Underground Structure Using Data Assimilation Technique

    Science.gov (United States)

    Yamamoto, S.; Honda, M.; Sakurai, H.

    2015-12-01

    Model calibration of a groundwater flow analysis is a difficult task, especially under complicated hydrogeological conditions, because available information about hydrogeological properties is very limited. This often causes non-negligible differences between predicted results and real observations. We applied the Ensemble Kalman Filter (EnKF), a type of data assimilation technique, to groundwater flow simulation in order to obtain a valid model that can accurately reproduce the observations. Unlike conventional manual calibration, this scheme not only makes the calibration work efficient but also provides an objective approach that does not depend on the skills of engineers. In this study, we focused on estimating the hydraulic conductivities of bedrocks and fracture zones around an underground fuel storage facility. Two different kinds of groundwater monitoring data were sequentially assimilated into the unsteady groundwater flow model via the EnKF. Synthetic test results showed that the estimated hydraulic conductivities matched their true values and that our method works well in groundwater flow analysis. Further, the influence of each observation on the state updating process was quantified through sensitivity analysis. To assess feasibility under practical conditions, assimilation experiments using real field measurements were performed. The results showed that the identified model was able to approximately simulate the behavior of groundwater flow. On the other hand, it was difficult to reproduce the observation data correctly in a specific local area. This suggests that an inaccurate area is included in the assumed hydrogeological conceptual model of this site, which could be useful information for model validation.
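The EnKF analysis step at the heart of such a calibration can be sketched as follows. Everything here is a hypothetical stand-in: the two-parameter state, the linear `forward` map (playing the role of the groundwater simulator), the data vector, and the noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of uncertain parameters (e.g., log hydraulic conductivities),
# shape (n_param, n_ens).
n_ens = 50
X = rng.normal(-5.0, 1.0, size=(2, n_ens))

# Hypothetical linear forward model mapping parameters to two head observations.
H = np.array([[1.0, 0.3],
              [0.2, 1.1]])
def forward(x):
    return H @ x

Y = forward(X)                       # predicted observations, shape (n_obs, n_ens)
d = np.array([-4.2, -5.5])           # measured data
R = 0.1 * np.eye(2)                  # observation-error covariance

# EnKF analysis step: Kalman gain built from ensemble (cross-)covariances.
X_mean = X.mean(axis=1, keepdims=True)
Y_mean = Y.mean(axis=1, keepdims=True)
Cxy = (X - X_mean) @ (Y - Y_mean).T / (n_ens - 1)
Cyy = (Y - Y_mean) @ (Y - Y_mean).T / (n_ens - 1)
K = Cxy @ np.linalg.inv(Cyy + R)

# Update each member against perturbed observations (stochastic EnKF).
D = d[:, None] + rng.multivariate_normal(np.zeros(2), R, size=n_ens).T
X_updated = X + K @ (D - Y)
```

After the update, the ensemble mean's predicted observations sit much closer to the data than the prior mean's, which is exactly the behavior the synthetic tests above verify on the real simulator.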

  8. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated, and the AWLS is found to be strongly favored in almost all facets.
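The weighted least squares estimate that the WLS/AWLS algorithms build on can be illustrated on a generic heteroscedastic regression problem. The design matrix and noise model below are invented stand-ins for the modulated equations, chosen only to show why inverse-variance weighting helps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear problem y = A @ theta + noise, where half the
# equations are much cleaner than the other half; WLS downweights the
# noisy ones, ordinary least squares (OLS) treats them all equally.
n = 100
A = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
theta_true = np.array([2.0, -3.0])
sigma = np.where(np.arange(n) < 50, 0.05, 1.0)   # first half is much cleaner
y = A @ theta_true + rng.normal(0.0, sigma)

# Optimal weights are the inverse noise variances.
W = np.diag(1.0 / sigma**2)
theta_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
theta_ols = np.linalg.lstsq(A, y, rcond=None)[0]

print("WLS:", theta_wls, " OLS:", theta_ols)
```

With known (banded, sparse) covariance structure, as in the paper's signal models, the same normal-equation form applies with a full weight matrix in place of the diagonal one.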

  9. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND: Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the difference between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS: Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out, and vessel anastomoses were made with microsurgical techniques. RESULTS: The time of donor pancreas harvesting in the cervical and abdominal groups was 31±6 and 37.6±3.8 min, respectively, and the lengths of the recipient operations were 49.2±5.6 and 60.6±7.8 min. The time for the donor operation was not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS: Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  10. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Directory of Open Access Journals (Sweden)

    Yang-Cheng Lin

    2012-01-01

    Full Text Available How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers’ perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers’ perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  11. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.
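Of the linear techniques named above, grey prediction is the less familiar; a minimal GM(1,1) sketch is given below. The input series is purely illustrative, and the implementation is the textbook formulation (accumulated series, least-squares fit of the whitening equation), not necessarily the exact variant used in the paper.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey prediction GM(1,1): fit dx1/dt + a*x1 = b on the accumulated
    series x1 = cumsum(x0), then forecast future values of x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]

# Illustrative small monotone series (e.g., averaged rating scores).
series = [4.2, 4.5, 4.6, 4.9, 5.1, 5.4]
print(gm11_forecast(series, steps=2))
```

GM(1,1) is designed for short, smooth, monotone series; on a perfectly exponential input it recovers the growth almost exactly, which makes a convenient sanity check.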

  12. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, was defined in this paper. Based on this new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using the average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. A turning point might occur in the trend line of yield change; in that case, an inflexion model was used to handle the yield turning point.

  13. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  14. A comparison of two implementation methods for the template matching algorithm

    Institute of Scientific and Technical Information of China (English)

    谢方方; 杨文飞; 陈静; 李芳; 于越

    2012-01-01

    The template matching algorithm is commonly used in image registration and video tracking systems. The algorithm is first analyzed in detail, and on that basis the performance of its implementations in the VS2010 and System Generator environments is compared. The simulation results show that the template matching implementation based on System Generator is more efficient and has a shorter development cycle.
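A minimal template matching kernel of the kind benchmarked across the two environments can be sketched with normalized cross-correlation (NCC). The exhaustive search below is a reference implementation on synthetic data, not the paper's optimized version.

```python
import numpy as np

def match_template(image, template):
    """Exhaustive template matching with normalized cross-correlation
    (NCC); returns the top-left coordinates and score of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Toy example: embed the template inside a larger image and recover it.
rng = np.random.default_rng(2)
image = rng.random((32, 32))
template = image[10:18, 5:13].copy()
print(match_template(image, template))   # best position is (10, 5), score 1.0
```

The nested search loop is exactly the part that hardware-oriented toolflows like System Generator pipeline and parallelize, which is where the efficiency gap reported above comes from.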

  15. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  16. History matching through dynamic decision-making.

    Science.gov (United States)

    Cavalcante, Cristina C B; Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson

    2017-01-01

    History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management, since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a 'learning-from-data' approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history-matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential the dynamic decision-making optimization framework has for improving the quality of history matching solutions using a substantially smaller number of simulations when compared with a previous work on the same benchmark.

  17. Resurgence matches quantization

    Science.gov (United States)

    Couso-Santamaría, Ricardo; Mariño, Marcos; Schiappa, Ricardo

    2017-04-01

    The quest to find a nonperturbative formulation of topological string theory has recently seen two unrelated developments. On the one hand, via quantization of the mirror curve associated to a toric Calabi–Yau background, it has been possible to give a nonperturbative definition of the topological-string partition function. On the other hand, using techniques of resurgence and transseries, it has been possible to extend the string (asymptotic) perturbative expansion into a transseries involving nonperturbative instanton sectors. Within the specific example of the local P2 toric Calabi–Yau threefold, the present work shows how the Borel–Padé–Écalle resummation of this resurgent transseries, alongside occurrence of Stokes phenomenon, matches the string-theoretic partition function obtained via quantization of the mirror curve. This match is highly non-trivial, given the unrelated nature of both nonperturbative frameworks, signaling the existence of a consistent underlying structure.
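The resummation underlying this match can be sketched schematically. The formulas below are the generic Borel–Padé–Écalle prescription, not the paper's specific expressions for local P2.

```latex
% Asymptotic (perturbative) series in the coupling g, with factorial growth:
F(g) \simeq \sum_{n \ge 0} F_n \, g^{n}, \qquad F_n \sim \frac{n!}{A^{n}}.
% The Borel transform removes the factorial growth:
\mathcal{B}[F](\zeta) = \sum_{n \ge 0} \frac{F_n}{n!} \, \zeta^{n}.
% Pade-approximate B[F] to continue it past its radius of convergence,
% then resum by a lateral Laplace transform along direction theta:
\mathcal{S}_{\theta} F(g) = \frac{1}{g} \int_{0}^{e^{i\theta} \infty}
  e^{-\zeta/g} \, \mathcal{B}[F](\zeta) \, d\zeta.
% The jump between lateral resummations across a singular direction is the
% Stokes phenomenon, encoded by the transseries (instanton) parameters.
```

The singularities of the Borel transform sit at the instanton actions, which is why the transseries sectors are exactly what is needed to repair the ambiguity of the lateral resummations.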

  18. Resurgence Matches Quantization

    CERN Document Server

    Couso-Santamaría, Ricardo; Schiappa, Ricardo

    2016-01-01

    The quest to find a nonperturbative formulation of topological string theory has recently seen two unrelated developments. On the one hand, via quantization of the mirror curve associated to a toric Calabi-Yau background, it has been possible to give a nonperturbative definition of the topological-string partition function. On the other hand, using techniques of resurgence and transseries, it has been possible to extend the string (asymptotic) perturbative expansion into a transseries involving nonperturbative instanton sectors. Within the specific example of the local P2 toric Calabi-Yau threefold, the present work shows how the Borel-Pade-Ecalle resummation of this resurgent transseries, alongside occurrence of Stokes phenomenon, matches the string-theoretic partition function obtained via quantization of the mirror curve. This match is highly non-trivial, given the unrelated nature of both nonperturbative frameworks, signaling at the existence of a consistent underlying structure.

  19. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions about using immunotherapy to treat cancer remain unanswered, hindering it from being widely established. To help rectify this deficit in knowledge, experimental data accumulated from a huge number of different studies can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but they can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact on the field.

  20. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection for a secure operating system that satisfies the structured protection class requirements, a DTE-technique-based formal model of integrity protection is proposed after the implications and structures of the integrity policy have been analyzed in detail. This model consists of some basic rules for configuring DTE and a state transition model, which are used to instruct how the domains and types are set, and how security invariants obtained from the initial configuration are maintained in the process of system transition, respectively. In this model, ten invariants are introduced; in particular, some new invariants dealing with information flow are proposed, and their relations with corresponding invariants described in the literature are also discussed. Thirteen transition rules with well-formed atomicity are presented in a well-operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for proposing the invariants is further annotated by analyzing the differences between this model and ones described in the literature. Last but not least, future work is outlined; in particular, it is pointed out that it is possible to use this model to analyze SE-Linux security.

  1. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. the document topic model (DTM) and the word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario in which user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen the SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.

  2. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and "smart" wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five-year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  3. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance, geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  4. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation study is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance, geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  5. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for the evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45±8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were established by injecting tumor cells directly into the SD rat femur with an inoculation needle. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 μl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.05), making ultrasound a reliable technique for measuring the cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and a high lung metastasis frequency. The present rat osteosarcoma model was shown to be feasible: the take rate was high and surgical mortality was 0%.

  6. Universal or Specific? A Modeling-Based Comparison of Broad-Spectrum Influenza Vaccines against Conventional, Strain-Matched Vaccines.

    Directory of Open Access Journals (Sweden)

    Rahul Subramanian

    2016-12-01

    Full Text Available Despite the availability of vaccines, influenza remains a major public health challenge. A key reason is the virus's capacity for immune escape: ongoing evolution allows the continual circulation of seasonal influenza, while novel influenza viruses invade the human population to cause a pandemic every few decades. Current vaccines have to be updated continually to keep up with this antigenic change, but emerging 'universal' vaccines, which target more conserved components of the influenza virus, offer the potential to act across all influenza A strains and subtypes. Influenza vaccination programmes around the world are steadily increasing in their population coverage. In future, how might intensive, routine immunization with novel vaccines compare against similar mass programmes utilizing conventional vaccines? Specifically, how might novel and conventional vaccines compare in terms of cumulative incidence and rates of antigenic evolution of seasonal influenza? What are their potential implications for the impact of pandemic emergence? Here we present a new mathematical model, capturing both transmission dynamics and antigenic evolution of influenza in a simple framework, to explore these questions. We find that, even when matched by per-dose efficacy, universal vaccines could dampen population-level transmission over several seasons to a greater extent than conventional vaccines. Moreover, by lowering opportunities for cross-protective immunity in the population, conventional vaccines could allow the increased spread of a novel pandemic strain. Conversely, universal vaccines could mitigate both seasonal and pandemic spread. However, where it is not possible to maintain annual, intensive vaccination coverage, the duration and breadth of immunity raised by universal vaccines are critical determinants of their performance relative to conventional vaccines. In future, conventional and novel vaccines are likely to play complementary roles in

  7. OVERPREDICTION OF FEAR IN PANIC DISORDER PATIENTS WITH AGORAPHOBIA - DOES THE (MIS)MATCH MODEL GENERALIZE TO EXPOSURE IN-VIVO THERAPY

    NARCIS (Netherlands)

    VANHOUT, WJPJ; EMMELKAMP, PMG

    1994-01-01

    The purpose of this study was to test the (mis)match model of Rachman and co-workers during real life exposure therapy in panic disorder patients with agoraphobic avoidance. The results showed that although the patients tended to overpredict their expected fear before the exposure sessions, their pr

  8. Overprediction of fear in panic disorder patients with agoraphobia: Does the (mis)match model generalize to exposure in vivo therapy?

    NARCIS (Netherlands)

    van Hout, W.J.P.J.; Emmelkamp, P.M.G.

    1994-01-01

    The purpose of this study was to test the (mis)match model of Rachman and co-workers during real life exposure therapy in panic disorder patients with agoraphobic avoidance. The results showed that although the patients tended to overpredict their expected fear before the exposure sessions, their pr

  9. Iris Matching Based On a Stack Like Structure Graph Approach

    Directory of Open Access Journals (Sweden)

    Roushdi Mohamed FAROUK

    2012-12-01

    Full Text Available In this paper, we present the elastic bunch graph matching as a new approach for iris recognition. The task is difficult because of iris variation in terms of position, size, and partial occlusion. We have used the circular Hough transform to determine the iris boundaries. Individual segmented irises are represented as labeled graphs. We have combined a representative set of individual model graphs into a stack-like structure called an iris bunch graph (IBG). Finally, a bunch graph similarity function is proposed to compare a test graph with the IBG. Recognition results are given for galleries of irises from CASIA version and UBIRIS databases. The numerical results show that elastic bunch graph matching is an effective technique for iris matching. We also compare our results with previous results and find that elastic bunch graph matching yields effective matching performance.
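The circular Hough transform step mentioned above can be sketched as follows; this is a generic illustration with synthetic edge points and our own parameter choices, not the authors' implementation:

```python
import numpy as np

def circular_hough(edge_points, radii, shape):
    """Vote for circle centres over a set of candidate radii and return
    the best (row, col, radius) by accumulator count."""
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    best, best_votes = (0, 0, 0), -1
    for r in radii:
        acc = np.zeros(shape, dtype=int)
        # Each edge point votes for every centre lying at distance r from it.
        rows = (edge_points[:, 0:1] - r * np.sin(thetas)).round().astype(int)
        cols = (edge_points[:, 1:2] - r * np.cos(thetas)).round().astype(int)
        ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
        np.add.at(acc, (rows[ok], cols[ok]), 1)
        peak = np.unravel_index(acc.argmax(), shape)
        if acc[peak] > best_votes:
            best_votes, best = acc[peak], (peak[0], peak[1], r)
    return best

# Synthetic iris boundary: points on a circle of radius 20 centred at (50, 60).
angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pts = np.stack([50 + 20 * np.sin(angles), 60 + 20 * np.cos(angles)], axis=1)
row, col, r = circular_hough(pts, radii=[15, 20, 25], shape=(100, 120))
print(row, col, r)  # recovers the centre (50, 60) and radius 20
```

In practice the accumulator would be fed by edge pixels from a gradient detector, and separate radius ranges would be searched for the pupil and limbus boundaries.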

  10. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using a) traditional spreadsheet modelling; b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups by comparing accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using the chi-squared test, Fisher's exact test, Cochran's Q test and McNemar's test on difficulty.

  11. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  12. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    Full Text Available This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the particle swarm optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator distributes the total power circulation in the system across all phases. Another advantage of a polyphase system is that a fault in one winding does not force a system shutdown. Process optimization is the discipline of adjusting a process so as to optimize a stipulated set of parameters without violating constraints. Accurate parameter values can be extracted using PSO and the model reformulated accordingly. Modeling and simulation of the machine are executed, and MATLAB/Simulink has been used to implement and validate the results.
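The constriction-factor PSO mentioned above can be illustrated with a minimal sketch; the objective function here is a toy stand-in, not the generator's actual power-loss model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    """Particle swarm optimization with Clerc's constriction factor.

    The update v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
    keeps the swarm from diverging without an explicit velocity clamp.
    """
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # ~0.7298
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

# Toy loss standing in for the power-loss objective (our assumption):
sphere = lambda p: float(np.sum(p**2))
best, best_val = pso_minimize(sphere, dim=3)
print(best_val)  # close to zero at the optimum
```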

  13. Pattern recognition and string matching

    CERN Document Server

    Cheng, Xiuzhen

    2002-01-01

    The research and development of pattern recognition have proven to be of importance in science, technology, and human activity. Many useful concepts and tools from different disciplines have been employed in pattern recognition. Among them is string matching, which receives much theoretical and practical attention. String matching is also an important topic in combinatorial optimization. This book is devoted to recent advances in pattern recognition and string matching. It consists of twenty-eight chapters written by different authors, addressing a broad range of topics such as classification, matching, mining, feature selection, and applications. Each chapter is self-contained, and presents either novel methodological approaches or applications of existing theories and techniques. The aim, intent, and motivation for publishing this book is to provide a reference tool for the increasing number of readers who depend upon pattern recognition or string matching in some way. This includes student...

  14. Intermixing of InGaAs-InGaAs lattice-matched and strained quantum well structures using pre-annealing enhanced defects diffusion technique

    Science.gov (United States)

    Wang, Ruiyu; Shi, Yuan; Ooi, Boon Siew

    2005-01-01

    The ability to create a multiple-wavelength chip with high spatial bandgap selectivity across a III-V semiconductor wafer for monolithic photonic integration, using a simple postgrowth bandgap engineering process such as quantum well intermixing (QWI), is highly advantageous and desired. Preferably, this process should not result in drastic changes in either the optical or electrical properties of the processed material. In addition, the process should also give high reproducibility for both lattice-matched and strained quantum well (QW) structures. In this paper, we report a new method that meets most of these requirements. The process is performed by first implanting the InGaAs/InGaAsP laser structures with phosphorus ions at 300 keV; prior to QWI, the samples were pre-annealed at 600°C for 20 min. Subsequently, the annealing temperature was ramped to 700°C and held constant for 120 s for QWI. A blue bandgap shift of over 140 nm, relative to the as-grown and control samples, has been obtained from the strained InGaAs-InGaAsP laser structure. Using this process, devices such as bandgap-tuned lasers and multiple-section devices such as an integrated optically amplified photodetector have been demonstrated.

  15. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  16. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Directory of Open Access Journals (Sweden)

    Simone Mocellin

    Full Text Available BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method

  17. Targeted Therapy Database (TTD): A Model to Match Patient's Molecular Profile with Current Knowledge on Cancer Biology

    Science.gov (United States)

    Mocellin, Simone; Shrager, Jeff; Scolyer, Richard; Pasquali, Sandro; Verdi, Daunia; Marincola, Francesco M.; Briarava, Marta; Gobbel, Randy; Rossi, Carlo; Nitti, Donato

    2010-01-01

    Background The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. Objective To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. Methods To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. Results and Conclusions We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. 
In this essay we describe the principles and computational algorithms of an original method developed to fully exploit

  18. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  19. Matching post-Newtonian and numerical relativity waveforms: systematic errors and a new phenomenological model for non-precessing black hole binaries

    CERN Document Server

    Santamaria, L; Ajith, P; Bruegmann, B; Dorband, N; Hannam, M; Husa, S; Moesta, P; Pollney, D; Reisswig, C; Seiler, J; Krishnan, B

    2010-01-01

    We present a new phenomenological gravitational waveform model for the inspiral and coalescence of non-precessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical-relativity-based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant source of error lies in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of non-precessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational wave searches.

  20. An Efficient Pattern Matching Algorithm

    Science.gov (United States)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.
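The paper's exact combination of hashing and search trees is not reproduced here, but a classical hashing-based text matcher (a Rabin-Karp rolling hash) illustrates the general idea of comparing cheap hashes before verifying candidate positions:

```python
def rabin_karp(text, pattern, base=256, mod=1_000_000_007):
    """Find all occurrences of pattern in text with a rolling hash.

    The hash of each text window is compared against the pattern hash;
    only hash hits are verified character by character.
    """
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    p_hash = w_hash = 0
    high = pow(base, m - 1, mod)  # weight of the window's leading character
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        w_hash = (w_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if w_hash == p_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:
            # Slide the window: drop text[i], append text[i + m].
            w_hash = ((w_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))  # [0, 7]
```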

  1. Sample evaluation of ontology-matching systems

    NARCIS (Netherlands)

    Hage, W.R. van; Isaac, A.; Aleksovski, Z.

    2007-01-01

    Ontology matching exists to solve practical problems. Hence, methodologies to find and evaluate solutions for ontology matching should be centered on practical problems. In this paper we propose two statistically-founded evaluation techniques to assess ontology-matching performance that are based on
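A sample-based evaluation of this kind might look like the following sketch; the estimator and interval choice (a 95% Wilson score interval) are our illustration, not necessarily the paper's techniques:

```python
import math
import random

def sample_precision(mappings, is_correct, sample_size, seed=42):
    """Estimate a matcher's precision from a random sample of its output.

    Judging every mapping by hand is infeasible for large alignments, so a
    random sample is assessed and an approximate 95% Wilson interval reported.
    """
    random.seed(seed)
    sample = random.sample(mappings, min(sample_size, len(mappings)))
    n = len(sample)
    k = sum(1 for m in sample if is_correct(m))
    p = k / n
    z = 1.96  # ~95% confidence
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, (centre - half, centre + half)

# Hypothetical matcher output: 1000 candidate mappings, 80% actually correct.
mappings = [("src%d" % i, "tgt%d" % i, i % 5 != 0) for i in range(1000)]
p, (low, high) = sample_precision(mappings, lambda m: m[2], sample_size=100)
print(round(p, 2), round(low, 2), round(high, 2))
```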

  3. Probability-Based Pattern Recognition and Statistical Framework for Randomization: Modeling Tandem Mass Spectrum/Peptide Sequence False Match Frequencies

    Science.gov (United States)

    Estimating and controlling the frequency of false matches between a peptide tandem mass spectrum and candidate peptide sequences is an issue pervading proteomics research. To solve this problem, we designed an unsupervised pattern recognition algorithm for detecting patterns with various lengths fr...

  4. Hybrid Schema Matching for Deep Web

    Science.gov (United States)

    Chen, Kerui; Zuo, Wanli; He, Fengling; Chen, Yongheng

    Schema matching is the process of identifying semantic mappings, or correspondences, between two or more schemas. Schema matching is a first step and critical part of data integration. For schema matching of the deep web, most research focuses only on the query interface and rarely exploits the abundant schema information contained in query result pages. This paper proposes a hybrid schema matching technique, which combines attributes that appear in the query interfaces and query results of different data sources and mines the matched schemas from them. Experimental results prove the effectiveness of this method in improving the accuracy of schema matching.
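A minimal sketch of attribute-level schema matching by name similarity is shown below; the similarity measure and greedy pairing are illustrative assumptions, not the paper's algorithm. Combining attributes seen in the query interface with those mined from result pages, as the hybrid approach does, simply enlarges the attribute sets being matched:

```python
from difflib import SequenceMatcher

def match_schemas(attrs_a, attrs_b, threshold=0.6):
    """Greedily pair attributes from two sources by name similarity."""
    scored = sorted(
        ((SequenceMatcher(None, a.lower(), b.lower()).ratio(), a, b)
         for a in attrs_a for b in attrs_b),
        reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for score, a, b in scored:
        if score >= threshold and a not in used_a and b not in used_b:
            pairs.append((a, b, round(score, 2)))
            used_a.add(a)
            used_b.add(b)
    return pairs

# Hypothetical interface attributes vs. attributes mined from result pages:
interface = ["Title", "Author", "Price"]
results = ["book_title", "author_name", "list_price", "isbn"]
print(match_schemas(interface, results))
```

Real systems would combine several matchers (name, type, instance values) rather than relying on string similarity alone.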

  5. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirements engineering process starts from the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation as close to perfectly as possible. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a base for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  6. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5 to 8) hours: 84.5 (range 62 to 110) minutes for the gastric dissection, 56 (range 28 to 80) minutes for the gastric suturing, and 170.6 (range 70 to 200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  7. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances; Bayon, G. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux; Boutaine, J.L. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods: CIVA is an open system for processing NDE data; it is adapted to a standard work station (UNIX, C, MOTIF) and can read different supports on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), aluminium welds examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  8. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore the challenge for any company is to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
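A toy receding-horizon (MPC) loop for demand management might look like the following; the demand dynamics, cost weights, and grid search are all our assumptions, not the paper's formulation:

```python
import numpy as np

def mpc_step(d, r, a=0.9, b=0.5, horizon=10, lam=0.1,
             grid=np.linspace(-2.0, 2.0, 401)):
    """One receding-horizon step: pick the price input u minimizing a
    quadratic tracking cost over the horizon, apply only the first move.

    Demand model d[t+1] = a*d[t] - b*u is a toy linearization (assumption).
    """
    best_u, best_cost = 0.0, float("inf")
    for u in grid:  # search over a constant input held across the horizon
        dd, cost = d, 0.0
        for _ in range(horizon):
            dd = a * dd - b * u
            cost += (dd - r) ** 2 + lam * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Steer demand from 5.0 toward the capacity target 2.0:
d, target = 5.0, 2.0
for _ in range(30):
    u = mpc_step(d, target)
    d = 0.9 * d - 0.5 * u  # plant update using only the first planned move
print(round(d, 2))  # settles near the target
```

Re-planning at every step from the newly observed state is what distinguishes MPC from applying a fixed open-loop price schedule.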

  9. Improving Student Learning Activity Using the Make a Match Model in Mathematics in Grade V of SDN 050687 Sawit Seberang

    Directory of Open Access Journals (Sweden)

    Daitin Tarigan

    2014-06-01

    Full Text Available This research aims to assess student learning activity in mathematics, on the topic of converting fractions to percentages and decimals and vice versa, using the Make a Match model in grade V of SD Negeri 050687 Sawit Seberang in the 2013/2014 academic year. This is a classroom action research study; the data collection instruments were teacher and student activity observation sheets. Analysis of the data gave the following results: in meeting I of cycle I, the teacher activity score was 82.14 (good) and learning activity was active. The action was then continued through a second cycle. In meeting II of cycle II, the teacher activity score was 96.42 (very good) and classical learning activity was very active. From these results it was concluded that the action research succeeded, because the student learning activity indicator and the proportion of students classified as classically active reached 80%. The use of the Make a Match model can therefore improve student learning activity in mathematics, on the topic of converting fractions to percentages and decimals, in grade V of SD Negeri 050687 Sawit Seberang. Keywords: Make a Match model; student learning activity

  10. Synthetic aperture radar imaging based on attributed scatter model using sparse recovery techniques

    Institute of Scientific and Technical Information of China (English)

    苏伍各; 王宏强; 阳召成

    2014-01-01

    The sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of the sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterers' positions, and provide an effective approach to improve the SAR image resolution. Based on the attributed scatter center model, several experiments were performed under different practical considerations to evaluate the performance of five representative SR techniques, namely sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA). The parameter settings of the five SR algorithms are discussed, and their performance in different situations is compared. Through the comparison of MSE and failure rate in each algorithm's simulation, FBMP and SpaRSA are found suitable for dealing with problems in SAR imaging based on the attributed scattering center model. Although SBL is time-consuming, it consistently achieves better performance in terms of failure rate at high SNR.
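Of the five SR techniques listed, FISTA is perhaps the simplest to sketch; the problem instance below is a generic sparse-recovery toy of our own construction, not actual SAR data:

```python
import numpy as np

def fista(A, y, lam, iters=500):
    """FISTA for the l1-regularized least-squares (sparse recovery) problem
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(iters):
        g = z - A.T @ (A @ z - y) / L       # gradient step on the smooth part
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Synthetic scene: 3 strong scatterers among 100 candidate positions.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[7, 42, 77]] = [3.0, -2.0, 4.0]
y = A @ x_true
x_hat = fista(A, y, lam=0.01)
print(sorted(np.argsort(np.abs(x_hat))[-3:].tolist()))  # recovered scatterer positions
```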

  11. Study of Semi-Span Model Testing Techniques

    Science.gov (United States)

    Gatlin, Gregory M.; McGhee, Robert J.

    1996-01-01

    An investigation has been conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel in order to further the development of semi-span testing capabilities. A twin-engine, energy-efficient transport (EET) model with a four-element wing in a takeoff configuration was used for this investigation. Initially a full-span configuration was tested, and force and moment data, wing and fuselage surface pressure data, and fuselage boundary layer measurements were obtained as a baseline data set. The semi-span configurations were then mounted on the wind tunnel floor, and the effects of fuselage standoff height and shape as well as the effects of the tunnel floor boundary layer height were investigated. The effectiveness of tangential blowing at the standoff/floor juncture as an active boundary-layer control technique was also studied. Results indicate that the semi-span configuration was more sensitive to variations in standoff height than to variations in floor boundary layer height. A standoff height equivalent to 30 percent of the fuselage radius resulted in better correlation with full-span data than no standoff or the larger standoff configurations investigated. Undercut standoff leading edges or the use of tangential blowing in the standoff/floor juncture improved correlation of semi-span data with full-span data in the region of maximum lift coefficient.

  12. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models.
We employ this simple heat model to illustrate verification

  13. Apfel's excellent match

    Science.gov (United States)

    1997-01-01

    Apfel's excellent match: This series of photos shows a water drop containing a surfactant (Triton-100) as it experiences a complete cycle of superoscillation on U.S. Microgravity Lab-2 (USML-2; October 1995). The time in seconds appears under the photos. The figures above the photos are the oscillation shapes predicted by a numerical model. The time shown with the predictions is nondimensional. Robert Apfel (Yale University) used the Drop Physics Module on USML-2 to explore the effect of surfactants on liquid drops. Apfel's research of surfactants may contribute to improvements in a variety of industrial processes, including oil recovery and environmental cleanup.

  14. Outsourced pattern matching

    DEFF Research Database (Denmark)

    Faust, Sebastian; Hazay, Carmit; Venturi, Daniele

    2013-01-01

    on concrete and important functionalities and give the first protocol for the pattern matching problem in the cloud. Loosely speaking, this problem considers a text T that is outsourced to the cloud S by a client C_T. In a query phase, clients C_1, …, C_l run an efficient protocol with the server S...... that contain confidential data (e.g., health-related data about patient history). Our constructions offer simulation-based security in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to O(m) bits plus the number of occurrences...

  15. Generalized Orthogonal Matching Pursuit

    CERN Document Server

    Wang, Jian; Shim, Byonghyo

    2011-01-01

    As a greedy algorithm to recover sparse signals from compressed measurements, the orthogonal matching pursuit (OMP) algorithm has received much attention in recent years. In this paper, we introduce an extension of orthogonal matching pursuit for greater efficiency in reconstructing sparse signals. Our approach, henceforth referred to as generalized OMP (gOMP), is literally a generalization of OMP in the sense that multiple indices are identified per iteration. Owing to the selection of multiple "correct" indices, the gOMP algorithm finishes in a much smaller number of iterations than OMP. We show that gOMP can perfectly reconstruct any $K$-sparse signal ($K > 1$), provided that the sensing matrix satisfies the RIP with $\delta_{NK} < \frac{\sqrt{N}}{\sqrt{K} + 2 \sqrt{N}}$. We also demonstrate by empirical simulations that gOMP has excellent recovery performance comparable to the $\ell_1$-minimization technique with fast processing speed and competitive computational com...
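
    The core idea — identify several high-correlation indices per iteration, then re-solve a least-squares problem on the enlarged support — is easy to sketch. The following NumPy implementation is an illustrative sketch of that idea, not the authors' code; the stopping tolerance and the parameter names `N` (indices per iteration) and `K` (sparsity/iteration budget) are assumptions. With `N = 1` it reduces to plain OMP.

    ```python
    import numpy as np

    def gomp(Phi, y, K, N=2, tol=1e-10):
        """Generalized OMP sketch: pick the N columns of Phi most
        correlated with the residual each iteration, then refit the
        coefficients on the whole support by least squares."""
        m, n = Phi.shape
        support = []
        residual = y.copy()
        for _ in range(K):                     # at most K iterations
            corr = np.abs(Phi.T @ residual)
            corr[support] = 0.0                # skip already-chosen atoms
            picks = np.argsort(corr)[-N:]      # N largest correlations
            support.extend(int(i) for i in picks)
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ x_s
            if np.linalg.norm(residual) < tol:
                break
        x = np.zeros(n)
        x[support] = x_s
        return x
    ```

    On well-conditioned sensing matrices the refit step makes each iteration self-correcting: a wrongly chosen atom simply receives a near-zero coefficient once the true atoms enter the support.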

  16. Pattern Matching in Multiple Streams

    CERN Document Server

    Clifford, Raphael; Porat, Ely; Sach, Benjamin

    2012-01-01

    We investigate the problem of deterministic pattern matching in multiple streams. In this model, one symbol arrives at a time and is associated with one of s streaming texts. The task at each time step is to report if there is a new match between a fixed pattern of length m and a newly updated stream. As is usual in the streaming context, the goal is to use as little space as possible while still reporting matches quickly. We give almost matching upper and lower space bounds for three distinct pattern matching problems. For exact matching we show that the problem can be solved in constant time per arriving symbol and O(m+s) words of space. For the k-mismatch and k-differences problems we give O(k) time solutions that require O(m+ks) words of space. In all three cases we also give space lower bounds which show our methods are optimal up to a single logarithmic factor. Finally we set out a number of open problems related to this new model for pattern matching.
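
    The O(m + s)-word bound for exact matching is intuitive to sketch with a classical KMP automaton: one failure table shared by every stream plus one automaton state per stream. The sketch below achieves amortized (not the paper's worst-case) constant time per arriving symbol; the class and method names are invented for illustration.

    ```python
    def build_failure(pattern):
        """KMP failure table: fail[i] is the length of the longest
        proper border of pattern[:i+1]."""
        fail = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k > 0 and pattern[i] != pattern[k]:
                k = fail[k - 1]
            if pattern[i] == pattern[k]:
                k += 1
            fail[i] = k
        return fail

    class MultiStreamMatcher:
        """Exact matching over s independent streams in O(m + s) words:
        a shared failure table (O(m)) plus one state per stream (O(s))."""

        def __init__(self, pattern, num_streams):
            self.pattern = pattern
            self.fail = build_failure(pattern)
            self.state = [0] * num_streams   # chars matched so far, per stream

        def feed(self, stream_id, symbol):
            """Advance one stream by one symbol; return True on a match."""
            k = self.state[stream_id]
            while k > 0 and symbol != self.pattern[k]:
                k = self.fail[k - 1]
            if symbol == self.pattern[k]:
                k += 1
            if k == len(self.pattern):
                self.state[stream_id] = self.fail[k - 1]
                return True
            self.state[stream_id] = k
            return False
    ```

    Because the streams share only the read-only failure table, symbols from different streams can be interleaved in any order without interference.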

  17. REDUCING UNCERTAINTIES IN MODEL PREDICTIONS VIA HISTORY MATCHING OF CO2 MIGRATION AND REACTIVE TRANSPORT MODELING OF CO2 FATE AT THE SLEIPNER PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Chen

    2015-03-31

    An important question for the Carbon Capture, Storage, and Utility program is "can we adequately predict the CO2 plume migration?" For tracking CO2 plume development, the Sleipner project in the Norwegian North Sea provides more time-lapse seismic monitoring data than any other site, but significant uncertainties still exist for some of the reservoir parameters. In Part I, we assessed model uncertainties by applying two multi-phase compositional simulators to the Sleipner Benchmark model for the uppermost layer (Layer 9) of the Utsira Sand and calibrated our model against the time-lapse seismic monitoring data for the site from 1999 to 2010. An approximate match with the observed plume was achieved by introducing lateral permeability anisotropy, adding CH4 into the CO2 stream, and adjusting the reservoir temperatures. Model-predicted gas saturation, CO2 accumulation thickness, and CO2 solubility in brine—none of which were used as calibration metrics—were all comparable with the interpretations of the seismic data in the literature. In Parts II and III, we evaluated the uncertainties of the predicted long-term CO2 fate up to 10,000 years due to uncertain reaction kinetics. Under four scenarios of kinetic rate laws, the temporal and spatial evolution of CO2 partitioning into the four trapping mechanisms (hydrodynamic/structural, solubility, residual/capillary, and mineral) was simulated with ToughReact, taking into account the CO2-brine-rock reactions and multi-phase reactive flow and mass transport. Modeling results show that different rate laws for mineral dissolution and precipitation reactions result in different predicted amounts of CO2 trapped by carbonate minerals, with scenarios using the conventional linear rate law for feldspar dissolution having twice as much mineral trapping (21% of the injected CO2) as scenarios with a Burch-type or Alekseyev et al.–type rate law for feldspar dissolution (11%). So far, most reactive transport modeling (RTM) studies for

  18. MATCHING IN INFORMAL FINANCIAL INSTITUTIONS.

    Science.gov (United States)

    Eeckhout, Jan; Munshi, Kaivan

    2010-09-01

    This paper analyzes an informal financial institution that brings heterogeneous agents together in groups. We analyze decentralized matching into these groups, and the equilibrium composition of participants that consequently arises. We find that participants sort remarkably well across the competing groups, and that they re-sort immediately following an unexpected exogenous regulatory change. These findings suggest that the competitive matching model might have applicability and bite in other settings where matching is an important equilibrium phenomenon. (JEL: O12, O17, G20, D40).

  19. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    Science.gov (United States)

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  20. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  1. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
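
    A selector of this kind can be sketched by measuring high-frequency content in a thin ring of valid cells around each void. The criterion below (variance of the gradient magnitude, compared against a threshold) and the ring width are assumed stand-ins for illustration, not the Harris toolkit's actual selection logic.

    ```python
    import numpy as np

    def _dilate(mask, iterations):
        """Binary dilation with a 3x3 cross, implemented with shifts."""
        m = mask.copy()
        for _ in range(iterations):
            p = np.pad(m, 1)
            m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                 | p[1:-1, :-2] | p[1:-1, 2:])
        return m

    def choose_void_fill(dsm, void_mask, ring=3, threshold=1.0):
        """Return "pde" for smooth surroundings (diffusion-style fill)
        or "exemplar" for high-frequency surroundings, judged from a
        ring of valid cells around the void."""
        ring_mask = _dilate(void_mask, ring) & ~void_mask
        gy, gx = np.gradient(dsm)          # in practice, computed on valid data only
        grad = np.hypot(gx, gy)
        return "exemplar" if grad[ring_mask].var() > threshold else "pde"
    ```

    A smooth sloping surface around a void would select the PDE fill, while noisy or textured terrain would select exemplar inpainting.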

  2. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine-invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters is needed a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace and can also be used in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of model sizes by efficiently coding the geometry and connectivity

  3. Optimal damping ratios of multi-axial perfectly matched layers for elastic-wave modeling in general anisotropic media

    CERN Document Server

    Gao, Kai

    2016-01-01

    The conventional Perfectly Matched Layer (PML) is unstable for certain kinds of anisotropic media. This instability is intrinsic and independent of PML formulation or implementation. The Multi-axial PML (MPML) removes such instability by using a nonzero damping coefficient in the direction parallel to the interface between a PML and the investigated domain. The damping ratio of the MPML is the ratio between the damping coefficients along the directions parallel and perpendicular to the interface between a PML and the investigated domain. No quantitative approach has been available for obtaining these damping ratios for general anisotropic media. We develop a quantitative approach to determining optimal damping ratios that not only stabilize PMLs but also minimize the artificial reflections from MPMLs. Numerical tests based on the finite-difference method show that our new method can effectively provide a set of optimal MPML damping ratios for elastic-wave propagation in 2D and 3D general anisotropic media.

  4. Development of a Model for Internalizing Character Values in History Learning through the Value Clarification Technique Model

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    This research produced a model for internalizing character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. Broadly, the research comprised three stages: (1) a pre-survey to identify the current condition of character-value learning in junior-secondary (SMP) social studies history instruction; (2) development of a model based on the pre-survey findings, following the Dick and Carey model; and (3) validation of the model. Model development was carried out through limited and extensive field trials. The findings lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for strengthening the role of history learning in the formation of student character. It can be concluded that the VCT model is effective for improving the quality of the processes and products of character-value learning in SMP social studies, particularly in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning.

  5. Establishment of C6 brain glioma models through stereotactic technique for laser interstitial thermotherapy research

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2015-01-01

    Conclusion: The rat C6 brain glioma model established in this study is well suited to the study of LITT of glioma. The infrared thermography technique measured temperature conveniently and effectively. The technique is noninvasive, and the data obtained can be further processed using the software used in LITT research. To measure deep-tissue temperature, combining thermocouples with infrared thermography would give better results.

  6. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency-weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency-weighted model reduction techniques include the lack of stability of reduced-order models (for the two-sided weighting case) and the absence of frequency-response error bounds. A new frequency-weighted technique for balanced model reduction of discrete-time systems is proposed. The proposed technique guarantees stable reduced-order models even when two-sided weightings are present. An efficient technique for computing frequency-weighted Gramians is also proposed. Results are compared with other existing frequency-weighted model reduction techniques for discrete-time systems. Moreover, the proposed technique yields frequency-response error bounds.

  7. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductiometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  8. New Developments and Techniques in Structural Equation Modeling

    CERN Document Server

    Marcoulides, George A

    2001-01-01

    Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent varia

  9. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  10. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...
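
    Of the four problems, string edit distance is the easiest to state concretely. The classical quadratic dynamic program below is the textbook baseline that word-RAM algorithms of this kind accelerate by packing many DP cells into one machine word; it illustrates the problem, not the paper's method.

    ```python
    def edit_distance(a, b):
        """Classical O(|a|*|b|) dynamic program for string edit distance,
        kept to two rows so it uses O(min-side) extra space."""
        m, n = len(a), len(b)
        prev = list(range(n + 1))            # distances from "" to prefixes of b
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1,                             # deletion
                             cur[j - 1] + 1,                          # insertion
                             prev[j - 1] + (a[i - 1] != b[j - 1]))    # substitution
            prev = cur
        return prev[n]
    ```

    For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion).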

  11. Statistics of polarisation matching

    NARCIS (Netherlands)

    Naus, H.W.L.; Zwamborn, A.P.M.

    2014-01-01

    The reception of electromagnetic signals depends on the polarisation matching of the transmitting and receiving antenna. The practical matching differs from the theoretical one because of the noise deterioration of the transmitted and eventually received electromagnetic field. In other applications,

  12. Understanding Y haplotype matching probability.

    Science.gov (United States)

    Brenner, Charles H

    2014-01-01

    The Y haplotype population-genetic terrain is better explored from a fresh perspective rather than by analogy with the more familiar autosomal ideas. For haplotype matching probabilities, as opposed to autosomal matching probabilities, explicit attention to modelling - such as how evolution got us where we are - is much more important, while consideration of population frequency is much less so. This paper explores, extends, and explains some of the concepts of "Fundamental problem of forensic mathematics - the evidential strength of a rare haplotype match". That earlier paper presented and validated a "kappa method" formula for the evidential strength when a suspect matches a previously unseen haplotype (such as a Y-haplotype) at the crime scene. Mathematical implications of the kappa method are intuitive and reasonable; suspicions to the contrary rest on elementary errors. Critical to deriving the kappa method or any sensible evidential calculation is understanding that thinking about haplotype population frequency is a red herring; the pivotal question is one of matching probability. But confusion between the two is unfortunately institutionalized in much of the forensic world. Examples make clear why (matching) probability is not (population) frequency and why uncertainty intervals on matching probabilities are merely confused thinking. Forensic matching calculations should be based on a model, on stipulated premises. The model inevitably only approximates reality, and any error in the results comes only from error in the model, the inexactness of the approximation. Sampling variation does not measure that inexactness and hence is not helpful in explaining evidence; it is in fact an impediment. Alternative haplotype matching probability approaches that various authors have considered are reviewed. Some are based on no model and cannot be taken seriously. For the others, some evaluation of the models is discussed. Recent evidence supports the adequacy of

  13. Analysis of strictly bound modes in photonic crystal fibers by use of a source-model technique.

    Science.gov (United States)

    Hochman, Amit; Leviatan, Yehuda

    2004-06-01

    We describe a source-model technique for the analysis of the strictly bound modes propagating in photonic crystal fibers that have a finite photonic bandgap crystal cladding and are surrounded by an air jacket. In this model the field is simulated by a superposition of fields of fictitious electric and magnetic current filaments, suitably placed near the media interfaces of the fiber. A simple point-matching procedure is subsequently used to enforce the continuity conditions across the interfaces, leading to a homogeneous matrix equation. Nontrivial solutions to this equation yield the mode field patterns and propagation constants. As an example, we analyze a hollow-core photonic crystal fiber. Symmetry characteristics of the modes are discussed and exploited to reduce the computational burden.
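
    The linear-algebra core of such a point-matching scheme can be sketched generically: each matching point contributes a row expressing a continuity condition as a linear combination of the fictitious source amplitudes, and a nontrivial solution of the resulting homogeneous system is sought. A standard numerical route (an assumed illustration, not the authors' code) takes the singular vector of the smallest singular value.

    ```python
    import numpy as np

    def smallest_singular_vector(A):
        """Near-null vector of the homogeneous system A c ~ 0.

        In a source-model/point-matching scheme, row i of A encodes a
        continuity condition at matching point i in terms of the source
        amplitudes c. A trial propagation constant is accepted when A
        becomes (nearly) singular; the corresponding null vector gives
        the source amplitudes, hence the mode field pattern."""
        _, s, Vt = np.linalg.svd(A)
        return s[-1], Vt[-1]     # smallest singular value and its vector
    ```

    Scanning the propagation constant and watching the smallest singular value dip toward zero is a common way to locate the modes.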

  14. Matching Through Position Auctions

    OpenAIRE

    Terence Johnson

    2009-01-01

    This paper studies how an intermediary should design two-sided matching markets when agents are privately informed about their quality as a partner and can make payments to the intermediary. Using a mechanism design approach, I derive sufficient conditions for assortative matching to be profit- or welfare-maximizing, and then show how to implement the optimal match and payments through two-sided position auctions. This sharpens our understanding of intermediated matching markets by clarifying...

  15. Matched-Comparative Modeling of Normal and Diseased Human Airway Responses Using a Microengineered Breathing Lung Chip.

    Science.gov (United States)

    Benam, Kambez H; Novak, Richard; Nawroth, Janna; Hirano-Kobayashi, Mariko; Ferrante, Thomas C; Choe, Youngjae; Prantil-Baun, Rachelle; Weaver, James C; Bahinski, Anthony; Parker, Kevin K; Ingber, Donald E

    2016-11-23

    Smoking represents a major risk factor for chronic obstructive pulmonary disease (COPD), but it is difficult to characterize smoke-induced injury responses under physiological breathing conditions in humans due to patient-to-patient variability. Here, we show that a small airway-on-a-chip device lined by living human bronchiolar epithelium from normal or COPD patients can be connected to an instrument that "breathes" whole cigarette smoke in and out of the chips to study smoke-induced pathophysiology in vitro. This technology enables true matched comparisons of biological responses by culturing cells from the same individual with or without smoke exposure. These studies led to identification of ciliary micropathologies, COPD-specific molecular signatures, and epithelial responses to smoke generated by electronic cigarettes. The smoking airway-on-a-chip represents a tool to study normal and disease-specific responses of the human lung to inhaled smoke across molecular, cellular and tissue-level responses in an organ-relevant context.

  16. Improved bounds for stochastic matching

    CERN Document Server

    Li, Jian

    2010-01-01

    In this paper we study stochastic matching problems that are motivated by applications in online dating and kidney exchange programs. We consider two probing models: edge probing and matching probing. Our main result is an algorithm that finds a matching-probing strategy attaining a small constant approximation ratio. An interesting aspect of our approach is that we compare the cost of our solution to the best edge-probing strategy. Thus, we indirectly show that the best matching-probing strategy is only a constant factor away from the best edge-probing strategy. Even though our algorithm has a slightly worse approximation ratio than a greedy algorithm for edge-probing strategies, we show that the two algorithms can be combined to get improved approximations.
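
    The greedy edge-probing baseline mentioned above can be sketched as follows. This is an unweighted, patience-free simplification for illustration; the exact probing order and constraints in the paper are not reproduced here.

    ```python
    import random

    def greedy_edge_probing(num_nodes, edges, rng):
        """Greedy edge-probing strategy for stochastic matching (sketch).

        edges: list of (u, v, p) where p is the probability the edge
        turns out to exist when probed. Probes edges in decreasing
        order of p and irrevocably commits an edge whose probe
        succeeds, provided both endpoints are still unmatched."""
        matched = [False] * num_nodes
        matching = []
        for u, v, p in sorted(edges, key=lambda e: -e[2]):
            if not matched[u] and not matched[v] and rng.random() < p:
                matched[u] = matched[v] = True
                matching.append((u, v))
        return matching
    ```

    With all probabilities equal to 1 this reduces to the usual greedy maximal matching, which is where its constant-factor behavior comes from.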

  17. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  18. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
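
    A minimal evolutionary parameter search of the flavour described can be sketched as follows. This is a generic elitist (mu + lambda)-style strategy, not one of the paper's four techniques, and the hyperparameters are illustrative; for an expensive 3-dimensional ocean model, `cost` would wrap a full simulation plus the model-observation misfit.

    ```python
    import numpy as np

    def evolve_parameters(cost, bounds, pop_size=20, n_gen=30, sigma=0.2, seed=0):
        """Box-constrained evolutionary minimization of a cost function.

        cost:   maps a parameter vector to a scalar misfit
        bounds: (lo, hi) arrays delimiting the feasible box"""
        rng = np.random.default_rng(seed)
        lo, hi = (np.asarray(b, dtype=float) for b in bounds)
        pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
        fitness = np.array([cost(p) for p in pop])
        for _ in range(n_gen):
            # keep the best half as parents (elitism), mutate to get children
            parents = pop[np.argsort(fitness)[: pop_size // 2]]
            children = parents + rng.normal(0.0, sigma, parents.shape) * (hi - lo)
            children = np.clip(children, lo, hi)
            pop = np.vstack([parents, children])
            fitness = np.array([cost(p) for p in pop])
        best = int(np.argmin(fitness))
        return pop[best], fitness[best]
    ```

    Each generation costs pop_size model runs, so 30 generations with a population of 20 lands in the same few-hundred-simulation budget the study reports.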

  19. Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array

    Directory of Open Access Journals (Sweden)

    Park Sungchan

    2011-01-01

    There is a growing need in computer vision applications for stereopsis, requiring not only accurate distance estimates but also fast and compact physical implementations. Global energy-minimization techniques provide remarkably precise results, but they suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation, solving the memory-access problem between large external memory and the massive processor array. Remarkable memory savings can be obtained with our memory-reduction scheme, and our new architecture is a systolic array. If we expand it to N cascaded chips, we can cope with various ranges of image resolution. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image-processing algorithms with tiny, distributed memory resources, such as optical flow and image restoration.

  20. Modeling seismic wave propagation across the European plate: structural models and numerical techniques, state-of-the-art and prospects

    Science.gov (United States)

    Morelli, Andrea; Danecek, Peter; Molinari, Irene; Postpischl, Luca; Schivardi, Renata; Serretti, Paola; Tondi, Maria Rosaria

    2010-05-01

    beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We validate this new model through comparison of recorded seismograms with simulations based on numerical codes (SPECFEM3D). To ease and increase model usage, we also propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages, and provide tools for manipulating and visualising models described in this standard format in Google Earth and GEON IDV. In the next decade seismologists will be able to reap new possibilities offered by exciting progress in general computing power and algorithmic development in computational seismology. Structural models, still based on classical approaches that model just a few parameters in each seismogram, will benefit from emerging techniques - such as full waveform fitting and fully nonlinear inversion - that are now just showing their potential. This will require extensive availability of supercomputing resources to earth scientists in Europe, as a tool to match the planned new massive data flow. We need to make sure that the whole apparatus needed to fully exploit new data will be widely accessible. To maximize this development, for instance to enable prompt modeling of ground shaking after a major earthquake, we will also need a better coordination framework that enables us to share and amalgamate the abundant local information on earth structure - most often available but difficult to retrieve, merge and use. Comprehensive knowledge of earth structure and of best practices for modeling wave propagation can by all means be considered an enabling technology for further geophysical progress.
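
    A JSON interchange format for a tomographic model might look like the sketch below. The field names ("parameter", "grid", "values", and so on) are illustrative assumptions, not the schema the authors propose; the point is only that any JSON-aware language can read such a model back without a bespoke parser.

    ```python
    import json

    # Minimal sketch of a JSON tomographic-model exchange document.
    model = {
        "name": "example-european-vs-model",
        "parameter": "vs",            # shear-wave velocity
        "units": "km/s",
        "grid": {
            "lat": [40.0, 42.0],      # degrees north
            "lon": [10.0, 12.0],      # degrees east
            "depth": [50.0, 100.0],   # kilometres
        },
        # values[i][j][k] -> vs at (lat[i], lon[j], depth[k])
        "values": [[[4.3, 4.5], [4.2, 4.6]],
                   [[4.4, 4.5], [4.3, 4.7]]],
    }

    encoded = json.dumps(model)        # plain text, easy to archive and diff
    decoded = json.loads(encoded)      # lossless round trip
    ```

    Keeping the grid axes explicit alongside the value array is what makes such files self-describing enough for tools like Google Earth exporters to consume.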

  1. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  2. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  3. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for off-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  4. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    Science.gov (United States)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause dramatic damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
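The gain-matrix update described above reduces, in the scalar case, to the textbook Kalman filter. The following sketch (hypothetical numbers; a single state variable standing in for the gridded wavefield) shows how repeated assimilation of noisy pressure-sensor readings shrinks the forecast variance:

```python
import random

def kalman_update(x_est, P, z, H=1.0, R=0.01):
    """One Kalman measurement update for a scalar state.
    x_est: forecast, P: forecast variance, z: observation,
    H: observation operator, R: observation-noise variance."""
    K = P * H / (H * P * H + R)          # Kalman gain
    x_new = x_est + K * (z - H * x_est)  # correct forecast with the innovation
    P_new = (1.0 - K * H) * P            # assimilation shrinks the variance
    return x_new, P_new

random.seed(0)
truth = 2.0              # "true" wave height (arbitrary units)
x_est, P = 0.0, 1.0      # poor first guess with large uncertainty
for _ in range(50):
    z = truth + random.gauss(0.0, 0.1)   # noisy ocean-bottom pressure reading
    x_est, P = kalman_update(x_est, P, z)
```

In the 2D setting these scalars become the state vector and covariance matrices of the shallow-water model, which is what makes evolving the gain costlier than the time-independent optimal-interpolation gain.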

  5. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computation are carried out on a wind turbine exposed to a representative...

  6. Modeling correlated Nakagami-MIMO channel based on 2D rank matching

    Institute of Scientific and Technical Information of China (English)

    陈学强; 王成华; 朱秋明; 陈超

    2013-01-01

    The traditional Nakagami-MIMO (multiple-input multiple-output) channel simulation methods are very complex, so a novel simulator for spatially and temporally correlated Nakagami-MIMO fading channels based on a 2D rank matching technique was proposed. First, a highly efficient rejection method is used to generate several independent Nakagami random processes as sub-branches of the MIMO channel. Then, the spatial and temporal correlations between the sub-channels are introduced by a new two-dimensional rank matching technique, which keeps the statistical properties of the Nakagami fading unchanged. Simulation results show that the new simulator agrees well with the theoretical results on envelope distribution and spatial-temporal correlation, and it can be applied to simulating Nakagami-MIMO channels with arbitrary fading parameters and correlation features.
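The rejection step of such a simulator can be sketched as follows; the parameters (m = 2, Ω = 1), the uniform proposal, and the grid-searched bound are illustrative choices for this sketch, not the efficient envelope of the paper:

```python
import math
import random

def nakagami_pdf(r, m=2.0, omega=1.0):
    """Nakagami-m envelope probability density."""
    return (2.0 * m ** m * r ** (2 * m - 1)
            / (math.gamma(m) * omega ** m)) * math.exp(-m * r * r / omega)

def sample_nakagami(n, m=2.0, omega=1.0, r_max=3.0, seed=1):
    """Accept-reject sampling from a uniform proposal on [0, r_max]."""
    rng = random.Random(seed)
    # crude numeric bound on the pdf over the support (sketch only)
    M = max(nakagami_pdf(0.001 + i * r_max / 1000, m, omega) for i in range(1000))
    out = []
    while len(out) < n:
        r = rng.uniform(0.0, r_max)          # propose an envelope value
        if rng.uniform(0.0, M) <= nakagami_pdf(r, m, omega):
            out.append(r)                    # accept with prob pdf(r)/M
    return out

samples = sample_nakagami(5000)
mean = sum(samples) / len(samples)           # theory: ~0.94 for m=2, omega=1
```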

  7. A survey and categorization of ontology-matching cases

    NARCIS (Netherlands)

    Aleksovski, Z.; Hage, W.R. van; Isaac, A.

    2007-01-01

    Methodologies to find and evaluate solutions for ontology matching should be centered on the practical problems to be solved. In this paper we look at matching from the perspective of a practitioner in search of matching techniques or tools. We survey actual matching use cases, and derive general ca

  8. Best matching theory & applications

    CERN Document Server

    Moghaddam, Mohsen

    2017-01-01

    Mismatch or best match? This book demonstrates that best matching of individual entities to each other is essential to ensure smooth conduct and successful competitiveness in any distributed system, natural and artificial. Interactions must be optimized through best matching in planning and scheduling, enterprise network design, transportation and construction planning, recruitment, problem solving, selective assembly, team formation, sensor network design, and more. Fundamentals of best matching in distributed and collaborative systems are explained by providing: § Methodical analysis of various multidimensional best matching processes § Comprehensive taxonomy, comparing different best matching problems and processes § Systematic identification of systems’ hierarchy, nature of interactions, and distribution of decision-making and control functions § Practical formulation of solutions based on a library of best matching algorithms and protocols, ready for direct applications and apps development. Design...

  9. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine;

    In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has pr...... in order to improve the pattern reproducibility while maintaining the efficiency of the sequential Gibbs sampling strategy. We compare realizations of three types of a priori models. Finally, the results are exemplified through crosshole travel time tomography....

  10. THE EFFECT OF USING THE COOPERATIVE LEARNING MODEL OF TYPE MAKE-A-MATCH WITH EVIDENCE CARD MEDIA ON CHEMISTRY LEARNING OUTCOMES FOR ACID-BASE MATERIAL

    Directory of Open Access Journals (Sweden)

    D. Noviyanti

    2016-07-01

    Full Text Available The study aimed to determine the effect of using the cooperative learning model of type make-a-match with evidence card media on chemistry learning outcomes for acid-base solutions. The population was students of science class XI. Sampling was performed by saturated sampling, with XIA1 as the control class and XIA2 as the experiment class. The two-sided t test produced tcount (4.0293) > ttable (1.9925), which means there was a significant difference, while the one-sided (right-tail) t test produced tcount (4.0293) > ttable (1.9925), which means that the average cognitive learning outcome of the experimental class was better than that of the control class. The N-gain of the experimental class (0.71) was better than that of the control class (0.52). This shows that the use of this model affected the learning outcomes on acid-base solutions by 28.99%, so teachers should try to apply this learning model to other material.
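The N-gain figures quoted above follow the normalized-gain construction: the achieved fraction of the maximum possible improvement. A minimal sketch (the pre/post averages below are hypothetical values chosen only to reproduce gains of the reported size):

```python
def n_gain(pre, post, max_score=100.0):
    """Normalized gain: fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

# Hypothetical class averages, not the study's actual scores.
experiment = n_gain(45.0, 84.05)
control = n_gain(45.0, 73.6)
```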

  11. Towards an Integrative Model of Sexual Harassment: An Examination of Power, Attitudes, Gender/Role Match, and Some Interactions.

    Science.gov (United States)

    2007-11-02

    ...sexism that does not reach the threshold of sexual harassment. Interactions of Predictor Variables: a criticism of the models of sexual harassment... [Figure 1b: A model of sexual harassment behaviors for women, contrasting high and low power distance with environments ranging from simple sexism to the least likely environment for sexual harassment.] ...environment would most likely be a form of sexism not generally classified as harassment. In the low power distance situation, supervisors and subordinates...

  12. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, the main existing sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial basis function neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, which accounts for both the number of samples and the mean square error of the network models, is the best sampling technique for neural network based meta-model building.
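As a concrete example of a stratified sampling scheme in the spirit of those evaluated, a Latin hypercube sampler (exactly one point per stratum on every axis) can be written in a few lines; this is an illustrative technique, not necessarily one of the seven tested:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n stratified samples in [0,1)^dims: one point per stratum per axis."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                       # random pairing across axes
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n  # jitter inside stratum s
    return samples

pts = latin_hypercube(8, 2)   # 8 training samples in a 2-parameter design space
```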

  13. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  14. No benefit from chronic lithium dosing in a sibling-matched, gender balanced, investigator-blinded trial using a standard mouse model of familial ALS.

    Directory of Open Access Journals (Sweden)

    Alan Gill

    Full Text Available BACKGROUND: In any animal model of human disease a positive control therapy that demonstrates efficacy in both the animal model and the human disease can validate the application of that animal model to the discovery of new therapeutics. Such a therapy has recently been reported by Fornai et al. using chronic lithium carbonate treatment and showing therapeutic efficacy in both the high-copy SOD1G93A mouse model of familial amyotrophic lateral sclerosis (ALS, and in human ALS patients. METHODOLOGY/PRINCIPAL FINDINGS: Seeking to verify this positive control therapy, we tested chronic lithium dosing in a sibling-matched, gender balanced, investigator-blinded trial using the high-copy (average 23 copies SOD1G93A mouse (n = 27-28/group. Lithium-treated mice received single daily 36.9 mg/kg i.p. injections from 50 days of age through death. This dose delivered 1 mEq/kg (6.94 mg/kg/day lithium ions. Neurological disease severity score and body weight were determined daily during the dosing period. Age at onset of definitive disease and survival duration were recorded. Summary measures from individual body weight changes and neurological score progression, age at disease onset, and age at death were compared using Kaplan-Meier and Cox proportional hazards analysis. Our study did not show lithium efficacy by any measure. CONCLUSIONS/SIGNIFICANCE: Rigorous survival study design that includes sibling matching, gender balancing, investigator blinding, and transgene copy number verification for each experimental subject minimized the likelihood of attaining a false positive therapeutic effect in this standard animal model of familial ALS. Results from this study do not support taking lithium carbonate into human clinical trials for ALS.

  15. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  16. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, there has been a spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how the techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan’s cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  17. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, there has been a spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how the techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  18. An Implementation of the Frequency Matching Method

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer

    During the last decade multiple-point statistics has become increasingly popular as a tool for incorporating complex prior information when solving inverse problems in geosciences. A variety of methods have been proposed but often the implementation of these is not straightforward. One of these methods is the recently proposed Frequency Matching method to compute the maximum a posteriori model of an inverse problem where multiple-point statistics, learned from a training image, is used to formulate a closed form expression for an a priori probability density function. This paper discusses aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method...

  19. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent property of symmetry of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models, where the initial quantity of release at the source is not known. For exact prediction the value of SAPMI should be equal to 1. If the predicted values are overestimates then SAPMI > 1, and if they are underestimates then SAPMI < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable to all variations of symmetrical air pollution dispersion models.
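The abstract does not give the formula for SAPMI; under one plausible reading (an assumption of this sketch, not the paper's definition), the index is the ratio of summed predicted to summed observed concentrations over the symmetric receptor layout, so that 1 means exact prediction:

```python
def sapmi(predicted, observed):
    """Hypothetical SAPMI: ratio of total predicted to total observed
    concentration across a symmetric receptor layout."""
    return sum(predicted) / sum(observed)

obs = [1.0, 2.0, 4.0, 2.0, 1.0]           # symmetric receptor readings
exact = sapmi(obs, obs)                    # exact prediction -> 1
over = sapmi([1.2 * c for c in obs], obs)  # overestimating model -> > 1
under = sapmi([0.8 * c for c in obs], obs) # underestimating model -> < 1
```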

  20. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  1. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been performed. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to the model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PCA is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which can be computationally costly, and this restricts the current approach to
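The eigenvector extraction underlying such a PCA can be illustrated with power iteration on a toy symmetric "Hessian"; the matrix below is hypothetical (parameters 0 and 1 interact, parameter 2 is independent), standing in for the sensitivity matrix of a real history-matching run:

```python
def power_iteration(H, iters=200):
    """Dominant eigenpair of a small symmetric matrix (pure Python)."""
    n = len(H)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]              # renormalize the iterate
        # Rayleigh quotient: current eigenvalue estimate
        lam = sum(v[i] * sum(H[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# Toy 3x3 "Hessian": a coupled 2x2 block plus one independent parameter.
H = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 1.0]]
lam, v = power_iteration(H)   # dominant direction lies in the coupled block
```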

  2. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the techniques of model reduction tested (Marshall`s truncature, Michailesco aggregation method and Moore truncature) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  3. Sequence Matching Analysis for Curriculum Development

    National Research Council Canada - National Science Library

    Liem Yenny Bendatu; Bernardo Nugroho Yahya

    2015-01-01

    .... This study attempts to develop a sequence matching analysis. Considering conformance checking as the basis of this approach, this proposed approach utilizes the current control flow technique in process mining domain...

  4. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to enhance vehicle driving stability and safety, which has been a central research question for scientists and engineers in the vehicle industry, a new control method was investigated. After analysis of tire motion characteristics and the vehicle stress state, a tire model based on the extended Pacejka magic formula, combining longitudinal and lateral motion, was developed, and a nonlinear vehicle dynamic stability model with seven degrees of freedom was constructed. A new model reference adaptive control scheme was designed that takes the slip angle and yaw rate of the vehicle body as the output and feedback variables, adjusting the torque on the vehicle body to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It is made up of several mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center-of-mass parameter calculation module, the tire parameter calculation module, and so forth. The severe lane change simulation results show that this vehicle model and the model reference adaptive control method have excellent performance.

  5. Template Matching on Parallel Architectures,

    Science.gov (United States)

    1985-07-01

    memory. The processors run asynchronously. Thus according to Flynn's categories the Butterfly is a MIMD machine. The processors of the Butterfly are...Generalized Butterfly Architecture. This section describes timings for pattern matching on the generalized Butterfly. The implementations on the Butterfly...these algorithms. Thus the best implementations of the techniques on the generalized Butterfly are the same as the implementations on the real Butterfly

  6. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model, and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the rationality of the approach using an ocean dynamics model with an observed data assimilation procedure.

  7. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    Science.gov (United States)

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization on the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in the scene set. By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low rank Hessian matrix. This facilitates the use of large scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.
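The lower-bounding step has a linear assignment formulation; for tiny instances its answer can be checked against brute force. The sketch below enumerates permutations as a stand-in for an efficient LAP solver (e.g. the Hungarian method); the cost matrix is hypothetical, e.g. squared model-to-scene distances:

```python
from itertools import permutations

def assignment_cost(cost):
    """Exhaustive linear assignment: minimum-cost one-to-one matching
    of rows (model points) to columns (scene points). Tiny inputs only."""
    n = len(cost)
    best_cost, best_perm = None, None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if best_cost is None or c < best_cost:
            best_cost, best_perm = c, perm
    return best_cost, best_perm

# Hypothetical matching costs between 3 model points and 3 scene points.
C = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
cost, perm = assignment_cost(C)   # optimal: rows 0,1,2 -> columns 1,0,2
```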

  8. Wave Propagation in Fluids Models and Numerical Techniques

    CERN Document Server

    Guinot, Vincent

    2007-01-01

    This book presents the physical principles of wave propagation in fluid mechanics and hydraulics. The mathematical techniques that allow the behavior of the waves to be analyzed are presented, along with existing numerical methods for the simulation of wave propagation. Particular attention is paid to discontinuous flows, such as steep fronts and shock waves, and their mathematical treatment. A number of practical examples are taken from various areas fluid mechanics and hydraulics, such as contaminant transport, the motion of immiscible hydrocarbons in aquifers, river flow, pipe transients an

  9. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tesselation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified...... that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density...

  10. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
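
    The Monte Carlo idea can be sketched as follows: draw each end use's consumption from a probability distribution and sum the draws into a per-household daily demand. The end-use categories, distribution shapes, and parameter values below are invented for illustration; they are not the study's calibrated distributions.

```python
# Monte Carlo simulation of household water demand: each end use's
# daily volume is sampled from a (here, truncated normal) distribution
# and the draws are summed per simulated household.
import random

END_USES = {  # (mean, sd) in gallons/household/day -- illustrative only
    "toilet": (33.0, 8.0),
    "shower": (28.0, 10.0),
    "clothes_washer": (23.0, 9.0),
    "faucet": (27.0, 7.0),
    "leaks": (17.0, 12.0),
}

def simulate_household(rng):
    """One household's daily demand; negative draws truncated to zero."""
    return sum(max(0.0, rng.gauss(mu, sd)) for mu, sd in END_USES.values())

rng = random.Random(42)
demands = [simulate_household(rng) for _ in range(10000)]
mean = sum(demands) / len(demands)
print(round(mean, 1))  # sample mean close to the sum of the end-use means
```

    In the study, distributions of this kind were fit per end use from metered measurement data, and seasonal outdoor use was modeled separately.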

  11. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solution to the forward and inverse problems faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed, and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, with similar system properties, in order to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problem are important in reconstructing the tomographic image.
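
    A sensitivity map for a single optical path can be sketched as below: sample points along the straight ray between an emitter and a receiver and accumulate, per pixel, the fraction of the ray passing through that pixel. Real systems compute exact ray-cell intersection lengths for every emitter-receiver pair; the geometry and sampling scheme here are simplified for illustration.

```python
# Build an approximate sensitivity map for one straight optical path
# across a unit-square sensing area discretized into grid x grid pixels.
def sensitivity_map(p0, p1, grid=8, samples=1000):
    smap = [[0.0] * grid for _ in range(grid)]
    for s in range(samples):
        t = (s + 0.5) / samples            # midpoint sampling along the ray
        x = p0[0] + t * (p1[0] - p0[0])
        y = p0[1] + t * (p1[1] - p0[1])
        i = min(int(y * grid), grid - 1)   # row index of the pixel hit
        j = min(int(x * grid), grid - 1)   # column index
        smap[i][j] += 1.0 / samples        # each sample carries equal weight
    return smap

# One diagonal ray across the sensing area.
smap = sensitivity_map((0.0, 0.0), (1.0, 1.0))
total = sum(sum(row) for row in smap)
print(round(total, 6))  # the per-pixel weights along one ray sum to 1
```

    Stacking one such map per fan-beam ray (one row per emitter-receiver pair) gives the system matrix used in the inverse problem to reconstruct the tomographic image.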

  12. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
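
    The size-reduction idea can be sketched as collapsing the many per-path delay entries of each gate instance into a single "super generic" rise and fall value, here taken as the worst case over the instance's entries. The entry format below is invented for illustration and is not real SDF syntax.

```python
# Collapse per-path delay entries into one (rise, fall) pair per
# instance by keeping the worst case -- a conservative stand-in for the
# patent's super generics.
def reduce_delays(entries):
    """entries: list of (instance, rise_ns, fall_ns) tuples."""
    reduced = {}
    for inst, rise, fall in entries:
        r, f = reduced.get(inst, (0.0, 0.0))
        reduced[inst] = (max(r, rise), max(f, fall))
    return reduced

sdf = [
    ("u1/nand2", 0.12, 0.15),
    ("u1/nand2", 0.10, 0.18),   # same instance, another pin-to-pin path
    ("u2/inv", 0.05, 0.06),
]
print(reduce_delays(sdf))  # one (rise, fall) pair per instance
```

    The reduced file is smaller because each instance carries two numbers instead of one pair per timing arc, at the cost of pessimism in the collapsed delays.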

  13. Uncertain Schema Matching

    CERN Document Server

    Gal, Avigdor

    2011-01-01

    Schema matching is the task of providing correspondences between concepts describing the meaning of data in various heterogeneous, distributed data sources. Schema matching is one of the basic operations required by the process of data and schema integration, and thus has a great effect on its outcomes, whether these involve targeted content delivery, view integration, database integration, query rewriting over heterogeneous sources, duplicate data elimination, or automatic streamlining of workflow activities that involve heterogeneous data sources. Although schema matching research has been o
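
    A toy first-line matcher illustrates the task. Production schema matchers combine many similarity cues (names, types, instances, structure); the sketch below uses only string similarity of attribute names, with a threshold chosen for this example.

```python
# Match attributes of two schemas by name similarity, keeping the best
# correspondence per source attribute above a similarity threshold.
from difflib import SequenceMatcher

def match_schemas(src, dst, threshold=0.5):
    matches = {}
    for a in src:
        best, score = None, threshold
        for b in dst:
            s = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if s > score:
                best, score = b, s
        if best is not None:
            matches[a] = best
    return matches

src = ["cust_name", "phone_no", "zip"]
dst = ["CustomerName", "PhoneNumber", "postal_code"]
print(match_schemas(src, dst))
```

    Note that "zip" finds no lexically similar counterpart even though "postal_code" is the semantically correct match: exactly the kind of uncertainty that motivates combining matchers and reasoning about match confidence.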

  14. New Cosmic Center Universe Model Matches Eight of Big Bang's Major Predictions Without The F-L Paradigm

    CERN Document Server

    Gentry, R V

    2003-01-01

    Accompanying disproof of the F-L expansion paradigm eliminates the basis for expansion redshifts, which in turn eliminates the basis for the Cosmological Principle. The universe is not the same everywhere. Instead, the spherical symmetry of the cosmos demanded by the Hubble redshift relation proves the universe is isotropic about a nearby universal Center. This is the foundation of the relatively new Cosmic Center Universe (CCU) model, which accounts for, explains, or predicts: (i) the Hubble redshift relation, (ii) a CBR redshift relation that fits all current CBR measurements, (iii) the recently discovered velocity dipole distribution of radiogalaxies, (iv) the well-known time dilation of SNeIa light curves, (v) the Sunyaev-Zeldovich thermal effect, (vi) Olbers' paradox, (vii) SN dimming for z < 1, (viii) for z > 1 an enhanced brightness that fits SN 1997ff measurements, (ix) the existence of extreme redshift (z > 10) objects which, when observed, will further distinguish it from the big bang. The CCU model also plausibly expl...

  15. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among the lateral, longitudinal and torsional modes of a launch vehicle cannot be taken into account in traditional dynamic analysis, which uses a lateral beam model and a longitudinal spring-mass model individually. To deal with this problem, propellant analogy methods based on the beam model are proposed, and the coupled mass matrix of the liquid propellant is constructed through additional mass in the present study. An integrated model of the launch vehicle for free vibration analysis is then established, by which the interactions between the longitudinal and lateral modes, and between the longitudinal and torsional modes, of the launch vehicle can be investigated. Numerical examples for tandem tanks validate the present method and demonstrate its necessity.
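
    The additional-mass idea can be illustrated with a two-degree-of-freedom lumped model (not the paper's beam formulation; all masses and stiffnesses are invented): representing the liquid propellant as extra lumped mass on the tank node lowers the structure's natural frequencies.

```python
# Natural frequencies of a two-mass chain (ground-k1-m1-k2-m2), solved
# from the characteristic polynomial det(K - w^2 M) = 0:
#   m1*m2*lam^2 - (m1*k2 + m2*(k1+k2))*lam + k1*k2 = 0,  lam = w^2.
import math

def natural_freqs_2dof(m1, m2, k1, k2):
    """Return the two natural frequencies (rad/s), ascending."""
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lams = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    return [math.sqrt(lam) for lam in lams]

dry = natural_freqs_2dof(m1=100.0, m2=80.0, k1=5e6, k2=3e6)
wet = natural_freqs_2dof(m1=100.0, m2=80.0 + 50.0, k1=5e6, k2=3e6)  # + liquid
print(wet[0] < dry[0])  # added propellant mass lowers the first mode
```

    In the integrated model, the same effect enters through a coupled mass matrix rather than a simple diagonal addition, which is what lets lateral, longitudinal and torsional modes interact.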

  16. Evaluation of dynamical models: dissipative synchronization and other techniques.

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.
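
    The synchronization-based validation idea can be sketched for the Hénon map, one of the benchmarks above: drive a candidate model with the measured series through its x-variable (complete substitution, a strong special case of dissipative coupling); a model with the right parameters locks onto the data, a mis-parameterized one shows persistent prediction error. Parameter values and the driving scheme below are illustrative.

```python
# Drive a candidate Henon model with "measured" data and score it by
# its mean one-step prediction error after the transient.
def sync_error(a_model, n=2000):
    x, y = 0.1, 0.1          # "measured" system: a = 1.4, b = 0.3
    ym = 0.0                 # model's internal y state, driven by measured x
    err, count = 0.0, 0
    for i in range(n):
        xm = 1.0 - a_model * x * x + ym   # model's one-step prediction of x
        ym = 0.3 * x                      # model y-subsystem driven by data
        x, y = 1.0 - 1.4 * x * x + y, 0.3 * x  # advance the true system
        if i > n // 2:                    # discard the transient
            err += abs(x - xm)
            count += 1
    return err / count

good = sync_error(1.4)   # correct parameter: model synchronizes with the data
bad = sync_error(1.1)    # wrong parameter: persistent prediction error
print(good < bad)
```

    Because the error statistic reflects dynamical agreement rather than one-step statistics alone, it discriminates between models much as a bifurcation-diagram comparison would, at far lower cost.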

  18. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain-flood runoff on mountainous rivers requires an understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: Decision Trees, Adaptive Boosting, and Gradient Boosting. It is demonstrated that combining modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and those of its melting. The directed character of the learning process for gradient boosting models, their ensemble character, and the use of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
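
    The core of the winning approach, gradient boosting of regression trees, can be sketched from scratch for a single feature using depth-1 trees (stumps): each round fits a stump to the current residuals and adds a damped copy of it to the ensemble. The "temperature to snow depth" data below are synthetic; the study used station observations and fuller feature sets.

```python
# Gradient boosting of regression stumps on one feature, for squared
# error: the negative gradient is simply the residual y - prediction.
def fit_stump(xs, residuals):
    """Best single-split predictor minimizing squared error on residuals."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left) +
               sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=50, lr=0.3):
    """Fit an ensemble of stumps by gradient boosting; return a predictor."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Synthetic "air temperature (C) -> snow depth (cm)": deep snow when cold.
xs = [-12, -9, -6, -3, -1, 0, 1, 3, 6, 9]
ys = [90.0, 85.0, 70.0, 55.0, 40.0, 30.0, 15.0, 5.0, 0.0, 0.0]
model = boost(xs, ys)
rmse = (sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5
print(round(rmse, 2))
```

    Libraries such as scikit-learn implement the same scheme with deeper trees, shrinkage, and subsampling; the ensemble and held-out-sample machinery is what the abstract credits for the method's robustness.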

  19. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for simulation of the plasma pulse treatment process. The assumption of significant non-linearity of the processes involved in the treatment of oil wells has been confirmed. The method of functional transformations and fuzzy logic methods are suggested for construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide satisfactory accuracy in simulation and prediction of the non-linear processes observed.
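
    A fuzzy-logic model of the kind advocated can be sketched as a zero-order Sugeno-type system: triangular membership functions partition an input (say, a normalized pulse parameter), each fuzzy set carries a constant rule consequent, and the output is the membership-weighted average. The sets and consequent values below are invented for illustration, not fitted to well data.

```python
# Zero-order Sugeno fuzzy model: output = weighted average of rule
# consequents, weighted by the input's membership in each fuzzy set.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

RULES = [  # ((a, b, c) membership params, consequent output)
    ((-0.5, 0.0, 0.5), 10.0),   # "low" input    -> small response
    ((0.0, 0.5, 1.0), 40.0),    # "medium" input -> moderate response
    ((0.5, 1.0, 1.5), 90.0),    # "high" input   -> strong response
]

def fuzzy_model(x):
    weights = [tri(x, *params) for params, _ in RULES]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # input outside all fuzzy sets
    return sum(w * out for w, (_, out) in zip(weights, RULES)) / total

print(fuzzy_model(0.25))  # → 25.0, an equal blend of "low" and "medium"
```

    The smooth interpolation between rules is what lets such models approximate non-linear responses while remaining interpretable rule by rule.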

  20. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with the efficiencies and limitations of each method.
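
    The actuator disk approximation in its simplest momentum-theory form can be sketched as below (the CFD version distributes the corresponding source terms over grid faces rather than using these closed-form hover relations; the thrust, density, and radius values are illustrative).

```python
# Momentum-theory actuator disk in hover: a uniformly loaded disk of
# area A induces velocity v_i = sqrt(T / (2 * rho * A)), and the ideal
# (induced) hover power is P = T * v_i.
import math

def hover_induced_velocity(thrust, rho, radius):
    area = math.pi * radius ** 2
    return math.sqrt(thrust / (2.0 * rho * area))

def ideal_hover_power(thrust, rho, radius):
    return thrust * hover_induced_velocity(thrust, rho, radius)

T, rho, R = 25000.0, 1.225, 5.0   # N, kg/m^3, m -- illustrative rotor
vi = hover_induced_velocity(T, rho, R)
print(round(vi, 2), round(ideal_hover_power(T, rho, R) / 1000.0, 1))
```

    Because these relations need only the disk loading, the actuator disk gives rapid steady-state performance estimates, which is exactly the trade against the unsteady actuator blade and overset models described above.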