WorldWideScience

Sample records for model matching techniques

  1. Two-dimensional gel electrophoresis image registration using block-matching techniques and deformation models.

    Science.gov (United States)

    Rodriguez, Alvaro; Fernandez-Lozano, Carlos; Dorado, Julian; Rabuñal, Juan R

    2014-06-01

    Block-matching techniques have been widely used in the task of estimating displacement in medical images, and they represent the best approach in scenes with deformable structures such as tissues, fluids, and gels. In this article, a new iterative block-matching technique, based on successive deformation, search, fitting, filtering, and interpolation stages, is proposed to measure elastic displacements in two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) images. The proposed technique uses different deformation models in the task of correlating proteins in real 2D electrophoresis gel images, obtaining an accuracy of 96.6% and improving the results obtained with other techniques. This technique represents a general solution, being easy to adapt to different 2D deformable cases and providing an experimental reference for block-matching algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
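
    The core of any such method is the block search itself. The following is a minimal sketch of one exhaustive block-matching step using normalized cross-correlation, assuming numpy; the function name, window size, and search radius are illustrative, not the paper's implementation.

        import numpy as np

        def match_block(ref, tgt, top, left, size=16, search=8):
            """Estimate the displacement of one block by exhaustive search.

            ref, tgt : 2-D grayscale arrays (reference and deformed image).
            Returns the (dy, dx) shift maximizing normalized cross-correlation.
            """
            block = ref[top:top + size, left:left + size].astype(float)
            block = (block - block.mean()) / (block.std() + 1e-9)
            best, best_shift = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + size > tgt.shape[0] or x + size > tgt.shape[1]:
                        continue
                    cand = tgt[y:y + size, x:x + size].astype(float)
                    cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                    score = float((block * cand).mean())
                    if score > best:
                        best, best_shift = score, (dy, dx)
            return best_shift

    A dense displacement field follows by repeating this over a grid of blocks; the paper's deformation-model fitting, filtering, and interpolation stages would then operate on that field.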

  2. 3D modelling of trompe l'oeil decorated vaults using dense matching techniques

    Science.gov (United States)

    Chiabrando, F.; Lingua, A.; Noardo, F.; Spano, A.

    2014-05-01

    Dense matching techniques, implemented in many commercial and open source software, are useful instruments for carrying out a rapid and detailed analysis of complex objects, including various types of details and surfaces. For this reason these tools were tested in the metric survey of a frescoed ceiling in the hall of honour of a baroque building. The surfaces are covered with trompe-l'oeil paintings which theoretically can give a very good texture to automatic matching algorithms but in this case problems arise when attempting to reconstruct the correct geometry: in fact, in correspondence with the main architectonic painted details, the models present some irregularities, unexpectedly coherent with the painted drawing. The photogrammetric models have been compared with data deriving from a LIDAR survey of the same object, to evaluate the entity of this blunder: some profiles of selected sections have been extracted, verifying the different behaviours of the software tools.

  3. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
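
    The two-stage screening is compact enough to sketch directly. Below is a minimal delayed-acceptance Metropolis loop in Python/numpy, assuming a symmetric proposal; coarse_loglike and fine_loglike are illustrative stand-ins for the coarse-scale and resolved flow simulations, not the paper's code.

        import numpy as np

        rng = np.random.default_rng(0)

        def two_stage_mcmc(x0, coarse_loglike, fine_loglike, propose, n_steps=1000):
            """Two-stage Metropolis-Hastings: the cheap coarse-scale likelihood
            screens each proposal before the expensive fine-scale model runs."""
            x, lc, lf = x0, coarse_loglike(x0), fine_loglike(x0)
            chain = [x0]
            for _ in range(n_steps):
                y = propose(x)
                lc_y = coarse_loglike(y)
                # Stage 1: accept/reject on the coarse model only.
                if np.log(rng.random()) >= lc_y - lc:
                    chain.append(x)
                    continue
                # Stage 2: run the fine model only for surviving proposals;
                # the correction factor preserves the fine-scale posterior.
                lf_y = fine_loglike(y)
                if np.log(rng.random()) < (lf_y - lf) - (lc_y - lc):
                    x, lc, lf = y, lc_y, lf_y
                chain.append(x)
            return chain

    A proposal rejected at stage 1 never triggers a fine-scale simulation, which is where the computational savings come from.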

  4. Convolution and non convolution Perfectly Matched Layer techniques optimized at grazing incidence for high-order wave propagation modelling

    Science.gov (United States)

    Martin, Roland; Komatitsch, Dimitri; Bruthiaux, Emilien; Gedney, Stephen D.

    2010-05-01

    We present and discuss here two different unsplit formulations of the frequency-shift PML based on convolution or non-convolution integrations of auxiliary memory variables. Indeed, the Perfectly Matched Layer absorbing boundary condition has proven to be very efficient from a numerical point of view for the elastic wave equation, absorbing both body waves at non-grazing incidence and surface waves. However, at grazing incidence the classical discrete Perfectly Matched Layer method suffers from large spurious reflections that make it less efficient, for instance in the case of very thin mesh slices, of sources located very close to the edge of the mesh, and/or of receivers located at very large offset. In [1] we improve the Perfectly Matched Layer at grazing incidence for the seismic wave equation based on an unsplit convolution technique. This improved PML has a cost that is similar in terms of memory storage to that of the classical PML. We illustrate the efficiency of this improved Convolutional Perfectly Matched Layer through numerical benchmarks using a staggered finite-difference method on a very thin mesh slice for an isotropic material, and show that results are significantly improved compared with the classical Perfectly Matched Layer technique. We also show that, like the classical model, the technique is intrinsically unstable for some anisotropic materials. In this case, retaining an idea of [2], it has been stabilized by adding correction terms adequately along any coordinate axis [3]. More specifically this has been applied to the spectral-element method based on a hybrid first/second order time integration scheme, in which the Newmark time marching scheme allows us to match perfectly, at the base of the absorbing layer, a velocity-stress formulation in the PML and a second order displacement formulation in the inner computational domain. Our CPML unsplit formulation has the advantage of reducing the memory storage of CPML.
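
    In discrete time the unsplit convolutional formulation reduces to a recursive update of each memory variable. A minimal sketch of that update using the standard recursive-convolution coefficients from the CPML literature (numpy assumed; the names are illustrative and the damping d is taken strictly positive inside the layer):

        import numpy as np

        def cpml_coefficients(d, kappa, alpha, dt):
            """Recursive-convolution coefficients for one CPML memory
            variable, psi_new = b * psi + a * dfdx (standard unsplit form)."""
            b = np.exp(-(d / kappa + alpha) * dt)
            a = d * (b - 1.0) / (kappa * (d + kappa * alpha))  # needs d > 0
            return a, b

        def update_memory(psi, dfdx, a, b):
            # One time step; the derivative used in the wave-equation update
            # inside the layer is then dfdx / kappa + psi.
            return b * psi + a * dfdx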

  5. PATTERN MATCHING IN MODELS

    Directory of Open Access Journals (Sweden)

    Cristian GEORGESCU

    2005-01-01

    The goal of this paper is to investigate how such pattern matching could be performed on models, including the definition of the input language as well as the elaboration of efficient matching algorithms. Design patterns can be considered reusable micro-architectures that contribute to an overall system architecture. Frameworks are also closely related to design patterns. Components offer the possibility to radically change the behaviors and services offered by an application by substitution or addition of new components, even a long time after deployment. Software testing is another aspect of reliable development. Testing activities mainly consist of ensuring that a system implementation conforms to its specifications.

  6. [Establishment of beta block matching technique].

    Science.gov (United States)

    Zhu, Fa-Ming; Lü, Qin-Feng; Zhang, Wei; Zhang, Hai-Qin; Fu, Qi-Hua; Yan, Li-Xing

    2005-10-01

    The purpose of this study was to establish a beta block matching technique. DNA was extracted from whole blood by the salting-out method, and beta block matching was performed by PCR and the GeneScan technique. The results showed that the lengths of the fragments amplified in 100 samples differed, ranging from 91 to 197 bp. Amplification fragments could be divided into four regions: 91-93, 105-113, 125-139 and 177-197 bp respectively. 91 bp DNA fragments could be found in all samples. The numbers of DNA fragments of different lengths showed high polymorphism, ranging from seven to twenty-four. In conclusion, the beta block matching technique is reliable and applicable to the selection of hematopoietic stem cell transplantation donors.

  7. Transverse Matching Techniques for the SNS Linac

    CERN Document Server

    Jeon Dong Oh; Danilov, Viatcheslav V

    2005-01-01

    It is crucial to minimize beam loss and machine activation by obtaining optimal transverse matching for a high-intensity linear accelerator such as the Spallation Neutron Source linac. For matching the Drift Tube Linac (DTL) to the Coupled Cavity Linac (CCL), there are four wire-scanners installed in series in CCL module 1, as proposed by the author.* A series of measurements was conducted to minimize envelope breathing, and the results are presented here. As an independent approach, Chu et al. are developing an application based on another technique, estimating rms emittance from the wire scanner profile data.** For matching the Medium Energy Beam Transport Line to the DTL, a technique of minimizing rms emittance was used, and emittance data show that the tail is minimized as well.

  8. [Establishment of delta block matching technique].

    Science.gov (United States)

    Lü, Qin-Feng; Zhang, Wei; Zhu, Fa-Ming; Yan, Li-Xing

    2006-04-01

    To establish a delta block HLA-matching technique, DNA was extracted from whole blood by the salting-out method, the delta block was amplified by polymerase chain reaction (PCR), and the PCR product was detected by GeneScan. The results showed that the delta block was polymorphic in 104 unrelated samples of Han people from Zhejiang province. DNA fragment lengths ranged from 81 to 393 bp and could be divided into 4 groups: 81-118 bp, 140-175 bp, 217-301 bp, 340-393 bp. The numbers of DNA fragments were 6-32. It is concluded that the delta block matching method is reliable and can be applied to select donors for patients awaiting transplantation. This is the first delta block data set obtained for the Han people in China.

  9. Techniques Used in String Matching for Network Security

    OpenAIRE

    Jamuna Bhandari

    2014-01-01

    String matching, also known as pattern matching, is a primary concept in network security. The effectiveness and efficiency of string matching algorithms is important for applications in network security such as network intrusion detection, virus detection, signature matching and web content filtering systems. This paper presents a brief review of some string matching techniques used for network security.
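
    All of these applications ultimately reduce to locating fixed signatures in a byte stream. As a concrete illustration of the algorithm family such reviews cover, here is a self-contained Knuth-Morris-Pratt matcher in Python; a sketch for exposition, not code from the paper.

        def kmp_search(text, pattern):
            """Knuth-Morris-Pratt: return the start index of every occurrence
            of pattern in text in O(len(text) + len(pattern)) time."""
            if not pattern:
                return []
            # Failure function: length of the longest proper prefix of
            # pattern[:i+1] that is also a suffix of it.
            fail = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k
            hits, k = [], 0
            for i, ch in enumerate(text):
                while k and ch != pattern[k]:
                    k = fail[k - 1]
                if ch == pattern[k]:
                    k += 1
                if k == len(pattern):
                    hits.append(i - k + 1)
                    k = fail[k - 1]
            return hits

        # kmp_search("abracadabra", "abra") -> [0, 7]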

  10. Role model and prototype matching

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    Previous research has found that young people’s prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. … meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models, and that traditional school subjects catered more resistant prototype images than unfamiliar ones did…

  11. Weed Identification Using An Automated Active Shape Matching (AASM) Technique

    DEFF Research Database (Denmark)

    Swain, K C; Nørremark, Michael; Jørgensen, R N

    2011-01-01

    Weed identification and control is a challenge for intercultural operations in agriculture. As an alternative to chemical pest control, a smart weed identification technique followed by a mechanical weed control system could be developed. The proposed smart identification technique works on the concept of ‘active shape modelling’ to identify weed and crop plants based on their morphology. The automated active shape matching system (AASM) technique consisted of: i) a Pixelink camera, ii) an LTI (Lehrstuhl fuer technische Informatik) image processing library, iii) a laptop PC with the Linux OS. A 2-leaf growth stage model for Solanum nigrum L. (nightshade) is generated from 32 segmented training images in the Matlab software environment. Using the AASM algorithm, the leaf model was aligned and placed at the centre of the target plant and a model deformation process carried out. The parameters used…

  12. Parikh Matching in the Streaming Model

    DEFF Research Database (Denmark)

    Lee, Lap-Kei; Lewenstein, Moshe; Zhang, Qin

    2012-01-01

    The Parikh-mapping of a string over an alphabet Σ is its |Σ|-length count vector. In the streaming model one seeks space-efficient algorithms for problems in which there is one pass over the data. We consider Parikh matching in the streaming model. To make this viable we search for substrings whose Parikh-mappings approximately match the input vector. In this paper we present upper and lower bounds on the problem of approximate Parikh matching in the streaming model.
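
    To make the object concrete, the sketch below scans a stream once with a fixed-length window and reports windows whose count vector is within a given L1 tolerance of the target. It is a simplified stand-in assuming the window length equals the target's total count; the paper's algorithms achieve stronger space guarantees.

        from collections import Counter, deque

        def parikh_matches(stream, target, tol=0):
            """Report start positions of windows whose Parikh (symbol-count)
            vector is within L1 distance tol of the target count vector."""
            m = sum(target.values())        # window length fixed by the target
            window, buf, hits = Counter(), deque(), []
            for i, ch in enumerate(stream):
                buf.append(ch)
                window[ch] += 1
                if len(buf) > m:
                    window[buf.popleft()] -= 1
                if len(buf) == m:
                    keys = set(window) | set(target)
                    dist = sum(abs(window[k] - target.get(k, 0)) for k in keys)
                    if dist <= tol:
                        hits.append(i - m + 1)
            return hits

        # parikh_matches("aabbaab", Counter({"a": 2, "b": 1})) -> [0, 3, 4]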

  13. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real face…
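
    The grey-level workhorse behind many of these techniques is normalized cross-correlation. A deliberately brute-force sketch in numpy (real implementations use FFTs or integral-image tricks for speed; names are illustrative):

        import numpy as np

        def ncc_map(image, template):
            """Normalized cross-correlation of template over image; the peak
            of the returned map is the most likely template position."""
            th, tw = template.shape
            t = (template - template.mean()) / (template.std() + 1e-9)
            H, W = image.shape
            out = np.empty((H - th + 1, W - tw + 1))
            for y in range(out.shape[0]):
                for x in range(out.shape[1]):
                    patch = image[y:y + th, x:x + tw].astype(float)
                    p = (patch - patch.mean()) / (patch.std() + 1e-9)
                    out[y, x] = (t * p).mean()
            return out

        # peak = np.unravel_index(np.argmax(ncc_map(img, tpl)), ncc_map(img, tpl).shape)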

  14. Technique to match mantle and para-aortic fields

    International Nuclear Information System (INIS)

    Lutz, W.R.; Larsen, R.D.

    1983-01-01

    A technique is described to match the mantle and para-aortic fields used in treatment of Hodgkin's disease, when the patient is treated alternately in supine and prone position. The approach is based on referencing the field edges to a point close to the vertebral column, where uncontrolled motion is minimal and where accurate matching is particularly important. Fiducial surface points are established in the simulation process to accomplish the objective. Dose distributions have been measured to study the combined effect of divergence differences, changes in body angulation and setup errors. Even with the most careful technique, the use of small cord blocks of 50% transmission is an advisable precaution for the posterior fields

  15. History Matching: Towards Geologically Reasonable Models

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Cordua, Knud Skou; Mosegaard, Klaus

    This work focuses on the development of a new method for the history matching problem that through a deterministic search finds a geologically feasible solution. Complex geology is taken into account by evaluating multiple point statistics from earth model prototypes - training images. Further, a function that measures similarity between statistics of a training image and statistics of any smooth model is introduced and its analytical gradient is computed. This allows us to apply any gradient-based method to the history matching problem and guide a solution until it satisfies both production data and complexity…

  16. On a special case of model matching

    Czech Academy of Sciences Publication Activity Database

    Zagalak, Petr

    2004-01-01

    Vol. 77, No. 2 (2004), p. 164-172 ISSN 0020-7179 R&D Projects: GA ČR GA102/01/0608 Institutional research plan: CEZ:AV0Z1075907 Keywords: linear systems * state feedback * model matching Subject RIV: BC - Control Systems Theory Impact factor: 0.702, year: 2004

  17. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, template matching techniques like Dynamic Time Warping (DTW), statistical pattern matching techniques such as the Hidden Markov Model (HMM) and Gaussian Mixture Models (GMM), and machine learning techniques such as Neural Networks (NN), Support Vector Machines (SVM), and Decision Trees (DT) are most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made based on the experiments are presented in this paper. For the developed system, the highest word recognition accuracy was achieved with the HMM technique, which offered 100% accuracy during training and 97.92% during testing.
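
    Of the techniques compared, dynamic time warping is the most compact to illustrate. A minimal numpy sketch, assuming each utterance is represented as an array of per-frame feature vectors (e.g. MFCCs); an isolated word is then assigned to the template with the smallest distance. Names are illustrative.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two feature sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
                    # Extend the cheapest of the three allowed warping moves.
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Recognition: word = min(templates, key=lambda w: dtw_distance(test, templates[w]))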

  18. An accelerated image matching technique for UAV orthoimage registration

    Science.gov (United States)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
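
    The baseline that ABRISK accelerates is binary-descriptor matching under Hamming distance. The sketch below shows that baseline with OpenCV's stock BRISK (opencv-python assumed); the paper's "Sorting Ring" filter is not part of OpenCV and is not reproduced here.

        import cv2

        def match_binary_features(img1, img2, max_matches=200):
            """Detect BRISK keypoints in two grayscale images and match their
            binary descriptors with a cross-checked Hamming-distance matcher."""
            brisk = cv2.BRISK_create()
            kp1, des1 = brisk.detectAndCompute(img1, None)
            kp2, des2 = brisk.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
            return kp1, kp2, matches[:max_matches]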

  19. Can simple population genetic models reconcile partial match ...

    Indian Academy of Sciences (India)

    A recent study of partial matches in the Arizona offender database of DNA profiles has revealed a large number of nine and ten locus matches. I use simple models that incorporate the product rule, population substructure, and relatedness to predict the expected number of matches in large databases. I find that there is a ...

  20. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique

    International Nuclear Information System (INIS)

    Song, Min Seop; Park, So Hyun; Kim, Eung Soo

    2014-01-01

    Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or the friction factor in a pipe with a twisted tape, while other studies have focused on heat transfer enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape was limited, detailed flow field data have not been obtainable so far, and researchers have mainly relied on numerical simulations for flow field data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access, and a novel refractive index matching technique was used to eliminate optical distortion. These two combined techniques enabled measurement of the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). Velocity fields in a pipe with a classic twisted tape were measured for various flow conditions using the PIV system; to obtain undistorted particle images, the refractive index matching technique was used, and it was shown that high-quality images can be obtained with this experimental setup. The velocity data from the PIV were finally compared with the CFD simulations.

  1. Classifying variability modeling techniques

    NARCIS (Netherlands)

    Sinnema, Marco; Deelstra, Sybren

    Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The

  2. Alarm handling systems and techniques developed to match operator tasks

    International Nuclear Information System (INIS)

    Bye, A.; Moum, B.R.

    1997-01-01

    This paper covers alarm handling methods and techniques explored at the Halden Project, and describes the current status of research activities on alarm systems. Alarm systems are often designed by applying a bottom-up strategy, generating alarms at the component level. If no structuring of the alarms is applied, this may result in alarm avalanches during major plant disturbances, causing cognitive overload of the operator. An alarm structuring module should be designed using a top-down approach, analysing the operator's tasks, plant states, events and disturbances. One of the operator's main tasks during plant disturbances is status identification, including determination of plant status and detection of plant anomalies. The main support for this is provided through the alarm systems, the process formats, the trends and possible diagnosis systems. The alarm system should be integrated with all these systems, both physically and conceptually. 9 refs, 5 figs

  3. The application of computer color matching techniques to the matching of target colors in a food substrate: a first step in the development of foods with customized appearance.

    Science.gov (United States)

    Kim, Sandra; Golding, Matt; Archer, Richard H

    2012-06-01

    A predictive color matching model based on the colorimetric technique was developed and used to calculate the concentrations of primary food dyes needed in a model food substrate to match a set of standard tile colors. This research is the first stage in the development of novel three-dimensional (3D) foods in which color images or designs can be rapidly reproduced in 3D form. Absorption coefficients were derived for each dye, from a concentration series in the model substrate, a microwave-baked cake. When used in a linear, additive blending model these coefficients were able to predict cake color from selected dye blends to within 3 ΔE*(ab,10) color difference units, or within the limit of a visually acceptable match. Absorption coefficients were converted to pseudo X₁₀, Y₁₀, and Z₁₀ tri-stimulus values (X₁₀(P), Y₁₀(P), Z₁₀(P)) for colorimetric matching. The Allen algorithm was used to calculate dye concentrations to match the X₁₀(P), Y₁₀(P), and Z₁₀(P) values of each tile color. Several recipes for each color were computed with the tile specular component included or excluded, and tested in the cake. Some tile colors proved out-of-gamut, limited by legal dye concentrations; these were scaled to within legal range. Actual differences suggest reasonable visual matches could be achieved for within-gamut tile colors. The Allen algorithm, with appropriate adjustments of concentration outputs, could provide a sufficiently rapid and accurate calculation tool for 3D color food printing. The predictive color matching approach shows potential for use in a novel embodiment of 3D food printing in which a color image or design could be rendered within a food matrix through the selective blending of primary dyes to reproduce each color element. The on-demand nature of this food application requires rapid color outputs which could be provided by the color matching technique, currently used in nonfood industries, rather than by empirical food
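
    The paper performs its matching with the Allen algorithm on pseudo-tristimulus values; the sketch below illustrates only the underlying linear additive blending step, fitting dye concentrations to a target absorption spectrum by non-negative least squares and then scaling into legal range, as the abstract describes. scipy is assumed and all names are illustrative.

        import numpy as np
        from scipy.optimize import nnls

        def blend_for_target(k_dyes, k_target, c_max):
            """Fit dye concentrations under the linear additive model
            K_mix = sum_i c_i * k_i, then clip to legal concentration limits.

            k_dyes   : (n_wavelengths, n_dyes) per-dye absorption coefficients
            k_target : (n_wavelengths,) target absorption spectrum
            c_max    : (n_dyes,) maximum legal concentration of each dye
            """
            c, residual = nnls(k_dyes, k_target)        # non-negative fit
            over = c > c_max
            if over.any():
                # Out-of-gamut recipe: scale uniformly into legal range,
                # mirroring the scaling step described in the abstract.
                c *= (c_max[over] / c[over]).min()
            return c, residual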

  4. Model Reduction by Moment Matching for Linear Switched Systems

    DEFF Research Database (Denmark)

    Bastug, Mert; Petreczky, Mihaly; Wisniewski, Rafal

    2014-01-01

    A moment-matching method for the model reduction of linear switched systems (LSSs) is developed. The method is based upon a partial realization theory of LSSs and is similar to the Krylov subspace methods used for moment matching for linear systems. The results are illustrated by numerical...

  5. Using visual analytics model for pattern matching in surveillance data

    Science.gov (United States)

    Habibi, Mohammad S.

    2013-03-01

    In a persistent surveillance system a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper a method to summarize video data by identifying events based on this tagged information is explained, leading to a concise description of behavior within a section of extended recordings. Efficient retrieval of various events thus becomes the foundation for determining a pattern in surveillance system observations, both in extended and fragmented versions. The patterns, consisting of spatiotemporal semantic contents, are extracted and classified by application of video data mining on generated ontology, and can be matched based on analysts' interests and rules set forth for decision making. The proposed extraction and classification method uses query by example for retrieving similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this visual analytics model employs a KD-Tree approach to group patterns in variant space and time, making it convenient to identify and match any abnormal burst of pattern detected in a surveillance video. Several experimental videos were presented to viewers to analyze independently, and the results were compared with those obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
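
    A KD-tree turns "close in space and time" into a radius query. A minimal sketch with scipy (assumed available); the event coordinates are made up, and in practice the space and time axes must be scaled to comparable units before indexing.

        import numpy as np
        from scipy.spatial import cKDTree

        # Events as (x, y, t) vectors; the tree groups patterns that are
        # close in both space and time, so a new event can be matched
        # against the recorded history.
        events = np.array([[10.0, 4.0, 0.0],
                           [10.5, 4.2, 1.0],
                           [90.0, 60.0, 2.0]])
        tree = cKDTree(events)

        query = np.array([10.2, 4.1, 0.5])
        print(tree.query_ball_point(query, r=2.0))  # indices in radius -> [0, 1]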

  6. Money creation in a random matching model

    OpenAIRE

    Alexei Deviatov

    2004-01-01

    I study money creation in versions of the Trejos-Wright (1995) and Shi (1995) models with indivisible money and individual holdings bounded at two units. I work with the same class of policies as in Deviatov and Wallace (2001), who study money creation in that model. However, I consider an alternative notion of implementability–the ex ante pairwise core. I compute a set of numerical examples to determine whether money creation is beneficial. I find beneficial effects of money creation if indiv...

  7. Mortality, Hospitalization, and Technique Failure in Daily Home Hemodialysis and Matched Peritoneal Dialysis Patients: A Matched Cohort Study.

    Science.gov (United States)

    Weinhandl, Eric D; Gilbertson, David T; Collins, Allan J

    2016-01-01

    Use of home dialysis is growing in the United States, but few direct comparisons of major clinical outcomes on daily home hemodialysis (HHD) versus peritoneal dialysis (PD) exist. In this matched cohort study, we matched 4,201 new HHD patients in 2007 to 2010 with 4,201 new PD patients from the US Renal Data System database and compared daily HHD versus PD on relative mortality, hospitalization, and technique failure. Mean time from end-stage renal disease onset to home dialysis therapy initiation was 44.6 months for HHD and 44.3 months for PD patients. In intention-to-treat analysis, HHD was associated with 20% lower risk for all-cause mortality (HR, 0.80; 95% CI, 0.73-0.87), 8% lower risk for all-cause hospitalization (HR, 0.92; 95% CI, 0.89-0.95), and 37% lower risk for technique failure (HR, 0.63; 95% CI, 0.58-0.68), all relative to PD. In the subset of 1,368 patients who initiated home dialysis therapy within 6 months of end-stage renal disease onset, HHD was associated with similar risk for all-cause mortality (HR, 0.95; 95% CI, 0.80-1.13), similar risk for all-cause hospitalization (HR, 0.96; 95% CI, 0.88-1.05), and 30% lower risk for technique failure (HR, 0.70; 95% CI, 0.60-0.82). Regarding hospitalization, risk comparisons favored HHD for cardiovascular disease and dialysis access infection, and PD for bloodstream infection. Limitations include matching being unlikely to reduce confounding attributable to unmeasured factors, including residual kidney function; lack of data regarding dialysis frequency, duration, and dose in daily HHD patients and frequency and solution in PD patients; and reliance on diagnosis codes to classify admissions. These data suggest that relative to PD, daily HHD is associated with decreased mortality, hospitalization, and technique failure. However, risks for mortality and hospitalization were similar with these modalities in new dialysis patients. The interaction between modality and end-stage renal disease duration at home dialysis therapy initiation should be investigated further.

  8. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    International Nuclear Information System (INIS)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-01-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design

  9. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Science.gov (United States)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-12-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT
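
    A toy numerical illustration of the random-switching idea, assuming numpy; the gyro count, error magnitude, and filter length are made-up parameters chosen only to exhibit the spectrum-spreading effect, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def dsmt_output(rate, n_gyros=4, scale_err_std=0.01):
            """Each gyro carries a fixed random scale-factor error; randomly
            switching among them spreads the gyro-to-gyro differences across
            frequency, so low-pass filtering leaves (approximately) the
            ensemble-average response."""
            n = rate.size
            scale = 1.0 + rng.normal(0.0, scale_err_std, n_gyros)  # mfg. errors
            outputs = scale[:, None] * rate           # every gyro's signal
            pick = rng.integers(0, n_gyros, n)        # random switch per sample
            return outputs[pick, np.arange(n)]

        t = np.arange(10_000)
        rate = np.sin(2 * np.pi * t / 500.0)          # slowly varying true rate
        y = dsmt_output(rate)
        smoothed = np.convolve(y, np.ones(25) / 25, mode="same")  # crude low-pass
        print(float(np.abs(smoothed[500:-500] - rate[500:-500]).mean()))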

  10. Thermal modeling and optimization of a thermally matched energy harvester

    Science.gov (United States)

    Boughaleb, J.; Arnaud, A.; Cottinet, P. J.; Monfray, S.; Gelenne, P.; Kermel, P.; Quenard, S.; Boeuf, F.; Guyomar, D.; Skotnicki, T.

    2015-08-01

    The interest in energy harvesting devices has grown with the development of wireless sensors requiring small amounts of energy to function. The present article addresses the thermal investigation of a coupled piezoelectric and bimetal-based heat engine. The thermal energy harvester in question converts low-grade heat flows into electrical charges by achieving a two-step conversion mechanism for which the key point is the ability to maintain a significant thermal gradient without any heat sink. Many studies have previously focused on the electrical properties of this innovative device for energy harvesting but until now, no thermal modeling has been able to describe the device specificities or improve its thermal performances. The research reported in this paper focuses on the modeling of the harvester using an equivalent electrical circuit approach. It is shown that the knowledge of the thermal properties inside the device and a good comprehension of its heat exchange with the surrounding play a key role in the optimization procedure. To validate the thermal modeling, finite element analyses as well as experimental measurements on a hot plate were carried out and the techniques were compared. The proposed model provided a practical guideline for improving the generator design to obtain a thermally matched energy harvester that can function over a wide range of hot source temperatures for the same bimetal. A direct application of this study has been implemented on scaled structures to maintain an important temperature difference between the cold surface and the hot reservoir. Using the equations of the thermal model, predictions of the thermal properties were evaluated depending on the scaling factor and solutions for future thermal improvements are presented.

  11. An analysis of matching cognitive-behavior therapy techniques to learning styles.

    Science.gov (United States)

    van Doorn, Karlijn; McManus, Freda; Yiend, Jenny

    2012-12-01

    To optimize the effectiveness of cognitive-behavior therapy (CBT) for each individual patient, it is important to discern whether different intervention techniques may be differentially effective. One factor influencing the differential effectiveness of CBT intervention techniques may be the patient's preferred learning style, and whether this is 'matched' to the intervention. The current study uses a retrospective analysis to examine whether the impact of two common CBT interventions (thought records and behavioral experiments) is greater when the intervention is matched or mismatched to the individual's learning style. Results from this study give some indication that greater belief change is achieved when the intervention technique is matched to participants' learning style than when intervention techniques are mismatched to learning style. Conclusions are limited by the retrospective nature of the analysis and the limited dose of the intervention in non-clinical participants. Results suggest that further investigation of the impact of matching the patient's learning style to CBT intervention techniques is warranted, using clinical samples with higher dose interventions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. "Living in sin" and marriage : a matching model

    NARCIS (Netherlands)

    Rao Sahib, P. Padma; Gu, X. Xinhua

    1999-01-01

    This paper develops a two-sided matching model of premarital cohabitation and marriage in which premarital cohabitation serves as a period of learning. We solve for the optimal policy to be followed by individuals by treating the model as a three-stage dynamic programming problem. We find that

  13. Mass-spring matching layers for high-frequency ultrasound transducers: a new technique using vacuum deposition.

    Science.gov (United States)

    Brown, Jeremy; Sharma, Srikanta; Leadbetter, Jeff; Cochran, Sandy; Adamson, Rob

    2014-11-01

    We have developed a technique for applying multiple matching layers to high-frequency (>30 MHz) imaging transducers using carefully controlled vacuum deposition alone. This technique uses a thin mass-spring matching layer approach that was previously described in a low-frequency (1 to 10 MHz) transducer design with epoxied layers. The mass-spring approach is better suited to vacuum deposition in high-frequency transducers than the conventional quarter-wavelength resonant cavity approach, because thinner layers and more versatile material selection can be used, the difficulty in precisely lapping quarter-wavelength matching layers is avoided, the layers are less attenuating, and the layers can be applied to a curved surface. Two different 3-mm-diameter 45-MHz planar lithium niobate transducers and one geometrically curved 3-mm lithium niobate transducer were designed and fabricated using this matching layer approach with copper as the mass layer and parylene as the spring layer. The first planar lithium niobate transducer used a single mass-spring matching network, and the second used a single mass-spring network to approximate the first layer in a dual quarter-wavelength matching layer system, in addition to a conventional quarter-wavelength layer as the second matching layer. The curved lithium niobate transducer was press focused and used a similar mass-spring plus quarter-wavelength matching layer network. These transducers were then compared with identical transducers with no matching layers and the performance improvement was quantified. The bandwidth of the lithium niobate transducer with the single mass-spring layer was measured to be 46% and the insertion loss was measured to be -21.9 dB. The bandwidth and insertion loss of the lithium niobate transducer with the mass-spring network plus quarter-wavelength matching were measured to be 59% and -18.2 dB, respectively. These values were compared with the unmatched
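
    For contrast with the mass-spring network, the conventional single quarter-wavelength design that the authors avoid is captured by two textbook relations: the ideal layer impedance is the geometric mean of the two media, and the layer thickness is a quarter wavelength at the centre frequency. A small sketch (numpy assumed; the example numbers are rough illustrative values, not the paper's):

        import numpy as np

        def quarter_wave_layer(z_source, z_load, f_center, c_layer):
            """Ideal impedance and thickness of a single quarter-wavelength
            matching layer between media of impedance z_source and z_load."""
            z_layer = np.sqrt(z_source * z_load)     # geometric mean (MRayl)
            thickness = c_layer / (4.0 * f_center)   # quarter wavelength (m)
            return z_layer, thickness

        # e.g. lithium niobate (~34 MRayl) into water (~1.5 MRayl) at 45 MHz,
        # assuming a layer sound speed of 2200 m/s:
        print(quarter_wave_layer(34.0, 1.5, 45e6, 2200.0))

    At 45 MHz such a layer is only about 12 micrometers thick, which is exactly the precision-lapping difficulty the mass-spring approach sidesteps.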

  14. The effectiveness of the Make a Match cooperative model and the CPS model on problem-solving ability and learning motivation

    Directory of Open Access Journals (Sweden)

    Nur Fitri Amalia

    2013-12-01

    The purpose of this study was to determine the effectiveness of the Make a Match cooperative model and the CPS model on the problem-solving ability and learning motivation of grade X students for the topic of quadratic equations and functions. The population of this study was the grade X students of SMA N 1 Subah in the 2013/2014 academic year. The samples were taken by random sampling. Class X8 was selected as experimental class I, taught with the Make a Match cooperative model, and class X7 was selected as experimental class II, taught with the CPS model. The data were obtained through tests and questionnaires and then analyzed using proportion tests and t-tests. The results were: (1) implementation of the Make a Match cooperative model is effective for problem-solving ability; (2) implementation of the CPS model is effective for problem-solving ability; (3) implementation of the Make a Match cooperative model is better than the CPS model for problem-solving ability; (4) implementation of the CPS model is better than the Make a Match cooperative model for learning motivation. Keywords: Make a Match; CPS; problem solving; learning motivation.

  15. Equilibrium Price Dispersion in a Matching Model with Divisible Money

    NARCIS (Netherlands)

    Kamiya, K.; Sato, T.

    2002-01-01

    The main purpose of this paper is to show that, for any given parameter values, an equilibrium with dispersed prices (two-price equilibrium) exists in a simple matching model with divisible money presented by Green and Zhou (1998). We also show that our two-price equilibrium is unique in certain

  16. The Robust Control Mixer Method for Reconfigurable Control Design By Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, Mogens; Verhagen, M.

    2001-01-01

    This paper proposes a robust reconfigurable control synthesis method based on the combination of the control mixer method and robust H∞ control techniques through the model-matching strategy. The control mixer modules are extended from the conventional matrix form into the LTI system form. By … of one space robot arm system subjected to failures.

  17. Electron/photon matched field technique for treatment of orbital disease

    International Nuclear Information System (INIS)

    Arthur, Douglas W.; Zwicker, Robert D.; Garmon, Pamela W.; Huang, David T.; Schmidt-Ullrich, Rupert K.

    1997-01-01

    Purpose: A number of approaches have been described in the literature for irradiation of malignant and benign diseases of the orbit. Techniques described to date do not deliver a homogeneous dose to the orbital contents while sparing the cornea and lens of excessive dose. This is a result of the geometry encountered in this region and the fact that the target volume, which includes the periorbital and retroorbital tissues but excludes the cornea, anterior chamber, and lens, cannot be readily accommodated by photon beams alone. To improve the dose distribution for these treatments, we have developed a technique that combines a low-energy electron field carefully matched with modified photon fields to achieve acceptable dose coverage and uniformity. Methods and Materials: An anterior electron field and a lateral photon field setup is used to encompass the target volume. Modification of these fields permits accurate matching as well as conformation of the dose distribution to the orbit. A flat-surfaced wax compensator assures uniform electron penetration across the field, and a sunken lead alloy eye block prevents excessive dose to the central structures of the anterior segment. The anterior edge of the photon field is modified by broadening the penumbra using a form of pseudodynamic collimation. Direct measurements using film and ion chamber dosimetry were used to study the characteristics of the fall-off region of the electron field and the penumbra of the photon fields. From the data collected, the technique for accurate field matching and dose uniformity was generated. Results: The isodose curves produced with this treatment technique demonstrate homogeneous dose coverage of the orbit, including the paralenticular region, and sufficient dose sparing of the anterior segment. The posterior lens accumulates less than 40% of the prescribed dose, and the lateral aspect of the lens receives less than 30%. A dose variation in the match region of ±12% is confronted when

  18. Semiconductor Modeling Techniques

    CERN Document Server

    Xavier, Marie

    2012-01-01

    This book describes the key theoretical techniques for semiconductor research to quantitatively calculate and simulate the properties. It presents particular techniques to study novel semiconductor materials, such as 2D heterostructures, quantum wires, quantum dots and nitrogen containing III-V alloys. The book is aimed primarily at newcomers working in the field of semiconductor physics to give guidance in theory and experiment. The theoretical techniques for electronic and optoelectronic devices are explained in detail.

  19. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

Elsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
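
    The greedy skeleton of orthogonal matching pursuit is short; the sketch below is the plain linear version in numpy. The paper's NOMP replaces the correlation step with gradient estimates from the iterative stochastic ensemble method and adds Tikhonov regularization, so this is the scaffold, not the algorithm itself.

        import numpy as np

        def omp(D, y, n_nonzero):
            """Greedily pick the dictionary atom most correlated with the
            residual, then re-fit the coefficients by least squares."""
            residual, support = y.copy(), []
            x = np.zeros(D.shape[1])
            for _ in range(n_nonzero):
                scores = np.abs(D.T @ residual)
                scores[support] = -np.inf             # never pick an atom twice
                support.append(int(np.argmax(scores)))
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
            x[support] = coef
            return x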

  20. Technique to Match Gingival Shade when Using Pink Ceramics for Anterior Fixed Implant Prostheses.

    Science.gov (United States)

    Papaspyridakos, Panos; Amin, Sarah; El-Rafie, Khaled; Weber, Hans-Peter

    2018-03-01

    Use of pink gingival ceramics can reduce the necessity for extensive surgical procedures attempting to restore missing soft and hard tissues in the maxillary esthetic zone. Selecting the appropriate shade for pink porcelain poses a challenge, especially when the patient presents with a high smile line. This paper describes a simple and effective technique to facilitate shade selection for gingival ceramics to match the patient's existing gingival shade. © 2016 by the American College of Prosthodontists.

  1. Estimating a marriage matching model with spillover effects.

    Science.gov (United States)

    Choo, Eugene; Siow, Aloysius

    2006-08-01

    We use marriage matching functions to study how marital patterns change when population supplies change. Specifically, we use a behavioral marriage matching function with spillover effects to rationalize marriage and cohabitation behavior in contemporary Canada. The model can estimate a couple's systematic gains to marriage and cohabitation relative to remaining single. These gains are invariant to changes in population supplies. Instead, changes in population supplies redistribute these gains between a couple. Although the model is behavioral, it is nonparametric. It can fit any observed cross-sectional marriage matching distribution. We use the estimated model to quantify the impacts of gender differences in mortality rates and the baby boom on observed marital behavior in Canada. The higher mortality rate of men makes men scarcer than women. We show that the scarceness of men modestly reduced the welfare of women and increased the welfare of men in the marriage market. On the other hand, the baby boom increased older men's net gains to entering the marriage market and lowered middle-aged women's net gains.

  2. Template Matching of Colored Image Based on Quaternion Fourier Transform and Image Pyramid Techniques

    Directory of Open Access Journals (Sweden)

    M.I. KHALIL

    2016-04-01

    Template matching is one of the most significant object recognition techniques; it has many applications in digital signal and image processing and is the basis for object tracking in computer vision. Traditional template matching by correlation is performed between a gray template image w and a candidate gray image f, where the template's position is to be determined in the candidate image. This task can be achieved by measuring the similarity between the template image and the candidate image to identify and localize object instances within an image. When applying this method to a colored image, the image must be converted to a gray one or decomposed into its RGB components to be processed separately. The current paper aims to apply the template matching technique to colored images by generating the quaternion Fourier transforms of both the template and the candidate colored image and performing the cross-correlation between those transforms. Moreover, this approach is improved by representing both the image and the template in a pyramid multi-resolution format to reduce processing time. The proposed algorithm is implemented and applied to different images and templates using Matlab functions.
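
    A real-valued, channel-wise stand-in exposes the frequency-domain mechanics: correlate each RGB channel by FFT and sum the correlation surfaces. The quaternion Fourier transform instead couples the three channels in a single hypercomplex transform, so this sketch (numpy assumed) only approximates the paper's approach; a pyramid scheme would apply it coarse-to-fine.

        import numpy as np

        def fft_correlate_color(image, template):
            """Circular cross-correlation of template against image, summed
            over color channels; the argmax of the returned surface locates
            the best alignment (up to wrap-around border effects)."""
            H, W, _ = image.shape
            score = np.zeros((H, W))
            for c in range(3):
                f = np.fft.fft2(image[:, :, c].astype(float))
                t = np.fft.fft2(template[:, :, c].astype(float), s=(H, W))
                score += np.real(np.fft.ifft2(f * np.conj(t)))
            return score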

  3. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  4. Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition

    National Research Council Canada - National Science Library

    Keller, Martin G

    1999-01-01

    In the fields of computational vision and image understanding, the object recognition problem can be formulated as a problem of matching a collection of model features to features extracted from an observed scene...

  5. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    Science.gov (United States)

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).

  6. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  7. Cross-matching: A modified cross-correlation underlying threshold energy model and match-based depth perception

    Directory of Open Access Journals (Sweden)

    Takahiro eDoi

    2014-10-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term 'cross-matching', represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (a graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in
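
    A one-dimensional toy version makes the computation concrete. Half-wave rectifying the pointwise products inside the correlation sum is equivalent to correlating the like-signed (rectified) components of the two signals, so anticorrelated inputs no longer produce strong negative peaks. numpy assumed; this sketches the idea, not the paper's full model.

        import numpy as np

        def cross_matching(left, right, max_disparity):
            """Disparity scores from a modified cross-correlation in which
            each pointwise product is half-wave rectified before summation."""
            left = left - left.mean()
            right = right - right.mean()
            scores = []
            for d in range(-max_disparity, max_disparity + 1):
                prod = left * np.roll(right, d)
                scores.append(np.maximum(prod, 0.0).sum())  # half-wave rectify
            return np.array(scores)  # index of the max gives the disparity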

  8. MODELING CONTROLLED ASYNCHRONOUS ELECTRIC DRIVES WITH MATCHING REDUCERS AND TRANSFORMERS

    Directory of Open Access Journals (Sweden)

    V. S. Petrushin

    2015-04-01

    Full Text Available Purpose. To work out mathematical models of speed-controlled induction electric drives that ensure joint consideration of transformers, motors and loads, as well as matching reducers and transformers, in both static and dynamic regimes, for the analysis of their operating characteristics. Methodology. The mathematical modelling takes into account functional, mass, dimensional and cost indexes of the reducers and transformers, which allows the engineering and economic aspects of speed-controlled induction electric drives to be observed. The mathematical models used for examination of the transient electromagnetic and electromechanical processes are based on systems of nonlinear differential equations with nonlinear coefficients (parameters of the motor equivalent circuits, varying at each operating point, in part owing to saturation of the magnetic system and current displacement in the rotor winding of the induction motor). To raise the level of adequacy of the models, losses in the magnetic circuit iron as well as additional and mechanical losses are considered. Results. Several speed-controlled induction electric drives, differing in their components but working on a load of equal character, magnitude and demanded control range, were modelled. Using families of characteristics, including mechanical ones, at various regulating parameters on which the performances of the load mechanism are superimposed, adjusting characteristics were obtained that represent the dependences of electrical, energy and thermal quantities on the angular speed of the motors. Originality. The proposed complex models of speed-controlled induction electric drives with matching reducers and transformers make it possible to realize a well-founded selection of drive components. They can also be used as design models in the development of speed-controlled induction motors. Practical value. Operating characteristics of various speed-controlled induction electric

  9. Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases.

    Science.gov (United States)

    Kelley, Brendan P; Klochko, Chad; Halabi, Safwan; Siegal, Daniel

    2016-06-01

    Retrospective data mining has tremendous potential in research but is time and labor intensive. Current data mining software contains many advanced search features but is limited in its ability to identify patients who meet multiple complex independent search criteria. Simple keyword and Boolean search techniques are ineffective when more complex searches are required, or when a search for multiple mutually inclusive variables becomes important. This is particularly true when trying to identify patients with a set of specific radiologic findings or proximity in time across multiple different imaging modalities. Another challenge that arises in retrospective data mining is that much variation still exists in how image findings are described in radiology reports. We present an algorithmic approach to solve this problem and describe a specific use case scenario in which we applied our technique to a real-world data set in order to identify patients who matched several independent variables in our institution's picture archiving and communication systems (PACS) database.
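
    A schematic sketch of the multiphase matching idea: evaluate each complex criterion independently over report records (with synonym lists absorbing some of the reporting variation), then intersect patients under a time-proximity constraint. All field names, phrases and the window are hypothetical, not the authors' implementation:

      from datetime import datetime, timedelta

      # Hypothetical report records: (patient_id, modality, study_date, report_text)
      reports = [
          ("p1", "CT", datetime(2015, 3, 1), "pulmonary embolism identified"),
          ("p1", "US", datetime(2015, 3, 3), "DVT in the left femoral vein"),
          ("p2", "CT", datetime(2015, 5, 9), "no acute findings"),
      ]

      def patients_matching(reports, modality, phrases):
          # One mining phase: an independent keyword screen per criterion;
          # synonym lists capture variation in how findings are described.
          return {(pid, date) for pid, mod, date, text in reports
                  if mod == modality and any(p in text for p in phrases)}

      ct_pe  = patients_matching(reports, "CT", ["pulmonary embolism", "PE"])
      us_dvt = patients_matching(reports, "US", ["DVT", "deep vein thrombosis"])

      # Final phase: mutually inclusive match with proximity in time (7 days).
      window = timedelta(days=7)
      hits = {pid for pid, d1 in ct_pe
              for qid, d2 in us_dvt
              if pid == qid and abs(d1 - d2) <= window}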

  10. Classification of normal and arrhythmic ECG using wavelet transform based template-matching technique.

    Science.gov (United States)

    Hassan, Wajahat; Saleem, Saqib; Habib, Aamir

    2017-06-01

    To propose a wavelet-based template matching technique to extract features for automatic classification of electrocardiogram signals of normal and arrhythmic individuals. The study was conducted from December 2014 to December 2015 at the Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan. Electrocardiogram signals analysed in this study were taken from the freely available database www.physionet.org. The data for normal subjects was taken from the Massachusetts Institute of Technology-Beth Israel Hospital's normal sinus rhythm database and data for diseased subjects was taken from the arrhythmia database. Of the 30 subjects, there were 15(50%) normal and 15(50%) diseased subjects. The group-averaged phase difference indices of arrhythmic subjects were significantly larger than that of normal individuals (p<0.05) within the frequency range of 0.9-1.1 Hz. Moreover, the scatter plot between the phase difference index and magnitude of wavelet cross-spectrum for frequency range of 0.9-1.1 Hz demonstrated a satisfactory delineation between normal and arrhythmic individuals. Wavelet decomposition-based template matching technique achieved satisfactory delineation of normal and arrhythmic electrocardiogram dynamics.
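
    A rough sketch of a phase-difference index between two signals restricted to the 0.9-1.1 Hz band used in the study. For simplicity a Fourier cross-spectrum (scipy.signal.csd) stands in here for the paper's wavelet cross-spectrum, and all signals and parameters are illustrative:

      import numpy as np
      from scipy.signal import csd

      fs = 250.0                               # assumed sampling rate (Hz)
      rng = np.random.default_rng(1)
      x = rng.standard_normal(int(60 * fs))    # illustrative signal pair
      y = np.roll(x, 10) + 0.5 * rng.standard_normal(x.size)

      # Cross-spectral density between the two signals.
      f, Pxy = csd(x, y, fs=fs, nperseg=1024)

      # Phase-difference index: mean absolute cross-spectral phase restricted
      # to the 0.9-1.1 Hz band; the band magnitude gives the second axis of
      # the scatter plot described in the abstract.
      band = (f >= 0.9) & (f <= 1.1)
      pdi = np.mean(np.abs(np.angle(Pxy[band])))
      magnitude = np.mean(np.abs(Pxy[band]))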

  11. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    OpenAIRE

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix-form of the conventional control mixer concept into a LTI dynamic system-form. The H_inf control technique is employed for these dynamic module designs after an augmented control system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when some conditions are satisfied. To illustrate the effe...

  12. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    Science.gov (United States)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools and the correlation of the patient's virtual models with the patient himself are some examples, taken from the biomedical field, of a single underlying problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method gives the patient minimum discomfort while keeping the errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.

  13. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  14. Mouse models for gastric cancer: Matching models to biological questions.

    Science.gov (United States)

    Poh, Ashleigh R; O'Donoghue, Robert J J; Ernst, Matthias; Putoczki, Tracy L

    2016-07-01

    Gastric cancer is the third leading cause of cancer-related mortality worldwide. This is in part due to the asymptomatic nature of the disease, which often results in late-stage diagnosis, at which point there are limited treatment options. Even when treated successfully, gastric cancer patients have a high risk of tumor recurrence and acquired drug resistance. It is vital to gain a better understanding of the molecular mechanisms underlying gastric cancer pathogenesis to facilitate the design of new targeted therapies that may improve patient survival. A number of chemically and genetically engineered mouse models of gastric cancer have provided significant insight into the contribution of genetic and environmental factors to disease onset and progression. This review outlines the strengths and limitations of current mouse models of gastric cancer and their relevance to the pre-clinical development of new therapeutics. © 2016 The Authors Journal of Gastroenterology and Hepatology published by Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  15. A computer-video aided time motion analysis technique for match analysis.

    Science.gov (United States)

    Ali, A; Farrally, M

    1991-03-01

    The purpose of this study was to find suitable methods for obtaining objective data on the time spent by players in different positions walking, jogging, cruising, sprinting and standing still during match play. Computer programs and filming analyses with a simple notation system based upon symbolic representations of movements were devised for the analysis of individual players' behaviour. The technique was employed with a small group of university players aged 19-21 years. The subjects were filmed in several matches, and the video recordings were analysed using a microcomputer. On average, players spent 56% of the time walking, 30% jogging, 4% cruising, 3% sprinting and 7% standing still. ANOVA revealed significant differences among players in different positions on the field; for example, the time spent walking, jogging and standing still differed (P less than 0.05) among attackers, defenders and midfielders. A new method has been developed to obtain reliable information about the players' movement and performance in the game. The authors believe that further studies should be carried out involving more teams at different levels of performance to substantiate these preliminary findings.
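
    A small sketch of the bookkeeping such a time-motion analysis implies, assuming per-frame activity labels coded from video; the data are invented and scipy.stats.f_oneway supplies the one-way ANOVA:

      import numpy as np
      from scipy.stats import f_oneway

      ACTIVITIES = ["walk", "jog", "cruise", "sprint", "stand"]

      def time_proportions(labels):
          # labels: one activity code per video frame for one player.
          labels = np.asarray(labels)
          return {a: float(np.mean(labels == a)) for a in ACTIVITIES}

      # Illustrative per-player walking proportions grouped by position.
      attackers   = [0.52, 0.55, 0.58]
      defenders   = [0.60, 0.62, 0.59]
      midfielders = [0.50, 0.48, 0.53]

      # One-way ANOVA across positions (significant if p < 0.05).
      stat, p = f_oneway(attackers, defenders, midfielders)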

  16. A Block-matching based technique for the analysis of 2D gel images.

    Science.gov (United States)

    Freire, Ana; Seoane, José A; Rodríguez, Alvaro; Ruiz-Romero, Cristina; López-Campos, Guillermo; Dorado, Julián

    2010-01-01

    Research at the protein level is a useful practice in personalized medicine. More specifically, 2D gel images obtained after the electrophoresis process can lead to an accurate diagnosis. Several computational approaches try to help clinicians establish the correspondence between pairs of proteins in multiple 2D gel images. Most of them perform the alignment of a patient image against a reference image. In this work, an approach based on block-matching techniques is developed. Its main characteristic is that it does not need to perform the whole alignment between two images, considering each protein separately. A comparison with other published methods is presented. It can be concluded that this method works over a broad range of proteomic images, even when they have a high level of difficulty.
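
    A bare-bones sketch of block matching for a single protein spot: take a block around the spot in the reference image and search a window in the target image for the displacement minimizing the sum of squared differences. Sizes and names are illustrative, not the authors' implementation:

      import numpy as np

      def match_block(ref, tgt, cy, cx, bsize=8, search=10):
          # Extract the reference block centred on the spot (cy, cx).
          block = ref[cy - bsize:cy + bsize, cx - bsize:cx + bsize]
          best, best_dy, best_dx = np.inf, 0, 0
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  cand = tgt[cy + dy - bsize:cy + dy + bsize,
                             cx + dx - bsize:cx + dx + bsize]
                  ssd = np.sum((block - cand) ** 2)   # dissimilarity measure
                  if ssd < best:
                      best, best_dy, best_dx = ssd, dy, dx
          return best_dy, best_dx                     # estimated displacement

      rng = np.random.default_rng(2)
      ref = rng.random((128, 128))
      tgt = np.roll(ref, (3, -2), axis=(0, 1))        # known shift for testing
      print(match_block(ref, tgt, 64, 64))            # -> (3, -2)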

  17. The comparison of Co-60 and 4MV photons matching dosimetry during half-beam technique

    International Nuclear Information System (INIS)

    Cakir, Aydin; Bilge, Hatice; Dadasbilge, Alpar; Kuecuecuek, Halil; Okutan, Murat; Merdan Fayda, Emre

    2005-01-01

    In this phantom study, we compared the matching dosimetry differences between half-beam blocking with Co-60 and asymmetric collimation with 4MV photons during craniospinal irradiation. The dose distributions are compared and discussed. First, gaps of different sizes were left between the cranial and spinal field borders. Second, the fields were overlapped by the same amounts. We irradiated films located in water-equivalent solid phantoms with Co-60 and 4MV photon beams. This study indicates that field placement errors within ±1 mm are acceptable for both Co-60 and 4MV photon energies during craniospinal irradiation with the half-beam block technique; within these limits the dose variations stay within ±5%. However, setup errors of more than 1 mm are unacceptable for both asymmetric collimation of 4MV photons and half-beam blocking of Co-60.

  18. An Improved Map-Matching Technique Based on the Fréchet Distance Approach for Pedestrian Navigation Services

    Directory of Open Access Journals (Sweden)

    Yoonsik Bang

    2016-10-01

    Full Text Available Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS. PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services. These techniques are inappropriate for PNS because pedestrians move, stop, and turn in different ways compared to vehicles. In addition, the base map data for pedestrians are more complicated than for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS trajectories of pedestrians onto road network datasets. The theory underlying this approach is based on the Fréchet distance, one of the measures of geometric similarity between two curves. The Fréchet distance approach can provide reasonable matching results because two linear trajectories are parameterized with the time variable. Then we improved the method to be adaptive to the positional error of the GPS signal. We used an adaptation coefficient to adjust the search range for every input signal, based on the assumption of auto-correlation between consecutive GPS points. To reduce errors in matching, the reliability index was evaluated in real time for each match. To test the proposed map-matching method, we applied it to GPS trajectories of pedestrians and the road network data. We then assessed the performance by comparing the results with reference datasets. Our proposed method performed better with test data when compared to a conventional map-matching technique for vehicles.
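
    For reference, a compact sketch of the discrete Fréchet distance, the geometric similarity measure underlying the approach above (standard dynamic-programming formulation; the paper's adaptation coefficient and reliability index are not reproduced):

      import numpy as np

      def discrete_frechet(P, Q):
          # P, Q: polylines as (n, 2) arrays (GPS trajectory vs. road segment).
          P, Q = np.asarray(P, float), np.asarray(Q, float)
          n, m = len(P), len(Q)
          ca = np.full((n, m), -1.0)          # memo table

          def c(i, j):
              if ca[i, j] >= 0:
                  return ca[i, j]
              d = np.linalg.norm(P[i] - Q[j])
              if i == 0 and j == 0:
                  ca[i, j] = d
              elif i == 0:
                  ca[i, j] = max(c(0, j - 1), d)
              elif j == 0:
                  ca[i, j] = max(c(i - 1, 0), d)
              else:
                  ca[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
              return ca[i, j]

          return c(n - 1, m - 1)

      traj = [(0, 0), (1, 0.1), (2, -0.1)]
      road = [(0, 0), (1, 0), (2, 0)]
      print(discrete_frechet(traj, road))   # small value -> good candidate match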

  19. Unconditional or Conditional Logistic Regression Model for Age-Matched Case-Control Data?

    Science.gov (United States)

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case-control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case-control data to tackle the sparse data problem. The sparse data problem, however, may not be a concern for loose-matching data when the matching between cases and controls is not unique, and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is an appropriate method in this setting. To address the hypothesis, we compare unconditional and conditional logistic regression models in terms of precision of estimates and hypothesis testing, using simulated matched case-control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to matching distortion, in which the matching process makes cases and controls similar not only on the matching variables but also on the exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored, with negligible loss in testing and estimation, if the distributions of the matching variables are not extremely different between cases and controls.
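
    An illustrative sketch of the unconditional analysis of loosely matched data: fit an ordinary logistic model with the exposure plus the matching variable as covariates. The simulated data and coefficients are invented; statsmodels merely stands in for any logistic-regression fit:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 400
      age = rng.integers(40, 80, n)                 # matching variable
      exposure = rng.binomial(1, 0.3 + 0.004 * (age - 40))
      logit = -2.0 + 0.8 * exposure + 0.02 * (age - 60)
      case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      # Unconditional logistic regression: adjust for the matching variable
      # by including it as an ordinary covariate alongside the exposure.
      X = sm.add_constant(np.column_stack([exposure, age]))
      fit = sm.Logit(case, X).fit(disp=False)
      print(fit.params[1])          # log odds ratio for the exposure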

  20. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions. The goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record version count. The second variant of the model was used, according to which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between the sequential updates performed by the current client. In order to prove the model's adequacy, a real experiment was conducted on a cloud cluster. The cluster contains 10 virtual nodes provided by the DigitalOcean Company, with Ubuntu Server 14.04 as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the database will not exceed the number of clients operating on a record in parallel, which is very important while conducting experiments. The application was developed using the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the processed record in the database while old versions are deleted from the DB. Then a client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and outputs the results of processing. In the case of a conflict emerging because of simultaneous updates of the RZ record, the client obtains all versions of that
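
    A toy sketch of the version bookkeeping the model describes, using classic vector clocks to decide whether one record version supersedes another or the two are concurrent; this is a simplification of Riak's dotted version vectors, and all names are illustrative:

      def descends(a, b):
          # True if version clock `a` dominates `b`, i.e. `a` has seen
          # every update recorded in `b`.
          keys = set(a) | set(b)
          return all(a.get(k, 0) >= b.get(k, 0) for k in keys)

      def reconcile(versions):
          # Keep only versions not dominated by another one; what survives is
          # the set of concurrent (conflicting) versions a client must process.
          return [v for v in versions
                  if not any(descends(w, v) for w in versions
                             if w is not v and w != v)]

      v1 = {"clientA": 2, "clientB": 1}
      v2 = {"clientA": 1, "clientB": 1}     # dominated by v1 -> discarded
      v3 = {"clientA": 1, "clientB": 2}     # concurrent with v1 -> kept
      print(reconcile([v1, v2, v3]))        # -> [v1, v3]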

  1. Data Matching Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection

    CERN Document Server

    Christen, Peter

    2012-01-01

    Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of data matching.

  2. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.

  3. A numerical/empirical technique for history matching and predicting cyclic steam performance in Canadian oil sands reservoirs

    Science.gov (United States)

    Leshchyshyn, Theodore Henry

    The oil sands of Alberta contain some one trillion barrels of bitumen-in-place, most contained in the McMurray, Wabiskaw, Clearwater, and Grand Rapids formations. Depth of burial is 0--550 m, 10% of which is surface mineable, the rest recoverable by in-situ technology-driven enhanced oil recovery schemes. To date, significant commercial recovery has been attributed to Cyclic Steam Stimulation (CSS) using vertical wellbores. Other techniques, such as Steam Assisted Gravity Drainage (SAGD), are proving superior to other recovery methods for increasing early oil production, but at higher initial development and/or operating costs. Successful optimization of bitumen production rates from the entire reservoir is ultimately decided by the operator's understanding of the reservoir in its original state and/or the positive and negative changes which occur in oil sands and heavy oil deposits upon heat stimulation. Reservoir description is the single most important factor in attaining satisfactory history matches and forecasts for optimized production of the commercially-operated processes. Reservoir characterization which lacks understanding can destroy a project. For example, incorrect assumptions in the geological model for the Wolf Lake Project in northeast Alberta resulted in only about one-half of the predicted recovery by the original field process. It will be shown here why the presence of thin calcite streaks within oil sands can determine the success or failure of a commercial cyclic steam project. A vast amount of field data, mostly from the Primrose Heavy Oil Project (PHOP) near Cold Lake, Alberta, enabled the development of a simple set of correlation curves for predicting bitumen production using CSS. A previously calibrated thermal numerical simulation model was used in its simplest form, that is, a single layer, radial grid blocks, "fingering" or "dilation" adjusted permeability curves, and no simulated fracture, to generate the first cycle production

  4. Can simple population genetic models reconcile partial match ...

    Indian Academy of Sciences (India)

    the product rule, population substructure, and relatedness to predict the expected number of matches in large databases. I find that there is a relatively narrow window of parameter values that can plausibly describe the Arizona results. Further research could help determine if the Arizona samples are congruent with some ...

  5. Generating Models of a Matched Formula with a Polynomial Delay

    Czech Academy of Sciences Publication Activity Database

    Savický, Petr; Kučera, P.

    2016-01-01

    Roč. 56, č. 6 (2016), s. 379-402 ISSN 1076-9757 R&D Projects: GA ČR GBP202/12/G061 Grant - others:GA ČR(CZ) GA15-15511S Institutional support: RVO:67985807 Keywords : conjunctive normal form * matched formula * pure literal satisfiable formula Subject RIV: BA - General Mathematics Impact factor: 2.284, year: 2016

  6. Business models for open innovation: Matching heterogeneous open innovation strategies with business model dimensions

    OpenAIRE

    Saebi, Tina; Foss, Nicolai Juul

    2015-01-01

    -This is the author's version of the article:"Business models for open innovation: Matching heterogeneous open innovation strategies with business model dimensions", European Management Journal, Volume 33, Issue 3, June 2015, Pages 201–213 Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies' business models are not attuned to open strategies. Ac...

  7. Circuit and Measurement Technique for Radiation Induced Drift in Precision Capacitance Matching

    Science.gov (United States)

    Prasad, Sudheer; Shankar, Krishnamurthy Ganapathy

    2013-04-01

    In the design of radiation-tolerant precision ADCs targeted for the space market, a matched capacitor array is crucial. The drift of capacitance ratios due to radiation should be small enough not to cause linearity errors. Conventional methods for measuring capacitor matching may not achieve the desired level of accuracy due to radiation-induced gain errors in the measurement circuits. In this work, we present a circuit and method for measuring capacitance ratio drift to a very high accuracy (< 1 ppm), unaffected by radiation levels up to 150 krad.

  8. Determination of lower and upper bounds of predicted production from history-matched models

    NARCIS (Netherlands)

    van Essen, G. M.; Kahrobaei, S.S.; van Oeveren, H.; van den Hof, P.M.J.; Jansen, J.D.

    2016-01-01

    We present a method to determine lower and upper bounds to the predicted production or any other economic objective from history-matched reservoir models. The method consists of two steps: 1) performing a traditional computer-assisted history match of a reservoir model with the objective to

  9. A match-mismatch test of a stage model of behaviour change in tobacco smoking

    NARCIS (Netherlands)

    Dijkstra, A; Conijn, B; De Vries, H

    Aims An innovation offered by stage models of behaviour change is that of stage-matched interventions. Match-mismatch studies are the primary test of this idea but also the primary test of the validity of stage models. This study aimed at conducting such a test among tobacco smokers using the Social

  10. Scientist Role Models in the Classroom: How Important Is Gender Matching?

    Science.gov (United States)

    Conner, Laura D. Carsten; Danielson, Jennifer

    2016-01-01

    Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used…

  11. Practical guidance for the use of a pattern-matching technique in case-study research: a case presentation.

    Science.gov (United States)

    Almutairi, Adel F; Gardner, Glenn E; McCarthy, Alexandra

    2014-06-01

    This paper reports on a study that demonstrates how to apply pattern matching as an analytical method in case-study research. Case-study design is appropriate for the investigation of highly-contextualized phenomena that occur within the social world. Case-study design is considered a pragmatic approach that permits employment of multiple methods and data sources in order to attain a rich understanding of the phenomenon under investigation. The findings from such multiple methods can be reconciled in case-study analysis, specifically through a pattern-matching technique. Although this technique is theoretically explained in the literature, there is scant guidance on how to apply the method practically when analyzing data. This paper demonstrates the steps taken during pattern matching in a completed case-study project that investigated the influence of cultural diversity in a multicultural nursing workforce on the quality and safety of patient care. The example highlighted in this paper contributes to the practical understanding of the pattern-matching process, and can also make a substantial contribution to case-study methods. © 2013 Wiley Publishing Asia Pty Ltd.

  12. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    Energy Technology Data Exchange (ETDEWEB)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.; Czekala, Ian; Bailey, Vanessa P.; Follette, Katherine B. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA, 94305 (United States); Wang, Jason J.; Rosa, Robert J. De; Duchêne, Gaspard [Astronomy Department, University of California, Berkeley CA, 94720 (United States); Pueyo, Laurent [Space Telescope Science Institute, Baltimore, MD, 21218 (United States); Marley, Mark S. [NASA Ames Research Center, Mountain View, CA, 94035 (United States); Arriaga, Pauline; Fitzgerald, Michael P. [Department of Physics and Astronomy, University of California, Los Angeles, CA, 90095 (United States); Barman, Travis [Lunar and Planetary Laboratory, University of Arizona, Tucson AZ, 85721 (United States); Bulger, Joanna [Subaru Telescope, NAOJ, 650 North A’ohoku Place, Hilo, HI 96720 (United States); Chilcote, Jeffrey [Dunlap Institute for Astronomy and Astrophysics, University of Toronto, Toronto, ON, M5S 3H4 (Canada); Cotten, Tara [Department of Physics and Astronomy, University of Georgia, Athens, GA, 30602 (United States); Doyon, Rene [Institut de Recherche sur les Exoplanètes, Départment de Physique, Université de Montréal, Montréal QC, H3C 3J7 (Canada); Gerard, Benjamin L. [University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 5C2 (Canada); Goodsell, Stephen J., E-mail: jruffio@stanford.edu [Gemini Observatory, 670 N. A’ohoku Place, Hilo, HI, 96720 (United States); and others

    2017-06-10

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
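
    A stripped-down sketch of matched-filter detection in general terms: correlate the data with a signal template and normalize by the noise level so the output is an S/N map. The KLIP forward model and the spectral dimension of the actual pipeline are omitted; the PSF, amplitudes and positions below are illustrative:

      import numpy as np

      def matched_filter_snr(image, template, noise_sigma):
          # Slide the (small) template over the image and compute, at every
          # position, the template-weighted sum normalized by its standard
          # deviation under white noise: S/N = <t, d> / (sigma * ||t||).
          th, tw = template.shape
          norm = noise_sigma * np.sqrt(np.sum(template ** 2))
          H, W = image.shape
          snr = np.zeros((H - th + 1, W - tw + 1))
          for y in range(snr.shape[0]):
              for x in range(snr.shape[1]):
                  snr[y, x] = np.sum(image[y:y + th, x:x + tw] * template) / norm
          return snr

      rng = np.random.default_rng(4)
      g = np.exp(-0.5 * (np.arange(7) - 3) ** 2)
      psf = np.outer(g, g)                        # toy Gaussian PSF template
      img = rng.standard_normal((64, 64))
      img[30:37, 40:47] += 5 * psf                # injected faint point source
      snr = matched_filter_snr(img, psf, noise_sigma=1.0)
      print(np.unravel_index(snr.argmax(), snr.shape))   # peak near (30, 40)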

  13. Weak Memory Models with Matching Axiomatic and Operational Definitions

    OpenAIRE

    Zhang, Sizhuo; Vijayaraghavan, Muralidaran; Lustig, Dan; Arvind

    2017-01-01

    Memory consistency models are notorious for being difficult to define precisely, to reason about, and to verify. More than a decade of effort has gone into nailing down the definitions of the ARM and IBM Power memory models, and yet there still remain aspects of those models which (perhaps surprisingly) remain unresolved to this day. In response to these complexities, there has been somewhat of a recent trend in the (general-purpose) architecture community to limit new memory models to being ...

  14. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used to determine accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: (1) feature extraction, (2) similarity measure and matching, and (3) adjustment of the EOPs of the single image. For feature extraction, we propose two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of a single image's EOPs is achievable with the proposed registration approach as an alternative to a labour-intensive manual registration process.
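
    A minimal 2D geometric hashing sketch conveying the mechanism used above: model corner points are stored in a hash table under coordinates expressed in every ordered basis pair, and image points then vote for the model basis that explains them. The contextual cues and EOP adjustment of the paper are omitted, and all data here are illustrative:

      import numpy as np
      from collections import defaultdict
      from itertools import permutations

      def basis_coords(p, b0, b1):
          # Express point p in the frame defined by the ordered basis (b0, b1);
          # these coordinates are invariant under rotation, uniform scaling
          # and translation of the whole point set.
          u = b1 - b0
          v = np.array([-u[1], u[0]])               # perpendicular axis
          return np.linalg.solve(np.column_stack([u, v]), p - b0)

      def build_table(model_pts, q=0.25):
          table = defaultdict(list)
          for i, j in permutations(range(len(model_pts)), 2):
              for k, p in enumerate(model_pts):
                  if k in (i, j):
                      continue
                  key = tuple(np.round(
                      basis_coords(p, model_pts[i], model_pts[j]) / q).astype(int))
                  table[key].append((i, j))          # remember the model basis
          return table

      def vote(table, img_pts, q=0.25):
          votes = defaultdict(int)
          b0, b1 = img_pts[0], img_pts[1]            # try one image basis
          for p in img_pts[2:]:
              key = tuple(np.round(basis_coords(p, b0, b1) / q).astype(int))
              for basis in table.get(key, ()):
                  votes[basis] += 1
          return max(votes, key=votes.get) if votes else None

      model = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 5]], float)
      t = np.deg2rad(30)                             # rotate, scale and shift
      R = 1.5 * np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
      image = model @ R.T + np.array([10.0, -2.0])
      print(vote(build_table(model), image))   # -> (0, 1), the matching basis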

  15. Pre- and post-operative evaluation of pincer-type femoroacetabular impingement during squat using image-matching techniques: A case report.

    Science.gov (United States)

    Yoshimoto, Kensei; Hamai, Satoshi; Higaki, Hidehiko; Gondo, Hirotaka; Ikebe, Satoru; Nakashima, Yasuharu

    2018-01-01

    Although combined evaluation of hip joint kinematics and bone morphology is necessary for accurate assessment of femoroacetabular impingement (FAI), there are no reports evaluating the hip kinematics of pincer-type FAI. The pre- and postoperative hip kinematics during squatting of a 46-year-old man with pincer-type FAI were evaluated using image-matching techniques, and the rim-neck distance was measured. Preoperative simulation of squatting was also performed using the patient's bone models and a healthy subject's kinematics data to detect the overlapping lesion between the acetabulum and the femur. Post-acetabuloplasty, right coxalgia during squatting disappeared, and the Harris Hip Score improved from 79 to 92 at one year after surgery. Posterior pelvic tilt, femoral flexion and hip flexion angles changed from 24.0°, 101.1°, and 70.8° to 23.3°, 92.6°, and 63.3°, respectively. The minimum rim-neck distance at maximum hip flexion improved from 1.8 mm to 7.3 mm. With image-matching techniques we could evaluate both hip kinematics and morphology, and could visualize the clearance between the femoral head-neck junction and the acetabular rim. Image-matching techniques were clinically useful in assisting surgeons to detect the location of the impingement and to confirm resection of the pincer lesion post-operatively. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  16. Approximate Matching as a Key Technique in Organization of Natural and Artificial Intelligence

    Science.gov (United States)

    Mack, Marilyn; Lapir, Gennadi M.; Berkovich, Simon

    2000-01-01

    The basic property of an intelligent system, natural or artificial, is "understanding". We consider the following formalization of the idea of "understanding" among information systems. When system 1 issues a request to system 2, it expects a certain kind of desirable reaction. If such a reaction occurs, system 1 assumes that its request was "understood". In application to simple, "push-button" systems the situation is trivial, because in a small system the required relationship between input requests and desired outputs can be specified exactly. As systems grow, the situation becomes more complex and matching between requests and actions becomes approximate.

  17. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data

  18. Meta-analysis on Materials and Techniques for Laparotomy Closure: The MATCH Review.

    Science.gov (United States)

    Henriksen, N A; Deerenberg, E B; Venclauskas, L; Fortelny, R H; Miserez, M; Muysoms, F E

    2018-01-10

    The aim of this systematic review and meta-analysis was to evaluate closure materials and suture techniques for emergency and elective laparotomies. The primary outcome was incisional hernia after 12 months, and the secondary outcomes were burst abdomen and surgical site infection. A systematic literature search was conducted until September 2017. The quality of the RCTs was evaluated by at least 3 assessors using critical appraisal checklists. Meta-analyses were performed. A total of 23 RCTs were included in the meta-analysis. There was no evidence from RCTs using the same suture technique in both study arms that any suture material (fast-absorbable/slowly absorbable/non-absorbable) is superior in reducing incisional hernias. There is no evidence that continuous suturing is superior in reducing incisional hernias compared to interrupted suturing. When using a slowly absorbable suture for continuous suturing in elective midline closure, the small bites technique results in significantly less incisional hernias than a large bites technique (OR 0.41; 95% CI 0.19, 0.86). There is no high-quality evidence available concerning the best suture material or technique to reduce incisional hernia rate when closing a laparotomy. When using a slowly absorbable suture and a continuous suturing technique with small tissue bites, the incisional hernia rate is significantly reduced compared with a large bites technique.
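
    For reference, a tiny sketch of how a fixed-effect pooled odds ratio of the kind reported above can be computed from per-trial 2x2 tables with the Mantel-Haenszel estimator; the counts below are invented, not the review's data:

      # Per-RCT 2x2 tables:
      # (hernia_small, no_hernia_small, hernia_large, no_hernia_large)
      trials = [(21, 160, 40, 150), (8, 90, 18, 85), (13, 120, 25, 110)]

      # Mantel-Haenszel pooled OR: sum(a*d/n) / sum(b*c/n) over trials.
      num = sum(a * d / (a + b + c + d) for a, b, c, d in trials)
      den = sum(b * c / (a + b + c + d) for a, b, c, d in trials)
      or_mh = num / den
      print(round(or_mh, 2))      # < 1 favours the small bites arm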

  19. Model-based shape matching of orthopaedic implants in RSA and fluoroscopy

    NARCIS (Netherlands)

    Prins, Anne Hendrik

    2015-01-01

    Model-based shape matching is commonly used, for example to measure the migration of an implant with Roentgen stereophotogrammetric analysis (RSA) or to measure implant kinematics with fluoroscopy. The aim of this thesis was to investigate the general usability of shape matching and to improve the

  20. Endogenizing technological change. Matching empirical evidence to modeling needs

    Energy Technology Data Exchange (ETDEWEB)

    Pizer, William A. [Resources for the Future, 1616 P Street NW, Washington, DC, 20009 (United States); Popp, David [Department of Public Administration, Center for Policy Research, The Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020 (United States); National Bureau of Economic Research (United States)

    2008-11-15

    Given that technologies to significantly reduce fossil fuel emissions are currently unavailable or only available at high cost, technological change will be a key component of any long-term strategy to reduce greenhouse gas emissions. In light of this, the amount of research on the pace, direction, and benefits of environmentally-friendly technological change has grown dramatically in recent years. This research includes empirical work estimating the magnitude of these effects, and modeling exercises designed to simulate the importance of endogenous technological change in response to climate policy. Unfortunately, few attempts have been made to connect these two streams of research. This paper attempts to bridge that gap. We review both the empirical and modeling literature on technological change. Our focus includes the research and development process, learning by doing, the role of public versus private research, and technology diffusion. Our goal is to provide an agenda for how both empirical and modeling research in these areas can move forward in a complementary fashion. In doing so, we discuss both how models used for policy evaluation can better capture empirical phenomena, and how empirical research can better address the needs of models used for policy evaluation. (author)

  2. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

    Full Text Available The present article relates in general to digital demodulation of Binary Frequency Shift Keying (BFSK). The objective of the present research is to obtain a new processing method for demodulating BFSK signals in order to reduce hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a Sampler Correlator and curve segmentation blocks in order to obtain a digital receiver for proper demodulation of the received signal. The proposed solution is shown to strongly reduce hardware complexity. This part presents the analytical description of the proposed solution and covers in detail the elements needed for properly configuring the system. A second part presents the implementation of the system in FPGA technology, together with simulation results that validate the overall performance.
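
    A simplified sketch of matched-filter (noncoherent correlation) BFSK demodulation: correlate each bit interval with the two candidate tones and pick the larger correlation energy. The curve-segmentation stage of the paper is not reproduced, and all parameters below are assumed values:

      import numpy as np

      fs, baud = 8000, 100               # sample rate and bit rate (assumed)
      f0, f1 = 1000, 2000                # mark/space tone frequencies (assumed)
      spb = fs // baud                   # samples per bit
      t = np.arange(spb) / fs

      def tone_energy(x, f):
          # In-phase plus quadrature correlation energy, so the decision is
          # insensitive to the unknown carrier phase.
          return (np.dot(x, np.cos(2 * np.pi * f * t)) ** 2 +
                  np.dot(x, np.sin(2 * np.pi * f * t)) ** 2)

      def demodulate(signal):
          bits = []
          for k in range(len(signal) // spb):
              chunk = signal[k * spb:(k + 1) * spb]
              bits.append(int(tone_energy(chunk, f1) > tone_energy(chunk, f0)))
          return bits

      bits = [1, 0, 1, 1, 0]
      tx = np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t)
                           for b in bits])
      rx = tx + 0.5 * np.random.default_rng(5).standard_normal(tx.size)
      print(demodulate(rx))              # -> [1, 0, 1, 1, 0]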

  3. Pair Hidden Markov Model for Named Entity Matching

    NARCIS (Netherlands)

    Nabende, P.; Tiedemann, J.; Nerbonne, J.; Sobh, T.

    2010-01-01

    This paper introduces a pair-Hidden Markov Model (pair-HMM) for the task of evaluating the similarity between bilingual named entities. The pair-HMM is adapted from Mackay and Kondrak [1] who used it on the task of cognate identification and was later adapted by Wieling et al. [5] for Dutch dialect

  4. Models of Jupiter's Interior that match Juno's Gravity Measurements

    Science.gov (United States)

    Militzer, B.; Wahl, S. M.; Hubbard, W. B.; Guillot, T.; Miguel, Y.; Kaspi, Y.; Galanti, E.; Iess, L.; Folkner, W. M.; Helled, R.; Durante, D.; Parisi, M.; Lunine, J. I.; Bloxham, J.; Levin, S.; Connerney, J. E. P.; Stevenson, D. J.; Bolton, S. J.

    2017-12-01

    Since the Juno spacecraft entered into orbit around Jupiter in July of 2016, it has performed a number of remarkable measurements. With every close flyby, we obtain a new set of precise gravity data that allow us to constrain the planet's gravitational field with unprecedented precision. Already with the first two flybys, the field was constrained by one order of magnitude better than before and a discrepancy between contradictory sets of gravitational coefficients was settled immediately (Folkner et al. 2017, Bolton et al. 2017). However, the new measurements turned out to be a challenge to interpret. A number of new interior models have been constructed already. It appears that models with a dilute core are favored, suggesting that the heavy elements in the planet's center, that were essential for the planet's formation, are now spread out over a substantial fraction of the planets interior (Wahl et al. 2017). In this talk, we will also discuss the gravity signal of the atmospheric and deep interior flows. We will show that interior models can be used to derive constraints on how deep the observable zonal jets can penetrate into the planet's interior. We will relate our predictions to physical changes in the dense fluid that is composed of hydrogen, helium, and a small but important component of heavier elements. The central goal of our modeling effort is a new understanding of Jupiter's interior and origin that combines all the gravity, microwave, and magnetic field observations of the Juno spacecraft.

  5. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    Science.gov (United States)

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    Ontology is the process of developing and elucidating the concepts of an information domain that are common to a group of users. Introducing ontology into information retrieval is a natural way to improve the search results for the relevant information users require. Matching keywords against a historical or information domain is significant in recent approaches for finding the best match to a specific input query. This research presents a better querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is converted into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to study the semantic model and the query under the conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching extracted instances from the queries and the information field. The queries and the information domain are focused on semantic matching, to discover the best match and to improve the execution process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the documents when compared to standard ontology.

  6. Random Matching Models and Money : The Global Structure and Approximation of Stationary Equilibria

    NARCIS (Netherlands)

    Kamiya, K.; Talman, A.J.J.

    2003-01-01

    Random matching models with different states are an important class of dynamic games; for example, money search models, job search models, and some games in biology are special cases. In this paper, we investigate the basic structure of the models: the existence of equilibria, the global structure of

  7. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  8. Conventional QT Variability Measurement vs. Template Matching Techniques: Comparison of Performance Using Simulated and Real ECG

    Science.gov (United States)

    Baumert, Mathias; Starc, Vito; Porta, Alberto

    2012-01-01

    Increased beat-to-beat variability in the QT interval (QTV) of ECG has been associated with increased risk for sudden cardiac death, but its measurement is technically challenging and currently not standardized. The aim of this study was to investigate the performance of commonly used beat-to-beat QT interval measurement algorithms. Three different methods (conventional, template stretching and template time shifting) were subjected to simulated data featuring typical ECG recording issues (broadband noise, baseline wander, amplitude modulation) and real short-term ECG of patients before and after infusion of sotalol, a QT interval prolonging drug. Among the three algorithms, the conventional algorithm was most susceptible to noise whereas the template time shifting algorithm showed superior overall performance on simulated and real ECG. None of the algorithms was able to detect increased beat-to-beat QT interval variability after sotalol infusion despite marked prolongation of the average QT interval. The QTV estimates of all three algorithms were inversely correlated with the amplitude of the T wave. In conclusion, template matching algorithms, in particular the time shifting algorithm, are recommended for beat-to-beat variability measurement of QT interval in body surface ECG. Recording noise, T wave amplitude and the beat-rejection strategy are important factors of QTV measurement and require further investigation. PMID:22860030
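
    A bare sketch of the template time-shifting idea: fix a template from the median beat, find for each beat the time shift that best aligns the template, and read variability from the spread of the shifts. Beat segmentation is assumed already done, and all names and waveform shapes are illustrative:

      import numpy as np

      def best_shift(beat, template, max_shift=20):
          # Integer shift (in samples) minimizing the squared mismatch
          # between the template and this beat's T-wave segment.
          errs = [np.sum((np.roll(beat, -s) - template) ** 2)
                  for s in range(-max_shift, max_shift + 1)]
          return int(np.argmin(errs)) - max_shift

      def qt_variability(beats):
          template = np.median(beats, axis=0)        # median-beat template
          shifts = np.array([best_shift(b, template) for b in beats])
          return shifts.std()                        # beat-to-beat QTV estimate

      rng = np.random.default_rng(6)
      base = np.sin(np.linspace(0, np.pi, 200))      # crude T-wave shape
      beats = np.array([np.roll(base, rng.integers(-5, 6)) +
                        0.02 * rng.standard_normal(200) for _ in range(50)])
      print(qt_variability(beats))                   # ~ injected shift spread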

  10. TECHNIQUES AND TACTICS IN BASKETBALL ACCORDING TO THE INTENSITY IN OFFICIAL MATCHES

    Directory of Open Access Journals (Sweden)

    José Francisco Daniel

    Full Text Available ABSTRACT Introduction: Basketball is characterized as an intermittent sport in which the high intensity of the actions currently stands out, demanding for sport performance the optimal and homogeneous development of physical, technical, tactical, psychological and intellectual components. In this sense, understanding the game according to the technical and tactical actions performed, and knowledge of the body's responses, are important for the planning, monitoring and control of training. Objective: The aim of this study was to describe the intensity of basketball tactical actions and the relationships between technical actions and intensity during the different game periods (GP). Methods: Ten athletes of the Brazilian male basketball elite participated in this study (27.60±5.54 years, 192.62±7.63 cm, 91.60±11.51 kg, 10.66±4.11% body fat) in six official matches of the National Basketball League (LNB, Brazil). Anthropometric measures and motor tests were performed, and tactical (defensive, offensive and transition), technical [number of actions (SN) and efficiency ratio (ER)] and physical actions [percentage of lactate threshold heart rate (%HRthr)] were correlated. Spearman's correlation coefficient was used between SN, ER and %HRthr. Results: The main results point to: (1) positive and significant relationships (except in the 4th GP) between SN, ER and %HRthr; (2) tactical actions presented HR near the lactate threshold, with the apparently highest median for the transitions (107.4 %HRthr). Conclusion: The game is intense, with moments of HRpeak, but the median is slightly above HRthr, which is where the best relationship between SN and ER occurs.

  11. eMatchSite: sequence order-independent structure alignments of ligand binding pockets in protein models.

    Directory of Open Access Journals (Sweden)

    Michal Brylinski

    2014-09-01

    Full Text Available Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques, however, to achieve a high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only various quality structure models are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4-9% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.
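
    A greatly simplified, sequence order-independent alignment sketch in the spirit of the above: score all residue pairs of two pockets by (hypothetical) physicochemical feature distance and solve the optimal one-to-one pairing with the Hungarian algorithm via scipy's linear_sum_assignment. This is not the eMatchSite algorithm itself:

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Hypothetical per-residue feature vectors for two binding pockets
      # (e.g. hydrophobicity, charge, solvent exposure); order-free sets.
      rng = np.random.default_rng(7)
      pocket_a = rng.random((6, 3))
      pocket_b = pocket_a[[3, 0, 5, 1, 4, 2]] + 0.05 * rng.random((6, 3))

      # Cost matrix: pairwise feature distance, independent of sequence order.
      cost = np.linalg.norm(pocket_a[:, None, :] - pocket_b[None, :, :], axis=2)
      rows, cols = linear_sum_assignment(cost)     # optimal residue pairing
      alignment_score = -cost[rows, cols].sum()
      print(list(zip(rows, cols)))   # recovers the shuffled correspondence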

  12. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Science.gov (United States)

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-01-01

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
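
    As a concrete illustration of the voting step at the heart of geometric hashing, the sketch below matches 2D point sets: model points are encoded in the frames of all ordered basis pairs and stored in a hash table, and a query basis then votes for compatible model bases. The contextual edged-corner cues of the paper are omitted; all names and the quantization step are our own.

        from collections import defaultdict
        import numpy as np

        def basis_coords(p, b0, b1):
            """Express point p in the similarity-invariant frame of basis (b0, b1)."""
            u = b1 - b0
            v = np.array([-u[1], u[0]])          # perpendicular axis
            d = p - b0
            return np.array([d @ u, d @ v]) / (u @ u)

        def build_table(model_pts, q=0.05):
            """Hash quantized coordinates of every point in every ordered basis."""
            table = defaultdict(list)
            n = len(model_pts)
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    for k in range(n):
                        if k in (i, j):
                            continue
                        key = tuple(np.round(
                            basis_coords(model_pts[k], model_pts[i], model_pts[j]) / q))
                        table[key].append((i, j))
            return table

        def vote(table, query_pts, i, j, q=0.05):
            """Vote for the model basis that best explains query basis (i, j)."""
            votes = defaultdict(int)
            for k in range(len(query_pts)):
                if k in (i, j):
                    continue
                key = tuple(np.round(
                    basis_coords(query_pts[k], query_pts[i], query_pts[j]) / q))
                for basis in table.get(key, []):
                    votes[basis] += 1
            return max(votes.items(), key=lambda kv: kv[1]) if votes else (None, 0)

        # Tiny usage example: a rotated, scaled, translated copy of the model.
        model = [np.array(p, float) for p in [(0, 0), (2, 0), (2, 1), (0.5, 1.5)]]
        theta, s = 0.4, 1.3
        R = s * np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        query = [R @ p + np.array([5.0, -2.0]) for p in model]
        print(vote(build_table(model), query, 0, 1))   # recovers basis (0, 1)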

  14. On a two-dimensional mode-matching technique for sound generation and transmission in axial-flow outlet guide vanes

    Science.gov (United States)

    Bouley, Simon; François, Benjamin; Roger, Michel; Posson, Hélène; Moreau, Stéphane

    2017-09-01

    The present work deals with the analytical modeling of two aspects of outlet guide vane aeroacoustics in axial-flow fan and compressor rotor-stator stages. The first mechanism addressed is the downstream transmission of rotor noise through the outlet guide vanes; the second is sound generation by the impingement of the rotor wakes on the vanes. The elementary prescribed excitation of the stator is an acoustic wave in the first case and a hydrodynamic gust in the second case. The solution for the response of the stator is derived using the same unified approach in both cases, within the scope of a linearized and compressible inviscid theory. It is provided by a mode-matching technique: modal expressions are written in the various sub-domains upstream and downstream of the stator as well as inside the inter-vane channels, and matched according to the conservation laws of fluid dynamics. This quite simple approach is uniformly valid in the whole range of subsonic Mach numbers and frequencies. It is presented for a two-dimensional rectilinear cascade of zero-staggered flat-plate vanes and completed by the implementation of a Kutta condition. It is then validated on sound generation and transmission test cases by comparison with a previously reported model based on the Wiener-Hopf technique and with reference numerical simulations. Finally, it is used to analyze the tonal rotor-stator interaction noise in a typical low-speed fan architecture. The appeal of the mode-matching technique is that it could easily be transposed to a three-dimensional annular cascade in cylindrical coordinates in future work. This makes it an attractive alternative to the classical strip-theory approach.
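
    The matching step can be summarized schematically as follows (generic notation, ours rather than the paper's): the acoustic potential is expanded on duct modes on each side of the interface, continuity is enforced, and projection onto the channel modes turns the conditions into a linear system for the modal coefficients.

        % Modal expansions on either side of the vane leading-edge interface x = 0,
        % for an inter-vane channel of height a (sketch only; the amplitudes A_m
        % are prescribed, R_m and T_n are the unknown reflected and transmitted
        % coefficients):
        \phi^{u}(0,y) = \sum_{m} (A_m + R_m)\, e^{i\alpha_m y}, \qquad
        \phi^{c}(0,y) = \sum_{n} T_n \cos\!\left(\tfrac{n\pi y}{a}\right).
        % Matching the potential and its axial derivative,
        %   \phi^{u}|_{x=0^-} = \phi^{c}|_{x=0^+}, \quad
        %   \partial_x \phi^{u}|_{x=0^-} = \partial_x \phi^{c}|_{x=0^+},
        % and projecting onto the channel modes cos(n*pi*y/a) yields linear
        % equations in the unknowns (R_m, T_n).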

  15. Production Efficiency and Market Orientation in Food Crops in North West Ethiopia: Application of Matching Technique for Impact Assessment.

    Directory of Open Access Journals (Sweden)

    Habtamu Yesigat Ayenew

    Full Text Available Agricultural technologies developed by national and international research institutions have not benefited the rural population of Ethiopia to the extent desired. In response, integrated agricultural extension approaches have been proposed as a key strategy to transform the smallholder farming sector. The Improving Productivity and Market Success (IPMS) of Ethiopian Farmers project is one of the development projects that integrates productivity-enhancing technological schemes with a market development model. This paper explores the impact of the project intervention on smallholder farmers' wellbeing. To test the research hypothesis of whether the project brought a significant change in the input use, marketed surplus, efficiency and income of farm households, we use cross-sectional data from 200 smallholder farmers in Northwest Ethiopia, collected through a multi-stage sampling procedure. To control for self-selection on observable characteristics of the farm households, we employ Propensity Score Matching (PSM). We then use Data Envelopment Analysis (DEA) techniques to estimate the technical efficiency of farm households. The outcome of the research is in line with the premise that participation of the household in the IPMS project improves purchased input use, marketed surplus, efficiency of farms and the overall gain from farming. The participant households on average employ more purchased agricultural inputs and gain higher gross margins from production activities than the non-participant households. The non-participant households on average supply less output (measured both in monetary terms and as a proportion of total produce) to the market than their participant counterparts. Except for the technical efficiency of potato production, project participant households are better off in production efficiency than their non-participant counterparts. We verified the idea that Improving Productivity and Market
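
    A minimal sketch of the propensity-score-matching step (the DataFrame, column and variable names below are hypothetical, not the study's): a logistic model estimates each household's probability of participation, each participant is matched to its nearest non-participant on that score, and the average treatment effect on the treated (ATT) is the mean outcome difference across matched pairs.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        def att_by_psm(df, X_cols, treat_col="participant", y_col="gross_margin"):
            # Propensity score: P(participation | covariates)
            ps = LogisticRegression(max_iter=1000).fit(df[X_cols], df[treat_col])
            df = df.assign(pscore=ps.predict_proba(df[X_cols])[:, 1])
            treated = df[df[treat_col] == 1]
            control = df[df[treat_col] == 0]
            # 1-nearest-neighbour matching on the propensity score
            nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
            _, idx = nn.kneighbors(treated[["pscore"]])
            matched = control.iloc[idx.ravel()]
            # average treatment effect on the treated
            return (treated[y_col].values - matched[y_col].values).mean()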

  16. Wages, Training, and Job Turnover in a Search-Matching Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Nielsen, Michael Svarer

    1999-01-01

    In this paper we extend a job search-matching model with firm-specific investments in training developed by Mortensen (1998) to allow for different offer arrival rates in employment and unemployment. The model by Mortensen changes the original wage posting model (Burdett and Mortensen, 1998) in two...

  17. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    Science.gov (United States)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique has been proposed based on independent component analysis (ICA). Since normally few changes happen in the shape of the laser pulse itself, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to achieve self-adaptability of the algorithm, a local Mean Square Error (MSE) has been defined as an appropriate criterion for evaluating the iteration results. The experimental results demonstrated that the self-adaptive pulse-matching ICA (PM-ICA) method could effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  18. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims at enriching the content of spatial entities and attaching spatial location information to text chunks. In the data fusion field, matching spatial entities with their corresponding describing text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model can make the vectors of matching pairs as close as possible, while pushing mismatching pairs as far apart as possible through supervised learning. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
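
    The core training setup can be sketched as follows (layer sizes, margin and input dimensionality are our assumptions, not the paper's specification): a shared encoder maps both inputs into a low-dimensional space, and a cosine-based embedding loss pulls matched pairs together while pushing mismatched pairs apart.

        import torch
        import torch.nn as nn

        class TwinEncoder(nn.Module):
            """One encoder applied to both inputs (shared weights)."""
            def __init__(self, in_dim=300, out_dim=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

            def forward(self, a, b):
                return self.net(a), self.net(b)

        model = TwinEncoder()
        loss_fn = nn.CosineEmbeddingLoss(margin=0.2)     # y=+1 match, y=-1 mismatch
        a, b = torch.randn(8, 300), torch.randn(8, 300)  # stand-in feature vectors
        y = torch.tensor([1, 1, -1, 1, -1, -1, 1, -1], dtype=torch.float)
        za, zb = model(a, b)
        loss = loss_fn(za, zb, y)
        loss.backward()                                  # one supervised step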

  19. Wideband simulation of earthquake ground motion by a spectrum-matching, multiple-pulse technique

    International Nuclear Information System (INIS)

    Gusev, A.; Pavlov, V.

    2006-04-01

    To simulate earthquake ground motion, we combine a multiple-point stochastic earthquake fault model and a suite of Green functions. Conceptually, our source model generalizes the classic one of Haskell (1966). At any time instant, slip occurs over a narrow strip that sweeps the fault area at a (spatially variable) velocity. This behavior defines seismic signals at lower frequencies (LF) and describes directivity effects. High-frequency (HF) behavior of the source signal is defined by the local slip history, assumed to be a short segment of pulsed noise. For calculations, this model is discretized as a grid of point subsources. Subsource moment rate time histories, in their LF part, are smooth pulses whose duration equals the rise time. In their HF part, they are segments of non-Gaussian noise of similar duration. The spectral content of the subsource time histories is adjusted so that the summary far-field signal follows a certain predetermined spectral scaling law. The results of simulation depend on random seeds and on particular values of such parameters as: stress drop; average and dispersion parameter for rupture velocity; rupture nucleation point; slip zone width/rise time; the wavenumber-spectrum parameter defining the final slip function; the degrees of non-Gaussianity for random slip rate in time and for random final slip in space; and more. To calculate ground motion at a site, Green functions are calculated for each subsource-site pair, then convolved with the subsource time functions and finally summed over subsources. The original Green function calculator for a layered weakly inelastic medium is of the discrete wavenumber kind, with no intrinsic limitations with respect to layer thickness or bandwidth. The simulation package can generate example motions, or be used to study uncertainties of the predicted motion. As a test, realistic analogues of recorded motions in the epicentral zone of the 1994 Northridge, California earthquake were synthesized, and related uncertainties were

  20. Adiabatic perturbations in pre-big bang models: Matching conditions and scale invariance

    International Nuclear Information System (INIS)

    Durrer, Ruth; Vernizzi, Filippo

    2002-01-01

    At low energy, the four-dimensional effective action of the ekpyrotic model of the universe is equivalent to a slightly modified version of the pre-big bang model. We discuss cosmological perturbations in these models. In particular we address the issue of matching the perturbations from a collapsing to an expanding phase. We show that, under certain physically motivated and quite generic assumptions on the high energy corrections, one obtains n=0 for the spectrum of scalar perturbations in the original pre-big bang model (with a vanishing potential). With the same assumptions, when an exponential potential for the dilaton is included, a scale invariant spectrum (n=1) of adiabatic scalar perturbations is produced under very generic matching conditions, both in a modified pre-big bang and ekpyrotic scenario. We also derive the resulting spectrum for arbitrary power law scale factors matched to a radiation-dominated era

  1. Crystallographic study of grain refinement in aluminum alloys using the edge-to-edge matching model

    International Nuclear Information System (INIS)

    Zhang, M.-X.; Kelly, P.M.; Easton, M.A.; Taylor, J.A.

    2005-01-01

    The edge-to-edge matching model for describing the interfacial crystallographic characteristics between two phases that are related by reproducible orientation relationships has been applied to the typical grain refiners in aluminum alloys. Excellent atomic matching between Al3Ti nucleating substrates, known to be effective nucleation sites for primary Al, and the Al matrix, in both close packed directions and close packed planes containing these directions, has been identified. The crystallographic features of the grain refiner and the Al matrix are very consistent with the edge-to-edge matching model. For three other typical grain refiners for Al alloys, TiC (when a = 0.4328 nm), TiB2 and AlB2, the matching only occurs between the close packed directions in both phases and between the second close packed plane of the Al matrix and the second close packed plane of the refiners. According to the model, it is predicted that Al3Ti is a more powerful nucleating substrate for Al alloys than TiC, TiB2 and AlB2. This agrees with previous experimental results. The present work shows that the edge-to-edge matching model has the potential to be a powerful tool in discovering new and more powerful grain refiners for Al alloys.

  2. A high-resolution processing technique for improving the energy of weak signal based on matching pursuit

    Directory of Open Access Journals (Sweden)

    Shuyan Wang

    2016-05-01

    Full Text Available This paper proposes a new method, based on matching pursuit, to improve the resolution of the seismic signal and to compensate for the energy of weak seismic signals. With a dictionary of Morlet wavelets, the matching pursuit algorithm can decompose a seismic trace into a series of wavelets. We extract complex-trace attributes from analytical expressions to shrink the search ranges of amplitude, frequency and phase. In addition, considering the level of correlation between the constituent wavelets and the average wavelet extracted from well-seismic calibration, we can obtain the search range of scale, an important adaptive parameter that controls the width of the wavelet in time and its bandwidth in frequency. Hence, the efficiency of selecting proper wavelets is improved by first making a preliminary estimate and then refining a local selection range. After removal of noise wavelets, we integrate the useful wavelets, to which an adaptive spectral whitening technique is first applied. This approach can improve the resolution of the seismic signal and enhance the energy of weak wavelets simultaneously. Application results on real seismic data show that this method has good prospects for application.
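
    A bare-bones matching-pursuit loop over a small dictionary of Morlet-like atoms is sketched below (the parameter grids, sampling rate and atom form are illustrative; the paper's adaptive narrowing of the search ranges via complex-trace attributes is omitted).

        import numpy as np

        def morlet(n, t0, f, scale, fs=500.0):
            """Unit-norm Gaussian-windowed cosine centred at time t0 (seconds)."""
            t = (np.arange(n) / fs) - t0
            w = np.exp(-(t / scale) ** 2) * np.cos(2 * np.pi * f * t)
            nrm = np.linalg.norm(w)
            return w / nrm if nrm > 0 else w

        def matching_pursuit(trace, n_iter=20, fs=500.0):
            n = len(trace)
            atoms = [morlet(n, t0, f, s, fs)
                     for t0 in np.arange(0, n / fs, 0.02)   # 20 ms time grid
                     for f in (15.0, 25.0, 40.0)            # centre frequencies (Hz)
                     for s in (0.01, 0.02, 0.04)]           # scales (s)
            residual = trace.astype(float).copy()
            picked = []
            for _ in range(n_iter):
                coeffs = [residual @ a for a in atoms]
                k = int(np.argmax(np.abs(coeffs)))          # best-correlated atom
                picked.append((coeffs[k], atoms[k]))
                residual -= coeffs[k] * atoms[k]            # peel it off and repeat
            return picked, residual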

  3. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
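
    A toy 2D rendition of this idea (a drastic, unoptimized simplification for illustration only; pattern size, acceptance rule and image sizes are our choices): pixels of a binary image are flipped whenever the flip moves its 3x3-pattern histogram closer to that of a training image.

        import numpy as np

        def pattern_hist(img):
            """Histogram of all 3x3 binary patterns in the image."""
            h = {}
            for i in range(img.shape[0] - 2):
                for j in range(img.shape[1] - 2):
                    key = tuple(img[i:i+3, j:j+3].ravel())
                    h[key] = h.get(key, 0) + 1
            return h

        def dist(h1, h2):
            """Squared difference between two pattern histograms."""
            keys = set(h1) | set(h2)
            return sum((h1.get(k, 0) - h2.get(k, 0)) ** 2 for k in keys)

        rng = np.random.default_rng(0)
        train = (rng.random((32, 32)) < 0.3).astype(int)   # stand-in training image
        img = (rng.random((32, 32)) < 0.5).astype(int)     # image to be updated
        target = pattern_hist(train)
        for _ in range(2000):
            i, j = rng.integers(0, 32, size=2)
            before = dist(pattern_hist(img), target)
            img[i, j] ^= 1
            if dist(pattern_hist(img), target) > before:
                img[i, j] ^= 1      # revert flips that do not improve the match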

  4. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used to update numerical models of structures in civil, mechanical, automotive, marine, aerospace and related engineering fields. The basic concept behind this technique is updating the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and the numerical models.
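
    A compact firefly-algorithm sketch for minimizing an updating residual (the quadratic objective, bounds and coefficients below are stand-ins; in the paper the objective would be the mismatch between numerical and experimental responses such as tip deflection or natural frequencies).

        import numpy as np

        def firefly_minimize(f, dim, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
            rng = np.random.default_rng(1)
            x = rng.uniform(-5, 5, size=(n, dim))          # initial swarm
            for _ in range(iters):
                cost = np.array([f(xi) for xi in x])
                for i in range(n):
                    for j in range(n):
                        if cost[j] < cost[i]:              # move i toward brighter j
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                alpha *= 0.97                              # slowly damp the random walk
            cost = np.array([f(xi) for xi in x])
            return x[np.argmin(cost)], cost.min()

        # Stand-in residual: distance of candidate parameters from a "true" value.
        best_x, best_f = firefly_minimize(lambda v: np.sum((v - 1.5) ** 2), dim=3)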

  5. A frequency response model matching method for PID controller design for processes with dead-time.

    Science.gov (United States)

    Anwar, Md Nishat; Pan, Somnath

    2015-03-01

    In this paper, a PID controller design method for integrating processes, based on frequency response matching, is presented. Two approaches are proposed for the controller design. In the first approach, a double feedback loop configuration is considered, where the inner loop is designed with a stabilizing gain. In the outer loop, the parameters of the PID controller are obtained by frequency response matching between the closed-loop system with the PID controller and a reference model with desired specifications. In the second approach, the design is carried out directly, considering a desired load-disturbance rejection model of the system. In both approaches, two low-frequency points are considered for matching the frequency response, which yields linear algebraic equations whose solution gives the controller parameters. Several examples are taken from the literature to demonstrate the effectiveness of the method and to compare it with some well known design methods. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
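
    The sketch below is our simplified reading of the matching idea, not the paper's exact procedure: the PID frequency response is equated at two low frequencies to the controller implied by a desired closed-loop reference model, and the resulting real-valued linear system in (Kp, Ki, Kd) is solved by least squares. The plant G and reference model M are hypothetical examples.

        import numpy as np

        def pid_by_matching(G, M, freqs):
            rows, rhs = [], []
            for w in freqs:
                s = 1j * w
                D = M(s) / (G(s) * (1.0 - M(s)))   # controller response required
                # C(s) = Kp + Ki/s + Kd*s; split into real and imaginary parts
                rows += [[1.0, (1 / s).real, s.real],
                         [0.0, (1 / s).imag, s.imag]]
                rhs += [D.real, D.imag]
            kp, ki, kd = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
            return kp, ki, kd

        G = lambda s: 1.0 / (s * (s + 1.0))           # example integrating process
        M = lambda s: 1.0 / (0.5 * s + 1.0) ** 2      # desired closed-loop model
        print(pid_by_matching(G, M, freqs=[0.05, 0.1]))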

  6. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  7. Real-time reservoir geological model updating using the hybrid EnKF and geostatistical technique

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Chen, S.; Yang, D. [Regina Univ., SK (Canada). Petroleum Technology Research Centre

    2008-07-01

    Reservoir simulation plays an important role in modern reservoir management. Multiple geological models are needed in order to analyze the uncertainty of a given reservoir development scenario. Ideally, dynamic data should be incorporated into a reservoir geological model. This can be done by using history matching and tuning the model to match the past performance of reservoir history. This study proposed an assisted history matching technique to accelerate and improve the matching process. The Ensemble Kalman Filter (EnKF) technique, which is an efficient assisted history matching method, was integrated with a conditional geostatistical simulation technique to dynamically update reservoir geological models. The updated models were constrained to dynamic data, such as reservoir pressure and fluid saturations, and remained geologically realistic at each time step through the EnKF technique. The new technique was successfully applied in a heterogeneous synthetic reservoir. The uncertainty of the reservoir characterization was significantly reduced. More accurate forecasts were obtained from the updated models. 3 refs., 2 figs.
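
    A minimal stochastic-EnKF analysis step of the kind used in such workflows is sketched below (shapes, the linear observation operator and all numbers are illustrative): each ensemble member, a vector of grid-block properties, is nudged toward perturbed observations of production data.

        import numpy as np

        def enkf_update(X, H, y, R, rng):
            """X: (n_state, n_ens) ensemble; H: (n_obs, n_state) observation
            operator; y: (n_obs,) observations; R: observation-error covariance."""
            n_obs, n_ens = H.shape[0], X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
            HA = H @ A
            P_yy = HA @ HA.T / (n_ens - 1) + R                 # innovation covariance
            K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_yy) # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
            return X + K @ (Y - H @ X)                         # updated ensemble

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 30))                   # 50 unknowns, 30 members
        H = np.zeros((2, 50)); H[0, 3] = H[1, 40] = 1.0 # observe two cells directly
        y = np.array([1.2, -0.7])
        X_post = enkf_update(X, H, y, 0.01 * np.eye(2), rng)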

  8. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    Previous research has found that young people's prototypes of science students and scientists affect their inclination to choose tertiary STEM programs. Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-months long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students' meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students' self-perceptions, prototype images and situation-specific conceptions of role models...

  9. The match-mismatch model of emotion processing styles and emotion regulation strategies in fibromyalgia.

    NARCIS (Netherlands)

    Geenen, R.; Ooijen-van der Linden, L. van; Lumley, M.A.; Bijlsma, J.W.J.; Middendorp, H. van

    2012-01-01

    OBJECTIVE: Individuals differ in their style of processing emotions (e.g., experiencing affects intensely or being alexithymic) and their strategy of regulating emotions (e.g., expressing or reappraising). A match-mismatch model of emotion processing styles and emotion regulation strategies is

  10. Modeling Behavior in Different Delay Match to Sample Tasksin One Simple Network

    Directory of Open Access Journals (Sweden)

    Yali Amit

    2013-07-01

    Full Text Available Delay match to sample (DMS) experiments provide an important link between the theory of recurrent network models and behavior and neural recordings. We define a simple recurrent network of binary neurons with stochastic neural dynamics and Hebbian synaptic learning. Most DMS experiments involve heavily learned images, and in this setting we propose a readout mechanism for match occurrence based on a smaller increment in overall network activity when the matched pattern is already in working memory, and a reset mechanism to clear memory from stimuli of previous trials using random network activity. Simulations show that this model accounts for a wide range of variations on the original DMS tasks, including ABBA tasks with distractors, and more general repetition detection tasks with both learned and novel images. The differences in network settings required for different tasks derive from easily defined changes in the levels of noise and inhibition. The same models can also explain experiments involving repetition detection with novel images, although in this case the readout mechanism for match is based on higher overall network activity. The models give rise to interesting predictions that may be tested in neural recordings.

  11. Towards an integrated workflow for structural reservoir model updating and history matching

    NARCIS (Netherlands)

    Leeuwenburgh, O.; Peters, E.; Wilschut, F.

    2011-01-01

    A history matching workflow, as typically used for updating of petrophysical reservoir model properties, is modified to include structural parameters including the top reservoir and several fault properties: position, slope, throw and transmissibility. A simple 2D synthetic oil reservoir produced by

  12. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDMs) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow for manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  13. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such "short cuts" is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and mixed beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  14. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  15. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF) based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.

  16. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  17. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  18. A parametric texture model based on deep convolutional features closely matches texture appearance for humans.

    Science.gov (United States)

    Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias

    2017-10-01

    Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.

  19. Assisted reproduction technique outcomes for fresh versus deferred cryopreserved day-2 embryo transfer: a retrospective matched cohort study.

    Science.gov (United States)

    Bourdon, Mathilde; Santulli, Pietro; Gayet, Vanessa; Maignien, Chloé; Marcellin, Louis; Pocate-Cheriet, Khaled; Chapron, Charles

    2017-03-01

    Ovarian stimulation could adversely affect endometrial receptivity and consequently embryo implantation. One emerging strategy is the 'freeze-all' approach. Most studies have focused on blastocyst transfers, with limited research on day-2 deferred cryopreserved embryo transfers. In this large retrospective cohort study, outcomes were compared between day-2 fresh versus deferred cryopreserved embryo transfers. After matching by age and number of previous cycles, 325 cycles were included in the fresh group and 325 in the deferred cryopreserved embryo transfers group: no significant differences were found between groups in implantation (0.20 ± 0.33 versus 0.17 ± 0.31, respectively) and ongoing pregnancy rates (21.85% versus 18.46%). Independent predictors for ongoing pregnancy after a multiple logistic regression analysis were the women's age (OR = 0.92; 95% CI 0.88 to 0.97), body mass index (OR = 0.94; 95% CI 0.89 to 0.99), the number of two pronuclei embryos (OR = 1.19; 95% CI 1.04 to 1.40) and at least one grade 1 embryo transferred (OR = 1.97; 95% CI 1.26 to 3.05). In the case of a day-2 embryo transfer, outcomes after treatment with assisted reproduction techniques are similar for fresh versus deferred cryopreserved embryo transfers when pre-transfer progesterone exposures are similar in the two groups. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  20. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  1. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
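
    The latent inlier/outlier estimation can be illustrated with a stripped-down EM loop (2D residuals, a Gaussian inlier component and a uniform outlier component; the paper's simultaneous RKHS re-fitting of the transformation is omitted, and all names are ours).

        import numpy as np

        def em_inliers(res2, volume=1.0, n_iter=50):
            """res2: squared residual norms of putative matches (2D assumed);
            volume: support volume of the uniform outlier component."""
            gamma, sigma2 = 0.5, res2.mean() + 1e-12
            for _ in range(n_iter):
                g = gamma * np.exp(-res2 / (2 * sigma2)) / (2 * np.pi * sigma2)
                u = (1.0 - gamma) / volume
                p = g / (g + u)                      # E-step: inlier responsibility
                gamma = p.mean()                     # M-step: mixing weight
                sigma2 = (p * res2).sum() / (2 * p.sum() + 1e-12)  # M-step: variance
            return p

        rng = np.random.default_rng(2)
        # Synthetic residuals: 80 tight inliers plus 20 widely scattered outliers.
        res2 = np.concatenate([rng.chisquare(2, 80) * 1e-4, rng.uniform(0, 1, 20)])
        prob_inlier = em_inliers(res2)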

  2. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  3. Gender Discrimination Estimation in a Search Model with Matching and Bargaining

    OpenAIRE

    Luca Flabbi

    2004-01-01

    Gender wage differentials, conditional on observed productivity characteristics, have been considered a possible indication of prejudice against women in the labor market. However, there is no conclusive evidence on whether these differentials are due to labor market discrimination or to unobserved productivity differences. The objective of this paper is to propose a solution for this identification problem by developing and estimating a search model of the labor market with matching, bargain...

  4. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  5. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    Directory of Open Access Journals (Sweden)

    Eva Lykkegaard

    2016-04-01

    Full Text Available Previous research has found that young people's prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-months long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students' meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students' self-perceptions, prototype images and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models, and the fact that traditional school subjects produced more resistant prototype images than unfamiliar ones did.

  6. Workshop on Computational Modelling Techniques in Structural ...

    Indian Academy of Sciences (India)

    Information and Announcements, Resonance – Journal of Science Education, Volume 22, Issue 6, June 2017, p. 619.

  7. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    Science.gov (United States)

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-03-22

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal/mol.
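
    The essence of force matching can be shown in a toy least-squares fit (entirely illustrative: a two-parameter pair force fitted to synthetic "reference" forces; real DFTB repulsive-potential fitting is analogous in spirit but far richer).

        import numpy as np

        rng = np.random.default_rng(3)
        r = rng.uniform(1.0, 3.0, size=200)   # pair distances (toy data)
        # Synthetic "reference" forces playing the role of DFT forces:
        f_ref = 4.0 / r**2 - 1.5 / r + rng.normal(0, 0.05, r.size)

        # Model force f(r; a, b) = a/r^2 + b/r is linear in its parameters,
        # so force matching reduces to a single least-squares solve.
        A = np.column_stack([1.0 / r**2, 1.0 / r])
        (a, b), *_ = np.linalg.lstsq(A, f_ref, rcond=None)
        print(a, b)   # recovers roughly (4.0, -1.5)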

  8. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  9. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2007-07-01

    Full Text Available Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets do not correspond in size or in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure.

  10. Matched-Filter Thermography

    Directory of Open Access Journals (Sweden)

    Nima Tabatabaei

    2018-04-01

    Full Text Available Conventional infrared thermography techniques, including pulsed and lock-in thermography, have shown great potential for non-destructive evaluation of a broad spectrum of materials, spanning from metals to polymers to biological tissues. However, the performance of these techniques is often limited due to the diffuse nature of thermal wave fields, resulting in an inherent compromise between inspection depth and depth resolution. Recently, matched-filter thermography has been introduced as a means of overcoming this classic limitation to enable depth-resolved subsurface thermal imaging and improve axial/depth resolution. This paper reviews the basic principles and experimental results of matched-filter thermography: first, mathematical and signal processing concepts related to matched filtering and pulse compression are discussed. Next, theoretical modeling of thermal-wave responses to matched-filter thermography using two categories of pulse compression techniques (linear frequency modulation and binary phase coding) is reviewed. Key experimental results from the literature demonstrating the maintenance of axial resolution while inspecting deep into opaque and turbid media are also presented and discussed. Finally, the concept of thermal coherence tomography for deconvolution of thermal responses of axially superposed sources and creation of depth-selective images in a diffusion-wave field is reviewed.
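
    A short pulse-compression sketch: cross-correlating a noisy response with the known linear-frequency-modulated excitation concentrates the chirp energy into a narrow peak, which is the basic mechanism matched-filter methods exploit (all numbers below are illustrative).

        import numpy as np
        from scipy.signal import chirp, correlate

        fs = 1000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        ref = chirp(t, f0=0.5, f1=10.0, t1=2.0, method="linear")  # LFM excitation
        delay = int(0.4 * fs)                                     # 0.4 s echo delay
        # Attenuated, delayed copy of the excitation buried in noise:
        sig = (np.roll(ref, delay) * 0.3
               + np.random.default_rng(4).normal(0, 0.5, t.size))
        cc = correlate(sig, ref, mode="full")                     # matched filter
        lag = np.argmax(np.abs(cc)) - (t.size - 1)
        print(f"estimated delay: {lag / fs:.3f} s")               # ~0.400 s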

  11. Analogue pattern matching in a dendritic spine model based on phosphorylation of potassium channels.

    Science.gov (United States)

    Yang, K H; Blackwell, K T

    2000-11-01

    Modification of potassium channels by protein phosphorylation has been shown to play a role in learning and memory. If such memory storage machinery were part of dendritic spines, then a set of spines could act as an 'analogue pattern matching' device by learning a repeatedly presented pattern of synaptic activation. In this study, the plausibility of such analogue pattern matching is investigated in a detailed circuit model of a set of spines attached to a dendritic branch. Each spine head contains an AMPA synaptic channel in parallel with a calcium-dependent potassium channel whose sensitivity depends on its phosphorylation state. Repeated presentation of synaptic activity results in calcium activation of protein kinases and subsequent channel phosphorylation. Simulations demonstrate that signal strength is greatest when the synaptic input pattern is equal to the previously learned pattern, and smaller when components of the synaptic input pattern are either smaller or larger than corresponding components of the previously learned pattern. Therefore, our results indicate that dendritic spines may act as an analogue pattern matching device, and suggest that modulation of potassium channels by protein kinases may mediate neuronal pattern recognition.

  12. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complex information in videos and the low efficiency of action detection, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application to multi-view settings. Experimental results of DMSM on complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  13. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    Science.gov (United States)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretched Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary, even though cell sizes are allowed to grow toward the boundaries due to the diffusive nature of electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferable, as the modelling area can be restricted to the target region and only a few surrounding absorbing layers effectively suppress the artificial boundary effect without losing numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling areas for these two different geophysical data sets collected from the same survey area could be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML also shows good accuracy compared to the Dirichlet boundary. Furthermore, the modelling algorithm using the CFS-PML shows advantages in computational time and memory over that using the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
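
    For reference, the complex frequency-shifted coordinate stretching underlying CFS-PML takes the following standard form (generic notation under one common time convention; symbols are not taken from the paper):

        % Complex frequency-shifted stretching factor for coordinate x:
        s_x(\omega) = \kappa_x + \frac{\sigma_x}{\alpha_x + \mathrm{i}\,\omega\,\varepsilon_0},
        \qquad \partial_x \;\longrightarrow\; \frac{1}{s_x}\,\partial_x ,
        % with \kappa_x \ge 1 the real coordinate stretch, \sigma_x \ge 0 the
        % absorption profile graded into the layer, and \alpha_x \ge 0 the
        % frequency shift that improves the damping of low-frequency and
        % evanescent fields.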

  14. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by a mixture of two and three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.

  15. mr: A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  16. Techniques to develop data for hydrogeochemical models

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, C.M.; Holcombe, L.J.; Gancarz, D.H.; Behl, A.E. (Radian Corp., Austin, TX (USA)); Erickson, J.R.; Star, I.; Waddell, R.K. (Geotrans, Inc., Boulder, CO (USA)); Fruchter, J.S. (Battelle Pacific Northwest Lab., Richland, WA (USA))

    1989-12-01

    The utility industry, through its research and development organization, the Electric Power Research Institute (EPRI), is developing the capability to evaluate potential migration of waste constituents from utility disposal sites to the environment. These investigations have developed computer programs to predict leaching, transport, attenuation, and fate of inorganic chemicals. To predict solute transport at a site, the computer programs require data concerning the physical and chemical conditions that affect solute transport at the site. This manual provides a comprehensive view of the data requirements for computer programs that predict the fate of dissolved materials in the subsurface environment and describes techniques to measure or estimate these data. In this manual, basic concepts are described first and individual properties and their associated measurement or estimation techniques are described later. The first three sections review hydrologic and geochemical concepts, discuss data requirements for geohydrochemical computer programs, and describe the types of information the programs produce. The remaining sections define and/or describe the properties of interest for geohydrochemical modeling and summarize available techniques to measure or estimate values for these properties. A glossary of terms associated with geohydrochemical modeling and an index are provided at the end of this manual. 318 refs., 9 figs., 66 tabs.

  17. Is There a Purchase Limit on Regional Growth? A Quasi-experimental Evaluation of Investment Grants Using Matching Techniques

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel; Paloyo, Alfredo R.; Alecke, Björn

    2015-01-01

    growth associated with a maximum subsidy level beyond which financial support does not generate further labor-productivity growth. In other words, there is a “purchase limit” on regional growth. Although the matching approach is very appealing due to its methodological rigor and didactical clarity...

  18. Stroke Lesions in a Large Upper Limb Rehabilitation Trial Cohort Rarely Match Lesions in Common Preclinical Models.

    Science.gov (United States)

    Edwardson, Matthew A; Wang, Ximing; Liu, Brent; Ding, Li; Lane, Christianne J; Park, Caron; Nelsen, Monica A; Jones, Theresa A; Wolf, Steven L; Winstein, Carolee J; Dromerick, Alexander W

    2017-06-01

    Stroke patients with mild-moderate upper extremity motor impairments and minimal sensory and cognitive deficits provide a useful model to study recovery and improve rehabilitation. Laboratory-based investigators use lesioning techniques for similar goals. To determine whether stroke lesions in an upper extremity rehabilitation trial cohort match lesions from the preclinical stroke recovery models used to drive translational research, clinical neuroimages from 297 participants enrolled in the Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) study were reviewed. Images were characterized based on lesion type (ischemic or hemorrhagic), volume, vascular territory, depth (cortical gray matter, cortical white matter, subcortical), old strokes, and leukoaraiosis. Lesions were compared with those of preclinical stroke models commonly used to study upper limb recovery. Among the ischemic stroke participants, median infarct volume was 1.8 mL, with most lesions confined to subcortical structures (61%), including the anterior choroidal artery territory (30%) and the pons (23%). ICARE participants represent only a subset of stroke patients, but they are a clinically and scientifically important subgroup. Compared with lesions in general stroke populations and widely studied animal models of recovery, ICARE participants had smaller, more subcortically based strokes. Improved preclinical-clinical translational efforts may require better alignment of lesions between preclinical and human stroke recovery models.

  19. Implementation of linguistic models by holographic technique

    Science.gov (United States)

    Pavlov, Alexander V.; Shevchenko, Yanina Y.

    2004-01-01

    In this paper we consider a linguistic model as an algebraic model and restrict our consideration to the semantics only. The concept allows a "natural-like" language to be used by a human teacher to describe for the machine the way of solving a problem, based on the human's knowledge and experience. Such imprecise words as "big", "very big", "not very big", etc. can be used for representing human knowledge. Technically, the problem is to match the metric scale, used by the technical device, with the linguistic scale intuitively formed by the person. We develop an algebraic description of the 4-f Fourier-holography setup using a triangular-norms-based approach. In the model we use the Fourier duality of the t-norms and t-conorms, which is implemented by the 4-f Fourier-holography setup. We demonstrate that the setup is described adequately by De Morgan's law for involution. Fourier duality of the t-norms and t-conorms leads to fuzzy-valued logic. We consider General Modus Ponens rule implementation to define the semantic operators that are adequate to the setup. We consider scales formed in both +1 and -1 orders of diffraction. We use representation of linguistic labels by fuzzy numbers to form the scale and discuss the dependence of the scale grading on the holographic recording medium operator. To implement reasoning with a multi-parametric input variable we use a Lorentz function to approximate linguistic labels. We use an example of medical diagnostics for experimental illustration of reasoning on the linguistic scale.

  20. Graph configuration model based evaluation of the education-occupation match.

    Science.gov (United States)

    Gadar, Laszlo; Abonyi, Janos

    2018-01-01

    To study education-occupation matching we developed a bipartite network model of the education-to-work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on multi-resolution analysis of the graph modularity. The results of the cluster analysis can help policymakers fine-tune the fragmented program structure of higher education.
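
    The null-model idea can be sketched with networkx (degree sequences invented for illustration): a degree-preserving bipartite configuration model against which observed education-to-work transitions are compared, so over-represented transitions stand out.

```python
# A hedged sketch of the configuration-model null model behind the metric:
# generate a random bipartite multigraph preserving both degree sequences
# and compare observed transition counts against their null expectation.
import networkx as nx
from networkx.algorithms import bipartite

program_degrees = [3, 2]        # e.g. graduates of "econ", "cs" (invented)
occupation_degrees = [2, 2, 1]  # e.g. "analyst", "developer", "teacher"

# Random bipartite multigraph preserving both degree sequences.
G = bipartite.configuration_model(program_degrees, occupation_degrees, seed=42)

# Under the configuration model, the expected number of edges between nodes
# with degrees d_u and d_v is approximately d_u * d_v / m (m = total edges).
m = sum(program_degrees)
expected = program_degrees[0] * occupation_degrees[0] / m
observed = G.number_of_edges(0, 2)  # node 2 = first occupation ("analyst")
print(f"expected {expected:.2f}, observed {observed} in one random draw")
```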

  1. Automation of distribution of students between graduate supervisors with application of two-sided matching model

    Directory of Open Access Journals (Sweden)

    Aleksandr G. Podvesovskii

    2017-12-01

    The article deals with an approach for modeling and providing software support for the distribution of students among graduate supervisors at a large graduate department. The approach is based on the stable matching problem and the Gale-Shapley deferred acceptance algorithm, and takes into account both students' and supervisors' preferences. A formalized description of the distribution model is given, and the results of its practical verification are described. The advantages and disadvantages of the proposed approach are discussed, and the problem of preference manipulation by graduate supervisors is examined. The architecture of the distribution support software system is presented, and some features of its implementation as a Web service within the complex information system of the graduate department are described.
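
    A minimal sketch of the Gale-Shapley deferred acceptance algorithm the system builds on (names and the one-to-one setting are invented; the real system assigns several students per supervisor):

```python
# Gale-Shapley deferred acceptance: students propose in preference order,
# supervisors tentatively hold their best proposal so far.
def gale_shapley(student_prefs, supervisor_prefs):
    """student_prefs/supervisor_prefs: dict mapping name -> ordered list."""
    rank = {s: {name: i for i, name in enumerate(prefs)}
            for s, prefs in supervisor_prefs.items()}
    free = list(student_prefs)            # students yet to be matched
    next_proposal = {s: 0 for s in student_prefs}
    engaged = {}                          # supervisor -> student
    while free:
        student = free.pop()
        sup = student_prefs[student][next_proposal[student]]
        next_proposal[student] += 1
        if sup not in engaged:
            engaged[sup] = student
        elif rank[sup][student] < rank[sup][engaged[sup]]:
            free.append(engaged[sup])     # supervisor trades up
            engaged[sup] = student
        else:
            free.append(student)          # rejected, try next choice
    return {v: k for k, v in engaged.items()}   # student -> supervisor

print(gale_shapley(
    {"ann": ["dr_x", "dr_y"], "bob": ["dr_x", "dr_y"]},
    {"dr_x": ["bob", "ann"], "dr_y": ["ann", "bob"]}))
# {'bob': 'dr_x', 'ann': 'dr_y'} -- a stable matching
```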

  2. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case illustrates that
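
    A hedged sketch of approach (1) above (synthetic data; real analyses add balance diagnostics and proper standard errors): nearest-neighbour matching on an estimated propensity score followed by a difference in means.

```python
# Propensity-score matching followed by a difference in means; all data and
# variable names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                     # covariates (e.g. slope, distance)
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0]))) == 1
y = 0.5 * treated + X @ np.array([0.3, 0.2, 0.0]) + rng.normal(size=n)

# 1. Propensity score: P(treatment | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to its nearest control on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3. Average treatment effect on the treated as a difference in means.
att = y[treated].mean() - y[~treated][idx.ravel()].mean()
print(f"estimated ATT: {att:.3f} (true effect 0.5)")
```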

  4. Automated side-chain model building and sequence assignment by template matching

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2002-01-01

    A method for automated macromolecular side-chain model building and for aligning the sequence to the map is described. An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer

  5. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamics (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. The work will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  6. Advances in transgenic animal models and techniques.

    Science.gov (United States)

    Ménoret, Séverine; Tesson, Laurent; Remy, Séverine; Usal, Claire; Ouisse, Laure-Hélène; Brusselle, Lucas; Chenouard, Vanessa; Anegon, Ignacio

    2017-10-01

    On May 11th and 12th 2017 the international meeting "Advances in transgenic animal models and techniques" (http://www.trm.univ-nantes.fr/) was held in Nantes, France. This biennial meeting is the fifth of its kind to be organized by the Transgenic Rats ImmunoPhenomic (TRIP) Nantes facility (http://www.tgr.nantes.inserm.fr/). The meeting was supported by private companies (SONIDEL, Scionics computer innovation, New England Biolabs, MERCK, genOway, the journal Disease Models and Mechanisms) and by public institutions (International Society for Transgenic Technology, University of Nantes, INSERM UMR 1064, SFR François Bonamy, CNRS, Région Pays de la Loire, Biogenouest, TEFOR infrastructure, ITUN, IHU-CESTI and DHU-Oncogeffe and Labex IGO). Around 100 participants, from France but also from other European countries, Japan, and the USA, attended the meeting.

  7. The influence of geological data on the reservoir modelling and history matching process

    NARCIS (Netherlands)

    De Jager, G.

    2012-01-01

    For efficient production of hydrocarbons from subsurface reservoirs it is important to understand the spatial properties of the reservoir. As there is almost always too little information on the reservoir to build a representative model directly, other techniques have been developed for generating

  8. Matching theory

    CERN Document Server

    Plummer, MD

    1986-01-01

    This study of matching theory deals with bipartite matching, network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.

  10. Automated main-chain model building by template matching and iterative fragment extension

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2003-01-01

    A method for automated macromolecular main-chain model building is described. An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition
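
    The FFT-based template-matching step lends itself to a schematic sketch (a 1-D toy with invented sizes, not RESOLVE's actual code): template-map correlation at every offset is computed via the convolution theorem.

```python
# Schematic FFT-based template matching: locate a template in a density
# profile by computing the cross-correlation at all offsets at once.
import numpy as np

def fft_match(density, template):
    """Return cross-correlation of template against density at every offset."""
    n = len(density)
    t = np.zeros(n)
    t[:len(template)] = template - template.mean()
    # Correlation via the convolution theorem: corr = IFFT(FFT(d) * conj(FFT(t)))
    return np.fft.ifft(np.fft.fft(density) * np.conj(np.fft.fft(t))).real

density = np.zeros(256)
density[100:110] = np.hanning(10)          # a "helix-like" density feature
scores = fft_match(density, np.hanning(10))
print(int(np.argmax(scores)))              # -> 100, the template location
```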

  11. Match and mismatch - comparing plant phenological metrics from ground-observations and from a prognostic model

    Science.gov (United States)

    Rutishauser, This; Stöckli, Reto; Jeanneret, François; Peñuelas, Josep

    2010-05-01

    Changes in the seasonality of life cycles of plants as recorded in phenological observations have been widely analysed at the species level with data available for many decades back in time. At the same time, seasonality changes in satellite-based observations and prognostic phenology models comprise information at the pixel-size or landscape scale. Change analysis of satellite-based records is restricted due to relatively short satellite records that further include gaps, while model-based analyses are biased due to current model deficiencies. At 30 selected sites across Europe, we analysed three different sources of plant seasonality during the 1971-2000 period. Data consisted of (1) species-specific development stages of flowering and leaf-out, with different species observed at each site. (2) We used a synthetic phenological metric that integrates the common interannual phenological signal across all species at one site. (3) We estimated daily Leaf Area Index with a prognostic phenology model. The prior uncertainties of the model's empirical parameter space are constrained by assimilating the Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) and Leaf Area Index (LAI) from the MODerate Resolution Imaging Spectroradiometer (MODIS). We extracted the day of year when the 25%, 50% and 75% thresholds were passed each spring. The question arises how the three phenological signals compare and correlate across climate zones in Europe. Is there a match between single species observations, species-based ground-observed metrics and the landscape-scale prognostic model? Are there single key-species across Europe that best represent a landscape scale measure from the prognostic model? Can one source substitute another and serve as proxy-data? What can we learn from potential mismatches? Focusing on changes in spring this contribution presents first results of an ongoing comparison study from a number of European test sites that will be extended to

  12. Comparison of self-written waveguide techniques and bulk index matching for low-loss polymer waveguide interconnects

    Science.gov (United States)

    Burrell, Derek; Middlebrook, Christopher

    2016-03-01

    Polymer waveguides (PWGs) are used within photonic interconnects as inexpensive and versatile substitutes for traditional optical fibers. The PWGs are typically aligned to silica-based optical fibers for coupling. An epoxide elastomer is then applied and cured at the interface for index matching and rigid attachment. Self-written waveguides (SWWs) are proposed as an alternative to further reduce connection insertion loss (IL) and alleviate marginal misalignment issues. Elastomer material is deposited after the initial alignment, and SWWs are formed by injecting ultraviolet (UV) light into the fiber or waveguide. The coupled UV light cures a channel between the two differing structures. A suitable cladding layer can be applied after development. Such factors as longitudinal gap distance, UV cure time, input power level, polymer material selection and choice of solvent affect the resulting SWWs. Experimental data are compared between purely index-matched samples and those with SWWs at the fiber-PWG interface. It is shown that < 1 dB IL per connection can be achieved by either method and results indicate lowest potential losses associated with a fine-tuned self-writing process. Successfully fabricated SWWs reduce overall processing time and enable an effectively continuous low-loss rigid interconnect.

  13. On Improving Analytical Models of Cosmic Reionization for Matching Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kaurov, Alexander A. [Univ. of Chicago, IL (United States)

    2016-01-01

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large scale statistical properties. These mock catalogs are particularly useful for CMB polarization and 21cm experiments, where large volumes are required to simulate the observed signal.

  14. Spatially Explicit Modeling Reveals Cephalopod Distributions Match Contrasting Trophic Pathways in the Western Mediterranean Sea.

    Directory of Open Access Journals (Sweden)

    Patricia Puerta

    Populations of the same species can experience different responses to the environment throughout their distributional range as a result of spatial and temporal heterogeneity in habitat conditions. This highlights the importance of understanding the processes governing species distribution at local scales. However, research on species distribution often averages environmental covariates across large geographic areas, missing variability in population-environment interactions within geographically distinct regions. We used spatially explicit models to identify interactions between species and environmental conditions, including chlorophyll a (Chla) and sea surface temperature (SST), and trophic conditions (prey density), along with processes governing the distribution of two cephalopods with contrasting life histories (octopus and squid) across the western Mediterranean Sea. This approach is relevant for cephalopods, since their population dynamics are especially sensitive to variations in habitat conditions and rarely stable in abundance and location. The regional distributions of the two cephalopod species matched two different trophic pathways present in the western Mediterranean Sea, associated with the Gulf of Lion upwelling and the Ebro river discharges, respectively. The effects of the studied environmental and trophic conditions were spatially variant in both species, with usually stronger effects along their distributional boundaries. We identify areas where prey availability limited the abundance of cephalopod populations, as well as contrasting effects of temperature in the warmest regions. Despite distributional patterns matching productive areas, a general negative effect of Chla on cephalopod densities suggests that competition pressure is common in the study area. Additionally, the results highlight the importance of trophic interactions, beyond other common environmental factors, in shaping the distribution of cephalopod populations. Our study presents

  15. Assessment of model-based image-matching for future reconstruction of unhelmeted sport head impact kinematics.

    Science.gov (United States)

    Tierney, Gregory J; Joodaki, Hamed; Krosshaug, Tron; Forman, Jason L; Crandall, Jeff R; Simms, Ciaran K

    2018-03-01

    Player-to-player contact inherent in many unhelmeted sports means that head impacts are a frequent occurrence. Model-Based Image-Matching (MBIM) provides a technique for the assessment of three-dimensional linear and rotational motion patterns from multiple camera views of a head impact event, but the accuracy is unknown for this application. The goal of this study is to assess the accuracy of the MBIM method relative to reflective marker-based motion analysis data for estimating six degree of freedom head displacements and velocities in a staged pedestrian impact scenario at 40 km/h. Results showed RMS error was under 20 mm for all linear head displacements and 0.01-0.04 rad for head rotations. For velocities, the MBIM method yielded RMS errors between 0.42 and 1.29 m/s for head linear velocities and 3.53-5.38 rad/s for angular velocities. This method is thus beneficial as a tool to directly measure six degree of freedom head positional data from video of sporting head impacts, but velocity data is less reliable. MBIM data, combined in future with velocity/acceleration data from wearable sensors could be used to provide input conditions and evaluate the outputs of multibody and finite element head models for brain injury assessment of sporting head impacts.

  16. Detection, modeling and matching of pleural thickenings from CT data towards an early diagnosis of malignant pleural mesothelioma

    Science.gov (United States)

    Chaisaowong, Kraisorn; Kraus, Thomas

    2014-03-01

    Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While an early diagnosis plays the key role in enabling early treatment, and therefore helping to reduce morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems to automatically assess pleural mesothelioma have been reported worldwide. In this paper, an image analysis pipeline to automatically detect pleural thickenings and measure their volume is described. We first delineate automatically the pleural contour in the CT images. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step based on a probabilistic Hounsfield unit model of pleural plaques then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on volumetry of the 3D model, created by a mesh-construction algorithm followed by Laplace-Beltrami eigenfunction expansion surface smoothing. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data sets is carried out based on semi-automatic lung registration towards the assessment of their growth rates. With these methods, a new computer-assisted diagnosis system is presented to ensure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma in its early stage.

  17. mr: A C++ library for the matching and running of the Standard Model parameters

    Science.gov (United States)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
    Catalogue identifier: AFAI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 517613
    No. of bytes in distributed program, including test data, etc.: 2358729
    Distribution format: tar.gz
    Programming language: C++
    Computer: IBM PC
    Operating system: Linux, Mac OS X
    RAM: 1 GB
    Classification: 11.1
    External routines: TSIL [1], OdeInt [2], boost [3]
    Nature of problem: The running parameters of the Standard Model renormalized in the MS-bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks, including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops.
    Solution method: Numerical integration of analytic expressions
    Additional comments: Available for download from URL
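
    mr itself is a C++ library; purely as a toy illustration of the "running" step (a one-loop QCD example in Python, far below mr's three- and four-loop precision, and not mr's API):

```python
# Toy RG running: integrate the one-loop QCD beta function for alpha_s from
# the Z mass to 1 TeV. Illustrative only -- mr solves the full coupled
# Standard Model system to three loops (four loops in pure QCD).
import numpy as np
from scipy.integrate import solve_ivp

def rge(log_mu, alpha_s, nf=5):
    # d alpha_s / d ln(mu) = -(b0 / 2 pi) * alpha_s^2, with b0 = 11 - 2*nf/3
    b0 = 11.0 - 2.0 * nf / 3.0
    return -b0 / (2.0 * np.pi) * alpha_s**2

sol = solve_ivp(rge, (np.log(91.19), np.log(1000.0)), [0.1181],
                rtol=1e-10, atol=1e-12)
print(f"alpha_s(1 TeV) ~ {sol.y[0, -1]:.4f}")   # ~0.088 at one loop
```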

  18. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  19. Appraisal, coping, emotion, and performance during elite fencing matches: a random coefficient regression model approach.

    Science.gov (United States)

    Doron, J; Martinent, G

    2017-09-01

    Understanding more about the stress process is important for the performance of athletes during stressful situations. Grounded in Lazarus's (1991, 1999, 2000) CMRT of emotion, this study tracked longitudinally the relationships between cognitive appraisal, coping, emotions, and performance in nine elite fencers across 14 international matches (representing 619 momentary assessments) using a naturalistic, video-assisted methodology. A series of hierarchical linear modeling analyses were conducted to: (a) explore the relationships between cognitive appraisals (challenge and threat), coping strategies (task- and disengagement-oriented coping), emotions (positive and negative) and objective performance; (b) ascertain whether the relationship between appraisal and emotion was mediated by coping; and (c) examine whether the relationship between appraisal and objective performance was mediated by emotion and coping. The results of the random coefficient regression models showed: (a) positive relationships between challenge appraisal, task-oriented coping, positive emotions, and performance, as well as between threat appraisal, disengagement-oriented coping and negative emotions; (b) that disengagement-oriented coping partially mediated the relationship between threat and negative emotions, whereas task-oriented coping partially mediated the relationship between challenge and positive emotions; and (c) that disengagement-oriented coping mediated the relationship between threat and performance, whereas task-oriented coping and positive emotions partially mediated the relationship between challenge and performance. As a whole, this study furthers knowledge, in sport performance situations, of Lazarus's (1999) claim that these psychological constructs exist within a conceptual unit. Specifically, our findings indicated that the ways these constructs are inter-related influence objective performance within competitive settings. © 2016 John Wiley & Sons A/S.

  20. History matching of transient pressure build-up in a simulation model using adjoint method

    Energy Technology Data Exchange (ETDEWEB)

    Ajala, I.; Haekal, Rachmat; Ganzer, L. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany); Almuallim, H. [Firmsoft Technologies, Inc., Calgary, AB (Canada); Schulze-Riegert, R. [SPT Group GmbH, Hamburg (Germany)

    2013-08-01

    The aim of this work is the efficient and computer-assisted history-matching of pressure build-up and pressure derivatives by small modification to reservoir rock properties on a grid by grid level. (orig.)

  1. Developing a novel technique for absolute measurements of the principal- and second-shock Hugoniots: a benchmark for the impedance-match methods

    Science.gov (United States)

    Gu, Yunjun; Zheng, Jun; Chen, Qifeng; Li, Chengjun; Li, Jiangtao; Chen, Zhiyun

    2017-06-01

    A novel diagnostics configuration is presented for performing absolute measurements of the principal- and second-shock Hugoniots of dense gaseous H2+D2 mixtures under multi-shock compression and for probing their thermodynamic properties by joint diagnostics comprising a multi-channel optical pyrometer (MCOP), a Doppler pin system (DPS), and a streak camera. This technique allows the time-resolved optical radiation histories, interface velocity profiles, and time-resolved spectra of the multiply compressed sample to be measured simultaneously in a single shot. The shock-wave velocities and particle velocities under the first two shock compressions can be directly determined from these multiple diagnostics instead of by impedance-match methods. Thus, absolute measurements of the principal- and second-shock Hugoniots for pre-compressed dense gaseous H2+D2 mixtures under multi-shock compression can be achieved, which provides a benchmark for the impedance-match measurement technique. Furthermore, the combination of multiple diagnostics allows different experimental observables to be cross-checked, which reinforces the reliability of the experimental measurements.
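
    For context on what "absolute" buys: with both the shock velocity U_s and particle velocity u_p measured directly, the Rankine-Hugoniot jump conditions (standard textbook form, not quoted from the record) determine the shocked state without reference to a standard material, whereas impedance matching infers u_p from a standard's known Hugoniot.

```latex
% Rankine-Hugoniot conservation relations across a shock front;
% subscript 0 = initial state, 1 = shocked state.
\rho_1 = \rho_0\,\frac{U_s}{U_s - u_p}, \qquad
P_1 = P_0 + \rho_0\,U_s\,u_p, \qquad
E_1 - E_0 = \tfrac{1}{2}\,(P_1 + P_0)\!\left(\frac{1}{\rho_0} - \frac{1}{\rho_1}\right)
```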

  2. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp Vector Quantization ...

  4. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Science.gov (United States)

    2012-01-01

    Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all), and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes. PMID:22776352
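
    A minimal sketch of a hurdle model in this spirit (simulated data; a plain Poisson stands in for a zero-truncated count likelihood, and all names are illustrative):

```python
# Hurdle model sketch: a logistic model for whether any dental visit occurs,
# then a count model for utilization among those who clear the "hurdle".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
diabetes = rng.binomial(1, 0.5, n)
X = sm.add_constant(diabetes.astype(float))

# Simulate: diabetics less likely to visit, heavier users when they do.
visits = rng.binomial(1, np.where(diabetes == 1, 0.55, 0.65))
counts = visits * rng.poisson(np.where(diabetes == 1, 3.0, 2.5))

any_visit = (counts > 0).astype(float)
part1 = sm.Logit(any_visit, X).fit(disp=False)           # hurdle: any use?
pos = counts > 0
part2 = sm.Poisson(counts[pos], X[pos]).fit(disp=False)  # intensity of use

print("odds ratio (any visit | diabetes):", float(np.exp(part1.params[1])))
print("rate ratio (visits | user, diabetes):", float(np.exp(part2.params[1])))
```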

  6. Comparison of Matching Pursuit Algorithm with Other Signal Processing Techniques for Computation of the Time-Frequency Power Spectrum of Brain Signals.

    Science.gov (United States)

    Chandran K S, Subhash; Mishra, Ashutosh; Shirhatti, Vinay; Ray, Supratim

    2016-03-23

    Signals recorded from the brain often show rhythmic patterns at different frequencies, which are tightly coupled to the external stimuli as well as the internal state of the subject. In addition, these signals have very transient structures related to spiking or sudden onset of a stimulus, which have durations not exceeding tens of milliseconds. Further, brain signals are highly nonstationary because both behavioral state and external stimuli can change on a short time scale. It is therefore essential to study brain signals using techniques that can represent both rhythmic and transient components of the signal, something not always possible using standard signal processing techniques such as the short-time Fourier transform, multitaper method, wavelet transform, or Hilbert transform. In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show that it is able to capture both a sharp stimulus-onset transient and a sustained gamma rhythm in local field potential recorded from the primary visual cortex. We compare the performance of MP with other techniques and discuss its advantages and limitations. Data and codes for generating all time-frequency power spectra are provided. Copyright © 2016 the authors.
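
    The greedy core of matching pursuit is compact enough to sketch (a random unit-norm dictionary stands in for the over-complete time-frequency dictionary used for brain signals):

```python
# Matching pursuit: greedily project the signal onto the best-matching
# dictionary atom and iterate on the residual.
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """dictionary: (n_atoms, n_samples) array of unit-norm atoms."""
    residual = signal.astype(float).copy()
    atoms, coefs = [], []
    for _ in range(n_iter):
        scores = dictionary @ residual          # inner product with every atom
        k = int(np.argmax(np.abs(scores)))      # best-matching atom
        atoms.append(k)
        coefs.append(scores[k])
        residual = residual - scores[k] * dictionary[k]
    return atoms, coefs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(500, 128))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
x = 3.0 * D[42] - 2.0 * D[7]                    # sparse ground truth
atoms, coefs, r = matching_pursuit(x, D, n_iter=5)
print(atoms[:2], np.round(coefs[:2], 2))        # typically recovers 42 then 7
```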

  7. Plant Cell Population Tracking in a Honeycomb Structure Using an IMM Filter Based 3D Local Graph Matching Model.

    Science.gov (United States)

    Liu, Min; He, Yue; Qian, Weili; Wei, Yangliu; Liu, Xiaoyan

    2017-10-06

    Developing algorithms for plant cell population tracking is critical for modeling plant cell growth patterns and gene expression dynamics. Tracking plant cells in microscopic image stacks is very challenging for several reasons: (1) plant cells are densely packed in a specific honeycomb structure; (2) they divide frequently; (3) they are imaged in different layers within 3D image stacks. Based on an existing 2D local graph matching algorithm, this paper focuses on building a 3D plant cell matching model by exploiting the cells' 3D spatiotemporal context. Furthermore, the Interacting Multiple Model (IMM) filter is combined with the 3D local graph matching model to track the plant cell population simultaneously. Because our tracking algorithm does not require the identification of "tracking seeds", tracking stability and efficiency are greatly enhanced. Last, the plant cell lineages are obtained by associating the cell tracklets using a maximum-a-posteriori (MAP) method. Compared with the 2D matching method, experimental results on multiple datasets show that our proposed approach not only improves the tracking accuracy, by 18%, but also successfully tracks the plant cells located in the high-curvature primordial region, which was not addressed in previous work.

  8. Early outcome in renal transplantation from large donors to small and size-matched recipients - a porcine experimental model

    DEFF Research Database (Denmark)

    Ravlo, Kristian; Chhoden, Tashi; Søndergaard, Peter

    2012-01-01

    Kidney transplantation from a large donor to a small recipient, as in pediatric transplantation, is associated with an increased risk of thrombosis and DGF. We established a porcine model for renal transplantation from an adult donor to a small or size-matched recipient with a high risk of DGF ... and studied GFR, RPP using MRI, and markers of kidney injury within 10 h after transplantation. After induction of BD, kidneys were removed from ∼63-kg donors and kept in cold storage for ∼22 h until transplanted into small (∼15 kg, n = 8) or size-matched (n = 8) recipients. A reduction in GFR was observed

  9. Records matching model for data survey on applied and experimental microbiology.

    Science.gov (United States)

    Reina, Salvatore A; Reina, Vito M; Debbia, Eugenio A

    2007-01-01

    Experimental microbiology yields a huge quantity of raw data which needs to be evaluated and classified in a wide variety of situations, from marine research, environmental pollution and the pharmacokinetics of antimicrobial agents to epidemiological clinical trials on infectious diseases. It is indispensable in all kinds of disciplines to validate, transform and correlate data clusters to demonstrate the statistical significance of results. Whether academic or biotechnological, the scientific credibility of a work is strongly affected by the statistical methods used and their adequacy. For a simple univariate analysis, many commercial or open-source software products are available to perform sophisticated statistics for discriminant and multi-factorial analysis, but the majority of scientists use statistics only partially. This is due to the high level of competence required by a multivariate approach. It is known that the choice of a test, correct distribution assumptions, a valid experimental design and preliminary raw-data validation are prejudicial to good science. All kinds of experimentation need analytical interpretation of descriptive evidence, so that a classical numerical approach is not enough when raw data are not validated or are incomplete. Microbiologists always wish to quickly discriminate, correlate or group data clusters concerning clinical patient profiles, auditing multi-sensor-derived numbers, monitoring biosphere indicators of chemical and physical parameters, or the dynamics of microbe populations. Mathematical and statistical analysis is essential to distinguish phenotypes or constraints. Data are in general stored in spreadsheet and database files which change continuously depending on the data collection and scope. Here we propose a Records Matching Method (RMM) suitable for any kind of cluster analysis and pattern identification, which can be used for either parametric or non-parametric analysis without necessarily stating the pre-process statistical

  10. Diffusion approximation of the radiative-conductive heat transfer model with Fresnel matching conditions

    Science.gov (United States)

    Chebotarev, Alexander Yu.; Grenkin, Gleb V.; Kovtanyuk, Andrey E.; Botkin, Nikolai D.; Hoffmann, Karl-Heinz

    2018-04-01

    The paper is concerned with a problem of diffraction type. The study starts with equations of complex (radiative and conductive) heat transfer in a multicomponent domain with Fresnel matching conditions at the interfaces. Applying the diffusion, P1, approximation yields a pair of coupled nonlinear PDEs describing the radiation intensity and temperature for each component of the domain. Matching conditions for these PDEs, imposed at the interfaces between the domain components, are derived. The unique solvability of the obtained problem is proven, and numerical experiments are conducted.
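
    Schematically, and assuming the normalized form used in related work on this model (the paper's exact notation may differ), the P1 approximation couples the temperature θ and the averaged radiation intensity φ through:

```latex
% Normalized P1 (diffusion) system for radiative-conductive heat transfer;
% kappa_a is the absorption coefficient and a, b, alpha collect conductive
% and diffusive constants (notation assumed, not taken from the record).
-a\,\Delta\theta + b\,\kappa_a\left(|\theta|\theta^{3} - \varphi\right) = 0, \qquad
-\alpha\,\Delta\varphi + \kappa_a\left(\varphi - |\theta|\theta^{3}\right) = 0
```

    The Fresnel matching conditions then prescribe the jumps of φ and of its normal flux at the interfaces between the domain components.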

  11. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  12. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in

  13. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job-set characteristics. The model assumes an input workload of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory-bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying precedence execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  14. Too Much Matching: A Social Relations Model Enhancement of the Pairing Game

    Science.gov (United States)

    Eastwick, Paul W.; Buck, April A.

    2014-01-01

    The Pairing Game is a popular classroom demonstration that illustrates how people select romantic partners who approximate their own desirability. However, this game produces matching correlations that greatly exceed the correlations that characterize actual romantic pairings, perhaps because the game does not incorporate the social relations…

  15. Analytical modelling of waveguide mode launchers for matched feed reflector systems

    DEFF Research Database (Denmark)

    Palvig, Michael Forum; Breinbjerg, Olav; Meincke, Peter

    2016-01-01

    Matched feed horns aim to cancel cross polarization generated in offset reflector systems. An analytical method for predicting the mode spectrum generated by inclusions in such horns, e.g. stubs and pins, is presented. The theory is based on the reciprocity theorem with the inclusions represented...

  16. Dynamic Response-by-Response Models of Matching Behavior in Rhesus Monkeys

    Science.gov (United States)

    Lau, Brian; Glimcher, Paul W.

    2005-01-01

    We studied the choice behavior of 2 monkeys in a discrete-trial task with reinforcement contingencies similar to those Herrnstein (1961) used when he described the matching law. In each session, the monkeys experienced blocks of discrete trials at different relative-reinforcer frequencies or magnitudes with unsignalled transitions between the…

  17. IMPROVING STUDENT LEARNING OUTCOMES BY USING THE MAKE A MATCH MODEL IN CIVIC EDUCATION (PKN) IN GRADE V OF SDN KARYAWANGI 2

    Directory of Open Access Journals (Sweden)

    Sediasih Sediasih

    2017-03-01

    Student learning outcomes in civic education (PKn) at SDN Karyawangi 2 have so far been low, as teaching has relied on a conventional model with the teacher as the sole center of learning. The purpose of this classroom action research was to improve student learning outcomes on the topic of safeguarding the integrity of the Indonesian state by using the make a match model. The research was carried out in three improvement cycles, each consisting of four stages: planning, implementation, observation, and reflection. The research subjects were 30 fifth-grade students, and written tests served as the main data-collection method. The results show that learning about safeguarding the integrity of the Indonesian state improved, and it can be concluded that the use of the make a match model can improve student learning outcomes in civic education in grade V of SDN Karyawangi 2. Keywords: learning outcomes, make a match, civic education.

  18. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows flight missions to be accomplished safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS), with application of either an Extended Kalman Filter (EKF) or a feedback controller. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  19. Pose Estimation using 1D Fourier Transform and Euclidean Distance Matching of CAD Model and Inspected Model Part

    International Nuclear Information System (INIS)

    Zulkoffli, Zuliani; Bakar, Elmi Abu

    2016-01-01

    This paper presents the pose estimation relation between a CAD model object and a projection of the real object (PRI). Image sequences of the PRI and the CAD model, rotated about the z-axis at 10-degree intervals in simulated and real scenes, were used in this experiment. All images go through a preprocessing stage that rescales the object and image sizes and transforms every image into a silhouette, after which the CAD and PRI images are correlated. The magnitude spectrum shows reliable correlation values in the range 0.99 to 1.00, whereas the phase spectrum correlation fluctuates in the range 0.56-0.97. The Euclidean distance correlation graph for CAD and PRI shows two zones of similar values due to the almost symmetrical object shape. Retrieval of the inspected PRI image from the CAD database was carried out using the phase spectrum range and the maximum magnitude spectrum value within a ±10% tolerance; an additional retrieval stage using the Euclidean distance within a ±5% tolerance was also carried out. Euclidean matching gives more reliable results than the phase spectrum range and maximum magnitude spectrum value, at the cost of more than five times the processing time. (paper)
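
    The two similarity measures described above are straightforward to reproduce. The sketch below is a minimal Python illustration, not the authors' code; the CAD database layout (a dict from rotation angle to silhouette array) is an assumption made for the example.

```python
import numpy as np

def magnitude_correlation(img_a, img_b):
    """Correlation of the 2D FFT magnitude spectra of two equally sized images."""
    mag_a = np.abs(np.fft.fft2(img_a)).ravel()
    mag_b = np.abs(np.fft.fft2(img_b)).ravel()
    return float(np.corrcoef(mag_a, mag_b)[0, 1])

def euclidean_distance(img_a, img_b):
    """Euclidean distance between two binary silhouettes of equal size."""
    return float(np.linalg.norm(img_a.astype(float) - img_b.astype(float)))

def best_pose(pri_img, cad_db):
    """Return the CAD rotation angle whose silhouette is closest to the PRI image."""
    return min(cad_db, key=lambda angle: euclidean_distance(pri_img, cad_db[angle]))
```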

  20. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in bone research, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45 ± 8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were created by injecting tumor cells directly into the femur of SD rats with an inoculation needle. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 μl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r = 0.942; p < 0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat…

  1. Hip and Ankle Kinematics in Noncontact Anterior Cruciate Ligament Injury Situations: Video Analysis Using Model-Based Image Matching.

    Science.gov (United States)

    Koga, Hideyuki; Nakamae, Atsuo; Shima, Yosuke; Bahr, Roald; Krosshaug, Tron

    2018-02-01

    Detailed kinematic descriptions of real anterior cruciate ligament (ACL) injury situations are limited to the knee only. To describe hip and ankle kinematics as well as foot position relative to the center of mass (COM) in ACL injury situations through use of a model-based image-matching (MBIM) technique. The distance between the projection of the COM on the ground and the base of support (BOS) (COM_BOS), normalized to the femur length, was also evaluated. Descriptive laboratory study. Ten ACL injury video sequences from women's handball and basketball were analyzed. Hip and ankle joint kinematic values were obtained by use of MBIM. The mean hip flexion angle was 51° (95% CI, 41° to 63°) at initial contact and remained constant over the next 40 milliseconds. The hip was internally rotated 29° (95% CI, 18° to 39°) at initial contact and remained unchanged for the next 40 milliseconds. All of the injured patients landed with a heel strike, with a mean dorsiflexion angle of 2° (95% CI, -9° to 14°), before reaching a flatfooted position 20 milliseconds later. The foot position was anterior and lateral to the COM in all cases. However, no case showed a COM_BOS larger than 1.2, which has been suggested as a criterion for ACL injury risk. Hip kinematic values were consistent among the 10 ACL injury situations analyzed; the hip joint remained unchanged in a flexed and internally rotated position in the phase leading up to injury, suggesting that limited energy absorption took place at the hip. In all cases, the foot contacted the ground with a heel strike. However, relatively small COM_BOS distances were found, indicating that the anterior and lateral foot placement in ACL injury situations was not different from what can be expected in noninjury game situations.

  2. Ambient temperature modelling with soft computing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); De Felice, Matteo [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); University of Rome ' ' Roma 3' ' , Dipartimento di Informatica e Automazione (DIA), Via della Vasca Navale 79, 00146 Rome (Italy)

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)

  3. A method for matching the refractive index and kinematic viscosity of a blood analog for flow visualization in hydraulic cardiovascular models.

    Science.gov (United States)

    Nguyen, T T; Biadillah, Y; Mongrain, R; Brunette, J; Tardif, J C; Bertrand, O F

    2004-08-01

    In this work, we propose a simple method to simultaneously match the refractive index and kinematic viscosity of a circulating blood analog in hydraulic models for optical flow measurement techniques (PIV, PMFV, LDA, and LIF). The method is based on determining the volumetric proportions and temperature at which two transparent miscible liquids should be mixed to reproduce the targeted fluid characteristics. The temperature dependence models are a linear relation for the refractive index and an Arrhenius relation for the dynamic viscosity of each liquid. The dynamic viscosity of the mixture is then represented with a Grunberg-Nissan model of type 1. Experimental tests for matching the refractive index of acrylic and the viscosity of blood were found to be in very good agreement with the targeted values (measured refractive index of 1.486 and kinematic viscosity of 3.454 mm²/s against targeted values of 1.47 and 3.300 mm²/s).
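
    The fitting procedure described above lends itself to a simple numerical search. The Python sketch below is illustrative only: the coefficient containers and the grid-search ranges are assumptions, the mixture refractive index is taken as volume-fraction weighted, and the density bookkeeping needed to convert dynamic to kinematic viscosity is omitted.

```python
import numpy as np

def refr_index(n0, dndT, T):
    """Linear temperature dependence of the refractive index."""
    return n0 + dndT * T

def visc_arrhenius(A, B, T):
    """Arrhenius-type dynamic viscosity, with T in kelvin."""
    return A * np.exp(B / T)

def mix_viscosity(x1, mu1, mu2, G12=0.0):
    """Grunberg-Nissan (type 1) mixing rule for dynamic viscosity."""
    x2 = 1.0 - x1
    return np.exp(x1 * np.log(mu1) + x2 * np.log(mu2) + x1 * x2 * G12)

def find_mixture(target_n, target_mu, liq1, liq2, G12=0.0):
    """Grid search for the volume fraction and temperature matching both targets.
    liq = {"n": (n0, dndT), "mu": (A, B)} holds per-liquid coefficients."""
    best, best_err = None, np.inf
    for x1 in np.linspace(0.0, 1.0, 101):
        for T in np.linspace(288.0, 318.0, 61):   # 15-45 degrees C, assumed range
            n = x1 * refr_index(*liq1["n"], T) + (1 - x1) * refr_index(*liq2["n"], T)
            mu = mix_viscosity(x1, visc_arrhenius(*liq1["mu"], T),
                               visc_arrhenius(*liq2["mu"], T), G12)
            err = abs(n - target_n) + abs(mu - target_mu)   # crude combined error
            if err < best_err:
                best, best_err = (x1, T), err
    return best
```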

  4. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  5. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnel models are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail and in the Appendix selected numerical data is given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  6. A foundation for flow-based matching: using temporal logic and model checking

    DEFF Research Database (Denmark)

    Brunel, Julien Pierre Manuel; Doligez, Damien; Hansen, René Rydhof

    2009-01-01

    Reasoning about program control-flow paths is an important functionality of a number of recent program matching languages and associated searching and transformation tools. Temporal logic provides a well-defined means of expressing properties of control-flow paths in programs, and indeed an extension of the temporal logic CTL has been applied to the problem of specifying and verifying the transformations commonly performed by optimizing compilers. Nevertheless, in developing the Coccinelle program transformation tool for performing Linux collateral evolutions in systems code, we have found…

  7. A Foundation for Flow-Based Program Matching Using Temporal Logic and Model Checking

    DEFF Research Database (Denmark)

    Brunel, Julien Pierre Manuel; Doligez, Damien; Hansen, Rene Rydhof

    2008-01-01

    Reasoning about program control-flow paths is an important functionality of a number of recent program matching languages and associated searching and transformation tools. Temporal logic provides a well-defined means of expressing properties of control-flow paths in programs, and indeed an extension of the temporal logic CTL has been applied to the problem of specifying and verifying the transformations commonly performed by optimizing compilers. Nevertheless, in developing the Coccinelle program transformation tool for performing Linux collateral evolutions in systems code, we have found…

  8. Solving the Secondary Structure Matching Problem in Cryo-EM De Novo Modeling Using a Constrained K-Shortest Path Graph Algorithm.

    Science.gov (United States)

    Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing

    2014-01-01

    Electron cryomicroscopy is becoming a major experimental technique in solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates the dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time, and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.

  9. Advanced techniques for modeling avian nest survival

    Science.gov (United States)

    Dinsmore, S.J.; White, Gary C.; Knopf, F.L.

    2002-01-01

    Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates representative of individual nests represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that the nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models
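
    The covariate effects reported above act on the logit scale, so a daily survival rate is recovered with an inverse-logit transform, and survival over an exposure period is the product of the daily rates. The Python sketch below uses the age (0.06) and precipitation (-1.08) coefficients quoted in the abstract, but the intercept is hypothetical:

```python
import numpy as np

def daily_survival(beta0, beta_age, beta_precip, nest_age, precip):
    """Daily nest survival rate from a logit-linear model."""
    logit = beta0 + beta_age * nest_age + beta_precip * precip
    return 1.0 / (1.0 + np.exp(-logit))

ages = np.arange(1, 29)                     # a 28-day exposure period
rain = np.zeros_like(ages, dtype=float)     # rain-free days for illustration
s_daily = daily_survival(2.0, 0.06, -1.08, ages, rain)   # beta0 = 2.0 is assumed
period_survival = float(np.prod(s_daily))   # probability the nest survives 28 days
```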

  10. Initialising reservoir models for history matching using pre-production 3D seismic data: constraining methods and uncertainties

    Science.gov (United States)

    Niri, Mohammad Emami; Lumley, David E.

    2017-10-01

    Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.

  11. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out over the past two years concerning different ways of achieving high field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low-power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs.

  12. Application of Convolution Perfectly Matched Layer in MRTD scattering model for non-spherical aerosol particles and its performance analysis

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Jiang, Zidong; Liu, Lei; Chen, Ming

    2017-10-01

    The performance of the absorbing boundary condition (ABC) is an important factor influencing the simulation accuracy of an MRTD (Multi-Resolution Time-Domain) scattering model for non-spherical aerosol particles. To this end, the Convolution Perfectly Matched Layer (CPML), an excellent ABC in the FDTD scheme, is generalized and applied to the MRTD scattering model developed by our team. In this model, the time domain is discretized by an exponential differential scheme, and the discretization of the space domain is implemented by the Galerkin principle. To evaluate the performance of CPML, its simulation results are compared with those of BPML (Berenger's Perfectly Matched Layer) and ADE-PML (Perfectly Matched Layer with Auxiliary Differential Equation) for spherical and non-spherical particles, and their simulation errors are analyzed as well. The simulation results show that, for scattering phase matrices, the performance of CPML is better than that of BPML; the computational accuracy of CPML is comparable to that of ADE-PML on the whole, but at scattering angles where phase matrix elements fluctuate sharply, the performance of CPML is slightly better than that of ADE-PML. After the orientation-averaging process, the differences among the results of different ABCs are reduced to some extent. It can also be seen that ABCs have a much weaker influence on integral scattering parameters (such as extinction and absorption efficiencies) than on scattering phase matrices; this can be explained by the error averaging in the numerical volume integration.

  13. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources.

    Science.gov (United States)

    Sudhir, Dass; Bandyopadhyay, M; Chakraborty, A

    2016-02-01

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long-pulse operation, particularly at high power where the plasma load may vary for different reasons (e.g., pressure and power), online tuning of the impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio-activation, and maintenance issues, power probes cannot be incorporated in the ion source design for plasma characterization. Therefore, more remote schemes for characterization and impedance matching are envisaged. Two such schemes, based on an air-core transformer model of the inductively coupled plasma (ICP), have been suggested by the same authors [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. To account for the influence of the RF field's interaction with the plasma in determining its impedance, the physics code HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)] is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.
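
    For orientation, the air-core transformer picture treats the RF antenna as the primary and the plasma as a lossy single-turn secondary; the input impedance seen by the matching network then takes the familiar reflected-impedance form below. This is the generic textbook expression, not the specific formulation of the cited papers:

```latex
Z_{\mathrm{in}} \;=\; R_1 + j\omega L_1 \;+\; \frac{\omega^{2} M^{2}}{R_p + j\omega L_p}
```

    where R1 and L1 describe the antenna, Rp and Lp the plasma loop, and M the mutual inductance; the plasma-dependent quantities are what a physics code such as HELIC can supply.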

  14. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

    An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, a warning sign with the probability of a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance, in terms of a receiver operating characteristic curve, was high. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful for correcting 'wrong' images being stored in the PACS environment.
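
    The core of the method is a correlation search over small geometric perturbations followed by a threshold test. The Python sketch below is a simplified stand-in (the rotation search is omitted and wrap-around edges are ignored); the 0.80 threshold echoes the histogram overlap noted above, but the shift range is assumed:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def best_match(previous, current, max_shift=10):
    """Best correlation over horizontal/vertical shifts of the current image."""
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(current, dy, axis=0), dx, axis=1)
            best = max(best, ncc(previous, shifted))
    return best

def flag_wrong_patient(previous, current, threshold=0.80):
    """Warn when the images correlate too weakly to be the same patient."""
    return best_match(previous, current) < threshold
```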

  15. Radiological and clinical pneumonitis after stereotactic lung radiotherapy: a matched analysis of three-dimensional conformal and volumetric-modulated arc therapy techniques.

    Science.gov (United States)

    Palma, David A; Senan, Suresh; Haasbeek, Cornelis J A; Verbakel, Wilko F A R; Vincent, Andrew; Lagerwaard, Frank

    2011-06-01

    Lung fibrosis is common after stereotactic body radiotherapy (SBRT) for lung tumors, but the influence of treatment technique on rates of clinical and radiological pneumonitis is not well described. After implementing volumetric modulated arc therapy (RapidArc [RA]; Varian Medical Systems, Palo Alto, CA) for SBRT, we scored the early pulmonary changes seen with arc and conventional three-dimensional SBRT (3D-CRT). Twenty-five SBRT patients treated with RA were matched 1:2 with 50 SBRT patients treated with 3D-CRT. Dose fractionations were based on a risk-adapted strategy. Clinical pneumonitis was scored using Common Terminology Criteria for Adverse Events version 3.0. Acute radiological changes 3 months posttreatment were scored by three blinded observers. Relationships among treatment type, baseline factors, and outcomes were assessed using Spearman's correlation, Cochran-Mantel-Haenszel tests, and logistic regression. The RA and 3D-CRT groups were well matched. Forty-three patients (57%) had radiological pneumonitis 3 months after treatment. Twenty-eight patients (37%) had computed tomography (CT) findings of patchy or diffuse consolidation, and 15 patients (20%) had ground-glass opacities only. Clinical pneumonitis was uncommon, and no differences were seen between 3D-CRT and RA patients in rates of grade 2/3 clinical pneumonitis (6% vs. 4%, respectively; p = 0.99), moderate/severe radiological changes (24% vs. 36%, respectively; p = 0.28), or patterns of CT changes (p = 0.47). Radiological severity scores were associated with larger planning target volumes (p = 0.09) and extended fractionation (p = 0.03). Radiological changes after lung SBRT are common with both approaches, but no differences in early clinical or radiological findings were observed after RA. Longer follow-up will be required to exclude late changes. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits…

  17. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index…

  18. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: it compares the effectiveness of static, exact, dynamic, SEREP, and iterative-dynamic condensation techniques in producing valid reduced-order mo…

  19. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    Science.gov (United States)

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well-established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center-of-mass trajectories. Here, we pursue the opposite approach, which is to calculate model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters. Copyright © 2012 Elsevier Ltd. All rights reserved.
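
    For reference, the stance phase of the SLIP model places a point mass m on a massless spring leg of rest length ℓ0 and stiffness k; the standard equation of motion is shown below (the energy-modulating ESLIP extension is not shown):

```latex
m\,\ddot{\mathbf{r}} \;=\; k\!\left(\frac{\ell_0}{\lVert \mathbf{r}-\mathbf{r}_{\mathrm{foot}} \rVert}-1\right)\!\left(\mathbf{r}-\mathbf{r}_{\mathrm{foot}}\right) \;-\; m g\,\hat{\mathbf{e}}_y
```

    where r is the center-of-mass position and r_foot the stance foot location; during flight the mass follows a ballistic trajectory.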

  20. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Surrogate modelling and optimization techniques are intended for engineering design in the case where an expensive physical model is involved. This thesis provides a literature overview of the field of surrogate modelling and optimization. The space mapping technique is one such method … conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three…
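
    One standard way to state the space mapping idea (a generic formulation, not necessarily the thesis's notation): given a fine, expensive model f and a coarse, cheap model c, a parameter extraction maps each fine-model point to the coarse parameters that best reproduce its response, and the surrogate is the coarse model composed with that mapping:

```latex
p(x) \;=\; \arg\min_{z}\ \bigl\lVert c(z) - f(x) \bigr\rVert,
\qquad s(x) \;=\; c\bigl(p(x)\bigr)
```

    Optimization then proceeds on the cheap surrogate s, with occasional fine-model evaluations used to update the mapping p.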

  1. Addressing Diverse Learner Preferences and Intelligences with Emerging Technologies: Matching Models to Online Opportunities

    Science.gov (United States)

    Zhang, Ke; Bonk, Curtis J.

    2008-01-01

    This paper critically reviews various learning preferences and human intelligence theories and models with a particular focus on the implications for online learning. It highlights a few key models, Gardner's multiple intelligences, Fleming and Mills' VARK model, Honey and Mumford's Learning Styles, and Kolb's Experiential Learning Model, and…

  2. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Feature extraction involves extracting speaker-specific features from the speech signal at reduced data rate. The extracted features are further combined using modelling techniques to generate speaker models. The speaker models are then tested using the features extracted from the test speech signal. The improvement in ...

  3. Development of a computerized method for identifying the posteroanterior and lateral views of chest radiographs by use of a template matching technique

    International Nuclear Information System (INIS)

    Arimura, Hidetaka; Katsuragawa, Shigehiko; Li Qiang; Ishida, Takayuki; Doi, Kunio

    2002-01-01

    In picture archiving and communications systems (PACS) or digital archiving systems, the information on the posteroanterior (PA) and lateral views of chest radiographs is often not recorded, or is recorded incorrectly. However, it is necessary to identify the PA or lateral view correctly and automatically for quantitative analysis of chest images for computer-aided diagnosis. Our purpose in this study was to develop a computerized method for correctly identifying either PA or lateral views of chest radiographs. Our approach is to examine the similarity of a chest image with templates that represent the average chest images of the PA or lateral view for various types of patients. By use of a two-step template-matching technique with nine template images for patients of different sizes, correlation values were obtained for determining whether a chest image is a PA or a lateral view. The templates for PA and lateral views were prepared from 447 PA and 200 lateral chest images. For a validation test, this scheme was applied to 1,000 test images consisting of 500 PA and 500 lateral chest radiographs, which are different from the training cases. In the first step, 924 (92.4%) of the cases were correctly identified by comparison of the correlation values obtained with the three templates for medium-size patients. In the second step, the correlation values with the six templates for small and large patients were compared, and all of the remaining unidentifiable cases were identified correctly.
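
    The two-step logic above reduces to comparing correlation scores against a small template bank. The Python sketch below illustrates that flow; the inconclusiveness threshold and the dictionary layout of the template bank are assumptions, not values from the paper:

```python
import numpy as np

def correlation(a, b):
    """Correlation between a chest image and an equally sized template."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def classify_view(image, medium_templates, small_large_templates, t=0.5):
    """Step 1 uses medium-patient templates; step 2 consults the small/large
    templates only when step 1 is inconclusive. Each dict maps a view label
    ('PA' or 'lateral') to a list of average-template images."""
    def best(bank):
        scores = {label: max(correlation(image, tpl) for tpl in tpls)
                  for label, tpls in bank.items()}
        top = max(scores, key=scores.get)
        return top, scores[top]

    label, score = best(medium_templates)
    if score < t:                      # inconclusive -> second step
        label, _ = best(small_large_templates)
    return label
```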

  4. Model-based orientation-independent 3-D machine vision techniques

    Science.gov (United States)

    De Figueiredo, R. J. P.; Kehtarnavaz, N.

    1988-01-01

    Orientation-independent techniques for the identification of a three-dimensional object by a machine vision system are presented in two parts. In the first part, the data consist of intensity images of polyhedral objects obtained by a single camera, while in the second part, the data consist of range images of curved objects obtained by a laser scanner. In both cases, an attributed graph representation of the object surface is used to drive the respective algorithm. In this representation, a graph node represents a surface patch and a link represents the adjacency between two patches. For polyhedral objects, the attributes assigned to nodes are moment invariants of the corresponding face. For range images, the Gaussian curvature is used as a segmentation criterion for providing symbolic shape attributes. Identification is achieved by an efficient graph-matching algorithm used to match the graph obtained from the data to a subgraph of one of the model graphs stored in the computer memory.

  5. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

    This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching, to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances, taking into account disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) and so quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.
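
    A piecewise-linear representation keyed to local extrema, as used above, is easy to prototype. The Python sketch below is a simplified stand-in for the paper's method: the per-segment features (slope and length) and the truncated Euclidean comparison are illustrative choices, not the published FDP definition:

```python
import numpy as np

def local_extrema_plr(series):
    """Keep endpoints and local extreme points of a 1D NumPy array."""
    keep = [0]
    for i in range(1, len(series) - 1):
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            keep.append(i)
    keep.append(len(series) - 1)
    idx = np.array(keep)
    return idx, series[idx]

def segment_features(idx, vals):
    """Per-segment (slope, length) features capturing trend and severity."""
    slopes = np.diff(vals) / np.diff(idx)
    lengths = np.diff(idx).astype(float)
    return np.column_stack([slopes, lengths])

def pattern_distance(series_a, series_b):
    """Distance between two PQ time series via their segment features."""
    fa = segment_features(*local_extrema_plr(series_a))
    fb = segment_features(*local_extrema_plr(series_b))
    k = min(len(fa), len(fb))          # crude alignment by truncation
    return float(np.linalg.norm(fa[:k] - fb[:k]))
```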

  6. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, researchers use terrestrial images with close-range photogrammetry, DSMs, and texture mapping. This paper starts with an introduction to the various geomatics techniques for 3D city modelling, divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods) and another based on data input techniques (photogrammetry and laser techniques). It then gives an overview of the techniques for generating virtual 3D city models and of the applications of such models, and closes with conclusions, a short justification and analysis, and present trends in 3D city modelling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has advantages and drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3…

  7. Towards optimal packed string matching

    DEFF Research Database (Denmark)

    Ben-Kiki, Oren; Bille, Philip; Breslauer, Dany

    2014-01-01

    In the packed string matching problem, it is assumed that each machine word can accommodate up to α characters, thus an n-character string occupies n/α memory words. (a) We extend the Crochemore-Perrin constant-space O(n)-time string-matching algorithm to run in optimal O(n/α) time, and even in real time, achieving a factor-α speedup over traditional algorithms that examine each character individually. Our macro-level algorithm only uses the standard AC0 instructions of the word-RAM model (i.e., no integer multiplication) plus two specialized micro-level AC0 word-size packed-string instructions. The main word… matching work. (b) We also consider the complexity of the packed string matching problem in the classical word-RAM model in the absence of the specialized micro-level instructions wssm and wslm. We propose micro-level algorithms for the theoretically efficient emulation using parallel algorithms techniques…
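
    The packing idea itself is easy to see in miniature: treating a word as α bytes lets a candidate position be screened with one word comparison before any per-character verification. The Python sketch below uses integers as stand-in machine words; it is a toy illustration of packing, not the Crochemore-Perrin extension or the wssm/wslm instructions discussed above:

```python
def packed_find(text: bytes, pattern: bytes, alpha=8):
    """Screen each position with one word comparison over the first
    min(alpha, len(pattern)) bytes, then verify byte-by-byte."""
    k = min(alpha, len(pattern))
    mask = (1 << (8 * k)) - 1          # select the low-order k bytes
    pword = int.from_bytes(pattern[:alpha].ljust(alpha, b"\0"), "little") & mask
    for i in range(len(text) - len(pattern) + 1):
        window = int.from_bytes(text[i:i + alpha].ljust(alpha, b"\0"), "little")
        if window & mask == pword and text[i:i + len(pattern)] == pattern:
            return i
    return -1

assert packed_find(b"packed string matching", b"string") == 7
```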

  8. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    International Nuclear Information System (INIS)

    Matsunobu, Y; Shiotsuki, K; Morishita, J

    2015-01-01

    Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater compared to those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body.

  9. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Matsunobu, Y; Shiotsuki, K [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka (Japan); Morishita, J [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, JP (Japan)

    2015-06-15

    Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater compared to those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body.

  10. Matching Organs

    Science.gov (United States)


  11. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  12. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for treatment of short bowel syndrome, and the modification of Bianchi's technique is an alternative: the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report that has studied both techniques, so we carried out the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A), and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). Both groups were compared in relation to operating time, technical difficulties, cost, intestinal lengthening, and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost, and technical difficulties were lower in group B (p …). The anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost, and technical issues.

  13. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    Science.gov (United States)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

    A simulation model that examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then treated as random variables in the finite element model in order to explore the effects of uncertainty on the quality of the model outputs, i.e., the natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, non-contact experimental modal analysis is conducted to identify the natural frequencies of the samples. The results show good agreement with the experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies than material parameters, and that the material uncertainties are about two times higher than the geometrical ones. This gives valuable insights for improving the finite element model across the various parameter ranges required in a modeling process involving uncertainty.

  14. Flexible multibody simulation of automotive systems with non-modal model reduction techniques

    Science.gov (United States)

    Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter

    2012-12-01

    The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics have come into focus as a way of finding more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, with the elastic shape functions calculated by modern model reduction techniques such as moment matching by projection onto Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model, consisting of tyres, steering, axle, etc., is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
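
    Moment matching by projection onto a Krylov subspace can be illustrated compactly for a first-order state-space model x_dot = A x + b u, y = c^T x; the second-order structural case used for vehicle bodies needs additional bookkeeping not shown here. The sketch below is a minimal one-sided Arnoldi reduction about an assumed expansion point s0:

```python
import numpy as np

def krylov_reduce(A, b, c, q, s0=0.0):
    """Project (A, b, c) onto the order-q Krylov subspace of the
    shift-and-invert operator, matching q transfer-function moments at s0."""
    n = A.shape[0]
    M = np.linalg.inv(A - s0 * np.eye(n))      # shift-and-invert operator
    V = np.zeros((n, q))
    v = M @ b
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, q):                      # Arnoldi with reorthogonalization
        w = M @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ b, V.T @ c       # reduced (Ar, br, cr)
```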

  15. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    … In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread…

  16. Application of the numerical modelling techniques to the simulation ...

    African Journals Online (AJOL)

    The aquifer was modelled by the application of Finite Element Method (F.E.M), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was that of the Conjugate Gradient Method. After the steady state calibration and transient verification, the model was used to predict the effect of ...

  17. Fuzzy Control Technique Applied to Modified Mathematical Model ...

    African Journals Online (AJOL)

    In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential ...

  18. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    Energy Technology Data Exchange (ETDEWEB)

    Sardanyes, Josep [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain)], E-mail: josep.sardanes@upf.edu; Sole, Ricard V. [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain); Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)

    2008-01-21

    A minimal Lotka-Volterra-type predator-prey model describing coevolutionary traits among entities, with a strength of interaction influenced by a pair of haploid diallelic loci, is studied with a deterministic, time-continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations.
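
    As a point of reference, the sketch below integrates a generic Lotka-Volterra skeleton with two predator genotypes of unequal predation efficiency; it is not the haploid diallelic model of the paper, and all rate constants are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, r=1.0, beta=(1.2, 0.8), eps=0.5, d=0.4):
    """Prey plus two predator genotypes; beta[0] > beta[1] makes
    genotype 1 the more efficient predator."""
    prey, p1, p2 = s
    dprey = prey * (r - beta[0] * p1 - beta[1] * p2)
    dp1 = p1 * (eps * beta[0] * prey - d)
    dp2 = p2 * (eps * beta[1] * prey - d)
    return [dprey, dp1, dp2]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.1, 0.1], max_step=0.1)
prey_t, p1_t, p2_t = sol.y            # trajectories for inspection
```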

  19. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
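
    Of the three techniques summarized above, spatial interpolation is the most self-contained to illustrate. The Python sketch below uses inverse-distance weighting, one common choice for gridding scattered measurements of a geological interface; it is a generic example, not an algorithm endorsed by the paper:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered elevations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)       # eps guards against division by zero
    return (w @ z_known) / w.sum(axis=1)

# Interpolate a horizon surface onto two nodes of a planar mesh
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([10.0, 12.0, 11.0, 13.0])
nodes = np.array([[0.5, 0.5], [0.25, 0.75]])
print(idw(pts, elev, nodes))
```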

  20. Do projections from bioclimatic envelope models and climate change metrics match?

    DEFF Research Database (Denmark)

    Garcia, Raquel A.; Cabeza, Mar; Altwegg, Res

    2016-01-01

    Aim: Bioclimatic envelope models are widely used to describe changes in climatically suitable areas for species under future climate scenarios. Climate change metrics are applied independently of species data to characterize the spatio-temporal dynamics of climate, and have also been used as indicators of the exposure of species to climate change. Here, we investigate whether these two approaches provide qualitatively similar indications about where biodiversity is potentially most exposed to climate change. Location: Sub-Saharan Africa. Methods: We compared a range of climate change metrics… in the position of climatically suitable areas (models) greater for species in grid cells with climates projected to move farther in space (metrics)? Results: The changes in climatic suitability projected by the bioclimatic envelope models covaried with the climatic changes measured with the metrics. Agreement…

  1. THE EFFECT OF THE MAKE A MATCH COOPERATIVE LEARNING MODEL ASSISTED BY SLIDE SHARE ON SOCIAL SCIENCE (IPS) COGNITIVE LEARNING OUTCOMES AND SOCIAL SKILLS

    Directory of Open Access Journals (Sweden)

    Udin Cahya Ari Prastya

    2016-08-01

    This research was conducted in response to problems faced by the fifth graders of Ampelgading 01 Public Elementary School, who find it difficult to understand the social science (IPS) subject, as indicated by their learning outcomes: only 5% of the class passed the minimum passing criterion of 70. Teacher-centred learning reduces interaction between teachers and students and among students, which is related to the development of social skills. An interactive learning model is therefore needed to build a good classroom atmosphere and improve student interaction; one such model is Make a Match. This research used a quantitative, quasi-experimental method with a nonequivalent control group design, and an independent t-test (run in SPSS 16) was used for the data analysis. Following the treatment in the experimental class with the cooperative Make a Match model assisted by slide share, the average posttest grade of the control group was 66.15 while the experimental class averaged 75.18; the control class obtained an average social skills score of 45 against 61 for the experimental class. The t-tests on cognitive learning, measured from the gain score between pretest and posttest, and on social skills both gave significance values of 0.000. Since 0.000 < 0.05, the cooperative Make a Match model assisted by slide share has an effect on social science cognitive learning outcomes and social skills.

  2. 2D hybrid analysis: Approach for building three-dimensional atomic model by electron microscopy image matching.

    Science.gov (United States)

    Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji

    2017-03-23

    In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by the 3DEM approach. In the proposed approach, many atomic models with different conformations are first built by computer simulation. Simulated EM images are then generated from each atomic model and compared with the experimental EM image. Two kinds of models are used to simulate the EM images: the negative-stain model and the simple projection model. Although the former is more realistic, the latter is adopted for faster computation. The use of the negative-stain model enables decomposition of the averaged EM images into multiple projection images, each of which originates from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.
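
    To make the "simple projection model" concrete, here is a minimal sketch (assumed details throughout: a fixed projection axis, a hypothetical 64x64 grid with 2 Angstrom pixels, and synthetic coordinates; the paper's negative-stain model would additionally simulate stain effects) that projects an atomic model to a 2D image and scores it against an experimental image by normalized cross-correlation:

```python
import numpy as np

def simple_projection(coords, shape=(64, 64), pixel=2.0):
    """Project 3D atomic coordinates (in Angstrom) along z onto a 2D
    grid: a stand-in for the 'simple projection model' used for fast
    screening."""
    img = np.zeros(shape)
    ij = (coords[:, :2] / pixel + np.array(shape) / 2.0).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < np.array(shape)).all(axis=1)
    np.add.at(img, (ij[ok, 1], ij[ok, 0]), 1.0)
    return img

def ncc(a, b):
    """Normalized cross-correlation score between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Among candidate conformations, keep the one best explaining the image.
rng = np.random.default_rng(7)
model = rng.normal(0.0, 30.0, size=(500, 3))    # hypothetical atom coords
em_image = simple_projection(model)             # stand-in for real data
print(ncc(simple_projection(model), em_image))  # 1.0 by construction
```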

  3. Addressing diverse learner preferences and intelligences with emerging technologies: Matching models to online opportunities

    Directory of Open Access Journals (Sweden)

    Ke Zhang

    2009-03-01

    Full Text Available This paper critically reviews various learning-preference and human-intelligence theories and models, with a particular focus on their implications for online learning. It highlights a few key models (Gardner's multiple intelligences, Fleming and Mills' VARK model, Honey and Mumford's Learning Styles, and Kolb's Experiential Learning Model) and attempts to link them to trends and opportunities in online learning with emerging technologies. By intersecting such models with online technologies, it offers instructors and instructional designers across educational sectors and situations new ways to think about addressing diverse learner needs, backgrounds, and expectations. Learning technologies are important for effective teaching, as are theories and models of learning. We argue that far greater power can be derived from connections between the theories, models, and learning technologies.

  4. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and the 'calibrated model' is then used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction; 2) closure on the 'true' input parameter values; and 3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  5. Multi-Objective History Matching with a Proxy Model for the Characterization of Production Performances at the Shale Gas Reservoir

    Directory of Open Access Journals (Sweden)

    Jaejun Kim

    2017-04-01

    Full Text Available This paper presents a fast, reliable multi-objective history-matching method based on proxy modeling to forecast the production performances of shale gas reservoirs, in which all available post-hydraulic-fracturing production data, i.e., the daily gas rate and the cumulative production volume to date, are honored. The developed workflow consists of distance-based generalized sensitivity analysis (DGSA) to determine the spatiotemporal parameter significance, the fast marching method (FMM) as a proxy model, and a multi-objective evolutionary algorithm to integrate the dynamic data. The model validation confirms that the FMM is a sound surrogate model, working within an error of approximately 2% for the estimated ultimate recovery (EUR), and it is 11 times faster than a full-reservoir simulation. The predictive accuracy for future production after matching 1.5 years of production history is assessed to examine the applicability of the proposed method. The DGSA determines the effective parameters with respect to the gas rate and the cumulative volume, including fracture permeability, fracture half-length, enhanced permeability in the stimulated reservoir volume, and average post-fracturing porosity. A comparison of prediction accuracy with single-objective optimization shows that the proposed method estimates the recoverable volume as well as the production profiles to within an error of 0.5%, while the single-objective approach suffers from a scale-dependency problem and lower accuracy. The results of this study help avoid the time-consuming effort of combining a multi-objective evolutionary algorithm with full-scale reservoir simulation, and enable a more realistic prediction of shale gas reserves and the corresponding production performances.

  6. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    DEFF Research Database (Denmark)

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan

    2002-01-01

    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM, and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales and marketing, global engineering, and customer relationship management. The reference models are the basis for the development of ICT infrastructure requirements. These in turn can be used for ICT infrastructure specification (sometimes referred to as 'ICT architecture'). Part of the ICT architecture is industry-wide, part of it is industry-specific, and a part is specific to the domains of the joint activity that characterises the given Virtual Enterprise Network at hand. The article advocates a step-by-step approach to building virtual enterprise capability.

  7. The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies

    DEFF Research Database (Denmark)

    Scheike, Thomas; Martinussen, T; Zhang, MJ

    2011-01-01

    ...leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach, where one would use the expectation-maximization (EM) algorithm, cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator whose large sample properties can be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.

  8. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. We demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  9. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures... a bound of t_u · t_q = Ω(lg^(d-1) n). For ball range searching, we get a lower bound of t_u · t_q = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds...

  10. Model and Simulation of a Tunable Birefringent Fiber Using Capillaries Filled with Liquid Ethanol for Magnetic Quasiphase Matching In-Fiber Isolator

    Directory of Open Access Journals (Sweden)

    Clint Zeringue

    2010-01-01

    Full Text Available A technique to tune a magnetic quasi-phase matching in-fiber isolator through the application of stress induced by two mutually orthogonal capillary tubes filled with liquid ethanol is investigated numerically. The results show that it is possible to “tune” the birefringence in these fibers over a limited range depending on the temperature at which the ethanol is loaded into the capillaries. Over this tuning range, the thermal sensitivity of the birefringence is an order-of-magnitude lower than conventional fibers, making this technique well suited for magnetic quasi-phase matching.

  11. When growth and photosynthesis don't match: implications for carbon balance models

    Science.gov (United States)

    Medlyn, B.; Mahmud, K.; Duursma, R.; Pfautsch, S.; Campany, C.

    2017-12-01

    Most models of terrestrial plant growth are based on the principle of carbon balance: that growth can be predicted from net uptake of carbon via photosynthesis. A key criticism leveled at these models by plant physiologists is that there are many circumstances in which plant growth appears to be independent of photosynthesis: for example, during the onset of drought, or with rising atmospheric CO2 concentration. A crucial problem for terrestrial carbon cycle models is to develop better representations of plant carbon balance when there is a mismatch between growth and photosynthesis. Here we present two studies providing insight into this mismatch. In the first, effects of root restriction on plant growth were examined by comparing Eucalyptus tereticornis seedlings growing in containers of varying sizes with freely-rooted seedlings. Root restriction caused a reduction in photosynthesis, but this reduction was insufficient to explain the even larger reduction observed in growth. We applied data assimilation to a simple carbon balance model to quantify the response of carbon balance as a whole in this experiment. We inferred that, in addition to photosynthesis, there are significant effects of root restriction on growth respiration, carbon allocation, and carbohydrate utilization. The second study was carried out at the EucFACE Free-Air CO2 Enrichment experiment. At this experiment, photosynthesis of the overstorey trees is increased with enriched CO2, but there is no significant effect on above-ground productivity. These mature trees have reached their maximum height but are at significant risk of canopy loss through disturbance, and we hypothesized that additional carbon taken up through photosynthesis is preferentially allocated to storage rather than growth. We tested this hypothesis by measuring stemwood non-structural carbohydrates (NSC) during a psyllid outbreak that completely defoliated the canopy in 2015. There was a significant drawdown of NSC during...

  12. Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling

    Science.gov (United States)

    Li, Gang; Han, Bo

    2017-09-01

    For the traditional framework of EM modeling algorithms, the Dirichlet boundary is usually used, which assumes the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can become larger toward the boundaries as the electromagnetic wave propagates diffusively, a large modeling area may still be necessary to mitigate the boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area of interest to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using a staggered finite-difference discretization. This modeling algorithm using the CFS-PML is highly accurate, and shows advantages in computational time and memory over the Dirichlet boundary. For 3D problems, the savings in computation time and memory should be even more significant.
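
    For reference, CFS-PML absorption is implemented through complex coordinate stretching. A standard form of the stretching factor from the general PML literature (conventions vary between authors and may differ in detail from this paper's definition) is:

```latex
s_x(\omega) = \kappa_x + \frac{\sigma_x}{\alpha_x + \mathrm{i}\,\omega\,\varepsilon_0},
\qquad
\frac{\partial}{\partial x} \;\longrightarrow\; \frac{1}{s_x(\omega)}\,\frac{\partial}{\partial x}
```

    Here kappa_x >= 1 stretches the real coordinate, sigma_x controls absorption within the layer, and the frequency-shift parameter alpha_x > 0 moves the pole off the real axis, improving the treatment of low-frequency and evanescent energy; this shift is what distinguishes the CFS variant from the classical PML.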

  13. Optimal Packed String Matching

    DEFF Research Database (Denmark)

    Ben-Kiki, Oren; Bille, Philip; Breslauer, Dany

    2011-01-01

    In the packed string matching problem, each machine word accommodates α characters, thus an n-character text occupies n/α memory words. We extend the Crochemore-Perrin constant-space O(n)-time string matching algorithm to run in optimal O(n/α) time and even in real-time, achieving a factor-α speedup over traditional algorithms that examine each character individually. Our solution can be efficiently implemented, unlike prior theoretical packed string matching work. We adapt the standard RAM model and only use its AC0 instructions (i.e., no multiplication) plus two specialized AC0 packed string...
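
    As a toy illustration of why packing pays off (this is not the Crochemore-Perrin extension from the paper; Python and numpy are used only to mimic word-level comparisons, and all names are hypothetical), the following sketch compares eight characters per 64-bit word instead of one character at a time:

```python
import numpy as np

def packed_find(text: bytes, pattern: bytes) -> int:
    """Naive packed search: compare the pattern against each alignment
    word-by-word (8 chars per uint64) instead of char-by-char. This only
    illustrates the packed-words idea; the paper's algorithm achieves
    optimal O(n/alpha) total time in constant space."""
    n, m = len(text), len(pattern)
    head = m - m % 8                       # length of the whole-word part
    pat = np.frombuffer(pattern[:head], dtype=np.uint64)
    tail = pattern[head:]
    for i in range(n - m + 1):
        window = text[i : i + m]
        words = np.frombuffer(window[:head], dtype=np.uint64)
        if np.array_equal(words, pat) and window[head:] == tail:
            return i
    return -1

assert packed_find(b"abracadabra-abracadabrax", b"abracadabrax") == 12
```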

  14. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods.

  15. Producer-decomposer matching in a simple model ecosystem: A network coevolutionary approach to ecosystem organization

    International Nuclear Information System (INIS)

    Higashi, Masahiko; Yamamura, Norio; Nakajima, Hisao; Abe, Takuya

    1993-01-01

    The present note is concerned with how the ecosystem maintains its energy and matter processes and how those processes change throughout ecological and geological time, or how the constituent biota of an ecosystem maintain their life and how ecological (species) succession and biological evolution proceed within an ecosystem. To advance further Tansky's (1976) approach to ecosystem organization, which investigated the characteristic properties of the developmental process of a model ecosystem by applying Margalef's (1968) maximum maturity principle to derive its long-term change, we seek a course for deriving the macroscopic trends along the organization process of an ecosystem as a consequence of the interactions among its biotic components and their modification of ecological traits. Using a simple ecosystem model consisting of four aggregated components ("compartments") connected by nutrient flows, we investigate how a change in the value of a parameter alters the network pattern of flows and stocks, even causing a change in the value of another parameter, which in turn brings about further change in the network pattern and values of some (possibly original) parameters. The continuation of this chain reaction involving feedbacks constitutes a possible mechanism for the "coevolution" or "matching" among flows, stocks, and parameters.

  16. Empirical model for matching spectrophotometric reflectance of yarn windings and multispectral imaging reflectance of single strands of yarns.

    Science.gov (United States)

    Luo, Lin; Shen, Hui-Liang; Shao, Si-Jie; Xin, John

    2015-08-01

    The state-of-the-art multispectral imaging system can directly acquire the reflectance of a single strand of yarn, which is impossible for traditional spectrophotometers. Instead, the spectrophotometric reflectance of a yarn winding, constituted by yarns wound on a background card, is traditionally regarded as the yarn reflectance in the textile industry. While multispectral imaging systems and spectrophotometers can be used separately to acquire the reflectance of a single strand of yarn and of the corresponding yarn winding, the quantitative relationship between the two is not yet known. In this paper, the relationship is established based on models that describe the spectral response of a spectrophotometer to a yarn winding and that of a multispectral imaging system to a single strand of yarn. The reflectance matching function from a single strand of yarn to the corresponding yarn winding is derived to be a second-degree polynomial function, whose coefficients are the solutions of a constrained nonlinear optimization problem. Experiments on 100 pairs of samples show that the proposed approach can reduce the color difference between yarn windings and single strands of yarns from 2.449 to 1.082 CIEDE2000 units. The coefficients of the optimal reflectance matching function imply that the reflectance of a yarn winding measured by a spectrophotometer consists of not only the intrinsic reflectance of the yarn but also a nonignorable interreflection component between yarns.
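
    A minimal sketch of the fitting step, under assumptions: synthetic reflectance data standing in for the measured pairs, a sum-of-squares objective, and a simple plausibility constraint keeping predicted reflectance in [0, 1] (the paper's actual constraints are not spelled out in this abstract):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-ins for measured data: reflectance of single strands
# and of the corresponding windings at 31 wavelengths, 100 sample pairs.
rng = np.random.default_rng(1)
r_single = rng.uniform(0.1, 0.8, size=(100, 31))
r_winding = 0.02 + 0.85 * r_single + 0.10 * r_single**2  # synthetic truth

def sse(c):
    a, b, d = c
    pred = a * r_single**2 + b * r_single + d  # second-degree matching fn
    return np.sum((pred - r_winding) ** 2)

# Assumed plausibility constraints: predictions stay in [0, 1] over the
# physical reflectance range.
cons = [{"type": "ineq", "fun": lambda c: 1.0 - (c[0] + c[1] + c[2])},
        {"type": "ineq", "fun": lambda c: c[2]}]
res = minimize(sse, x0=[0.0, 1.0, 0.0], constraints=cons)
a, b, d = res.x   # recovers approximately (0.10, 0.85, 0.02)
```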

  17. Decision Support Model for User Submission Approval Energy Partners Candidate Using Profile Matching Method and Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Moedjiono Moedjiono

    2016-11-01

    Full Text Available In the service sector, customer satisfaction is a critical factor that determines the success of an enterprise. In the outsourcing business, the indicator of customer satisfaction is delivering the required labor in a timely manner and with a level of quality that meets the terms proposed by the customer. To provide the best talent to customers, the recruitment and selection team must perform a series of tests with a variety of methods to match the job criteria given by the user against the criteria of the candidates, and to increase candidate pass rates at the user-approval stage. For this purpose, the authors conducted a study using observation, interviews, and document reviews of the candidate recruitment process, so as to recommend the highest-quality candidates for delivery to the user at the approval stage. The authors put forward a decision support model based on the profile matching method and the Analytical Hierarchy Process (AHP). The final results of this study can be used to support decisions that improve the effectiveness of delivering quality candidates, increase customer satisfaction, lower costs, and improve the gross operational margin of the company.
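
    To make the profile matching half concrete, here is a minimal sketch. The gap-to-weight table is one commonly used in profile matching tutorials and is an assumption here, as are the core/secondary factor weights, which in the paper's setting could come from AHP:

```python
# Gap-to-weight table (an assumption; the paper does not publish its
# exact table): gap = candidate level minus target level.
GAP_WEIGHT = {0: 5.0, 1: 4.5, -1: 4.0, 2: 3.5, -2: 3.0, 3: 2.5, -3: 2.0,
              4: 1.5, -4: 1.0}

def profile_match(candidate, target, core, w_core=0.6, w_secondary=0.4):
    """Score a candidate against a job profile.
    candidate, target : dicts criterion -> level (1..5)
    core              : set of core-factor criteria; the rest are secondary
    w_core, w_secondary could be derived from AHP pairwise comparisons.
    """
    weight = {k: GAP_WEIGHT[candidate[k] - target[k]] for k in target}
    core_avg = sum(weight[k] for k in core) / len(core)
    secondary = [k for k in target if k not in core]
    sec_avg = sum(weight[k] for k in secondary) / len(secondary)
    return w_core * core_avg + w_secondary * sec_avg

target = {"skill": 4, "attitude": 3, "experience": 3, "teamwork": 4}
cand = {"skill": 5, "attitude": 3, "experience": 2, "teamwork": 4}
print(profile_match(cand, target, core={"skill", "experience"}))  # 4.55
```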

  18. Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis

    Directory of Open Access Journals (Sweden)

    Benn eMacdonald

    2015-11-01

    Full Text Available Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
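
    The two-step idea is compact enough to sketch. The following minimal example (logistic growth as an assumed toy system, a smoothing spline as the interpolant, and a plain sum-of-squares metric) estimates ODE parameters without ever integrating the ODE:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

# Toy system: logistic growth dx/dt = r*x*(1 - x/K), observed with noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 40)
x_true = 10.0 / (1.0 + 9.0 * np.exp(-0.8 * t))      # r = 0.8, K = 10
x_obs = x_true + rng.normal(0.0, 0.2, t.size)

# Step 1: smooth the data once; the ODE is never solved numerically.
spline = UnivariateSpline(t, x_obs, s=2.0)
x_hat = spline(t)
dxdt_hat = spline.derivative()(t)

# Step 2: tune the parameters so the ODE right-hand side matches the
# slopes of the interpolant.
def mismatch(theta):
    r, K = theta
    return np.sum((dxdt_hat - r * x_hat * (1.0 - x_hat / K)) ** 2)

theta_hat = minimize(mismatch, x0=[0.5, 5.0]).x
print(theta_hat)   # close to (0.8, 10)
```

    The sum-of-squares metric is only one choice; the methods reviewed in the paper differ mainly in the interpolant used and in how this mismatch metric is defined and regularized.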

  19. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    Science.gov (United States)

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, is an important measure to determine the quantity of food that is produced, where it is produced, and when it is produced (e.g., seasonality). Furthermore, croplands are known as water guzzlers, accounting for anywhere between 70% and 90% of all human water use globally. Given these facts and the increase in global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping year-after-year and/or season-after-season is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year-after-year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at the continental level and (b) a rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands, rich history of agriculture, and yet nonexistent routine yearly cropland products generated using multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014), produced using QSMT, was used as a knowledge base to train and develop the ACCA algorithm, which was then applied to the MODIS time-series data for the years 2000-2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) and RCP2014 gave an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer's accuracies varying...
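
    As an illustration of the quantitative spectral matching idea (a minimal sketch, not the paper's exact QSMT: the similarity measure combining time-series shape and magnitude, the class reference profiles, and the NDVI values are all assumptions):

```python
import numpy as np

def spectral_similarity(pixel_ts, ref_ts):
    """Similarity between two NDVI time series combining shape
    (correlation) and magnitude (RMSE); lower is more similar.
    This form is assumed for illustration, not quoted from the paper."""
    corr = np.corrcoef(pixel_ts, ref_ts)[0, 1]
    rmse = np.sqrt(np.mean((pixel_ts - ref_ts) ** 2))
    return np.sqrt(rmse**2 + (1.0 - corr**2) ** 2)

# Assign a pixel to the closest class reference time series
# (hypothetical 5-date NDVI profiles).
refs = {"irrigated": np.array([0.30, 0.50, 0.80, 0.70, 0.40]),
        "rainfed":   np.array([0.20, 0.40, 0.50, 0.30, 0.20]),
        "fallow":    np.array([0.15, 0.20, 0.20, 0.20, 0.15])}
pixel = np.array([0.25, 0.45, 0.75, 0.65, 0.35])
label = min(refs, key=lambda k: spectral_similarity(pixel, refs[k]))
print(label)  # "irrigated"
```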

  20. Three-dimensional analysis of accuracy of patient-matched instrumentation in total knee arthroplasty: Evaluation of intraoperative techniques and postoperative alignment.

    Science.gov (United States)

    Kuwashima, Umito; Mizu-Uchi, Hideki; Okazaki, Ken; Hamai, Satoshi; Akasaki, Yukio; Murakami, Koji; Nakashima, Yasuharu

    2017-11-01

    The accuracy of patient-matched instrumentation (PMI) remains controversial, even though many surgeons follow the manufacturers' recommendations. The purpose of this study was to evaluate the accuracy of intraoperative procedures and the postoperative alignment of the femoral side using PMI with three-dimensional (3D) analysis. Eighteen knees that underwent total knee arthroplasty using MRI-based PMI were assessed. Intraoperative alignment and bone resection errors of the femoral side were evaluated with a CT-based navigation system. A conventional adjustable guide was used to compare cartilage data with those derived by PMI intraoperatively. Postoperative alignment was assessed using a 3D coordinate system with computer-assisted design software. We also measured the postoperative alignments obtained using conventional alignment guides with the 3D evaluation. Intraoperative coronal alignment with PMI was 90.9° ± 1.6°. Seventeen knees (94.4%) were within 3° of the optimal alignment. Intraoperative rotational alignment of the femoral guide position of PMI was 0.2° ± 1.6° compared with the adjustable guide, with 17 knees (94.4%) differing by 3° or less between the two methods. Maximum differences in coronal and rotational alignment before and after bone cutting were 2.0° and 2.8°, respectively. Postoperative coronal and rotational alignments were 89.4° ± 1.8° and -1.1° ± 1.3°, respectively. In both alignments, 94.4% of cases were within 3° of the optimal value. The PMI group had fewer outliers than the conventional group in rotational alignment (p = 0.018). Our 3D analysis provided evidence that the PMI system resulted in reasonably satisfactory alignments both intraoperatively and postoperatively. Surgeons should be aware that certain surgical techniques, including bone cutting, and the associated errors may affect postoperative alignment despite accurate PMI positioning. Copyright © 2017 The Japanese Orthopaedic Association. Published by...

  1. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed rather than by evaluating a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  2. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  3. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can easily be applied to both manufacturing and service industries.

  4. Review of pattern matching approaches

    DEFF Research Database (Denmark)

    Manfaat, D.; Duffy, Alex; Lee, B. S.

    1996-01-01

    This paper presents a review of pattern matching techniques. The application areas for pattern matching are extensive, ranging from CAD systems to chemical analysis and from manufacturing to image processing. Published techniques and methods are classified and assessed within the context of three key issues: pattern classes, similarity types and matching methods. It has been shown that the techniques and approaches are as diverse and varied as the applications.

  5. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  6. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  7. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual, drawn from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
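
    For orientation, the linear orthogonal matching pursuit core that NOMP generalizes looks as follows (a minimal numpy sketch with synthetic data; NOMP replaces the linear residual with a nonlinear forward model, stochastically approximated gradients from ISEM, and Tikhonov regularization):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Linear orthogonal matching pursuit: greedily grow the support by
    the dictionary column most correlated with the residual, then re-fit
    the coefficients by least squares on the selected columns."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.normal(size=(50, 200))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x_true = np.zeros(200)
x_true[[5, 40, 120]] = [1.0, -2.0, 0.5]        # sparse ground truth
x_hat = omp(D, D @ x_true, n_nonzero=3)        # recovers the support
```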

  8. Effects of Peer Modelling Technique in Reducing Substance Abuse ...

    African Journals Online (AJOL)

    The study investigated the effects of peer modelling techniques in reducing substance abuse among undergraduates in Nigeria. The participants were one hundred and twenty (120) undergraduate students in 100 and 400 levels respectively. There are two groups: one treatment group and one control group.

  9. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for this finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive-level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  10. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatially, µm-mm-m, and temporally, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analyses by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
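
    A minimal sketch of the surrogate idea (assumed stand-ins throughout: a toy "expensive" function instead of a thermal-hydraulic code, a Gaussian process regressor as the reduced order model, and an arbitrary 1477 K limit):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in for an expensive simulation code: peak clad temperature (K)
# as a function of two normalized scenario parameters (illustrative only).
def expensive_sim(x):
    return 1200 + 300 * np.sin(3 * x[:, 0]) * x[:, 1]

rng = np.random.default_rng(4)
X_train = rng.uniform(0, 1, size=(40, 2))      # 40 costly code runs
y_train = expensive_sim(X_train)

# Train the surrogate once on the expensive runs...
surrogate = GaussianProcessRegressor(kernel=RBF(0.3),
                                     normalize_y=True).fit(X_train, y_train)

# ...then query it cheaply, e.g. a Monte Carlo failure-probability
# estimate against an assumed 1477 K limit.
X_mc = rng.uniform(0, 1, size=(100_000, 2))
p_fail = np.mean(surrogate.predict(X_mc) > 1477)
```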

  11. EMBuilder: A Template Matching-based Automatic Model-building Program for High-resolution Cryo-Electron Microscopy Maps.

    Science.gov (United States)

    Zhou, Niyun; Wang, Hongwei; Wang, Jiawei

    2017-06-01

    The resolution of electron-potential maps in single-particle cryo-electron microscopy (cryoEM) is approaching atomic or near-atomic resolution. However, no program currently exists for de novo cryoEM model building at resolutions beyond 3.5 Å. Here, we present a program, EMBuilder, based on template matching, to generate cryoEM models at high resolution. The program identifies features in two stages: a secondary-structure stage and a Cα stage. In the secondary-structure stage, helices and strands are identified with pre-computed templates, and the voxel size of the entire map is then refined to account for microscope magnification errors. The identified secondary structures are then extended from both ends in the Cα stage via a log-likelihood (LLK) target function, and if possible, the side chains are also assigned. The program can build models of large proteins (~1 MDa) in a reasonable amount of time (~1 day) and thus has the potential to greatly decrease the manual workload required for model building in high-resolution cryoEM maps.

  12. The Galaxies Hubble Sequence Through CosmicTimes: Applying Parameter Optimization And Constraints From The Abundance Matching Technique To The 'Next Generation' of Large Cosmological Simulations.

    Science.gov (United States)

    Governato, Fabio

    The physical processes shaping the galaxies' 'Hubble Sequence' are still poorly understood. Are gas outflows generated by supernovae the main mechanism responsible for regulating star formation and establishing the stellar mass-metallicity relation? What fraction of stars now in spheroids originated in mergers? How does the environment of groups and clusters affect the evolution of galaxy satellites? The PI will study these problems by analyzing a new set of state-of-the-art hydro simulations of uniform cosmological volumes. This project has already been awarded a computational budget of 200 million CPU hours (but has only limited seed funding for science, hence this proposal). The best simulations will match the force and spatial resolution of the current best 'zoomed-in' runs, such as 'Eris', and will yield the first large statistical sample (1500+) of internally resolved galaxy systems with stellar masses ranging from 10^7 to 10^10.5 solar masses. These simulations will allow us, for the very first time on such a large statistical set, to fully map the thermodynamical history of the baryons of internally resolved galaxies and identify the relative importance of the processes that shape their evolution as a function of stellar mass and cosmic time. As a novel, significant improvement over previous works, we will introduce a new, unbiased statistical approach to the exploration of parameter space to optimize the model for star formation (SF) and feedback from supernovae and supermassive black holes. This approach will also be used to evaluate the effects of resolution. The simulations will be run using ChaNGa, an improved version of Gasoline. Our flagship run will model a large volume of space (15.6k cubic Mpc) using 25 billion resolution elements. ChaNGa currently scales up to 35,000 cores and includes a new version of the SPH implementation that drastically improves the description of temperature/density discontinuities and Kelvin-Helmholtz instabilities (and...

  13. Using crosswell data to enhance history matching

    KAUST Repository

    Ravanelli, Fabio M.

    2014-01-01

    One of the most challenging tasks in the oil industry is the production of reliable reservoir forecast models. Due to different sources of uncertainties in the numerical models and inputs, reservoir simulations are often only crude approximations of reality. This problem is mitigated by conditioning the model with data through data assimilation, a process known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use of time-lapse data and advanced data assimilation techniques. One of the most promising data assimilation techniques employed in the industry is the ensemble Kalman filter (EnKF) because of its ability to deal with non-linear models at reasonable computational cost. In this paper we study the use of crosswell seismic data as an alternative to 4D seismic surveys in areas where it is not possible to re-shoot seismic. A synthetic reservoir model is used in a history matching study designed to better estimate porosity and permeability distributions and improve the quality of the model for predicting future field performance. This study is divided into three parts: first, the use of production data only is evaluated (the baseline for the benchmark). Second, the benefits of using production and 4D seismic data are assessed. Finally, a new conceptual idea is proposed to obtain time-lapse information for history matching. The use of crosswell time-lapse seismic tomography to map velocities in the interwell region is demonstrated as a potential tool to ensure survey reproducibility and low acquisition cost compared with full-scale surface surveys. Our numerical simulations show that the proposed method provides promising history matching results, leading to estimation error reductions similar to those obtained with conventionally history-matched surface seismic data.
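
    For reference, the EnKF analysis step at the heart of such a history matching loop can be sketched in a few lines of numpy (a textbook perturbed-observation variant; the shapes, and the use of production rates or crosswell traveltimes as the data vector, are assumptions):

```python
import numpy as np

def enkf_update(X, d_obs, d_pred, sigma_obs, seed=5):
    """One perturbed-observation EnKF analysis step.
    X         : (n_state, n_ens) ensemble of parameters (e.g. log-perm)
    d_obs     : (n_obs,)         observed data (rates, traveltimes, ...)
    d_pred    : (n_obs, n_ens)   simulated data, one column per member
    sigma_obs : (n_obs,)         observation error standard deviations
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    D = d_pred - d_pred.mean(axis=1, keepdims=True)  # data anomalies
    C_dd = D @ D.T / (n_ens - 1) + np.diag(sigma_obs**2)
    C_xd = A @ D.T / (n_ens - 1)
    K = C_xd @ np.linalg.inv(C_dd)                   # Kalman gain
    rng = np.random.default_rng(seed)
    # Perturb the observations so the updated ensemble keeps its spread.
    d_pert = d_obs[:, None] + rng.normal(0.0, sigma_obs[:, None],
                                         size=d_pred.shape)
    return X + K @ (d_pert - d_pred)
```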

  14. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions that transmit signals through the Earth's upper atmosphere, the ionosphere, makes precise, reliable, and near real-time models of the ionospheric parameters ever more crucial. In the last decades, space geodetic techniques have turned into a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others, have found several applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, using various space geodetic techniques and applying a combination procedure to compute a global model of electron density. To model the ionosphere in 3D, the electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions with degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are detected and used, but the STEC values are obtained through a simulation procedure using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure, and moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/COSMIC data.

  15. Matchings with Externalities and Attitudes

    DEFF Research Database (Denmark)

    Branzei, Simina; Michalak, Tomasz; Rahwan, Talal

    2013-01-01

    Two-sided matchings are an important theoretical tool used to model markets and social interactions. In many real-life problems the utility of an agent is influenced not only by their own choices, but also by the choices that other agents make. Such an influence is called an externality. Whereas fully expressive representations of externalities in matchings require exponential space, in this paper we propose a compact model of externalities, in which the influence of a match on each agent is computed additively. Under this framework, we analyze many-to-many matchings and one-to-one matchings...

  16. Modelled hydraulic redistribution by sunflower (Helianthus annuus L.) matches observed data only after including night-time transpiration.

    Science.gov (United States)

    Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele

    2014-04-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.

  17. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
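
    The basic construction can be sketched as follows (a simplified Gaussian-multiplier stand-in for the paper's construction; the misspecified-fit example is hypothetical):

```python
import numpy as np

def cumres_check(residuals, covariate, n_sim=1000, seed=6):
    """Cumulative sum of residuals over an ordering covariate, plus
    Gaussian-multiplier realizations approximating its null distribution
    (a simplified stand-in for the paper's construction)."""
    order = np.argsort(covariate)
    r = residuals[order]
    n = len(r)
    W_obs = np.cumsum(r) / np.sqrt(n)               # observed process
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(n_sim, n))                 # multiplier variables
    W_sim = np.cumsum(G * r, axis=1) / np.sqrt(n)   # null realizations
    # Supremum test: fraction of simulated paths wilder than the observed.
    p_value = np.mean(np.abs(W_sim).max(axis=1) >= np.abs(W_obs).max())
    return W_obs, p_value

# A linear fit to quadratic data: the cumulative-residual process should
# show a systematic trend and a small p-value.
x = np.linspace(0, 1, 200)
y = 2 * x**2 + np.random.default_rng(7).normal(0, 0.1, 200)
resid = y - np.polyval(np.polyfit(x, y, 1), x)
_, p = cumres_check(resid, x)   # small p flags misspecification
```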

  18. Image-guided depth propagation for 2-D-to-3-D video conversion using superpixel matching and adaptive autoregressive model

    Science.gov (United States)

    Cai, Jiji; Jung, Cheolkon

    2017-09-01

    We propose image-guided depth propagation for two-dimensional (2-D)-to-three-dimensional (3-D) video conversion using superpixel matching and the adaptive autoregressive (AR) model. We adopt key-frame-based depth propagation that propagates the depth map in the key frame to nonkey frames. Moreover, we use the adaptive AR model for depth refinement to penalize depth-color inconsistency. First, we perform superpixel matching to estimate motion vectors at the superpixel level instead of block matching with a fixed block size. Then, we conduct depth compensation based on the motion vectors to generate the depth map in the nonkey frame. However, the sizes of two matched superpixels are not exactly the same due to the segment-based matching, which causes matching errors in the compensated depth map. Thus, we introduce an adaptive image-guided AR model to minimize matching errors and produce the final depth map by minimizing AR prediction errors. Finally, we employ depth-image-based rendering to generate stereoscopic views from 2-D videos and their depth maps. Experimental results demonstrate that the proposed method successfully performs depth propagation and produces high-quality depth maps for 2-D-to-3-D video conversion.

  19. Advanced techniques in reliability model representation and solution

    Science.gov (United States)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  20. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
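
    The IPF step itself is short enough to sketch (a minimal version with hypothetical age ranges, preference function, and one-sex marginals; the paper's Picrate-based marginals and availability factor are not reproduced here):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, n_iter=200):
    """Iterative Proportional Fitting: alternately rescale the rows and
    columns of a seed matrix (here an age-preference matrix) until its
    marginals match the one-sex age distributions of new marriages."""
    M = seed.astype(float).copy()
    for _ in range(n_iter):
        M *= (row_targets / M.sum(axis=1))[:, None]   # fit male marginal
        M *= col_targets / M.sum(axis=0)              # fit female marginal
    return M

ages = np.arange(15, 60)
# Hypothetical preference: husbands about 3 years older, spread 4 years.
pref = np.exp(-0.5 * ((ages[:, None] - ages[None, :] - 3) / 4) ** 2)
men = np.exp(-0.5 * ((ages - 27) / 6) ** 2)
men /= men.sum()
women = np.exp(-0.5 * ((ages - 24) / 6) ** 2)
women /= women.sum()
marriages = ipf(pref, men, women)   # two-sex age distribution, sums to 1
```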

  1. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145, entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'', and is an initial part of a program for estimating runoff from Central Anatolian watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied first to Guvenc basin and then to other basins of Central Anatolia for predicting the surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique to define the components of streamflow of Guvenc basin. 31 refs, figs and tabs

  2. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  3. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    OpenAIRE

    Frederico R. Romero; Claudemir Trapp; Michael Muntener; Fabio A. Brito; Louis R. Kavoussi; Thomas W. Jarrett

    2007-01-01

    OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbabl...

  4. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...

  5. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  6. A finite element model updating technique for adjustment of parameters near boundaries

    Science.gov (United States)

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged

  7. Fuzzy techniques for subjective workload-score modeling under uncertainties.

    Science.gov (United States)

    Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina

    2008-12-01

    This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. The identification of a model to estimate the subjective workload score of individuals under different workload situations is too ambitious a task because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to dealing with the uncertainties in a workload-modeling problem consists of the following steps: 1) the uncertainties arising due to individual variations in identifying a common model valid for all individuals are filtered out using a fuzzy filter; 2) stochastic modeling of the uncertainties (provided by the fuzzy filter) uses finite-mixture models and utilizes this information regarding uncertainties to identify the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and, finally, utilizes the information about uncertainties to adapt the workload model to an individual's physiological conditions. The approach of this paper, demonstrated with the real-world medical data of 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.

  8. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, several promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
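
    As a small illustration of why method choice matters, the sketch below applies standardized regression coefficients, a traditional screening method, to a toy nonlinear model with an interaction term; the model and sample size are invented, not Sandia's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    x1, x2, x3 = rng.uniform(-1, 1, (3, n))
    y = x1 + 0.5 * x2 ** 2 + x1 * x3   # toy model with an interaction term

    X = np.column_stack([x1, x2, x3])
    Xs = (X - X.mean(0)) / X.std(0)    # standardize inputs
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    print("standardized regression coefficients:", np.round(src, 3))
    # SRC misses the x2**2 and x1*x3 contributions -- exactly the gap that
    # global, interaction-aware methods (e.g., Sobol indices) fill.
    ```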

  9. Practical Techniques for Modeling Gas Turbine Engine Performance

    Science.gov (United States)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
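
    A minimal worked example of the Brayton-cycle relations such a model rests on: under air-standard assumptions, the ideal cycle's thermal efficiency depends only on the compressor pressure ratio and the specific-heat ratio. The numbers are illustrative, not taken from the paper's turbojet model.

    ```python
    gamma = 1.4    # ratio of specific heats for air
    r_p = 12.0     # compressor pressure ratio (illustrative)

    # Ideal Brayton thermal efficiency: eta = 1 - r_p**(-(gamma - 1)/gamma)
    eta = 1.0 - r_p ** (-(gamma - 1.0) / gamma)
    print(f"ideal Brayton efficiency at r_p = {r_p}: {eta:.3f}")   # ~0.508
    ```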

  10. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods for effectively constraining the search for correct solutions of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.
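
    For concreteness, here is a minimal area-based correspondence search of the kind surveyed above: a sum-of-squared-differences scan along the same row of a rectified image pair. The window size and disparity range are arbitrary choices.

    ```python
    import numpy as np

    def ssd_disparity(left, right, y, x, half=3, max_disp=32):
        """Disparity minimizing SSD for the window centred at (y, x);
        assumes rectified grayscale images, so the search stays on row y."""
        patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        best_d, best_cost = 0, np.inf
        for d in range(min(max_disp, x - half) + 1):
            cand = right[y - half:y + half + 1,
                         x - d - half:x - d + half + 1].astype(float)
            cost = np.sum((patch - cand) ** 2)
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d
    ```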

  11. New techniques and models for assessing ischemic heart disease risks

    Directory of Open Access Journals (Sweden)

    I.N. Yakovina

    2017-09-01

    Full Text Available The paper focuses on the tasks of creating and implementing a new technique for assessing ischemic heart disease risk. The technique is based on a laboratory-diagnostic complex which includes oxidative, lipid-lipoprotein, inflammatory and metabolic biochemical parameters; a system of logic-mathematic models used for obtaining numeric risk assessments; and a program module which allows the results to be calculated and analyzed. We justified our models in the course of our research, which included 172 patients suffering from ischemic heart disease (IHD) combined with coronary atherosclerosis verified by coronary arteriography and 167 patients who didn't have ischemic heart disease. Our research program included demographic and social data, questioning on tobacco and alcohol addiction, questioning about dietary habits, chronic diseases case history and medications intake, cardiologic questioning as per Rose, anthropometry, blood pressure measured three times, spirometry, and electrocardiogram recording with decoding as per the Minnesota code. We detected the biochemical parameters of each patient and adjusted our task of creating techniques and models for assessing ischemic heart disease risks on the basis of inflammatory, oxidative, and lipid biological markers. We created a system of logic-mathematic models which is a universal scheme for processing laboratory parameters that allows for dissimilar data specificity. The system of models is universal, but the diagnostic approach to the applied biochemical parameters is specific. The created program module (calculator) helps a physician obtain a result on the basis of laboratory research data; the result characterizes the numeric risks of coronary atherosclerosis and ischemic heart disease for a patient. It also allows the physician to obtain a visual image of the system of parameters and their deviation from a conditional «standard – pathology» boundary. The complex is implemented into practice by the Scientific

  12. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    Science.gov (United States)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  13. Fractured reservoir history matching improved based on artificial intelligent

    Directory of Open Access Journals (Sweden)

    Sayyed Hadi Riazi

    2016-12-01

    Full Text Available In this paper, a new robust approach based on the Least Squares Support Vector Machine (LSSVM) as a proxy model is used for automatic fractured-reservoir history matching. The proxy model is built to model the history-match objective function (mismatch values) based on the history data of the field. This model is then used to minimize the objective function through Particle Swarm Optimization (PSO) and the Imperialist Competitive Algorithm (ICA). In automatic history matching, sensitivity analysis is often performed on the full simulation model. In this work, to get new ranges of the uncertain parameters (matching parameters) in which the objective function has a minimum value, sensitivity analysis is also performed on the proxy model. By applying the modified ranges to the optimization methods, optimization of the objective function is faster and the outputs of the optimization methods (matching parameters) are produced in less time and with high precision. This procedure leads to a history match of the field using a set of reservoir parameters. The final sets of parameters are then applied to the full simulation model to validate the technique. The obtained results show that the present procedure is effective for the history matching process due to its robust dependability and fast convergence speed. Owing to its high speed and need for only small data sets, LSSVM is the best tool to build a proxy model. The comparison of PSO and ICA also shows that PSO is less time-consuming and more effective.
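
    A compact sketch of the proxy-plus-PSO loop described above, with a simple quadratic standing in for the trained LSSVM mismatch surface; particle counts, coefficients, and parameter ranges are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def proxy_mismatch(x):                 # placeholder for the LSSVM proxy
        return np.sum((x - 0.3) ** 2, axis=-1)

    lo, hi = 0.0, 1.0                      # modified uncertain-parameter range
    n_part, n_dim, iters = 30, 4, 100
    x = rng.uniform(lo, hi, (n_part, n_dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), proxy_mismatch(x)
    gbest = pbest[np.argmin(pbest_f)]

    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = proxy_mismatch(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print("best matching parameters:", np.round(gbest, 3))
    ```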

  14. Using Medical History Embedded in Biometrics Medical Card for User Identity Authentication: Privacy Preserving Authentication Model by Features Matching

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2012-01-01

    Full Text Available Many forms of biometrics have been proposed and studied for biometric authentication. Recently, researchers have been looking into longitudinal pattern matching that is based on more than just a single biometric: data from a user's activities are used to characterise the identity of the user. In this paper we advocate a novel type of authentication using a user's medical history, which can be electronically stored in a biometric security card. This is a sequel to our previous work on defining an abstract format of medical data to be queried and tested upon authentication. The challenge to overcome is preserving the user's privacy by choosing only the useful features from the medical data for use in authentication. The features should contain less sensitive elements while being implicitly related to the target illness. Therefore, exchanging questions and answers about a few carefully chosen features in an open channel would not easily or directly expose the illness, yet it can verify by inference whether the user has a record of it stored in his smart card. The design of a privacy-preserving model by backward inference is introduced in this paper. Some live medical data are used in experiments for validation and demonstration.

  15. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold, the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  16. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of the research issues and challenges in designing spectrum-energy-efficient CCRN. In an effort to shed light on the design of spectrum-energy-efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy-efficient CCRN are analyzed.

  17. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique

  18. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  19. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  20. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Directory of Open Access Journals (Sweden)

    Yang-Cheng Lin

    2012-01-01

    Full Text Available How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  1. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Science.gov (United States)

    Mocellin, Simone; Shrager, Jeff; Scolyer, Richard; Pasquali, Sandro; Verdi, Daunia; Marincola, Francesco M; Briarava, Marta; Gobbel, Randy; Rossi, Carlo; Nitti, Donato

    2010-08-10

    The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method developed to fully exploit the available knowledge on cancer biology with the

  2. Valorisation of urban elements through 3D models generated from image matching point clouds and augmented reality visualization based in mobile platforms

    Science.gov (United States)

    Marques, Luís.; Roca Cladera, Josep; Tenedório, José António

    2017-10-01

    The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. Two main factors lie at the origin of this progress. First, image matching algorithms have been optimised, and the software supporting these techniques has been under constant development. Second, the emergent paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is extremely relevant for urbanists: it constitutes digital archives of urban elements and is especially useful for enriching maps and databases or reconstructing and analysing objects/areas through time, building and recreating scenarios, and implementing intuitive methods of interaction. These characteristics support, for example, greater public participation, creating a completely collaborative solution system for envisioning processes, simulations and results. This paper is organized in two main topics. The first deals with the modelling of data obtained from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality on mobile platforms, allowing users to understand the city's origins and their relation to the present urban morphology, (en)visioning solutions, processes and simulations, and making it possible for agents in several domains to ground their decisions (and understand them), achieving a faster and wider consensus.

  3. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    International Nuclear Information System (INIS)

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-01-01

    The reliability of and confidence in predictions from model simulations are crucial--these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is "known" and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods

  4. Modelling galaxy formation with multi-scale techniques

    International Nuclear Information System (INIS)

    Hobbs, A.

    2011-01-01

    Full text: Galaxy formation and evolution depends on a wide variety of physical processes - star formation, gas cooling, supernovae explosions and stellar winds etc. - that span an enormous range of physical scales. We present a novel technique for modelling such massively multiscale systems. This has two key new elements: Lagrangian resimulation, and convergent 'sub-grid' physics. The former allows us to home in on interesting simulation regions with very high resolution. The latter allows us to increase resolution for the physics that we can resolve, without unresolved physics spoiling convergence. We illustrate the power of our new approach by showing some new results for star formation in the Milky Way. (author)

  5. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...... steady atmospheric wind shear profile with and without wind direction changes up through the atmospheric boundary layer. Results show that the main impact on the turbine is captured by the model. Analysis of the wake behind the wind turbine reveals the formation of a skewed wake geometry interacting

  6. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of users' growing interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge with ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Attempts to find appropriate validation techniques for ABM are therefore necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  7. Matching Games with Additive Externalities

    DEFF Research Database (Denmark)

    Branzei, Simina; Michalak, Tomasz; Rahwan, Talal

    2012-01-01

    Two-sided matchings are an important theoretical tool used to model markets and social interactions. In many real life problems the utility of an agent is influenced not only by their own choices, but also by the choices that other agents make. Such an influence is called an externality. Whereas...... fully expressive representations of externalities in matchings require exponential space, in this paper we propose a compact model of externalities, in which the influence of a match on each agent is computed additively. In this framework, we analyze many-to-many and one-to-one matchings under neutral...

  8. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  9. Advanced applications of numerical modelling techniques for clay extruder design

    Science.gov (United States)

    Kandasamy, Saravanakumar

    Ceramic materials play a vital role in our day-to-day life. Recent advances in research, manufacture and processing techniques and production methodologies have broadened the scope of ceramic products such as bricks, pipes and tiles, especially in the construction industry. These are mainly manufactured using an extrusion process in auger extruders. During their long history of application in the ceramic industry, most of the design developments of extruder systems have resulted from expensive laboratory-based experimental work and field-based trial and error runs. In spite of these design developments, auger extruders continue to be energy-intensive devices with high operating costs. Limited understanding of the physical processes involved and the cost and time requirements of lab-based experiments were found to be the major obstacles to the further development of auger extruders. An attempt has been made herein to use Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) based numerical modelling techniques to reduce the costs and time associated with research into design improvement by experimental trials. These two techniques, although used widely in other engineering applications, have rarely been applied to auger extruder development. This had been due to a number of reasons, including the technical limitations of previously available CFD tools. Modern CFD and FEA software packages have much-enhanced capabilities and allow the modelling of the flow of complex fluids such as clay. This research work presents a methodology that uses a Herschel-Bulkley fluid-flow-based CFD model to simulate and assess the flow of the clay-water mixture through the extruder and the die of a vacuum de-airing type clay extrusion unit used in ceramic extrusion. The extruder design and the operating parameters were varied to study their influence on the power consumption and the extrusion pressure. The model results were then validated using results from
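
    A minimal sketch of the Herschel-Bulkley constitutive law named above, written as an apparent viscosity of the form a CFD solver consumes; the yield stress, consistency index, and flow index are placeholder values, not the thesis's clay parameters.

    ```python
    import numpy as np

    def mu_apparent(gamma_dot, tau0=200.0, k=50.0, n=0.4):
        """Herschel-Bulkley apparent viscosity:
        tau = tau0 + k * gamma_dot**n  ->  mu_app = tau / gamma_dot."""
        gamma_dot = np.maximum(gamma_dot, 1e-9)   # guard against division by zero
        return tau0 / gamma_dot + k * gamma_dot ** (n - 1.0)

    shear_rates = np.logspace(-2, 2, 5)           # 1/s
    print(mu_apparent(shear_rates))               # yield stress + shear thinning
    ```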

  10. Biological modelling of pelvic radiotherapy. Potential gains from conformal techniques

    Energy Technology Data Exchange (ETDEWEB)

    Fenwick, J.D

    1999-07-01

    Models have been developed which describe the dose and volume dependences of various long-term rectal complications of radiotherapy; assumptions underlying the models are consistent with clinical and experimental descriptions of complication pathogenesis. In particular, rectal bleeding - perhaps the most common complication of modern external beam prostate radiotherapy, and which might be viewed as its principal dose-limiting toxicity - has been modelled as a parallel-type complication. Rectal dose-surface histograms have been calculated for 79 patients treated, in the course of the Royal Marsden trial of pelvic conformal radiotherapy, for prostate cancer using conformal or conventional techniques; rectal bleeding data are also available for these patients. The maximum-likelihood fit of the parallel bleeding model to the dose-surface histograms and complication data shows that the complication status of the patients analysed (most of whom received reference point doses of 64 Gy) was significantly dependent on, and almost linearly proportional to, the volume of highly dosed rectal wall: a 1% decrease in the fraction of rectal wall (outlined over an 11 cm rectal length) receiving a dose of 58 Gy or more led to a reduction in the (RTOG) grade 1,2,3 bleeding rate of about 1.1% - 95% confidence interval [0.04%, 2.2%]. The parallel model fit to the bleeding data is only marginally biased by uncertainties in the calculated dose-surface histograms (due to setup errors, rectal wall movement and absolute rectal surface area variability), causing the gradient of the observed volume-response curve to be slightly lower than that which would be seen in the absence of these uncertainties. An analysis of published complication data supports these single-centre findings and indicates that the reductions in highly dosed rectal wall volumes obtainable using conformal radiotherapy techniques can be exploited to allow escalation of the dose delivered to the prostate target volume, the

  11. Universal or Specific? A Modeling-Based Comparison of Broad-Spectrum Influenza Vaccines against Conventional, Strain-Matched Vaccines.

    Directory of Open Access Journals (Sweden)

    Rahul Subramanian

    2016-12-01

    Full Text Available Despite the availability of vaccines, influenza remains a major public health challenge. A key reason is the virus capacity for immune escape: ongoing evolution allows the continual circulation of seasonal influenza, while novel influenza viruses invade the human population to cause a pandemic every few decades. Current vaccines have to be updated continually to keep up to date with this antigenic change, but emerging 'universal' vaccines-targeting more conserved components of the influenza virus-offer the potential to act across all influenza A strains and subtypes. Influenza vaccination programmes around the world are steadily increasing in their population coverage. In future, how might intensive, routine immunization with novel vaccines compare against similar mass programmes utilizing conventional vaccines? Specifically, how might novel and conventional vaccines compare, in terms of cumulative incidence and rates of antigenic evolution of seasonal influenza? What are their potential implications for the impact of pandemic emergence? Here we present a new mathematical model, capturing both transmission dynamics and antigenic evolution of influenza in a simple framework, to explore these questions. We find that, even when matched by per-dose efficacy, universal vaccines could dampen population-level transmission over several seasons to a greater extent than conventional vaccines. Moreover, by lowering opportunities for cross-protective immunity in the population, conventional vaccines could allow the increased spread of a novel pandemic strain. Conversely, universal vaccines could mitigate both seasonal and pandemic spread. However, where it is not possible to maintain annual, intensive vaccination coverage, the duration and breadth of immunity raised by universal vaccines are critical determinants of their performance relative to conventional vaccines. In future, conventional and novel vaccines are likely to play complementary roles in
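
    As a minimal sketch of the transmission side of such a model (the paper couples transmission with antigenic evolution), the ODE system below adds a leaky vaccination term to a standard SIR model; all rates and the efficacy value are illustrative assumptions.

    ```python
    from scipy.integrate import solve_ivp

    beta, gamma = 0.4, 0.2    # transmission and recovery rates, per day
    v, eff = 0.002, 0.6       # vaccination rate and per-dose efficacy (assumed)

    def sir_vacc(t, y):
        S, I, R = y
        new_inf = beta * S * I
        return [-new_inf - v * eff * S,       # susceptibles infected or immunized
                new_inf - gamma * I,          # infectious compartment
                gamma * I + v * eff * S]      # recovered or effectively vaccinated

    sol = solve_ivp(sir_vacc, (0.0, 365.0), [0.99, 0.01, 0.0])
    print("final susceptible fraction:", sol.y[0, -1])
    ```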

  12. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  13. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  14. Optic nerve sheath diameter measurement techniques: examination using a novel ex-vivo porcine model.

    Science.gov (United States)

    Nusbaum, Derek M; Antonsen, Erik; Bockhorst, Kurt H; Easley, R Blaine; Clark, Jonathan B; Brady, Kenneth M; Kibler, Kathleen K; Sutton, Jeffrey P; Kramer, Larry; Sargsyan, Ashot E

    2014-01-01

    Ultrasound (U/S) and MRI measurements of the optic nerve sheath diameter (ONSD) have been proposed as intracranial pressure measurement surrogates, but these methods have not been fully evaluated or standardized. The purpose of this study was to develop an ex-vivo model for evaluating ONSD measurement techniques by comparing U/S and MRI measurements to physical measurements. The left eye of post mortem juvenile pigs (N = 3) was excised and the subdural space of the optic nerve cannulated. Caliper measurements and U/S imaging measurements of the ONSD were acquired at baseline and following 1 cc saline infusion into the sheath. The samples were then embedded in 0.5% agarose and imaged in a 7 Tesla (7T) MRI. The ONSD was subsequently measured with digital calipers at locations and directions matching the U/S and direct measurements. Both MRI and sonographic measurements were in agreement with direct measurements. U/S data, especially axial images, exhibited a positive bias and more variance (bias: 1.318, 95% limit of agreement: 8.609) compared to MRI (bias: 0.3156, 95% limit of agreement: 2.773). In addition, U/S images were much more dependent on probe placement, distance between probe and target, and imaging plane. This model appears to be a valid test-bed for continued scrutiny of ONSD measurement techniques. In this model, 7T MRI was accurate and potentially useful for in-vivo measurements where direct measurements are not available. Current limitations with ultrasound imaging for ONSD measurement associated with image acquisition technique and equipment necessitate further standardization to improve its clinical utility.
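
    The bias and 95% limits of agreement quoted above are Bland-Altman quantities; a minimal sketch of their computation on hypothetical ONSD readings follows.

    ```python
    import numpy as np

    def bland_altman(method, reference):
        """Bias and 95% limits of agreement between two measurement methods."""
        diff = np.asarray(method, float) - np.asarray(reference, float)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    us  = [5.9, 6.4, 7.1, 8.0, 6.8]   # hypothetical U/S ONSD readings, mm
    cal = [5.5, 6.0, 6.2, 7.1, 6.5]   # hypothetical caliper readings, mm
    print(bland_altman(us, cal))
    ```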

  15. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Directory of Open Access Journals (Sweden)

    Simone Mocellin

    Full Text Available BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method

  16. Modeling and Forecasting Electricity Demand in Azerbaijan Using Cointegration Techniques

    Directory of Open Access Journals (Sweden)

    Fakhri J. Hasanov

    2016-12-01

    Full Text Available Policymakers in developing and transitional economies require sound models to: (i) understand the drivers of rapidly growing energy consumption and (ii) produce forecasts of future energy demand. This paper attempts to model electricity demand in Azerbaijan and provide future forecast scenarios—as far as we are aware this is the first such attempt for Azerbaijan using a comprehensive modelling framework. Electricity consumption increased and decreased considerably in Azerbaijan from 1995 to 2013 (the period used for the empirical analysis)—it increased on average by about 4% per annum from 1995 to 2006 but decreased by about 4½% per annum from 2006 to 2010 and increased thereafter. It is therefore vital that Azerbaijani planners and policymakers understand what drives electricity demand and be able to forecast how it will grow in order to plan for future power production. However, modeling electricity demand for such a country has many challenges. Azerbaijan is rich in energy resources, and consequently GDP is heavily influenced by oil prices; hence, real non-oil GDP is employed as the activity driver in this research (unlike almost all previous aggregate energy demand studies). Moreover, electricity prices are administered rather than market driven. Therefore, different cointegration and error correction techniques are employed to estimate a number of per capita electricity demand models for Azerbaijan, which are used to produce forecast scenarios for up to 2025. The resulting estimated models (in terms of coefficients, etc.) and forecasts of electricity demand for Azerbaijan in 2025 prove to be very similar, with the Business as Usual forecast ranging from about 19½ to 21 TWh.
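
    A minimal Engle-Granger-style sketch of the cointegration-and-error-correction workflow such studies rely on, run here on synthetic data; the paper's own models use several techniques and real Azerbaijani series, and statsmodels is assumed available.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(2)
    n = 80
    gdp = np.cumsum(rng.normal(0.02, 0.05, n))    # I(1) activity driver (log)
    elec = 0.8 * gdp + rng.normal(0, 0.03, n)     # cointegrated demand (log)

    # Step 1: long-run relation, then a unit-root test on its residuals.
    longrun = sm.OLS(elec, sm.add_constant(gdp)).fit()
    print("ADF p-value on residuals:", adfuller(longrun.resid)[1])

    # Step 2: short-run error-correction model on first differences.
    d_elec, d_gdp = np.diff(elec), np.diff(gdp)
    X = sm.add_constant(np.column_stack([d_gdp, longrun.resid[:-1]]))
    ecm = sm.OLS(d_elec, X).fit()
    print(ecm.params)   # last coefficient = speed of adjustment
    ```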

  17. Study of hydrogen-molecule guests in type II clathrate hydrates using a force-matched potential model parameterised from ab initio molecular dynamics

    Science.gov (United States)

    Burnham, Christian J.; Futera, Zdenek; English, Niall J.

    2018-03-01

    The force-matching method has been applied to parameterise an empirical potential model for water-water and water-hydrogen intermolecular interactions for use in clathrate-hydrate simulations containing hydrogen guest molecules. The underlying reference simulations constituted ab initio molecular dynamics (AIMD) of clathrate hydrates with various occupations of hydrogen-molecule guests. It is shown that the resultant model is able to reproduce AIMD-derived free-energy curves for the movement of a tagged hydrogen molecule between the water cages that make up the clathrate, thus giving us confidence in the model. Furthermore, with the aid of an umbrella-sampling algorithm, we calculate barrier heights for the force-matched model, yielding the free-energy barrier for a tagged molecule to move between cages. The barrier heights are reasonably large, being on the order of 30 kJ/mol, and are consistent with our previous studies with empirical models [C. J. Burnham and N. J. English, J. Phys. Chem. C 120, 16561 (2016) and C. J. Burnham et al., Phys. Chem. Chem. Phys. 19, 717 (2017)]. Our results are in opposition to the literature, which claims that this system may have very low barrier heights. We also compare results to that using the more ad hoc empirical model of Alavi et al. [J. Chem. Phys. 123, 024507 (2005)] and find that this model does very well when judged against the force-matched and ab initio simulation data.

  18. Sample size evaluation for a multiply matched case-control study using the score test from a conditional logistic (discrete Cox PH) regression model.

    Science.gov (United States)

    Lachin, John M

    2008-06-30

    The conditional logistic regression model (Biometrics 1982; 38:661-672) provides a convenient method for the assessment of qualitative or quantitative covariate effects on risk in a study with matched sets, each containing a possibly different number of cases and controls. The conditional logistic likelihood is identical to the stratified Cox proportional hazards model likelihood, with an adjustment for ties (J. R. Stat. Soc. B 1972; 34:187-220). This likelihood also applies to a nested case-control study with multiply matched cases and controls, selected from those at risk at selected event times. Herein the distribution of the score test for the effect of a covariate in the model is used to derive simple equations to describe the power of the test to detect a coefficient theta (log odds ratio or log hazard ratio) or the number of cases (or matched sets) and controls required to provide a desired level of power. Additional expressions are derived for a quantitative covariate as a function of the difference in the assumed mean covariate values among cases and controls and for a qualitative covariate in terms of the difference in the probabilities of exposure for cases and controls. Examples are presented for a nested case-control study and a multiply matched case-control study.
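
    As a generic illustration (not Lachin's closed-form expressions), a one-parameter asymptotic score test has power approximately Phi(theta * sqrt(K * I1) - z_{1-alpha/2}), so the required number of matched sets K follows by inverting that relation; the per-set Fisher information I1 below is a user-supplied assumption.

    ```python
    import math
    from scipy.stats import norm

    def matched_sets_needed(theta, I1, alpha=0.05, power=0.90):
        """K such that a two-sided level-alpha score test detects theta
        with the given power, assuming information I1 per matched set."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return (z_a + z_b) ** 2 / (theta ** 2 * I1)

    # e.g., log odds ratio theta = log(2) with an assumed I1 of 0.15
    print(matched_sets_needed(math.log(2), 0.15))   # ~146 matched sets
    ```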

  19. Probabilistic Matching of Deidentified Data From a Trauma Registry and a Traumatic Brain Injury Model System Center: A Follow-up Validation Study.

    Science.gov (United States)

    Kumar, Raj G; Wang, Zhensheng; Kesinger, Matthew R; Newman, Mark; Huynh, Toan T; Niemeier, Janet P; Sperry, Jason L; Wagner, Amy K

    2018-04-01

    In a previous study, individuals from a single Traumatic Brain Injury Model Systems center and trauma center were matched using a novel probabilistic matching algorithm. The Traumatic Brain Injury Model Systems is a multicenter prospective cohort study containing more than 14,000 participants with traumatic brain injury, following them from inpatient rehabilitation to the community over the remainder of their lifetime. The National Trauma Databank is the largest aggregation of trauma data in the United States, including more than 6 million records. Linking these two databases offers a broad range of opportunities to explore research questions not otherwise possible. Our objective was to refine and validate the previous protocol at another independent center. Algorithm-generation and validation data sets were created, and potential matches were blocked by age, sex, and year of injury; the total probabilistic weight was calculated from 12 common data fields. Validity metrics were calculated using a minimum probabilistic weight of 3. The positive predictive value was 98.2% and 97.4%, and sensitivity was 74.1% and 76.3%, in the algorithm-generation and validation sets, respectively. These metrics were similar to those of the previous study. Future work will apply the refined probabilistic matching algorithm to the Traumatic Brain Injury Model Systems and the National Trauma Databank to generate a merged data set for clinical traumatic brain injury research use.
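
    A minimal sketch of the blocking-plus-weighting scheme described above, in the Fellegi-Sunter style of probabilistic record linkage. The field names and m/u agreement probabilities are hypothetical; only the blocking variables and the minimum weight of 3 mirror the study.

```python
# Probabilistic linkage sketch: block candidate pairs on age/sex/injury
# year, sum log2(m/u) agreement weights over shared fields, keep pairs
# whose total weight clears a threshold.
import math

FIELDS = {            # field: (m = P(agree | match), u = P(agree | non-match))
    "icd_code":  (0.90, 0.05),
    "gcs_score": (0.85, 0.20),
    "zip":       (0.95, 0.01),
}

def total_weight(rec_a, rec_b):
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a.get(field) == rec_b.get(field):
            w += math.log2(m / u)                   # agreement weight
        else:
            w += math.log2((1 - m) / (1 - u))       # disagreement weight
    return w

def candidate_pairs(regs, tbims):
    """Blocking: only compare records sharing age, sex, and injury year."""
    for a in regs:
        for b in tbims:
            if (a["age"], a["sex"], a["year"]) == (b["age"], b["sex"], b["year"]):
                yield a, b

def link(regs, tbims, threshold=3.0):
    return [(a, b, w) for a, b in candidate_pairs(regs, tbims)
            if (w := total_weight(a, b)) >= threshold]
```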

  20. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparison of multiple models presents challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions; flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, and multiplicative gross error. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information was found to be generally duplicative of the other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context and will be briefly reported.
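
    A toy sketch of the weighted tally and consolidation idea: each model is ranked per metric (respecting each metric's orientation), and the weighted ranks are summed. Metric values and weights below are invented for illustration.

```python
# Weighted multi-metric rank consolidation: lower tally = better overall.
import numpy as np

models = ["model_A", "model_B", "model_C"]
# metric: (scores per model, higher_is_better, weight)
metrics = {
    "abs_error":   ([1.2, 0.9, 1.5], False, 1.0),
    "bias":        ([0.3, 0.5, 0.1], False, 1.0),
    "pearson_r":   ([0.80, 0.85, 0.70], True, 1.0),
    "kendall_tau": ([0.55, 0.60, 0.50], True, 1.0),
}

tally = np.zeros(len(models))
for scores, higher_better, weight in metrics.values():
    order = np.argsort(scores)            # ascending
    if higher_better:
        order = order[::-1]               # best first
    ranks = np.empty(len(models))
    ranks[order] = np.arange(1, len(models) + 1)
    tally += weight * ranks

for name, t in sorted(zip(models, tally), key=lambda p: p[1]):
    print(f"{name}: weighted rank tally {t:.1f}")
```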

  1. Pattern recognition and string matching

    CERN Document Server

    Cheng, Xiuzhen

    2002-01-01

    The research and development of pattern recognition have proven to be of importance in science, technology, and human activity. Many useful concepts and tools from different disciplines have been employed in pattern recognition. Among them is string matching, which receives much theoretical and practical attention. String matching is also an important topic in combinatorial optimization. This book is devoted to recent advances in pattern recognition and string matching. It consists of twenty-eight chapters written by different authors, addressing a broad range of topics such as classification, matching, mining, feature selection, and applications. Each chapter is self-contained, and presents either novel methodological approaches or applications of existing theories and techniques. The aim, intent, and motivation for publishing this book is to provide a reference tool for the increasing number of readers who depend upon pattern recognition or string matching in some way. This includes student...

  2. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    OBJECTIVE: To describe a unique, simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and the stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end result of the gastric suturing and the bladder augmentation was evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 hours (range 3.5-8): 84.5 minutes (range 62-110) for the gastric dissection, 56 minutes (range 28-80) for the gastric suturing, and 170.6 minutes (range 70-200) for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from the gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  3. Mapping the Complexities of Online Dialogue: An Analytical Modeling Technique

    Directory of Open Access Journals (Sweden)

    Robert Newell

    2014-03-01

    The e-Dialogue platform was developed in 2001 to explore the potential of using the Internet for engaging diverse groups of people and multiple perspectives in substantive dialogue on sustainability. The system is online, text-based, and serves as a transdisciplinary space for bringing together researchers, practitioners, policy-makers and community leaders. The Newell-Dale Conversation Modeling Technique (NDCMT) was designed for in-depth analysis of e-Dialogue conversations and uses empirical methodology to minimize observer bias during analysis of a conversation transcript. NDCMT elucidates emergent ideas, identifies connections between ideas and themes, and provides a coherent synthesis and deeper understanding of the underlying patterns of online conversations. Continual application and improvement of NDCMT can lead to powerful methodologies for empirically analyzing digital discourse and better capture of innovations produced through such discourse. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs140221

  4. Vector machine techniques for modeling of seismic liquefaction data

    Directory of Open Access Journals (Sweden)

    Pijush Samui

    2014-06-01

    This article employs three soft computing techniques, the Support Vector Machine (SVM), the Least Square Support Vector Machine (LSSVM), and the Relevance Vector Machine (RVM), for prediction of the liquefaction susceptibility of soil. SVM and LSSVM are based on the structural risk minimization (SRM) principle, which seeks to minimize an upper bound of the generalization error consisting of the sum of the training error and a confidence interval. RVM is a sparse Bayesian kernel machine. SVM, LSSVM, and RVM have been used as classification tools. The developed SVM, LSSVM, and RVM give equations for prediction of the liquefaction susceptibility of soil. A comparative study has been carried out between the developed SVM, LSSVM, and RVM models. The results from this article indicate that the developed SVM gives the best performance for prediction of the liquefaction susceptibility of soil.
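
    A hedged sketch of this classification setting, using scikit-learn's SVC on synthetic stand-in features; the article's actual models were trained on field liquefaction records, and LSSVM/RVM would be substituted analogously.

```python
# Illustrative SVM classification of liquefaction susceptibility.
# Feature columns and the synthetic data are stand-ins for field records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# columns: e.g. penetration resistance, cyclic stress ratio (hypothetical)
X = rng.normal(size=(200, 2))
y = (X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.3, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```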

  5. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical-system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
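
    The receding-horizon idea behind MPC-based demand management can be sketched with a toy scalar demand model steered by price. The dynamics, bounds, and weights below are assumptions for illustration; the paper's suspension analogy and stability analysis are not reproduced.

```python
# Toy MPC sketch: a linear demand model d_{k+1} = a*d_k + b*p_k is
# steered toward a capacity target by choosing prices over a horizon.
import cvxpy as cp

a, b = 0.9, -0.5          # demand persistence and price sensitivity
target, horizon = 100.0, 10
d0 = 140.0                # current demand, above the capacity target

d = cp.Variable(horizon + 1)
p = cp.Variable(horizon)
cost = cp.sum_squares(d[1:] - target) + 0.1 * cp.sum_squares(p)
constraints = [d[0] == d0, p >= 0, p <= 50]
constraints += [d[k + 1] == a * d[k] + b * p[k] for k in range(horizon)]
cp.Problem(cp.Minimize(cost), constraints).solve()

# In receding-horizon fashion only the first price is applied, then the
# problem is re-solved at the next step with updated demand.
print("first price to apply:", round(float(p.value[0]), 2))
```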

  6. Enhancing photogrammetric 3D city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines established workflows from photogrammetry and procedural modeling in order to exploit the distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy, and model semantics. Compared to manual modeling, procedural techniques can be much more time-effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis.

  7. Image Matching Using Generalized Hough Transforms

    Science.gov (United States)

    Davis, L. S.; Hu, F. P.; Hwang, V.; Kitchen, L.

    1983-01-01

    An image matching system specifically designed to match dissimilar images is described. A set of blobs and ribbons is first extracted from each image, and then generalized Hough transform techniques are used to match these sets and compute the transformation that best registers the images. An example of the application of the approach to a pair of remotely sensed images is presented.
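
    A bare-bones sketch of generalized Hough transform matching, restricted to translation for brevity: an R-table built from a template's edge points votes for the reference-point location in a test image. The inputs (edge points, gradient directions) are assumed to come from an upstream edge detector.

```python
# Generalized Hough transform sketch (translation only).
import numpy as np
from collections import defaultdict

def build_r_table(edge_points, gradient_dirs, reference):
    """R-table: quantized gradient direction -> displacements to reference."""
    table = defaultdict(list)
    for (x, y), theta in zip(edge_points, gradient_dirs):
        alpha = int(np.degrees(theta)) % 360
        table[alpha].append((reference[0] - x, reference[1] - y))
    return table

def ght_locate(edge_points, gradient_dirs, table, shape):
    """Vote in an accumulator; the peak is the best reference-point match."""
    acc = np.zeros(shape, dtype=int)
    for (x, y), theta in zip(edge_points, gradient_dirs):
        alpha = int(np.degrees(theta)) % 360
        for dx, dy in table.get(alpha, ()):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return np.unravel_index(acc.argmax(), acc.shape)
```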

  8. Sample evaluation of ontology-matching systems

    NARCIS (Netherlands)

    Hage, W.R. van; Isaac, A.; Aleksovski, Z.

    2007-01-01

    Ontology matching exists to solve practical problems. Hence, methodologies to find and evaluate solutions for ontology matching should be centered on practical problems. In this paper we propose two statistically-founded evaluation techniques to assess ontology-matching performance that are based on sampling.

  9. Improving Students' Learning Activity Using the Make A Match Model in Mathematics in Class V of SDN 050687 Sawit Seberang

    Directory of Open Access Journals (Sweden)

    Daitin Tarigan

    2014-06-01

    This research aims to determine students' learning activity in Mathematics, on the topic of converting fractions to percent and decimal forms and vice versa, using the make a match model in class V of SD Negeri 050687 Sawit Seberang in the 2013/2014 academic year. The study is a classroom action research (PTK) project that used teacher and student activity observation sheets as its data-collection instruments. Analysis of the data gave the following results: in meeting I of cycle I, the teacher activity score was 82.14 (good) and student learning activity was classified as active. The action was then continued to a second cycle. In meeting II of cycle II, the teacher activity score was 96.42 (very good) and classical learning activity was very active. From these results it can be concluded that the research action succeeded, since the student learning activity indicator and the number of students classified as classically active reached 80%. The use of the make a match model can therefore improve students' learning activity in Mathematics, on the topic of converting fractions to percent and decimal forms, in class V of SD Negeri 050687 Sawit Seberang. Keywords: make a match model; student learning activity

  10. Lichtenstein Versus Total Extraperitoneal Patch Plasty Versus Transabdominal Patch Plasty Technique for Primary Unilateral Inguinal Hernia Repair: A Registry-based, Propensity Score-matched Comparison of 57,906 Patients.

    Science.gov (United States)

    Köckerling, Ferdinand; Bittner, Reinhard; Kofler, Michael; Mayer, Franz; Adolf, Daniela; Kuthe, Andreas; Weyhe, Dirk

    2017-09-26

    Outcome comparison of the Lichtenstein, total extraperitoneal patch plasty (TEP), and transabdominal patch plasty (TAPP) techniques for primary unilateral inguinal hernia repair. For comparison of these techniques, the number of cases included in meta-analyses of randomized controlled trials is limited; there is therefore an urgent need for more comparative data. In total, 57,906 patients with a primary unilateral inguinal hernia and 1-year follow-up from the Herniamed Registry were selected between September 1, 2009 and February 1, 2015. Using propensity score matching, 12,564 matched pairs were formed for comparison of Lichtenstein versus TEP, 16,375 for Lichtenstein versus TAPP, and 14,426 for TEP versus TAPP. Comparison of Lichtenstein versus TEP revealed disadvantages for the Lichtenstein operation with regard to postoperative complications (3.4% vs 1.7%; P < 0.001). Likewise, comparison of Lichtenstein versus TAPP showed disadvantages for the Lichtenstein operation with regard to postoperative complications (3.8% vs 3.3%; P = 0.029), complication-related reoperations (1.2% vs 0.9%; P = 0.019), pain at rest (5% vs 4.5%; P = 0.029), and pain on exertion (10.2% vs 7.8%; P < 0.001). TEP and TAPP were found to have advantages over the Lichtenstein operation.

  11. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predicting the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin by presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformations, and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling, such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
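
    A minimal finite-difference sketch of a phase-field equation of Allen-Cahn type in 1D, showing the diffuse interface carried by the order parameter φ; the parameters and double-well potential are illustrative, not from the review.

```python
# Explicit FD sketch of d(phi)/dt = eps^2 * phi_xx + phi - phi^3,
# i.e. -W'(phi) with double-well W(phi) = (phi^2 - 1)^2 / 4.
import numpy as np

n, dx, dt, eps2 = 200, 0.05, 1e-4, 1e-2
x = np.arange(n) * dx
phi = np.tanh((x - x.mean()) / 0.2)           # initial diffuse interface

for _ in range(5000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    phi += dt * (eps2 * lap + phi - phi**3)   # -W'(phi) = phi - phi^3
    phi[0], phi[-1] = -1.0, 1.0               # pin pure phases at the ends

# The relaxed interface width scales with eps = sqrt(eps2).
print("interface width where |phi| < 0.9:",
      round(float(np.sum(np.abs(phi) < 0.9) * dx), 3))
```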

  12. Establishing Keypoint Matches on Multimodal Images with Bootstrap Strategy and Global Information.

    Science.gov (United States)

    Li, Yong; Jin, Hongbin; Wu, Jiatao; Liu, Jie

    2017-04-19

    This paper proposes an algorithm for building keypoint matches on multimodal images by combining a bootstrap process with global information. The correct ratio of keypoint matches built with descriptors is typically very low on multimodal images of large spectral difference. To identify correct matches, global information is utilized for evaluating keypoint matches, and a bootstrap technique is employed to reduce the computational cost. A keypoint match determines a transformation T and a similarity metric between the reference image and the test image transformed by T. The similarity metric encodes global information over the entire images, and hence a higher similarity indicates the match can bring more image content into alignment, implying it tends to be correct. Unfortunately, exhausting triplets/quadruples of matches for affine/projective transformations is computationally intractable when the number of keypoints is large. To reduce the computational cost, a bootstrap technique is employed that starts from single matches for a translation-and-rotation model and goes increasingly to quadruples of matches for a projective model. The global information screens for "good" matches at each stage, and the bootstrap strategy makes the screening process computationally feasible. Experimental results show that the proposed method can establish reliable keypoint matches on challenging multimodal images of strong multimodality.
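
    The screening idea can be sketched for the first bootstrap stage (translation only): each candidate keypoint match implies a shift, which is scored by a global similarity over the whole images. Integer keypoint coordinates and normalized cross-correlation as the metric are simplifying assumptions.

```python
# Stage-1 bootstrap screening: rank single matches by global alignment.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-shape images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def score_single_match(ref, test, kp_ref, kp_test):
    """Global score of the translation implied by one keypoint pair."""
    dy, dx = kp_ref[0] - kp_test[0], kp_ref[1] - kp_test[1]
    shifted = np.roll(np.roll(test, dy, axis=0), dx, axis=1)
    return ncc(ref, shifted)

def screen_matches(ref, test, matches, keep=10):
    """Keep matches whose implied translation aligns the most global image
    content; later stages would combine the survivors into pairs/triplets/
    quadruples for similarity, affine, and projective models."""
    scored = sorted(matches, key=lambda m: -score_single_match(ref, test, *m))
    return scored[:keep]
```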

  13. Outsourced pattern matching

    DEFF Research Database (Denmark)

    Faust, Sebastian; Hazay, Carmit; Venturi, Daniele

    2013-01-01

    In secure delegatable computation, computationally weak devices (or clients) wish to outsource their computation and data to an untrusted server in the cloud. While most earlier work considers the general question of how to securely outsource any computation to the cloud server, we focus on a specific task, in which the server and the client C T interact in order to learn the positions at which a pattern of length m matches the text (and nothing beyond that). This is called the outsourced pattern matching problem and is highly motivated in the context of delegatable computing, since it offers storage alternatives for massive databases that contain confidential data (e.g., health-related data about patient history). Our constructions offer simulation-based security in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to O(m) bits plus the number of occurrences...

  14. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate the verification framework.

  15. Toward Practical Secure Stable Matching

    Directory of Open Access Journals (Sweden)

    Riazi M. Sadegh

    2017-01-01

    The Stable Matching (SM) algorithm has been deployed in many real-world scenarios, including the National Residency Matching Program (NRMP) and financial applications such as matching of suppliers and consumers in capital markets. Since these applications typically involve highly sensitive information such as the underlying preference lists, their current implementations rely on trusted third parties. This paper introduces the first provably secure and scalable implementation of SM, based on Yao's garbled circuit protocol and Oblivious RAM (ORAM). Our scheme can securely compute a stable match for 8k pairs four orders of magnitude faster than the previously best known method. We achieve this by introducing a compact and efficient sub-linear size circuit. We further decrease the computation cost by three orders of magnitude by proposing a novel technique to avoid unnecessary iterations in the SM algorithm. We evaluate our implementation for several problem sizes and plan to publish it as open source.
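
    For reference, the plaintext computation that the secure protocol evaluates obliviously is classic Gale-Shapley stable matching. A compact sketch of that underlying algorithm (not the paper's garbled-circuit implementation):

```python
# Textbook Gale-Shapley: proposers propose in preference order; reviewers
# tentatively accept and trade up. Terminates with a stable matching.
def gale_shapley(proposer_prefs, reviewer_prefs):
    n = len(proposer_prefs)
    rank = [{p: r for r, p in enumerate(prefs)} for prefs in reviewer_prefs]
    next_choice = [0] * n          # next reviewer each proposer will try
    engaged_to = [None] * n        # reviewer -> proposer
    free = list(range(n))
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        cur = engaged_to[r]
        if cur is None:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][cur]:   # reviewer prefers the newcomer
            engaged_to[r] = p
            free.append(cur)
        else:
            free.append(p)
    return {p: r for r, p in enumerate(engaged_to)}

# Example with 3 pairs; preference lists hold indices of the other side.
print(gale_shapley([[0, 1, 2], [1, 0, 2], [0, 1, 2]],
                   [[1, 0, 2], [0, 1, 2], [2, 1, 0]]))
```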

  16. REDUCING UNCERTAINTIES IN MODEL PREDICTIONS VIA HISTORY MATCHING OF CO2 MIGRATION AND REACTIVE TRANSPORT MODELING OF CO2 FATE AT THE SLEIPNER PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Chen

    2015-03-31

    An important question for the Carbon Capture, Storage, and Utility program is "can we adequately predict the CO2 plume migration?" For tracking CO2 plume development, the Sleipner project in the Norwegian North Sea provides more time-lapse seismic monitoring data than any other site, but significant uncertainties still exist for some of the reservoir parameters. In Part I, we assessed model uncertainties by applying two multi-phase compositional simulators to the Sleipner Benchmark model for the uppermost layer (Layer 9) of the Utsira Sand and calibrated our model against the time-lapse seismic monitoring data for the site from 1999 to 2010. An approximate match with the observed plume was achieved by introducing lateral permeability anisotropy, adding CH4 into the CO2 stream, and adjusting the reservoir temperatures. Model-predicted gas saturation, CO2 accumulation thickness, and CO2 solubility in brine (none of which were used as calibration metrics) were all comparable with the interpretations of the seismic data in the literature. In Parts II and III, we evaluated the uncertainties of the predicted long-term CO2 fate up to 10,000 years due to uncertain reaction kinetics. Under four scenarios of kinetic rate laws, the temporal and spatial evolution of CO2 partitioning into the four trapping mechanisms (hydrodynamic/structural, solubility, residual/capillary, and mineral) was simulated with ToughReact, taking into account the CO2-brine-rock reactions and the multi-phase reactive flow and mass transport. Modeling results show that different rate laws for mineral dissolution and precipitation reactions resulted in different predicted amounts of CO2 trapped by carbonate minerals, with scenarios using the conventional linear rate law for feldspar dissolution having twice as much mineral trapping (21% of the injected CO2) as scenarios with a Burch-type or Alekseyev et al.-type rate law for feldspar dissolution (11%). So far, most reactive transport modeling (RTM) studies for ...

  17. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    Science.gov (United States)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a wealthy laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques for indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data that has image source density peaks over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. Usage of an index strategy where the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time, and the total pipeline time for processing a single IPHAS field is 25 s.
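
    The declination-strip partitioning and sorted lookups described above can be sketched in a few lines. The one-degree strips, 1-arcsecond radius, and flat RA cut are illustrative simplifications of the in-database scheme, which also handles spherical geometry and RA wrap-around.

```python
# Positional cross-match with one-degree declination partitions, each
# kept sorted by declination so lookups are binary searches.
import bisect
import math
from collections import defaultdict

RADIUS = 1.0 / 3600.0                      # 1 arcsec match radius, in degrees

class Catalogue:
    def __init__(self):
        self.strips = defaultdict(list)    # floor(dec) -> [(dec, ra, id), ...]

    def add(self, src_id, ra, dec):
        bisect.insort(self.strips[math.floor(dec)], (dec, ra, src_id))

    def match(self, ra, dec):
        hits = []
        for key in {math.floor(dec - RADIUS), math.floor(dec + RADIUS)}:
            strip = self.strips.get(key, [])
            lo = bisect.bisect_left(strip, (dec - RADIUS,))
            hi = bisect.bisect_right(strip, (dec + RADIUS, float("inf")))
            for d, r, sid in strip[lo:hi]:
                if abs(r - ra) <= RADIUS:  # crude RA cut; real code uses
                    hits.append(sid)       # spherical distance and RA wrap
        return hits

cat = Catalogue()
cat.add("src-1", 120.00010, 29.99995)
print(cat.match(120.0, 30.0))              # finds src-1 across the strip edge
```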

  18. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    Science.gov (United States)

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  19. A Critical Review of Model-Based Economic Studies of Depression: Modelling Techniques, Model Structure and Data Sources

    OpenAIRE

    Hossein Haji Ali Afzali; Jonathan Karnon; Jodi Gray

    2012-01-01

    Depression is the most common mental health disorder and is recognized as a chronic disease characterized by multiple acute episodes/relapses. Although modelling techniques play an increasingly important role in the economic evaluation of depression interventions, comparatively little attention has been paid to issues around modelling studies with a focus on potential biases. This, however, is important as different modelling approaches, variations in model structure and input parameters may ...

  20. Matching of motor-sensory modality in the rodent femoral nerve model shows no enhanced effect on peripheral nerve regeneration

    Science.gov (United States)

    Kawamura, David H.; Johnson, Philip J.; Moore, Amy M.; Magill, Christina K.; Hunter, Daniel A.; Ray, Wilson Z.; Tung, Thomas HH.; Mackinnon, Susan E.

    2010-01-01

    The treatment of peripheral nerve injuries with nerve gaps largely consists of autologous nerve grafting utilizing sensory nerve donors. Underlying this clinical practice is the assumption that sensory autografts provide a suitable substrate for motoneuron regeneration, thereby facilitating motor endplate reinnervation and functional recovery. This study examined the role of nerve graft modality on axonal regeneration, comparing motor nerve regeneration through motor, sensory, and mixed nerve isografts in the Lewis rat. A total of 100 rats underwent grafting of the motor or sensory branch of the femoral nerve with histomorphometric analysis performed after 5, 6, or 7 weeks. Analysis demonstrated similar nerve regeneration in motor, sensory, and mixed nerve grafts at all three time points. These data indicate that matching of motor-sensory modality in the rat femoral nerve does not confer improved axonal regeneration through nerve isografts. PMID:20122927

  1. The effects of soil-structure interaction modeling techniques on in-structure response spectra

    International Nuclear Information System (INIS)

    Johnson, J.J.; Wesley, D.A.; Almajan, I.T.

    1977-01-01

    The structure considered for this investigation consisted of the reactor containment building (RCB) and prestressed concrete reactor vessel (PCRV) for a HTGR plant. A conventional lumped-mass dynamic model in three dimensions was used in the study. The horizontal and vertical response, which are uncoupled due to the symmetry of the structure, were determined for horizontal and vertical excitation. Five different site conditions ranging from competent rock to a soft soil site were considered. The simplified approach to the overall plant analysis utilized stiffness proportional composite damping with a limited amount of soil damping consistent with US NRC regulatory guidelines. Selected cases were also analyzed assuming a soil damping value approximating the theoretical value. The results from the simplified approach were compared to those determined by rigorously coupling the structure to a frequency independent half-space representation of the soil. Finally, equivalent modal damping ratios were found by matching the frequency response at a point within the coupled soil-structure system determined by solution of the coupled and uncoupled equations of motion. The basis for comparison of the aforementioned techniques was the response spectra at selected locations within the soil-structure system. Each of the five site conditions was analyzed and in-structure response spectra were generated. The response spectra were combined to form a design envelope which encompasses the entire range of site parameters. Both the design envelopes and the site-by-site results were compared

  2. Adaptive Atmospheric Modeling Key Techniques in Grid Generation, Data Structures, and Numerical Operations with Applications

    CERN Document Server

    Behrens, Jörn

    2006-01-01

    Gives an overview and guidance in the development of adaptive techniques for atmospheric modeling. This book covers paradigms of adaptive techniques, such as error estimation and adaptation criteria. Considering applications, it demonstrates several techniques for discretizing relevant conservation laws from atmospheric modeling.

  3. Single-incision laparoscopic surgery using colon-lifting technique for colorectal cancer: a matched case-control comparison with standard multiport laparoscopic surgery in terms of short-term results and access instrument cost.

    Science.gov (United States)

    Fujii, Shoichi; Watanabe, Kazuteru; Ota, Mitsuyoshi; Watanabe, Jun; Ichikawa, Yasushi; Yamagishi, Shigeru; Tatsumi, Kenji; Suwa, Hirokazu; Kunisaki, Chikara; Taguri, Masataka; Morita, Satoshi; Endo, Itaru

    2012-05-01

    Single-incision laparoscopic surgery (SILS) has been used for colorectal cancer as a minimally invasive procedure. However, there are still difficulties concerning effective triangulation and countertraction. The study's purpose was to clarify the usefulness of the colon-lifting technique (CLT) in SILS for colorectal cancer. SILS was performed for cancer (cT2N0 or less) of the right-sided colon (near the ileocecum), sigmoid, or rectosigmoid. The SILS™ Port was used for transumbilical access. A suture string was inserted through the abdominal wall and passed through the mesocolon. The colon was retracted anteriorly and fixed to the abdominal wall. The main mesenteric vessels were placed under tension. Lymph node dissection was performed by a medial approach. Short-term surgical outcomes and access port costs were compared between SILS (using CLT) and the standard multiport technique (MPT). The two groups were case-matched by propensity scoring; analyzed variables included preoperative Dukes stage and tumor location. From June 2009 to April 2011, 27 patients underwent SILS, and from April 2005 to April 2011, 85 patients underwent MPT. Propensity scoring generated 23 matched patients per group for SILS versus MPT comparisons. There were no significant differences in operating time, blood loss, early complications, postoperative analgesic frequency, or length of hospital stay. One MPT patient was converted to open surgery (4.5%); no SILS patients were converted. There were no significant differences in the length of the distal cut margin or the number of harvested lymph nodes; exceptions were incision length (SILS vs. MPT: 33 vs. 55 mm) and access instrument cost (in Japanese yen), both favoring SILS. CLT was safe and effective in providing radical treatment of cT2N0 cancer in the right-sided colon, sigmoid, or rectosigmoid. SILS was advantageous with respect to cosmesis and lower cost of access instruments.

  4. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    ... for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include lack of stability of reduced order models (for the two-sided weighting case) and lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction for discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two-sided weightings are present. An efficient technique for computing frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
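
    The unweighted core of the technique, discrete-time balanced truncation via Gramians, can be sketched as below; frequency weighting would replace the two Lyapunov-equation Gramians with weighted ones. The random stable system is a stand-in for an actual plant.

```python
# Square-root balanced truncation for a discrete-time system (A, B, C).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    P = solve_discrete_lyapunov(A, B @ B.T)     # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)   # observability Gramian
    Zc = cholesky(P, lower=True)
    Zo = cholesky(Q, lower=True)
    U, s, Vt = svd(Zo.T @ Zc)                   # s: Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    W = Zo @ U[:, :r] @ S                       # left projection
    V = Zc @ Vt[:r].T @ S                       # right projection (W.T @ V = I)
    return W.T @ A @ V, W.T @ B, C @ V, s

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))       # make the system Schur-stable
B, C = rng.normal(size=(n, 1)), rng.normal(size=(1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
```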

  5. Constitutional Model and Rationality in Judicial Decisions from Proportionality Technique

    OpenAIRE

    Feio, Thiago Alves

    2016-01-01

    In current legal systems, the content of constitutions consists of values that serve to limit state action. The department in charge of controlling this system is usually the judiciary. This choice leads to two major problems: the tension between democracy and constitutionalism, and the subjectivity of that control. One solution to the subjectivity is the weighting of principles through the proportionality technique, which aims to produce rational decisions. This technique doesn't elimi...

  6. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace and can also be used in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.

  7. Pixel Decimation in Block Matching Techniques

    Directory of Open Access Journals (Sweden)

    D. Levicky

    2000-12-01

    Block motion estimation using the full search algorithm is computationally extensive. Previously proposed fast algorithms reduce the computation cost by limiting the number of locations searched. In this paper we present algorithms for block motion estimation that produce performance similar to that of the full search algorithm. The algorithms are based on pixel decimation.
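
    A short sketch of full-search block matching with 4:1 pixel decimation, the idea the paper builds on: the SAD cost is computed on a subsampled grid of each block, so every candidate in the search window costs roughly a quarter of the full evaluation. Block size and search range are illustrative.

```python
# Full-search block matching with decimated SAD cost.
import numpy as np

def sad_decimated(block, cand):
    """SAD on a 4:1 subsampled grid (every other pixel in each direction)."""
    return int(np.abs(block[::2, ::2] - cand[::2, ::2]).sum())

def best_motion_vector(ref, cur, top, left, bsize=16, search=7):
    block = cur[top:top + bsize, left:left + bsize].astype(int)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                cand = ref[y:y + bsize, x:x + bsize].astype(int)
                cost = sad_decimated(block, cand)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Toy frames: the current frame is the reference shifted by (2, -1).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))
print(best_motion_vector(ref, cur, top=24, left=24))   # expect (-2, 1)
```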

  8. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for the different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  9. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models which are applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs are discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box, and species-conservation-of-mass models is given. Representative models are discussed, with attention given to the assumptions, input data requirements, advantages, disadvantages, and applicability of each.

  10. Modeling technique for the process of liquid film disintegration

    Science.gov (United States)

    Modorskii, V. Ya.; Sipatov, A. M.; Babushkina, A. V.; Kolodyazhny, D. Yu.; Nagorny, V. S.

    2016-10-01

    In the course of numerical experiments, a method for calculating two-phase flows was developed by solving a model problem. The results were compared between two mathematical models that describe two-phase flow and the break-up of a liquid jet into droplets: the VoF model and the QMOM model were both considered for the implementation of the spray.

  11. Analysis of strictly bound modes in photonic crystal fibers by use of a source-model technique.

    Science.gov (United States)

    Hochman, Amit; Leviatan, Yehuda

    2004-06-01

    We describe a source-model technique for the analysis of the strictly bound modes propagating in photonic crystal fibers that have a finite photonic bandgap crystal cladding and are surrounded by an air jacket. In this model the field is simulated by a superposition of fields of fictitious electric and magnetic current filaments, suitably placed near the media interfaces of the fiber. A simple point-matching procedure is subsequently used to enforce the continuity conditions across the interfaces, leading to a homogeneous matrix equation. Nontrivial solutions to this equation yield the mode field patterns and propagation constants. As an example, we analyze a hollow-core photonic crystal fiber. Symmetry characteristics of the modes are discussed and exploited to reduce the computational burden.
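
    The source-model machinery can be shown in its simplest, static form: a method-of-fundamental-solutions sketch for a 2D Laplace problem, with fictitious sources placed outside the domain and amplitudes fixed by point matching on the boundary. The fiber analysis is analogous but uses vector-wave filament fields and searches for propagation constants where the matching matrix becomes singular.

```python
# Point matching with fictitious sources (method of fundamental solutions).
import numpy as np

nb, ns = 60, 30
tb = np.linspace(0, 2 * np.pi, nb, endpoint=False)
ts = np.linspace(0, 2 * np.pi, ns, endpoint=False)
bpts = np.c_[np.cos(tb), np.sin(tb)]           # matching points on unit circle
spts = 1.5 * np.c_[np.cos(ts), np.sin(ts)]     # fictitious sources outside

def g(p, q):
    """2D Laplace free-space kernel (field at p of a unit source at q)."""
    return -np.log(np.linalg.norm(p - q)) / (2 * np.pi)

A = np.array([[g(b, s) for s in spts] for b in bpts])
u_bc = bpts[:, 0]                              # boundary data u = x on the circle
amps, *_ = np.linalg.lstsq(A, u_bc, rcond=None)

# The harmonic solution with this boundary data is u = x; check interior.
p = np.array([0.3, 0.2])
u = sum(a * g(p, s) for a, s in zip(amps, spts))
print("u(0.3, 0.2) =", round(float(u), 4), "(exact: 0.3)")
```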

  12. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    ... satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance ... While the predicted distribution of larvae was mostly even throughout Denmark, in Norway and Sweden it was primarily around the coastlines. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger ... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region.

  13. Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array

    Directory of Open Access Journals (Sweden)

    Park Sungchan

    2011-01-01

    There is a growing need in computer vision applications for stereopsis, requiring not only accurate distance but also fast and compact physical implementation. Global energy minimization techniques provide remarkably precise results, but they suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation while solving the memory-access problem between the big external memory and the massive processors. Remarkable memory saving can be obtained with our memory reduction scheme, and our new architecture is a systolic array. If we expand it to multiple chips in a cascaded manner, we can cope with a wide range of image resolutions. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image processing algorithms with tiny and distributed memory resources, like optical flow, image restoration, etc.

  14. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  15. Robust weighted scan matching with quadtrees

    NARCIS (Netherlands)

    Visser, A.; Slamet, B.A.; Pfingsthorn, M.

    2009-01-01

    This paper presents an improvement of the robustness and accuracy of the weighted scan matching algorithm by matching against the union of earlier acquired scans. The approach allows the correspondence error, which is explicitly modeled in the weighted scan matching algorithm, to be reduced by providing a ...

  16. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee a specific QoS requirement. The simulation tool drastically cuts the simulation ...

  17. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tesselation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified...

  18. (NHIS) using data mining technique as a statistical model

    African Journals Online (AJOL)

    kofi.mereku

    2014-05-23

    ... Scheme (NHIS) claims in the Awutu-Effutu-Senya District using data mining techniques, with a specific focus on ... transform them into a format that is friendly to data mining algorithms, such as ... many groups to access the data, facilitate updating the data, and improve the efficiency of checking the data for ...

  19. A Novel Model on DST-Induced Transplantation Tolerance by the Transfer of Self-Specific Donor tTregs to a Haplotype-Matched Organ Recipient

    DEFF Research Database (Denmark)

    Gregoriussen, Angelica Maria Mohr; Bohr, Henrik Georg

    2017-01-01

    Donor-specific blood transfusion (DST) can lead to significant prolongation of allograft survival in experimental animal models and sometimes in human recipients of solid organs. The mechanisms responsible for the beneficial effect on graft survival have been a topic of research and debate for decades ... during the course of tolerance induction. Based on the immunological status of the recipients, we suggest that one-H2-haplotype-matched self-specific Tregs derived from the transfused blood can be activated and multiply in the host by binding to antigen-presenting cells presenting allopeptides...

  20. Use of System Dynamics Techniques in the Garrison Health Modelling Tool

    Science.gov (United States)

    2010-11-01

    Joint Health Command (JHC) tasked DSTO to develop techniques for modelling Defence health service delivery both in a Garrison environment in Australia and ... the Garrison Health Modelling Tool, a prototype software package designed to provide decision-support to JHC health officers and managers in a garrison ...

  1. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  2. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  3. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for on-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  4. Stinging Insect Matching Game

    Science.gov (United States)

    Stinging insects can ruin summer fun for those who are ... the difference between the different kinds of stinging insects in order to keep your summer safe and ...

  5. Alternative Matching Scores to Control Type I Error of the Mantel-Haenszel Procedure for DIF in Dichotomously Scored Items Conforming to 3PL IRT and Nonparametric 4PBCB Models

    Science.gov (United States)

    Monahan, Patrick O.; Ankenmann, Robert D.

    2010-01-01

    When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…

  6. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    Science.gov (United States)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause dramatic damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors was deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
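
    A minimal Kalman-filter assimilation loop for a 1D linear wavefield, in the spirit of the approach described: a toy propagation operator F plays the role of the shallow-water solver, and sparse gauge observations update the state each step. All matrices and data here are synthetic stand-ins.

```python
# Kalman-filter data assimilation on a 1D grid with sparse gauges.
import numpy as np

n, obs_idx = 50, [10, 25, 40]                  # grid points, gauge locations
F = 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)   # toy propagation operator
H = np.zeros((len(obs_idx), n))
H[range(len(obs_idx)), obs_idx] = 1.0          # observe the gauge grid points
Q = 1e-4 * np.eye(n)                           # model error covariance
R = 1e-3 * np.eye(len(obs_idx))                # observation error covariance

# Synthetic truth and gauge readings (stand-ins for real pressure records).
rng = np.random.default_rng(0)
truth = np.zeros(n); truth[5] = 1.0
observations = []
for _ in range(30):
    truth = F @ truth
    observations.append(H @ truth + rng.normal(scale=0.03, size=len(obs_idx)))

x, P = np.zeros(n), np.eye(n)                  # initial estimate, covariance
for y in observations:
    x, P = F @ x, F @ P @ F.T + Q              # forecast step
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (y - H @ x)                    # analysis update
    P = (np.eye(n) - K @ H) @ P

print("max reconstructed wave height:", round(float(x.max()), 3))
```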

  7. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks in the modeling of industrial processes.

  8. Plasma focus matching conditions

    International Nuclear Information System (INIS)

    Soliman, H.M.; Masoud, M.M.; Elkhalafawy, T.A.

    1988-01-01

    Snow-plough and slug models have been used to obtain the optimum matching conditions of the plasma in the focus. The dimensions of the plasma focus device are: inner electrode radius = 2 cm, outer electrode radius = 5.5 cm, and length = 8 cm. It was found that a maximum magnetic energy of 12.26 kJ has to be delivered to a plasma focus whose density is 10^19 cm^-3 at a focusing time of 2.55 μs and with a total external inductance of 24.2 nH. The same method is used to evaluate the optimum matching conditions for the previous coaxial discharge system, which had inner electrode radius = 1.6 cm, outer electrode radius = 3.3 cm, and length = 31.5 cm. These conditions are: charging voltage = 12 kV, condenser bank capacity = 430 μF, plasma focus density = 10^19 cm^-3, focusing time = 8 μs, and total external inductance = 60.32 nH. 3 figs., 2 tabs.

  9. Models, Web-Based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance

    National Research Council Canada - National Science Library

    Hill, Raymond

    2001-01-01

    ... Laboratory, Logistics Research Division, Logistics Readiness Branch to propose a research agenda entitled, "Models, Web-based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance...

  10. Modelling Data Mining Dynamic Code Attributes with Scheme Definition Technique

    OpenAIRE

    Sipayung, Evasaria M; Fiarni, Cut; Tanudjaja, Randy

    2014-01-01

    Data mining is a technique used in different disciplines to search for significant relationships among variables in large data sets. One of the important steps in data mining is data preparation. In this step, we need to transform complex data with more than one attribute into a representative format for the data mining algorithm. In this study, we concentrated on designing a proposed system to fetch attributes from complex data such as a product ID. Then the proposed system will determine the basic ...

  11. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition, with four additional chapters, presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary, including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  12. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of the PF coil voltages required to start up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped-parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included.
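
    A minimal sketch of the eigen-expansion idea on a toy lumped-parameter circuit (the matrices are illustrative, not CIT data): the loop equations L di/dt + R i = v are rewritten as di/dt = -A i + b with A = L^-1 R, and each eigenmode of A then evolves independently:

```python
import numpy as np

# Illustrative lumped-parameter circuit: L di/dt + R i = v with constant v.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]]) * 1e-3        # inductance matrix with mutual coupling [H]
R = np.diag([0.1, 0.3])                  # loop resistances [ohm]
v = np.array([1.0, 0.0])                 # applied loop voltages [V]

# Rewrite as di/dt = -A i + b and expand the dynamics in eigenmodes of A.
A = np.linalg.solve(L, R)
b = np.linalg.solve(L, v)
lam, V = np.linalg.eig(A)                # modal decay rates and mode shapes
i_ss = np.linalg.solve(A, b)             # steady-state loop currents

def currents(t, i0=np.zeros(2)):
    """Closed-form i(t): each eigenmode decays independently at rate lam_k."""
    c = np.linalg.solve(V, i0 - i_ss)    # modal amplitudes of the transient
    return i_ss + (V * np.exp(-lam * t)) @ c

print(currents(0.01).round(4))           # loop currents 10 ms after switch-on
```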

  13. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, a vast number of automated techniques have spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and the usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment and by the shape and materials of the object. The results depend more on how the techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, at the 1:50 architectonic restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as the service areas, corridors and stairs of the interior of Milan's cathedral. Tests have shown that the quality of the results is strongly affected by side issues, such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the Leica C10 laser scanner, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups: a full-frame camera with a fisheye lens and the NCTech iSTAR panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  14. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, a vast number of automated techniques have spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and the usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment and by the shape and materials of the object. The results depend more on how the techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, at the 1:50 architectonic restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as the service areas, corridors and stairs of the interior of Milan's cathedral. Tests have shown that the quality of the results is strongly affected by side issues, such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the Leica C10 laser scanner, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups: a full-frame camera with a fisheye lens and the NCTech iSTAR panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  15. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  16. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings, for the evaluation of the energy operating costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model-reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model-reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of the meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to the classical methods. (J.S.) 69 refs.

  17. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for the recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using a rotation transform. For the modelling and recognition process, the geometric hashing method is utilized. Each object is modelled using 2-D views taken from viewpoints on the viewing sphere. A hidden-line elimination algorithm is used to find these views from the wire-frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
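
    A minimal sketch of the geometric hashing mechanics with 2-D point features (the paper hashes line features from 2-D views; this simplified variant only shows the preprocessing hash table and the recognition-time voting):

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def invariant_coords(p, basis):
    """Coordinates of p in the frame defined by an ordered point pair."""
    o, e = np.asarray(basis[0], float), np.asarray(basis[1], float)
    u = e - o
    v = np.array([-u[1], u[0]])              # perpendicular axis
    d = u @ u
    q = np.asarray(p, float) - o
    return (round(q @ u / d, 2), round(q @ v / d, 2))

def build_table(models):
    """Preprocessing: hash every remaining feature in every ordered basis."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):
            for k, p in enumerate(pts):
                if k not in (i, j):
                    key = invariant_coords(p, (pts[i], pts[j]))
                    table[key].append((name, (i, j)))
    return table

def recognize(table, scene_pts):
    """Recognition: pick one scene basis and vote for (model, basis) pairs."""
    votes = defaultdict(int)
    basis = (scene_pts[0], scene_pts[1])     # in practice, iterate over bases
    for p in scene_pts[2:]:
        for entry in table.get(invariant_coords(p, basis), []):
            votes[entry] += 1
    return max(votes, key=votes.get) if votes else None

models = {"wedge": [(0, 0), (2, 0), (1, 1)],
          "bar":   [(0, 0), (3, 0), (3, 1), (0, 1)]}
table = build_table(models)
print(recognize(table, [(0, 0), (3, 0), (3, 1), (0, 1)]))   # -> ('bar', (0, 1))
```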

  18. THE EFFECT OF USING THE COOPERATIVE LEARNING MODEL TYPE MAKE-A-MATCH WITH EVIDENCE CARD MEDIA ON CHEMISTRY LEARNING OUTCOMES FOR ACID-BASE MATERIAL

    Directory of Open Access Journals (Sweden)

    D. Noviyanti

    2016-07-01

    Full Text Available The study aimed to determine the effect of using the cooperative learning model of type make-a-match with evidence card media on chemistry learning outcomes for acid-base solutions. The population was students of Science class XI. Sampling was performed by saturated sampling, with XIA1 as the control class and XIA2 as the experimental class. The two-sided t-test produced t_count (4.0293) > t_table (1.9925), which means there was a significant difference, while the one-sided (right) t-test produced t_count (4.0293) > t_table (1.9925), which means that the average cognitive learning outcome of the experimental class was better than that of the control class. The N-gain of the experimental class (0.71) was better than that of the control class (0.52). This showed that the use of this model affected the learning outcomes on acid-base solutions by 28.99%, so teachers should try to apply this learning model to other material.

  19. Matching Systems for Refugees

    Directory of Open Access Journals (Sweden)

    Will Jones

    2017-08-01

    Full Text Available Design of matching systems between refugees and states or local areas is emerging as one of the most promising solutions to problems in refugee resettlement. We describe the basics of two-sided matching theory used in a number of allocation problems, such as school choice, where both sides need to agree to the match. We then explain how these insights can be applied to international refugee matching in the context of the European Union and examine how refugee matching might work within the United Kingdom, Canada, and the United States.

  20. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    Science.gov (United States)

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improved patient outcomes and lower costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe and on a woman was compared side by side. We found that all surgical steps were remarkably similar. The main limitations of this model are cost ($500/procedure), logistic problems (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training of vaginal hysterectomy.

  1. Matching ERS scatterometer based soil moisture patterns with simulations of a conceptual dual layer hydrologic model over Austria

    Directory of Open Access Journals (Sweden)

    J. Parajka

    2009-02-01

    Full Text Available This study compares ERS scatterometer top soil moisture observations with simulations of a dual layer conceptual hydrologic model. The comparison is performed for 148 Austrian catchments in the period 1991–2000. On average, about 5 to 7 scatterometer images per month with a mean spatial coverage of about 37% are available. The results indicate that the agreement between the two top soil moisture estimates changes with the season and the weight given to the scatterometer in hydrologic model calibration. The hydrologic model generally simulates larger top soil moisture values than are observed by the scatterometer. The differences tend to be smaller for lower altitudes and the winter season. The average correlation between the two estimates is more than 0.5 in the period from July to October, and about 0.2 in the winter months, depending on the period and calibration setting. Using both ERS scatterometer based soil moisture and runoff for model calibration provides more robust model parameters than using either of these two sources of information.

  2. Matching ERS scatterometer based soil moisture patterns with simulations of a conceptual dual layer hydrologic model over Austria

    Science.gov (United States)

    Parajka, J.; Naeimi, V.; Blöschl, G.; Komma, J.

    2009-02-01

    This study compares ERS scatterometer top soil moisture observations with simulations of a dual layer conceptual hydrologic model. The comparison is performed for 148 Austrian catchments in the period 1991-2000. On average, about 5 to 7 scatterometer images per month with a mean spatial coverage of about 37% are available. The results indicate that the agreement between the two top soil moisture estimates changes with the season and the weight given to the scatterometer in hydrologic model calibration. The hydrologic model generally simulates larger top soil moisture values than are observed by the scatterometer. The differences tend to be smaller for lower altitudes and the winter season. The average correlation between the two estimates is more than 0.5 in the period from July to October, and about 0.2 in the winter months, depending on the period and calibration setting. Using both ERS scatterometer based soil moisture and runoff for model calibration provides more robust model parameters than using either of these two sources of information.

  3. Development of mathematical techniques for the assimilation of remote sensing data into atmospheric models

    International Nuclear Information System (INIS)

    Seinfeld, J.H.

    1982-01-01

    The problem of the assimilation of remote sensing data into mathematical models of atmospheric pollutant species was investigated. The data assimilation problem is posed in terms of the matching of spatially integrated species burden measurements to the predicted three-dimensional concentration fields from atmospheric diffusion models. General conditions were derived for the reconstructability of atmospheric concentration distributions from data typical of remote sensing applications, and a computational algorithm (filter) for the processing of remote sensing data was developed

  4. Matching Students to Schools

    Directory of Open Access Journals (Sweden)

    Dejan Trifunovic

    2017-08-01

    Full Text Available In this paper, we present the problem of matching students to schools by using different matching mechanisms. This market is specific in that public schools are free, so the price mechanism cannot be used to determine the optimal allocation of children to schools. It is therefore necessary to use matching algorithms that mimic the market mechanism and enable us to determine the core of the cooperative game. We show that it is possible to apply cooperative game theory to matching problems. This review paper is based on illustrative examples aiming to compare matching algorithms in terms of the incentive compatibility, stability and efficiency of the matching. We also present some specific problems that may occur in matching, such as improving the quality of schools, favoring minority students, the limited length of the list of preferences, and generating strict priorities from weak priorities.
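
    As a concrete illustration of one such mechanism, here is a minimal student-proposing deferred-acceptance (Gale-Shapley) sketch with illustrative preferences; the paper surveys this family of algorithms rather than this exact code:

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> ordered list of schools
    school_prefs:  dict school  -> ordered list of students (priority order)
    capacity:      dict school  -> number of seats
    Returns a stable matching: no student-school pair prefers each other
    to their assignment.
    """
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}        # next school to propose to
    held = {s: [] for s in school_prefs}                 # tentative acceptances
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                     # student exhausted their list
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x]) # keep best-ranked students
        if len(held[school]) > capacity[school]:
            free.append(held[school].pop())              # reject the worst held student
    return held

students = {"ana": ["north", "south"], "ben": ["north", "south"],
            "eva": ["south", "north"]}
schools = {"north": ["ben", "ana", "eva"], "south": ["ana", "eva", "ben"]}
print(deferred_acceptance(students, schools, {"north": 1, "south": 1}))
# -> {'north': ['ben'], 'south': ['ana']}; eva is unassigned but no blocking pair exists
```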

  5. Best matching theory & applications

    CERN Document Server

    Moghaddam, Mohsen

    2017-01-01

    Mismatch or best match? This book demonstrates that best matching of individual entities to each other is essential to ensure smooth conduct and successful competitiveness in any distributed system, natural and artificial. Interactions must be optimized through best matching in planning and scheduling, enterprise network design, transportation and construction planning, recruitment, problem solving, selective assembly, team formation, sensor network design, and more. Fundamentals of best matching in distributed and collaborative systems are explained by providing: § Methodical analysis of various multidimensional best matching processes § Comprehensive taxonomy, comparing different best matching problems and processes § Systematic identification of systems’ hierarchy, nature of interactions, and distribution of decision-making and control functions § Practical formulation of solutions based on a library of best matching algorithms and protocols, ready for direct applications and apps development. Design...

  6. Matching the results of a theoretical model with failure rates obtained from a population of non-nuclear pressure vessels

    International Nuclear Information System (INIS)

    Harrop, L.P.

    1982-02-01

    Failure rates for non-nuclear pressure vessel populations are often regarded as showing a decrease with time. Empirical evidence can be cited which supports this view. On the other hand theoretical predictions of PWR type reactor pressure vessel failure rates have shown an increasing failure rate with time. It is shown that these two situations are not necessarily incompatible. If adjustments are made to the input data of the theoretical model to treat a non-nuclear pressure vessel population, the model can produce a failure rate which decreases with time. These adjustments are explained and the results obtained are shown. (author)

  7. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    Full Text Available The increased availability of end-use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
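
    A minimal sketch of the Monte Carlo end-use idea with illustrative parameter distributions (the end uses, distributions and values are assumptions, not the study's calibrated parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                               # simulated households

# Illustrative end-use parameter distributions.
occupants  = rng.integers(1, 6, N)                       # people per household
shower_min = rng.normal(7.8, 2.0, N).clip(0)             # minutes/person/day
shower_gpm = rng.triangular(1.5, 2.0, 2.5, N)            # gallons per minute
toilet_gpf = rng.choice([1.28, 1.6, 3.5], N, p=[0.2, 0.5, 0.3])  # gal/flush
flushes    = rng.poisson(5.0, N)                         # flushes/person/day

# Aggregate sampled end uses into a per-household daily demand distribution.
daily_use = occupants * (shower_min * shower_gpm + toilet_gpf * flushes)
print(f"mean {daily_use.mean():.0f} gal/day, "
      f"90th percentile {np.percentile(daily_use, 90):.0f} gal/day")
```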

  8. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off-grid operation mode. Therefore, estimating the line impedance can add extra functions... The ...-passive behaviour of the proposed method comes from the combination of the non-intrusive behaviour of the passive methods with the better accuracy of the active methods. The simulation results reveal the good accuracy of the proposed method.

  9. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of that instance. This process is repeated for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.

  10. An Implementation of Bigraph Matching

    DEFF Research Database (Denmark)

    Glenstrup, Arne John; Damgaard, Troels Christoffer; Birkedal, Lars

    We describe a provably sound and complete matching algorithm for bigraphical reactive systems. The algorithm has been implemented in our BPL Tool, a first implementation of bigraphical reactive systems. We describe the tool and present a concrete example of how it can be used to simulate a model of a mobile phone system in a bigraphical representation of the polyadic π calculus.

  11. Expanding the methodological toolbox of HRM researchers : The added value of latent bathtub models and optimal matching analysis

    NARCIS (Netherlands)

    van der Laken, P.A.; Bakk, Zsuzsa; Giagkoulas, Vasileios; van Leeuwen, Linda; Bongenaar, Esther

    2018-01-01

    Researchers frequently rely on general linear models (GLMs) to investigate the impact of human resource management (HRM) decisions. However, the structure of organizations and recent technological advancements in the measurement of HRM processes cause contemporary HR data to be hierarchical and/or

  12. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of modeling snow cover depth is addressed using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed present-day machine learning models: decision trees, adaptive boosting and gradient boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The targeted character of the learning process in gradient-boosting models, their ensemble nature, and the use of a held-out test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
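
    A minimal sketch of the best-performing model class, gradient boosting over decision trees, fitted to synthetic stand-in data (the predictors and their distributions are assumptions, not the station records):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative daily predictors: air temperature, precipitation, day of year.
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([rng.normal(-5, 10, n),        # air temperature [deg C]
                     rng.gamma(2.0, 1.5, n),       # precipitation [mm]
                     rng.integers(1, 366, n)])     # day of year
# Synthetic snow depth: deeper when cold and wet (stand-in for observations).
y = np.clip(50 - 2 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 5, n), 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out days: {model.score(X_te, y_te):.2f}")
```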

  13. Modeling and Control of Multivariable Process Using Intelligent Techniques

    Directory of Open Access Journals (Sweden)

    Subathra Balasubramanian

    2010-10-01

    Full Text Available For nonlinear dynamic systems, first-principles-based modeling and control are difficult to implement. In this study, a fuzzy controller and a recurrent fuzzy controller are developed for a MIMO process. A fuzzy logic controller is a model-free controller designed based on knowledge about the process. Two types of rule-based fuzzy models are available: the linguistic (Mamdani) model and the Takagi-Sugeno (TS) model. Of these two, the Takagi-Sugeno model has attracted the most attention. The application of fuzzy controllers is limited to static processes due to their feedforward structure. However, most real-time processes are dynamic and require the history of input/output data. In order to store past values, a memory unit is needed, which is introduced by the recurrent structure. The proposed recurrent fuzzy structure is used to develop a controller for a two-tank heating process. Both controllers are designed and implemented in a real-time environment and their performance is compared.
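
    A minimal fuzzy-controller sketch with singleton consequents in the Takagi-Sugeno spirit; the membership functions, rules and heater-command values are illustrative assumptions, not the study's two-tank controllers:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_controller(error):
    """Map a temperature error to a heater command by fuzzy inference."""
    # Fuzzification: membership of the error in each linguistic set.
    mu = {"negative": tri(error, -10.0, -5.0, 0.0),
          "zero":     tri(error, -2.0, 0.0, 2.0),
          "positive": tri(error, 0.0, 5.0, 10.0)}
    # Rule base: IF error IS <set> THEN command IS <singleton> (TS-style).
    command = {"negative": 0.0, "zero": 50.0, "positive": 100.0}
    # Defuzzification: firing-strength-weighted average of the consequents.
    total = sum(mu.values())
    return sum(mu[k] * command[k] for k in mu) / total if total else 50.0

print(fuzzy_controller(1.0))   # ~64.3: mostly the 'zero' rule, some 'positive'
```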

  14. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, the verification of the structural safety of a package against the free-drop impact accident should be carried out. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although the usage of LS-INGRID seems difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID has to be used with LS-DYNA. This report presents basic explanations of the structure and commands of LS-INGRID, basic modelling examples, and advanced modelling, so that it can be used for the impact analysis of various packages. New users can easily build complex models by studying the basic examples presented in this report, from the modelling through to the loading and constraint conditions.

  15. A study on the modeling techniques using LS-INGRID

    International Nuclear Information System (INIS)

    Ku, J. H.; Park, S. W.

    2001-03-01

    For the development of radioactive material transport packages, the verification of the structural safety of a package against the free-drop impact accident should be carried out. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although the usage of LS-INGRID seems difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID has to be used with LS-DYNA. This report presents basic explanations of the structure and commands of LS-INGRID, basic modelling examples, and advanced modelling, so that it can be used for the impact analysis of various packages. New users can easily build complex models by studying the basic examples presented in this report, from the modelling through to the loading and constraint conditions.

  16. New Cosmic Center Universe Model Matches Eight of Big Bang's Major Predictions Without The F-L Paradigm

    CERN Document Server

    Gentry, R V

    2003-01-01

    Accompanying disproof of the F-L expansion paradigm eliminates the basis for expansion redshifts, which in turn eliminates the basis for the Cosmological Principle. The universe is not the same everywhere. Instead, the spherical symmetry of the cosmos demanded by the Hubble redshift relation proves the universe is isotropic about a nearby universal Center. This is the foundation of the relatively new Cosmic Center Universe (CCU) model, which accounts for, explains, or predicts: (i) the Hubble redshift relation, (ii) a CBR redshift relation that fits all current CBR measurements, (iii) the recently discovered velocity dipole distribution of radiogalaxies, (iv) the well-known time dilation of SNeIa light curves, (v) the Sunyaev-Zeldovich thermal effect, (vi) Olbers' paradox, (vii) SN dimming for z < 1, (viii) for z > 1 an enhanced brightness that fits SN 1997ff measurements, (ix) the existence of extreme redshift (z > 10) objects which, when observed, will further distinguish it from the big bang. The CCU model also plausibly expl...

  17. GENERALIZATION TECHNIQUE FOR 2D+SCALE DHE DATA MODEL

    Directory of Open Access Journals (Sweden)

    H. Karim

    2016-10-01

    Full Text Available Different users or applications need different scale models, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised on fulfilling the GIS requirement of retaining details while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, into a 3D model, but the implementation of the scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting the scale dimension. Generally, the Dual Half-Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure towards integration with the scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models for the DHE data structure forms the major discussion of this paper. We strongly believe that some advantages, such as local modification and topological elements (navigation, query and semantic information) in the scale dimension, could be used for future 3D-scale applications.

  18. An Implementation of the Frequency Matching Method

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer

    During the last decade multiple-point statistics has become increasingly popular as a tool for incorporating complex prior information when solving inverse problems in geosciences. A variety of methods have been proposed, but often their implementation is not straightforward. One of these methods is the recently proposed Frequency Matching method to compute the maximum a posteriori model of an inverse problem where multiple-point statistics, learned from a training image, is used to formulate a closed-form expression for an a priori probability density function. This paper discusses aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method.

  19. Probabilistic seismic history matching using binary images

    Science.gov (United States)

    Davolio, Alessandra; Schiozer, Denis Jose

    2018-02-01

    Currently, the goal of history-matching procedures is not only to provide a model matching the observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included in history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude, or pressure and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative that avoids these procedures is using binary images in SHM, as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build-up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history matchings were conducted using: only well data, well and 4DS data, and only 4DS data. The methodology is very flexible and successfully utilized the addition of binary images for seismic objective functions. Results proved the good convergence of the method in a few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well-matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed in 4DS data. The use of binary images in SHM is relatively new, with few examples in the literature. This work enriches this discussion by presenting a new
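
    A minimal sketch of a binary-image objective of the kind described; the binarization threshold and the Jaccard-type shape similarity are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def binary_image(attribute_map, threshold):
    """Binarize a seismic attribute map: 1 inside the anomaly, 0 outside."""
    return (attribute_map > threshold).astype(int)

def shape_mismatch(observed, simulated):
    """Objective comparing anomaly shapes rather than physical values.

    Uses 1 - Jaccard similarity of the two binary anomaly masks, so a
    simulated model is rewarded for pressurized zones that overlap the
    observed ones regardless of amplitude calibration.
    """
    intersection = np.logical_and(observed, simulated).sum()
    union = np.logical_or(observed, simulated).sum()
    return 1.0 - intersection / union if union else 0.0

rng = np.random.default_rng(7)
obs_map = rng.normal(size=(50, 50))                 # stand-in observed 4D attribute
sim_map = obs_map + rng.normal(0, 0.5, (50, 50))    # stand-in simulated response
obs_bin, sim_bin = binary_image(obs_map, 1.0), binary_image(sim_map, 1.0)
print(f"shape mismatch: {shape_mismatch(obs_bin, sim_bin):.3f}")  # 0 = perfect overlap
```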

  20. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains, for the purpose of history matching in a distributed computing environment, has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter r_D. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple r_D values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different r_D parameters, principal component analysis (PCA) of the Hessian matrix has been performed. The Hessian matrix summarizes the sensitivity of the objective function, at a given step of the history matching, to the model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid-block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which can be computationally costly and restricts the current approach to

  1. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...

  2. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  3. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits... Though extant literature has shown the importance of formal modelling techniques, the impact of utilising these techniques remains relatively unknown. Therefore, this article studies three main areas: (1) the impact of using modelling techniques based on Unified Modelling Language (UML), in which... ability to reduce the number of product variants. This paper contributes to an increased understanding of what companies can gain from using more formalised modelling techniques in configurator projects, and under what circumstances they should be used.

  4. Biliary System Architecture: Experimental Models and Visualization Techniques

    Czech Academy of Sciences Publication Activity Database

    Sarnová, Lenka; Gregor, Martin

    2017-01-01

    Vol. 66, No. 3 (2017), pp. 383-390. ISSN 0862-8408. R&D Projects: GA MŠk(CZ) LQ1604; GA ČR GA15-23858S. Institutional support: RVO:68378050. Keywords: biliary system * mouse model * cholestasis * visualisation * morphology. Subject RIV: EB - Genetics; Molecular Biology. OECD field: Cell biology. Impact factor: 1.461, year: 2016

  5. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  6. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  7. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    Discriminant analysis (DA) shows that all seven parameters (CO, O3, PM10, SO2, NOx, NO and NO2) gave the most significant variables after the stepwise backward mode. PCA identifies that the major source of air pollution is the combustion of fossil fuels in motor vehicles and industrial activities. The ANN model shows a better prediction compared to the ...

  8. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  9. QCD next-to-leading order predictions matched to parton showers for vector-like quark models

    CERN Document Server

    Fuks, Benjamin

    2017-02-27

    Vector-like quarks are featured in a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to precise extraction of experimental limits on vector-like quarks...

  10. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.

    Science.gov (United States)

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-01-01

    Vector-like quarks are featured in a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to precise extraction of experimental limits on vector-like quarks thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.

  11. Dynamic Caliper Matching

    OpenAIRE

    Paweł Strawiński

    2011-01-01

    Matched sampling is a methodology used to estimate treatment effects. A caliper mechanism is used to achieve better similarity among matched pairs. We investigate the finite-sample properties of matching with calipers and propose a slight modification to the existing mechanism. The simulation study compares the performance of both methods and shows that a standard caliper performs well only in the case of a constant treatment effect or a uniform propensity score distribution. Secondly, in a case of non-uniform ...

  12. Anomalous dispersion enhanced Cerenkov phase-matching

    Energy Technology Data Exchange (ETDEWEB)

    Kowalczyk, T.C.; Singer, K.D. [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Physics; Cahill, P.A. [Sandia National Labs., Albuquerque, NM (United States)

    1993-11-01

    The authors report on a scheme for phase-matching second harmonic generation in polymer waveguides based on the use of anomalous dispersion to optimize Cerenkov phase matching. They have used the theoretical results of Hashizume et al. and Onda and Ito to design an optimum structure for phase-matched conversion. They have found that the use of anomalous dispersion in the design results in a 100-fold enhancement in the calculated conversion efficiency. This technique also overcomes the limitation of anomalous dispersion phase-matching which results from absorption at the second harmonic. Experiments are in progress to demonstrate these results.

  13. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and Ionosonde parameters (hmF2 and foF2) has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low-latitude regions.
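
    A minimal global-best particle swarm optimization loop of the kind used to train such a network; the quadratic objective below is a stand-in for the network's prediction error on S4 data, and all hyperparameters are illustrative:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # positions (e.g. NN weights)
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Stand-in objective; replace with the network's forecast error on S4 data.
best_w, best_f = pso(lambda w_vec: np.sum((w_vec - 1.0) ** 2), dim=5)
print(best_w.round(3), round(best_f, 6))
```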

  14. Techniques for studies of unbinned model independent CP violation

    Energy Technology Data Exchange (ETDEWEB)

    Bedford, Nicholas; Weisser, Constantin; Parkes, Chris; Gersabeck, Marco; Brodzicka, Jolanta; Chen, Shanzhen [University of Manchester (United Kingdom)

    2016-07-01

    Charge-Parity (CP) violation is a known part of the Standard Model and has been observed and measured in both the B and K meson systems. The observed levels, however, are insufficient to explain the observed matter-antimatter asymmetry in the Universe, and so other sources need to be found. One area of current investigation is the D meson system, where predicted levels of CP violation are much lower than in the B and K meson systems. This means that more sensitive methods are required when searching for CP violation in this system. Several unbinned model independent methods have been proposed for this purpose, all of which need to be optimised and their sensitivities compared.

  15. Suitability of sheet bending modelling techniques in CAPP applications

    OpenAIRE

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and FEM simulations are discussed against the background of the required predictable accuracy in small-batch part manufacturing and FMS environments. The topics are limited to those relevant to bending...

  16. Solving microwave heating model using Hermite-Pade approximation technique

    International Nuclear Information System (INIS)

    Makinde, O.D.

    2005-11-01

    We employ the Hermite-Pade approximation method to explicitly construct the approximate solution of steady-state reaction-diffusion equations with a source term that arises in modeling microwave heating in an infinite slab with isothermal walls. In particular, we consider the case where the source term decreases spatially and increases with temperature. The important properties of the temperature fields, including bifurcations and thermal criticality, are discussed. (author)

  17. PLATO: PSF modelling using a micro-scanning technique

    Directory of Open Access Journals (Sweden)

    Ouazzani R-M.

    2015-01-01

    Full Text Available The PLATO space mission is designed to detect telluric planets in the habitable zone of solar-type stars, and simultaneously characterise the host star using ultra-high-precision photometry. The photometry will be performed on board using weighted masks. However, to reach the required precision, corrections will have to be performed by the ground segment and will rely on precise knowledge of the instrument PSF (Point Spread Function). Here we propose to model the PSF using a micro-scanning method.

  18. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    Science.gov (United States)

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5-year survival), 1731 patients with traumatic brain injury (22.3% 6-month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN) and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20-fold, 10-fold and 6-fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). The classical techniques were less data hungry than the modern techniques: the RF, SVM and NN models showed instability and a high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and a small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.

  19. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2, followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure the residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without an apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.

  20. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  1. An open data repository and a data processing software toolset of an equivalent Nordic grid model matched to historical electricity market data.

    Science.gov (United States)

    Vanfretti, Luigi; Olsen, Svein H; Arava, V S Narasimham; Laera, Giuseppe; Bidadfar, Ali; Rabuzin, Tin; Jakobsen, Sigurd H; Lavenius, Jan; Baudette, Maxime; Gómez-López, Francisco J

    2017-04-01

    This article presents an open data repository, the methodology used to generate it, and the associated data processing software developed to consolidate an hourly-snapshot historical data set for the year 2015 into an equivalent Nordic power grid model (aka Nordic 44). The consolidation was achieved by matching the model's physical response with respect to historical power flow records in the bidding regions of the Nordic grid that are available from the Nordic electricity market agent, Nord Pool. The model is made available in the form of CIM v14, Modelica and PSS/E (Siemens PTI) files. The Nordic 44 model in Modelica and PSS/E was first presented in the paper titled "iTesla Power Systems Library (iPSL): A Modelica library for phasor time-domain simulations" (Vanfretti et al., 2016) [1] for a single snapshot. In the digital repository being made available with the submission of this paper (SmarTSLab_Nordic44 Repository at GitHub, 2016) [2], a total of 8760 snapshots (for the year 2015) are provided that can be used to initialize and execute dynamic simulations using tools compatible with CIM v14, the Modelica language and the proprietary PSS/E tool. The Python scripts to generate the snapshots (processed data) are also available, with all the data, in the GitHub repository (SmarTSLab_Nordic44 Repository at GitHub, 2016) [2]. This Nordic 44 equivalent model was also used in the iTesla project (iTesla) [3] to carry out simulations within a dynamic security assessment toolset (iTesla, 2016) [4], and has been further enhanced during the ITEA3 OpenCPS project (iTEA3) [5]. The raw and processed data and the output models utilized within the iTesla platform (iTesla, 2016) [4] are also available in the repository. The CIM and Modelica snapshots of the "Nordic 44" model for the year 2015 are available in a Zenodo repository.

  2. Robust image modeling technique with a bioluminescence image segmentation application

    Science.gov (United States)

    Zhong, Jianghong; Wang, Ruiping; Tian, Jie

    2009-02-01

    A robust pattern classifier algorithm for the variable symmetric plane model, where the driving noise is a mixture of a Gaussian and an outlier process, is developed, and the accuracy and high-speed performance of the pattern recognition algorithm are demonstrated. Bioluminescence tomography (BLT) has recently gained wide acceptance in the field of in vivo small animal molecular imaging, so it is very important for BLT to acquire a high-precision region of interest in a bioluminescence image (BLI) in order to avoid losses caused by inaccurate quantitative analysis. An algorithm based on this model is developed to improve operation speed; it estimates the parameters and the original image intensity simultaneously from the noise-corrupted image produced by the BLT optical hardware system. The focus pixel value is obtained from the symmetric plane according to a realistic assumption for the noise sequence in the restored image. The neighborhood size is adaptive and small, and the classifier function is based on statistical features. If the qualifications for the classifier are satisfied, the focus pixel intensity is set to the largest value in the neighborhood; otherwise, it is set to zero. Finally, pseudo-color is added to the segmented bioluminescence image. The whole process has been implemented on our 2D BLT optical system platform, and the model is validated.

  3. Semantic Data Matching: Principles and Performance

    Science.gov (United States)

    Deaton, Russell; Doan, Thao; Schweiger, Tom

    Automated and real-time management of customer relationships requires robust and intelligent data matching across widespread and diverse data sources. Simple string matching algorithms, such as dynamic programming, can handle typographical errors in the data, but are less able to match records that require contextual and experiential knowledge. Latent Semantic Indexing (LSI) (Berry et al.; Deerwester et al.) is a machine intelligence technique that can match data based upon higher-order structure, and is able to handle difficult problems, such as words that have different meanings but the same spelling, are synonymous, or have multiple meanings. Essentially, the technique matches records based upon context, by mathematically quantifying when terms occur in the same record.
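
    To make the idea concrete, the following is a minimal sketch of LSI-style matching using a truncated SVD; the tiny term-record matrix, the choice of k, and all names are invented for illustration and are not taken from the record above.

      # Minimal LSI-style record matching sketch (hypothetical data).
      import numpy as np

      # Rows: terms, columns: records (term-frequency counts).
      A = np.array([
          [2, 0, 1, 0],   # "street"
          [0, 1, 0, 2],   # "st"
          [1, 1, 0, 0],   # "avenue"
          [0, 0, 2, 1],   # "ave"
      ], dtype=float)

      # Truncated SVD: keep k latent dimensions capturing term co-occurrence.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2
      records = (np.diag(s[:k]) @ Vt[:k]).T   # records embedded in latent space

      def match(query_counts):
          """Project a query into latent space and rank records by cosine similarity."""
          q = np.linalg.pinv(np.diag(s[:k])) @ U[:, :k].T @ query_counts
          sims = records @ q / (np.linalg.norm(records, axis=1) * np.linalg.norm(q) + 1e-12)
          return np.argsort(-sims)

      print(match(np.array([1.0, 0, 0, 0])))  # records ranked by latent similarity

    Because the latent dimensions are built from co-occurrence, a query containing "street" can also rank records that only contain "st" highly, which is exactly the contextual matching the record describes.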

  4. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g., smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
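
    As an illustration of method (1), the simple prior matching-moments estimator, the sketch below equates the sample mean and variance of the per-component failure fractions with the moments of a beta prior; the failure data are invented, and this simple estimator deliberately ignores the binomial sampling noise that the marginal-based methods account for.

      # Sketch of the prior matching-moments estimator for a beta-binomial model.
      # Data are hypothetical: k failures observed in n demands for each component.
      import numpy as np

      k = np.array([0, 1, 0, 2, 0, 1])        # failures on demand
      n = np.array([50, 40, 60, 45, 55, 50])  # demands

      p_hat = k / n                  # per-component failure probability estimates
      m, v = p_hat.mean(), p_hat.var(ddof=1)

      # Match mean and variance of a beta(a, b) prior: m = a/(a+b),
      # v = ab / ((a+b)^2 (a+b+1))  =>  a + b = m(1-m)/v - 1.
      s = m * (1.0 - m) / v - 1.0
      a, b = m * s, (1.0 - m) * s
      print(f"beta prior: alpha={a:.3f}, beta={b:.3f}")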

  5. Use of artificial intelligence as an innovative donor-recipient matching model for liver transplantation: results from a multicenter Spanish study.

    Science.gov (United States)

    Briceño, Javier; Cruz-Ramírez, Manuel; Prieto, Martín; Navasa, Miguel; Ortiz de Urbina, Jorge; Orti, Rafael; Gómez-Bravo, Miguel-Ángel; Otero, Alejandra; Varo, Evaristo; Tomé, Santiago; Clemente, Gerardo; Bañares, Rafael; Bárcena, Rafael; Cuervas-Mons, Valentín; Solórzano, Guillermo; Vinaixa, Carmen; Rubín, Angel; Colmenero, Jordi; Valdivieso, Andrés; Ciria, Rubén; Hervás-Martínez, César; de la Mata, Manuel

    2014-11-01

    There is an increasing discrepancy between the number of potential liver graft recipients and the number of organs available. Organ allocation should follow the concept of benefit of survival, avoiding human-innate subjectivity. The aim of this study is to use artificial neural networks (ANNs) for donor-recipient (D-R) matching in liver transplantation (LT) and to compare their accuracy with validated scores (MELD, D-MELD, DRI, P-SOFT, SOFT, and BAR) of graft survival. 64 donor and recipient variables from a set of 1003 LTs from a multicenter study including 11 Spanish centres were included. For each D-R pair, common statistics (simple and multiple regression models) and ANN formulae for two non-complementary probability models of 3-month graft survival and loss were calculated: a positive-survival (NN-CCR) and a negative-loss (NN-MS) model. The NN models were obtained by using the Neural Net Evolutionary Programming (NNEP) algorithm. Additionally, receiver operating curves (ROC) were performed to validate ANNs against the other scores. Optimal results for the NN-CCR and NN-MS models were obtained, with the best performance in predicting the probability of graft survival (90.79%) and loss (71.42%) for each D-R pair, significantly improving results from multiple regressions. ROC curves for 3-month graft survival and loss predictions were significantly more accurate for ANN than for the other scores in both NN-CCR (AUROC-ANN=0.80 vs. -MELD=0.50; -D-MELD=0.54; -P-SOFT=0.54; -SOFT=0.55; -BAR=0.67 and -DRI=0.42) and NN-MS (AUROC-ANN=0.82 vs. -MELD=0.41; -D-MELD=0.47; -P-SOFT=0.43; -SOFT=0.57, -BAR=0.61 and -DRI=0.48). ANNs may be considered a powerful decision-making technology for this dataset, optimizing the principles of justice, efficiency and equity. This may be a useful tool for predicting the 3-month outcome and a potential research area for future D-R matching models. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  6. A Phase Matching, Adiabatic Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Lemery, Francois [Hamburg U.; Flöttmann, Klaus [DESY; Kärtner, Franz [CFEL, Hamburg; Piot, Philippe [Northern Illinois U.

    2017-05-01

    Tabletop accelerators are a thing of the future. Reducing their size will require scaling down electromagnetic wavelengths; however, without correspondingly high field gradients, particles will be more susceptible to phase-slippage – especially at low energy. We investigate how an adiabatically-tapered dielectric-lined waveguide could maintain phase-matching between the accelerating mode and electron bunch. We benchmark our simple model with CST and implement it into ASTRA; finally we provide a first glimpse into the beam dynamics in a phase-matching accelerator.
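
    In rough terms (notation assumed here, not taken from the paper), the taper must keep the local phase velocity of the accelerating mode synchronous with the electron bunch as it gains energy:

      \[
        v_p(z) \;=\; \frac{\omega}{k_z(z)} \;=\; v_e(z) \;=\; \beta(z)\,c ,
      \]

    so the dielectric lining is tapered along z such that k_z(z) tracks beta(z) -> 1; with a fixed geometry and a modest field gradient, the low-energy bunch would otherwise slip out of phase with the mode.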

  7. A study of a matching pixel by pixel (MPP) algorithm to establish an empirical model of water quality mapping, as based on unmanned aerial vehicle (UAV) images

    Science.gov (United States)

    Su, Tung-Ching

    2017-06-01

    Linear regression models are a popular choice for the relationships between water quality parameters and bands (or band ratios) of remote sensing data. However, this research regards the phenomena of mixed pixels, specular reflection, and water fluidity as challenges to establishing a robust regression model. Based on in situ measurements and remote sensing data, this study presents an enumeration-based algorithm, called matching pixel by pixel (MPP), and tests its performance in an empirical model of water quality mapping. Four small reservoirs in Kinmen, Taiwan, each covering only a few hundred thousand m2, are selected as the study sites. Multispectral sensors carried on an unmanned aerial vehicle (UAV) are adopted to acquire remote sensing data for water quality parameters, including chlorophyll-a (Chl-a), Secchi disk depth (SDD), and turbidity in the reservoirs. The experimental results indicate that, while MPP can reduce the influence of specular reflection on regression model establishment, specular reflection does hamper the correction of thematic map production. Due to water fluidity, in situ sampling should be followed by UAV imaging as soon as possible. Excluding turbidity, the obtained estimation accuracy can satisfy the national standard.

  8. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware that is more powerful and potentially dangerous than any other. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
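
    As a hedged illustration of the biologically inspired ingredient, here is a minimal discrete-time MSEIR-style propagation sketch (M immune, S susceptible, E compromised but dormant, I active bots, R cleaned); all rates and population sizes are invented, and the jump-diffusion component the authors couple to it is omitted.

      # Minimal discrete-time MSEIR propagation sketch with hypothetical rates.
      # M: patched/immune hosts, S: susceptible, E: compromised but dormant,
      # I: active bots, R: cleaned/removed hosts.
      delta, beta, sigma, gamma = 0.001, 0.3, 0.2, 0.05
      M, S, E, I, R = 2000.0, 7000.0, 0.0, 10.0, 0.0
      N = M + S + E + I + R

      for day in range(60):
          new_exposed = beta * S * I / N     # contact-driven compromise
          new_active  = sigma * E            # dormant bots activated by the herder
          new_removed = gamma * I            # bots detected and cleaned
          waning      = delta * M            # immune hosts drifting back to susceptible
          M += -waning
          S += waning - new_exposed
          E += new_exposed - new_active
          I += new_active - new_removed
          R += new_removed

      print(f"active bots after 60 days: {I:.0f}")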

  9. Turbine adapted maps for turbocharger engine matching

    Energy Technology Data Exchange (ETDEWEB)

    Tancrez, M. [PSA - Peugeot Citroen, 18 rue des fauvelles, La Garenne-Colombes (France); Galindo, J.; Guardiola, C.; Fajardo, P.; Varnier, O. [CMT - Motores Termicos, Universidad Politecnica de Valencia (Spain)

    2011-01-15

    This paper presents a new representation of turbine performance maps oriented towards turbocharger characterization. The aim of this plot is to provide a more compact and suitable form to implement in engine simulation models and to interpolate data from turbocharger test benches. The new map is based on the use of conservative parameters, such as turbocharger power and turbine mass flow, to describe the turbine performance at all VGT positions. The curves obtained are accurately fitted with quadratic polynomials, and simple interpolation techniques give reliable results. Two turbochargers characterized in a steady flow rig were used to illustrate the representation. After being implemented in a turbocharger submodel, the results obtained with the model were compared successfully against turbine performance evaluated in engine test cells. A practical application in turbocharger matching is also provided to show how this new map can be directly employed in engine design. (author)
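
    A minimal sketch of the fitting step described above, using invented map points for one VGT position: fit a quadratic in the conservative parameters and interpolate between measured operating points.

      # Sketch: fit a quadratic to (turbine mass flow, turbocharger power) map
      # points for one VGT position and interpolate. All data are invented.
      import numpy as np

      m_dot = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # turbine mass flow [kg/s]
      power = np.array([0.8, 2.9, 6.1, 10.5, 16.0])      # turbocharger power [kW]

      coeffs = np.polyfit(m_dot, power, deg=2)           # quadratic map curve
      map_curve = np.poly1d(coeffs)

      print(map_curve(0.05))   # interpolated power at an unmeasured operating point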

  10. Sequence Matching Analysis for Curriculum Development

    Directory of Open Access Journals (Sweden)

    Liem Yenny Bendatu

    2015-06-01

    Full Text Available Many organizations apply information technologies to support their business processes. Using these information technologies, the actual events are recorded and utilized to check conformance with a predefined model. Conformance checking is an approach to measure the fitness and appropriateness between a process model and actual events. However, when there are multiple events with the same timestamp, the traditional approach is unable to produce such measures. This study attempts to develop a sequence matching analysis. Taking conformance checking as the basis of this approach, the proposed approach utilizes current control-flow techniques in the process mining domain. A case study in the field of educational processes has been conducted. This study also proposes a curriculum analysis framework to test the proposed approach. By considering the learning sequences of students, it produces measurements useful for curriculum development. Finally, the result of the proposed approach has been verified by relevant instructors for further development.

  11. Ontology Matching Across Domains

    Science.gov (United States)

    2010-05-01

    ... matching include GMO [1], Anchor-Prompt [2], and Similarity Flooding [3]. GMO is an iterative structural matcher which uses RDF bipartite graphs to ... AFRL under contract # FA8750-09-C-0058. References: [1] Hu, W., Jian, N., Qu, Y., Wang, Y., "GMO: a graph matching for ontologies", in: Proceedings of ...

  12. Precision and trueness of dental models manufactured with different 3-dimensional printing techniques.

    Science.gov (United States)

    Kim, Soo-Yeon; Shin, Yoo-Seok; Jung, Hwi-Dong; Hwang, Chung-Ju; Baik, Hyoung-Seon; Cha, Jung-Yul

    2018-01-01

    In this study, we assessed the precision and trueness of dental models printed with 3-dimensional (3D) printers via different printing techniques. Digital reference models were printed 5 times using stereolithography apparatus (SLA), digital light processing (DLP), fused filament fabrication (FFF), and the PolyJet technique. The 3D printed models were scanned and evaluated for tooth, arch, and occlusion measurements. Precision and trueness were analyzed with root mean squares (RMS) for the differences in each measurement. Differences in measurement variables among the 3D printing techniques were analyzed by 1-way analysis of variance (α = 0.05). Except in trueness of occlusion measurements, there were significant differences in all measurements among the 4 techniques (P < .05). The PolyJet and DLP techniques exhibited significantly different mean RMS values of precision than the SLA (88 ± 14 μm) and FFF (99 ± 14 μm) techniques (P < .05). Mean RMS values of trueness differed among the 4 techniques: SLA (107 ± 11 μm), DLP (143 ± 8 μm), FFF (188 ± 14 μm), and PolyJet (78 ± 9 μm) (P < .05), and trueness also differed significantly from that of DLP (469 ± 49 μm) and FFF (409 ± 36 μm) in occlusion-related comparisons (P < .05). Overall, the 4 techniques showed significant differences in precision of all measurements and in trueness of tooth and arch measurements. The PolyJet and DLP techniques were more precise than the FFF and SLA techniques, with the PolyJet technique having the highest accuracy. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  13. Matched-pair classification

    Energy Technology Data Exchange (ETDEWEB)

    Theiler, James P [Los Alamos National Laboratory

    2009-01-01

    Following an analogous distinction in statistical hypothesis testing, we investigate variants of machine learning where the training set comes in matched pairs. We demonstrate that even conventional classifiers can exhibit improved performance when the input data has a matched-pair structure. Online algorithms, in particular, converge quicker when the data is presented in pairs. In some scenarios (such as the weak signal detection problem), matched pairs can be generated from independent samples, with the effect not only doubling the nominal size of the training set, but of providing the structure that leads to better learning. A family of 'dipole' algorithms is introduced that explicitly takes advantage of matched-pair structure in the input data and leads to further performance gains. Finally, we illustrate the application of matched-pair learning to chemical plume detection in hyperspectral imagery.

  14. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
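
    For intuition, the univariate normal (Gaussian) closure illustrates how the moment hierarchy is truncated (a textbook special case, not the paper's multivariate construction): the third central moment of a Gaussian vanishes, so the third raw moment is expressed through the first two,

      \[
        \mathbb{E}\big[(x-\mu)^3\big] = 0
        \quad\Longrightarrow\quad
        \mathbb{E}[x^3] = 3\,\mu\,\mathbb{E}[x^2] - 2\,\mu^3 ,
      \]

    which closes the moment equations at second order; the multivariate Gaussian, gamma and lognormal closures developed in the paper generalize this step with different distributional assumptions about the correlated species.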

  15. Using simulation models to evaluate ape nest survey techniques.

    Directory of Open Access Journals (Sweden)

    Ryan H Boyko

    Full Text Available BACKGROUND: Conservationists frequently use nest count surveys to estimate great ape population densities, yet the accuracy and precision of the resulting estimates are difficult to assess. METHODOLOGY/PRINCIPAL FINDINGS: We used mathematical simulations to model nest building behavior in an orangutan population to compare the quality of the population size estimates produced by two of the commonly used nest count methods, the 'marked recount method' and the 'matrix method.' We found that when observers missed even small proportions of nests in the first survey, the marked recount method produced large overestimates of the population size. Regardless of observer reliability, the matrix method produced substantial overestimates of the population size when surveying effort was low. With high observer reliability, both methods required surveying approximately 0.26% of the study area (0.26 km2 out of 100 km2 in this simulation) to achieve an accurate estimate of population size; at or above this sampling effort both methods produced estimates within 33% of the true population size 50% of the time. Both methods showed diminishing returns at survey efforts above 0.26% of the study area. The use of published nest decay estimates derived from other sites resulted in widely varying population size estimates that spanned nearly an entire order of magnitude. The marked recount method proved much better at detecting population declines, detecting 5% declines nearly 80% of the time even in the first year of decline. CONCLUSIONS/SIGNIFICANCE: These results highlight the fact that neither nest surveying method produces highly reliable population size estimates with any reasonable surveying effort, though either method could be used to obtain a gross population size estimate in an area. Conservation managers should determine whether the quality of these estimates is worth the money and effort required to produce them, and should generally limit surveying effort to

  16. Latent fingerprint matching.

    Science.gov (United States)

    Jain, Anil K; Feng, Jianjiang

    2011-01-01

    Latent fingerprint identification is of critical importance to law enforcement agencies in identifying suspects: Latent fingerprints are inadvertent impressions left by fingers on surfaces of objects. While tremendous progress has been made in plain and rolled fingerprint matching, latent fingerprint matching continues to be a difficult problem. Poor quality of ridge impressions, small finger area, and large nonlinear distortion are the main difficulties in latent fingerprint matching compared to plain or rolled fingerprint matching. We propose a system for matching latent fingerprints found at crime scenes to rolled fingerprints enrolled in law enforcement databases. In addition to minutiae, we also use extended features, including singularity, ridge quality map, ridge flow map, ridge wavelength map, and skeleton. We tested our system by matching 258 latents in the NIST SD27 database against a background database of 29,257 rolled fingerprints obtained by combining the NIST SD4, SD14, and SD27 databases. The minutiae-based baseline rank-1 identification rate of 34.9 percent was improved to 74 percent when extended features were used. In order to evaluate the relative importance of each extended feature, these features were incrementally used in the order of their cost in marking by latent experts. The experimental results indicate that singularity, ridge quality map, and ridge flow map are the most effective features in improving the matching accuracy.

  17. Full Semantics Preservation in Model Transformation - A Comparison of Proof Techniques

    NARCIS (Netherlands)

    Hülsbusch, Mathias; König, Barbara; Rensink, Arend; Semenyak, Maria; Soltenborn, Christian; Wehrheim, Heike

    Model transformation is a prime technique in modern, model-driven software design. One of the most challenging issues is to show that the semantics of the models is not affected by the transformation. So far, there is hardly any research into this issue, in particular in those cases where the source

  18. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  19. A Novel Model on DST-Induced Transplantation Tolerance by the Transfer of Self-Specific Donor tTregs to a Haplotype-Matched Organ Recipient.

    Science.gov (United States)

    Mohr Gregoriussen, Angelica Maria; Bohr, Henrik Georg

    2017-01-01

    Donor-specific blood transfusion (DST) can lead to significant prolongation of allograft survival in experimental animal models and sometimes human recipients of solid organs. The mechanisms responsible for the beneficial effect on graft survival have been a topic of research and debate for decades and are not yet fully elucidated. Once we discover how the details of the mechanisms involved are linked, we could be within reach of a procedure making it possible to establish donor-specific tolerance with minimal or no immunosuppressive medication. Today, it is well established that CD4+Foxp3+ regulatory T cells (Tregs) are indispensable for maintaining immunological self-tolerance. A large number of animal studies have also shown that Tregs are essential for establishing and maintaining transplantation tolerance. In this paper, we present a hypothesis of one H2-haplotype-matched DST-induced transplantation tolerance (in mice). The formulated hypothesis is based on a re-interpretation of data from an immunogenetic experiment published by Niimi and colleagues in 2000. It is of importance that the naïve recipient mice in this study were never immunosuppressed and were therefore fully immune competent during the course of tolerance induction. Based on the immunological status of the recipients, we suggest that one H2-haplotype-matched self-specific Tregs derived from the transfusion blood can be activated and multiply in the host by binding to antigen-presenting cells presenting allopeptides in their major histocompatibility complex (MHC) class II (MHC-II). We also suggest that the endothelial and epithelial cells within the solid organ allograft upregulate the expression of MHC-II and attract the expanded Treg population to suppress inflammation within the graft. We further suggest that this biological process, here termed MHC-II recruitment, is a vital survival mechanism for organs (or the organism in general) when attacked by an immune system.

  20. Pre-analysis techniques applied to area-based correlation aiming Digital Terrain Model generation

    Directory of Open Access Journals (Sweden)

    Maurício Galo

    2005-12-01

    Full Text Available Area-based matching is a useful procedure in some photogrammetric processes and its results are of crucial importance in applications such as relative orientation, phototriangulation and Digital Terrain Model generation. The successful determination of correspondence depends on radiometric and geometric factors. Considering these aspects, the use of procedures that previously estimate the quality of the parameters to be computed is a relevant issue. This paper describes these procedures and shows that the quality prediction can be computed before performing matching by correlation, through the analysis of the reference window. This procedure can be incorporated in the correspondence process for Digital Terrain Model generation and phototriangulation. The proposed approach comprises the estimation of the variance matrix of the translations from the gray levels in the reference window and the reduction of the search space using knowledge of the epipolar geometry. As a consequence, the correlation process becomes more reliable, avoiding the application of matching procedures in doubtful areas. Some experiments with simulated and real data are presented, demonstrating the efficiency of the proposed strategy.
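
    The following sketch illustrates the flavor of such a pre-analysis (an assumed variant, not the authors' exact formulation): in least-squares matching the covariance of the estimated translations is proportional to the inverse of the normal matrix built from the gray-level gradients of the reference window, so a nearly textureless window can be flagged as doubtful before any correlation is attempted.

      # Sketch: predict matchability of a reference window from its gray-level
      # gradients before running correlation. noise_var is an assumed image
      # noise variance; all windows below are synthetic.
      import numpy as np

      def predicted_translation_cov(win, noise_var=4.0):
          """Covariance of translation estimates ~ noise_var * inv(N), where N
          sums the outer products of the image gradients over the window."""
          gy, gx = np.gradient(win.astype(float))
          N = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                        [np.sum(gx * gy), np.sum(gy * gy)]])
          return noise_var * np.linalg.inv(N)

      rng = np.random.default_rng(0)
      textured = rng.integers(0, 255, (21, 21))                     # rich texture
      flat = np.full((21, 21), 128.0) + rng.normal(0, 1, (21, 21))  # nearly flat

      for name, w in [("textured", textured), ("flat", flat)]:
          cov = predicted_translation_cov(w)
          print(name, "predicted std (px):", np.sqrt(np.diag(cov)))

    The flat window yields a far larger predicted standard deviation, which is the signal to skip matching there.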

  1. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  2. Prediction of intracranial findings on CT-scans by alternative modelling techniques

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); M. Smits (Marion); D.W.J. Dippel (Diederik); M.G.M. Hunink (Myriam); E.W. Steyerberg (Ewout)

    2011-01-01

    Background: Prediction rules for intracranial traumatic findings in patients with minor head injury are designed to reduce the use of computed tomography (CT) without missing patients at risk for complications. This study investigates whether alternative modelling techniques might

  3. Face recognition using ensemble string matching.

    Science.gov (United States)

    Chen, Weiping; Gao, Yongsheng

    2013-12-01

    In this paper, we present a syntactic string matching approach to solve the frontal face recognition problem. String matching is a powerful partial matching technique, but is not suitable for frontal face recognition due to its requirement of globally sequential representation and the complex nature of human faces, containing discontinuous and non-sequential features. Here, we build a compact syntactic Stringface representation, which is an ensemble of strings. A novel ensemble string matching approach that can perform non-sequential string matching between two Stringfaces is proposed. It is invariant to the sequential order of strings and the direction of each string. The embedded partial matching mechanism enables our method to automatically use every piece of non-occluded region, regardless of shape, in the recognition process. The encouraging results demonstrate the feasibility and effectiveness of using syntactic methods for face recognition from a single exemplar image per person, breaking the barrier that prevents string matching techniques from being used for addressing complex image recognition problems. The proposed method not only achieved significantly better performance in recognizing partially occluded faces, but also showed its ability to perform direct matching between sketch faces and photo faces.

  4. A Shell/3D Modeling Technique for the Analysis of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2000-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with shell finite elements. Double Cantilever Beam, End Notched Flexure, and Single Leg Bending specimens were analyzed first using full 3D finite element models to obtain reference solutions. Mixed mode strain energy release rate distributions were computed using the virtual crack closure technique. The analyses were repeated using the shell/3D technique to study the feasibility for pure mode I, mode II and mixed mode I/II cases. Specimens with a unidirectional layup and with a multidirectional layup were simulated. For a local 3D model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
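
    For reference, the virtual crack closure technique evaluates the mode I strain energy release rate from the nodal force at the crack front and the relative opening displacement of the node pair behind it (standard VCCT form; here Δa is the element length at the front and b the element width):

      \[
        G_I = \frac{F_z \, \Delta w}{2\, \Delta a \, b} ,
      \]

    with analogous expressions for G_II and G_III built from the shear force and sliding displacement components.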

  5. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, whereas the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a technique for generating a genetic design from the tree-structured transfer function obtained from a Bond Graph. The work combines bond graphs for model representation with genetic programming for exploring the design space; the tree-structured transfer function results from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiports. The tree-structured form thus obtained from the Bond Graph is used to generate the genetic tree. Application studies will identify key issues for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique, its system response and transfer function with the conventional and Bond Graph methods are analyzed, and an approach to model order reduction is developed. The suggested algorithm and other known modern model order reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model order reduction technique developed in this paper has the smallest reduction errors and retains structural information in the final model. The system response and the stability analysis of the system transfer function obtained by the conventional and Bond Graph methods are compared and

  6. Propulsion modeling techniques and applications for the NASA Dryden X-30 real-time simulator

    Science.gov (United States)

    Hicks, John W.

    1991-01-01

    An overview is given of the flight planning activities to date in the current National Aero-Space Plane (NASP) program. The government flight-envelope expansion concept and other design flight operational assessments are discussed. The NASA Dryden NASP real-time simulator configuration is examined and hypersonic flight planning simulation propulsion modeling requirements are described. The major propulsion modeling techniques developed by the Edwards flight test team are outlined, and the application value of techniques for developmental hypersonic vehicles are discussed.

  7. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1 sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
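
    A heavily simplified single-stage sketch of the mechanism (the paper's model tracks several life stages, F1 sterility and partially sterile males; all parameters below are invented): wild fertility is diluted by a constant sterile release, so the population declines once the release pushes the effective growth factor below one.

      # Simplified sterile insect technique sketch: one-stage difference equation
      # with a constant sterile-male release and logistic damping (toy numbers).
      R = 1.8      # per-generation growth factor of the wild population
      S = 3e5      # sterile males released each generation (constant)
      K = 1e6      # carrying capacity

      N = 1e5      # initial wild population
      for gen in range(15):
          fertile = N / (N + S)                  # probability of a fertile mating
          N = R * N * fertile * (1.0 - N / K)    # offspring from fertile matings
          print(f"gen {gen + 1:2d}: wild population = {N:10.0f}")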

  8. Pediatric MATCH Infographic

    Science.gov (United States)

    Infographic explaining NCI-COG Pediatric MATCH, a cancer treatment clinical trial for children and adolescents, from 1 to 21 years of age, that is testing the use of precision medicine for pediatric cancers.

  9. PUMA: The Positional Update and Matching Algorithm

    Science.gov (United States)

    Line, J. L. B.; Webster, R. L.; Pindor, B.; Mitchell, D. A.; Trott, C. M.

    2017-01-01

    We present new software to cross-match low-frequency radio catalogues: the Positional Update and Matching Algorithm (PUMA). PUMA combines a positional Bayesian probabilistic approach with spectral matching criteria, allowing for confusing sources in the matching process. We go on to create a radio sky model using PUMA based on the Murchison Widefield Array Commissioning Survey, and are able to automatically cross-match 98.5% of sources. Using the characteristics of this sky model, we create simple simulated mock catalogues on which to test PUMA, and find that it can reliably recover the correct spectral indices of sources, along with ionospheric offsets. Finally, we use this sky model to calibrate and remove foreground sources from simulated interferometric data, generated using OSKAR (the Oxford University visibility generator). We demonstrate that there is a substantial improvement in foreground source removal when using higher frequency and higher resolution source positions, even when correcting positions by an average of 0.3 arcmin given a synthesised beam-width of 2.3 arcmin.
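
    The positional half of such an approach can be sketched as follows (a simplified stand-in, not the PUMA implementation, which additionally applies spectral criteria and handles confused multi-source matches); positions and the positional uncertainty are invented.

      # Sketch of positional cross-matching: score candidate counterparts with a
      # Gaussian positional likelihood, normalised over the candidates.
      import numpy as np

      sigma = 0.3 / 60.0  # positional uncertainty in degrees (0.3 arcmin)

      base = np.array([[50.001, -26.002], [50.010, -26.020]])   # RA, Dec of base sources
      cand = np.array([[50.0012, -26.0021], [50.013, -26.019], [50.5, -26.5]])

      for ra, dec in base:
          d2 = (cand[:, 0] - ra) ** 2 + (cand[:, 1] - dec) ** 2  # small-angle approx.
          like = np.exp(-0.5 * d2 / sigma ** 2)
          prob = like / like.sum()
          print(f"base ({ra:.4f}, {dec:.4f}) -> candidate {np.argmax(prob)}, P = {prob.max():.2f}")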

  10. Validation of a musculoskeletal model of lifting and its application for biomechanical evaluation of lifting techniques.

    Science.gov (United States)

    Mirakhorlo, Mojtaba; Azghani, Mahmood Reza; Kahrizi, Sedighe

    2014-01-01

    Lifting methods, including standing stance and technique, have broad effects on spine loading and stability. Previous studies explored lifting techniques in many biomechanical terms and documented changes in the muscular and postural response of the body as a function of technique. However, the combined impact of standing stance and lifting technique on the human musculoskeletal system had not been investigated. A whole-body musculoskeletal model of lifting was built in order to evaluate the impact of standing stance on muscle activation patterns and spine loading during each distinct lifting technique. The verified model was used with different stance widths during squat, stoop and semi-squat lifting to examine the effect of standing stance on each lifting technique. The model's muscle activity was validated against experimental muscle EMGs, resulting in Pearson's coefficients greater than 0.8. Results from the analyses show that the effect of stance width on biomechanical parameters depends on the lifting technique used. Standing stance in each distinct lifting technique exhibits positive and negative aspects, and no single combination can be recommended as better in terms of biomechanical parameters.

  11. New sunshine-based models for predicting global solar radiation using PSO (particle swarm optimization) technique

    International Nuclear Information System (INIS)

    Behrang, M.A.; Assareh, E.; Noghrehabadi, A.R.; Ghanbarzadeh, A.

    2011-01-01

    PSO (particle swarm optimization) technique is applied to estimate monthly average daily GSR (global solar radiation) on a horizontal surface for different regions of Iran. To achieve this, five new models were developed and six models were chosen from the literature. First, for each city, the empirical coefficients for all models were separately determined using the PSO technique. The results indicate that the new models presented in this study have better performance than existing models in the literature for 10 of the 17 cities considered. It is also shown that the empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitudes. Some case studies are presented to demonstrate this generalization, with the results showing good agreement with the measurements. More importantly, these case studies further validate the models developed and demonstrate their general applicability. Finally, the results of the PSO technique were compared with the results of SRTs (statistical regression techniques) on the Angstrom model for all 17 cities. The results showed that the empirical coefficients obtained for the Angstrom model based on PSO are more accurate than those from SRTs for all 17 cities. -- Highlights: → The first study to apply an intelligent optimization technique to more accurately determine empirical coefficients in solar radiation models. → New models which are presented in this study have better performance than existing models. → The empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitude. → A fair comparison between the performance of PSO and SRTs on GSR modeling.
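
    A minimal sketch of the idea, assuming the classical Angstrom relation H/H0 = a + b*(S/S0) and invented monthly data: a tiny PSO swarm searches the (a, b) space to minimize the squared residuals, which is exactly the role the empirical-coefficient fitting plays above.

      # Tiny PSO sketch: fit Angstrom coefficients a, b to invented data.
      import numpy as np

      rng = np.random.default_rng(1)
      s = np.linspace(0.4, 0.9, 12)                  # relative sunshine duration S/S0
      h = 0.25 + 0.5 * s + rng.normal(0, 0.01, 12)   # "measured" H/H0

      def cost(p):                                   # sum of squared residuals
          a, b = p[..., 0], p[..., 1]
          return ((a[:, None] + b[:, None] * s - h) ** 2).sum(axis=1)

      n, w, c1, c2 = 30, 0.7, 1.5, 1.5
      x = rng.uniform(0, 1, (n, 2))                  # particle positions (a, b)
      v = np.zeros((n, 2))
      pbest, pcost = x.copy(), cost(x)
      g = pbest[np.argmin(pcost)]                    # global best

      for _ in range(200):
          r1, r2 = rng.uniform(size=(2, n, 1))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = x + v
          c = cost(x)
          better = c < pcost
          pbest[better], pcost[better] = x[better], c[better]
          g = pbest[np.argmin(pcost)]

      print("PSO estimate: a=%.3f b=%.3f" % (g[0], g[1]))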

  12. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media

  13. Matching CCD images to a stellar catalog using locality-sensitive hashing

    Science.gov (United States)

    Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu

    2018-02-01

    Using a subset of observed stars in a CCD image to find their corresponding matched stars in a stellar catalog is an important issue in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods in star catalog matching: when more subgraph features are provided, the CCD images are recognized better. However, when the navigation feature database is large, these methods require more time to match the observing model. To solve this problem, this study investigates and improves subgraph isomorphism matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates the quadrilateral models in the navigation feature database into different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
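
    A toy sketch of the bucketing idea (a simplified stand-in for the paper's locality-sensitive hashing scheme; codes and IDs are invented): quantizing each quadrilateral's geometric-invariant code makes nearly identical quads collide in the same bucket, so only that bucket is searched instead of the whole feature database.

      # Bucket quadrilateral shape codes for fast catalog lookup.
      from collections import defaultdict

      def bucket_key(code, cell=0.05):
          """Quantize a geometric-invariant code so near-identical quads collide.
          Real LSH schemes also probe neighboring cells or use several shifted
          hash tables to avoid misses at cell boundaries."""
          return tuple(int(c / cell) for c in code)

      catalog = {                        # invariant code -> catalog quad id (invented)
          (0.31, 0.72, 0.44, 0.91): "quad-A",
          (0.12, 0.55, 0.67, 0.20): "quad-B",
      }

      buckets = defaultdict(list)
      for code, qid in catalog.items():
          buckets[bucket_key(code)].append((code, qid))

      observed = (0.309, 0.721, 0.441, 0.908)   # quad measured on the CCD image
      candidates = buckets.get(bucket_key(observed), [])
      print("candidate matches:", [qid for _, qid in candidates])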

  14. University Reactor Matching Grants Program

    International Nuclear Information System (INIS)

    John Valentine; Farzad Rahnema; Said Abdel-Khalik

    2003-01-01

    During the 2002 fiscal year, funds from the DOE matching grant program, along with matching funds from the industrial sponsors, were used to support research in the area of thermal-hydraulics. Both experimental and numerical research projects were performed. Experimental research focused on two areas: (1) identification of the root cause mechanism for axial offset anomaly in pressurized water reactors under prototypical reactor conditions, and (2) fluid dynamic aspects of thin liquid film protection schemes for inertial fusion reactor chambers. Numerical research focused on two areas: (1) multi-fluid modeling of both two-phase and two-component flows for steam conditioning and mist cooling applications, and (2) modeling of bounded Rayleigh-Taylor instability with interfacial mass transfer and fluid injection through a porous wall simulating the "wetted wall" protection scheme in inertial fusion reactor chambers. Details of activities in these areas are given.

  15. Changes in agricultural cropland areas between a water-surplus year and a water-deficit year impacting food security, determined using MODIS 250 m time-series data and spectral matching techniques, in the Krishna river basin (India)

    Science.gov (United States)

    Gumma, Murali Krishna; Thenkabail, Prasad S.; Muralikrishna, I.V.; Velpuri, Naga Manohar; Gangadhararao, P.T.; Dheeravath, V.; Biradar, C.M.; Nalan, S.A.; Gaur, A.

    2011-01-01

    The objective of this study was to investigate the changes in cropland areas as a result of water availability using Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series data and spectral matching techniques (SMTs). The study was conducted in the Krishna River basin in India, a very large river basin with an area of 265 752 km2 (26 575 200 ha), comparing a water-surplus year (2000–2001) and a water-deficit year (2002–2003). The MODIS 250 m time-series data and SMTs were found ideal for agricultural cropland change detection over large areas and provided fuzzy classification accuracies of 61–100% for various land-use classes and 61–81% for the rain-fed and irrigated classes. The most mixing change occurred between rain-fed cropland areas and informally irrigated (e.g. groundwater and small reservoir) areas. Hence separation of these two classes was the most difficult. The MODIS 250 m-derived irrigated cropland areas for the districts were highly correlated with the Indian Bureau of Statistics data, with R2-values between 0.82 and 0.86. The change in the net area irrigated was modest, with an irrigated area of 8 669 881 ha during the water-surplus year, as compared with 7 718 900 ha during the water-deficit year. However, this is quite misleading as most of the major changes occurred in cropping intensity, such as changing from higher intensity to lower intensity (e.g. from double crop to single crop). The changes in cropping intensity of the agricultural cropland areas that took place in the water-deficit year (2002–2003) when compared with the water-surplus year (2000–2001) in the Krishna basin were: (a) 1 078 564 ha changed from double crop to single crop, (b) 1 461 177 ha changed from continuous crop to single crop, (c) 704 172 ha changed from irrigated single crop to fallow and (d) 1 314 522 ha changed from minor irrigation (e.g. tanks, small reservoirs) to rain-fed. These are highly significant changes that will
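
    The core of a spectral matching technique can be sketched as ranking idealized class time-signatures by their similarity to a pixel's NDVI time series (all values below are invented; the actual study applies quantitative SMT metrics to MODIS 250 m time series over the full basin).

      # Rank reference class signatures by correlation with a pixel's NDVI series.
      import numpy as np

      signatures = {   # idealized monthly NDVI signatures per class (invented)
          "irrigated double crop": np.array([0.30, 0.50, 0.70, 0.60, 0.40, 0.30,
                                             0.40, 0.60, 0.75, 0.65, 0.45, 0.30]),
          "rain-fed single crop":  np.array([0.20, 0.20, 0.30, 0.50, 0.60, 0.40,
                                             0.25, 0.20, 0.20, 0.20, 0.20, 0.20]),
      }
      pixel = np.array([0.28, 0.48, 0.66, 0.62, 0.42, 0.30,
                        0.38, 0.58, 0.70, 0.60, 0.42, 0.31])

      for name, sig in signatures.items():
          r = np.corrcoef(pixel, sig)[0, 1]   # spectral correlation similarity
          print(f"{name}: r = {r:.3f}")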

  16. 3D MODELING OF ARCHITECTURE BY EDGE-MATCHING AND INTEGRATING THE POINT CLOUDS OF LASER SCANNER AND THOSE OF DIGITAL CAMERA

    Directory of Open Access Journals (Sweden)

    N. Kochi

    2012-07-01

    Full Text Available We have been developing a stereo-matching method and system based on digital photogrammetry, using a digital camera to make 3D measurements of various objects. We are also developing technology to process the enormous 3D point clouds obtained with a Terrestrial Laser Scanner (TLS). Here, we have developed a technology to produce a surface model by detecting the 3D edges in stereo images from a digital camera. We then register the 3D data obtained from the stereo images with the 3D edge data detected in the TLS point cloud, and have thus developed a new technology to fuse the 3D data of camera and TLS. The basic idea is to take stereo pictures with a digital camera around the areas that the scanner cannot reach because of occlusion. The camera, with digital photogrammetry, can instantly acquire data for complicated and hidden areas, minimizing the possibility of noise. The camera data are then integrated with the scanner data to automatically produce a complete model. In this presentation, therefore, we show (1) how to detect the 3D edges in the photo images and in the scanner's point cloud, (2) how to register both sets of 3D edges to produce the unified model, and (3) how to assess the accuracy and the speed of the analysis process, which turned out to be quite satisfactory.

  17. Equilibrium and matching under price controls

    NARCIS (Netherlands)

    Herings, P.J.J.

    2015-01-01

    The paper considers a one-to-one matching with contracts model in the presence of price controls. This set-up contains two important streams in the matching literature, those with and those without monetary transfers, as special cases and allows for intermediate cases with some restrictions on the

  18. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date.

  19. Large wind power plants modeling techniques for power system simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Larose, Christian; Gagnon, Richard; Turmel, Gilbert; Giroux, Pierre; Brochu, Jacques [IREQ Hydro-Quebec Research Institute, Varennes, QC (Canada); McNabb, Danielle; Lefebvre, Daniel [Hydro-Quebec TransEnergie, Montreal, QC (Canada)

    2009-07-01

    This paper presents efficient modeling techniques for the simulation of large wind power plants in the EMT domain using a parallel supercomputer. Using these techniques, large wind power plants can be simulated in detail, with each wind turbine individually represented, as well as the collector and receiving network. The simulation speed of the resulting models is fast enough to perform both EMT and transient stability studies. The techniques are applied to develop a detailed EMT model of a generic wind power plant consisting of 73 x 1.5-MW doubly-fed induction generator (DFIG) wind turbines. Validation of the modeling techniques is presented using a comparison with a Matlab/SimPowerSystems simulation. To demonstrate the simulation capabilities of these modeling techniques, simulations involving a 120-bus receiving network with two generic wind power plants (146 wind turbines) are performed. The complete system is modeled using the Hypersim simulator and Matlab/SimPowerSystems. The simulations are performed on a 32-processor supercomputer using an EMTP-like solution with a time step of 18.4 μs. The simulation runs only 10 times slower than real time, which is a huge gain in performance compared to traditional tools. The simulation is designed to run in real time so it never stops, resulting in the capability to perform thousands of tests via automatic testing tools. (orig.)

  20. Characterization of climate indices in models and observations using Hurst Exponent and Rényi Entropy Techniques

    Science.gov (United States)

    Newman, D.; Bhatt, U. S.; Wackerbauer, R.; Sanchez, R.; Polyakov, I.

    2009-12-01

    Because models are intrinsically incomplete and evolving, multiple methods are needed to characterize how well models match observations and where their weaknesses lie. For the study of climate, global climate models (GCMs) are the primary tool. Therefore, in order to improve our confidence in climate modeling and our understanding of the models' weaknesses, we need to apply more and more measures of various types until differences are found. Then we can decide whether these differences have important impacts on one's results and what they mean in terms of the weaknesses and missing physics in the models. In this work, we investigate a suite of National Center for Atmospheric Research (NCAR) Community Climate System Model (CCSM3) simulations of varied complexity, from fixed sea surface temperature simulations to fully coupled T85 simulations. Climate indices (e.g. NAO), constructed from the GCM simulations and observed data, are analyzed using Hurst exponent (R/S) and Rényi entropy methods to explore long-term and short-term dynamics (i.e. the temporal evolution of the time series). These methods identify clear differences between the models and observations, as well as between the models. One preliminary finding suggests that fixing midlatitude SSTs to observed values increases the differences between the model and observation dynamics at long time scales.
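
    For concreteness, here is a minimal rescaled-range (R/S) sketch of the kind of long-memory measure used here, run on synthetic white noise where the Hurst exponent should come out near 0.5 (window sizes and data are invented, not the study's indices).

      # Estimate a Hurst exponent via rescaled-range analysis.
      import numpy as np

      def rs(x):
          """Rescaled range of one window."""
          y = np.cumsum(x - x.mean())
          return (y.max() - y.min()) / x.std()

      rng = np.random.default_rng(2)
      series = rng.normal(size=4096)          # white noise: expect H ~ 0.5

      sizes = np.array([16, 32, 64, 128, 256, 512])
      avg_rs = [np.mean([rs(w) for w in
                         np.split(series[:(len(series) // n) * n], len(series) // n)])
                for n in sizes]

      H = np.polyfit(np.log(sizes), np.log(avg_rs), 1)[0]   # slope of log R/S vs log n
      print(f"estimated Hurst exponent: {H:.2f}")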

  1. Uncertainty analysis in rainfall-runoff modelling : Application of machine learning techniques

    NARCIS (Netherlands)

    Shrestha, D.l.

    2009-01-01

    This thesis presents powerful machine learning (ML) techniques to build predictive models of uncertainty with application to hydrological models. Two different methods are developed and tested. First one focuses on parameter uncertainty analysis by emulating the results of Monte Carlo simulations of

  2. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  3. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved prediction capability and have found application in geotechnical engineering. The aim of this research is to utilize a soft computing technique and Multiple Regression Models (MLR) for forecasting the California bearing ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction and CBR tests were performed. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was observed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
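
    The MLR side of such a comparison reduces to an ordinary least-squares fit; the sketch below uses invented index properties (liquid limit LL, plasticity index PI, maximum dry density MDD) and CBR values purely for illustration.

      # Multiple linear regression of CBR on soil index properties (toy data).
      import numpy as np

      X = np.array([[42, 18, 1.85], [35, 12, 1.92], [55, 25, 1.70],
                    [48, 20, 1.78], [38, 15, 1.88], [60, 30, 1.65]], dtype=float)
      y = np.array([6.2, 8.5, 3.1, 4.8, 7.4, 2.5])        # CBR (%)

      A = np.hstack([np.ones((len(X), 1)), X])            # add intercept column
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      pred = A @ coef

      ss_res = ((y - pred) ** 2).sum()
      ss_tot = ((y - y.mean()) ** 2).sum()
      print("coefficients:", np.round(coef, 3), " R^2 = %.3f" % (1 - ss_res / ss_tot))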

  5. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    Science.gov (United States)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
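
    The separable least-squares idea can be illustrated on a toy problem in which the linear amplitudes are eliminated analytically, leaving a nonlinear search over only the rate constants; the bi-exponential model and the Nelder-Mead optimizer below are assumptions of this sketch, not the paper's compartment equations.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      t = np.linspace(0.1, 10, 100)
      # Synthetic data: two exponential components, unknown rates and amplitudes
      y = 3.0 * np.exp(-0.4 * t) + 1.5 * np.exp(-2.0 * t) + rng.normal(0, 0.02, t.size)

      def residual_norm(rates):
          # For fixed nonlinear parameters (rates), the optimal amplitudes are a
          # linear least-squares solution -- the "separable" reduction in dimension.
          A = np.exp(-np.outer(t, rates))          # basis functions, one column per rate
          amps, *_ = np.linalg.lstsq(A, y, rcond=None)
          return np.sum((A @ amps - y) ** 2)

      # The nonlinear search is now over the 2 rates only, not all 4 parameters
      fit = minimize(residual_norm, x0=[0.1, 1.0], method="Nelder-Mead")
      A = np.exp(-np.outer(t, fit.x))
      amps, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("rates:", fit.x, "amplitudes:", amps)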

  6. Generation of 3-D finite element models of restored human teeth using micro-CT techniques.

    NARCIS (Netherlands)

    Verdonschot, N.J.J.; Fennis, W.M.M.; Kuys, R.H.; Stolk, J.; Kreulen, C.M.; Creugers, N.H.J.

    2001-01-01

    PURPOSE: This article describes the development of a three-dimensional finite element model of a premolar based on a microscale computed tomographic (CT) data-acquisition technique. The development of the model is part of a project studying the optimal design and geometry of adhesive tooth-colored …

  7. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  8. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)

  9. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
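
    A hedged sketch of mode extraction by Principal Component Analysis via the singular value decomposition; the synthetic turn-by-turn BPM data matrix below is an assumption standing in for real beam histories.

      import numpy as np

      rng = np.random.default_rng(3)
      n_turns, n_bpms = 2000, 40
      phase = rng.uniform(0, 2 * np.pi, n_bpms)

      # Synthetic beam histories: one betatron-like oscillation mode plus noise,
      # arranged as a (turns x BPMs) data matrix
      turns = np.arange(n_turns)
      signal = np.sin(0.31 * 2 * np.pi * turns[:, None] + phase[None, :])
      B = signal + 0.1 * rng.standard_normal((n_turns, n_bpms))
      B -= B.mean(axis=0)                  # remove the DC orbit at each BPM

      # Principal components via SVD: rows of Vt are spatial modes, columns of U
      # the corresponding temporal patterns; an oscillation with BPM-dependent
      # phase appears as a dominant singular-value pair
      U, s, Vt = np.linalg.svd(B, full_matrices=False)
      print("leading singular values:", s[:4])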

  10. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23) and log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)
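
    The essence of opacity sampling—replacing an integral over an enormous line list evaluated on a fine frequency grid with evaluations at a modest set of statistically sampled frequencies—can be sketched as follows; the Gaussian profiles and random line list are assumptions of this illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      # A random "line list": center frequencies and strengths on a normalized axis
      centers = rng.uniform(0, 1, 5000)
      strengths = rng.lognormal(-2, 1, 5000)
      width = 1e-3

      def line_opacity(nu):
          # Total line opacity at frequency nu: a sum of Gaussian profiles
          return np.sum(strengths * np.exp(-0.5 * ((nu - centers) / width) ** 2))

      # Reference: integrate over a fine frequency grid (the expensive route)
      grid = np.linspace(0, 1, 20_000)
      full_mean = np.mean([line_opacity(nu) for nu in grid])

      # Opacity sampling: evaluate at a much smaller set of sampled frequencies
      samples = rng.uniform(0, 1, 2_000)
      sampled_mean = np.mean([line_opacity(nu) for nu in samples])
      print(full_mean, sampled_mean)   # the sampled estimate tracks the full mean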

  11. Modeling techniques used in the communications link analysis and simulation system (CLASS)

    Science.gov (United States)

    Braun, W. R.; Mckenzie, T. M.

    1985-01-01

    CLASS (Communications Link Analysis and Simulation System) is a software package developed for NASA to predict the communication and tracking performance of the Tracking and Data Relay Satellite System (TDRSS) services. The modeling techniques used in CLASS are described. The components of TDRSS and the performance parameters to be computed by CLASS are too diverse to permit the use of a single technique to evaluate all performance measures. Hence, each CLASS module applies the modeling approach best suited for a particular subsystem and/or performance parameter in terms of model accuracy and computational speed.

  12. Comparison of bag-valve-mask hand-sealing techniques in a simulated model.

    Science.gov (United States)

    Otten, David; Liao, Michael M; Wolken, Robert; Douglas, Ivor S; Mishra, Ramya; Kao, Amanda; Barrett, Whitney; Drasler, Erin; Byyny, Richard L; Haukoos, Jason S

    2014-01-01

    Bag-valve-mask ventilation remains an essential component of airway management. Rescuers continue to use both traditional 1- or 2-handed mask-face sealing techniques, as well as a newer modified 2-handed technique. We compare the efficacy of 1-handed, 2-handed, and modified 2-handed bag-valve-mask techniques. In this prospective, crossover study, health care providers performed 1-handed, 2-handed, and modified 2-handed bag-valve-mask ventilation on a standardized ventilation model. Subjects performed each technique for 5 minutes, with 3 minutes' rest between techniques. The primary outcome was expired tidal volume, defined as percentage of total possible expired tidal volume during a 5-minute bout. A specialized inline monitor measured expired tidal volume. We compared 2-handed versus modified 2-handed and 2-handed versus 1-handed techniques. We enrolled 52 subjects: 28 (54%) men, 32 (62%) with greater than or equal to 5 actual emergency bag-valve-mask situations. Median expired tidal volume percentage for 1-handed technique was 31% (95% confidence interval [CI] 17% to 51%); for 2-handed technique, 85% (95% CI 78% to 91%); and for modified 2-handed technique, 85% (95% CI 82% to 90%). Both 2-handed (median difference 47%; 95% CI 34% to 62%) and modified 2-handed technique (median difference 56%; 95% CI 29% to 65%) resulted in significantly higher median expired tidal volume percentages compared with 1-handed technique. The median expired tidal volume percentages between 2-handed and modified 2-handed techniques did not significantly differ from each other (median difference 0; 95% CI -2% to 2%). In a simulated model, both 2-handed mask-face sealing techniques resulted in higher ventilatory tidal volumes than 1-handed technique. Tidal volumes from 2-handed and modified 2-handed techniques did not differ. Rescuers should perform bag-valve-mask ventilation with 2-handed techniques. Copyright © 2013 American College of Emergency Physicians. Published by Mosby

  13. Quantification of intervertebral displacement with a novel MRI-based modeling technique: Assessing measurement bias and reliability with a porcine spine model.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Goubeaux, Craig; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian C

    2017-05-01

    The purpose of this study was to develop a novel magnetic resonance imaging (MRI)-based modeling technique for measuring intervertebral displacements. Here, we present the measurement bias and reliability of the developmental work using a porcine spine model. Porcine lumbar vertebral segments were fitted in a custom-built apparatus placed within an externally calibrated imaging volume of an open-MRI scanner. The apparatus allowed movement of the vertebrae through pre-assigned magnitudes of sagittal and coronal translation and rotation. The induced displacements were imaged with static (T1) and fast dynamic (2D HYCE S) pulse sequences. These images were imported into animation software, in which they formed a background 'scene'. Three-dimensional models of vertebrae were created using static axial scans from the specimen and then transferred into the animation environment. In the animation environment, the user manually moved the models (rotoscoping) to perform model-to-'scene' matching to fit the models to their image silhouettes and assigned anatomical joint axes to the motion segments. The animation protocol quantified the experimental translation and rotation displacements between the vertebral models. Accuracy of the technique was calculated as 'bias' using a linear mixed effects model, average percentage error and root mean square errors. Between-session reliability was examined by computing intra-class correlation coefficients (ICC) and coefficients of variation (CV). For translation trials, a constant bias (β0) of 0.35 (±0.11) mm was detected for the 2D HYCE S sequence (p=0.01). The model did not demonstrate significant additional bias with each mm increase in experimental translation (β1 = 0.01 mm; p=0.69). Using the T1 sequence for the same assessments did not significantly change the bias (p>0.05). ICC values for the T1 and 2D HYCE S pulse sequences were 0.98 and 0.97, respectively. For rotation trials, a constant bias (…

  14. A novel model surgery technique for LeFort III advancement.

    Science.gov (United States)

    Vachiramon, Amornpong; Yen, Stephen L-K; Lypka, Michael; Bindignavale, Vijay; Hammoudeh, Jeffrey; Reinisch, John; Urata, Mark M

    2007-09-01

    Current techniques for model surgery and occlusal splint fabrication lack the ability to mark, measure and plan the position of the orbital rim for LeFort III and Monobloc osteotomies. This report describes a model surgery technique for planning the three dimensional repositioning of the orbital rims. Dual orbital pointers were used to mark the infraorbital rim during the facebow transfer. These pointer positions were transferred onto the surgical models in order to follow splint-determined movements. Case reports are presented to illustrate how the model surgery technique was used to differentiate the repositioning of the orbital rim from the occlusal correction in single segment and combined LeFort III/LeFort I osteotomies.

  15. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics learning achievement, controlling for student intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics achievement of students taught with the environmental-utilization learning model is higher than that of students taught with animated simulation, after controlling for student intelligence. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, lectures should use the environment-based learning model together with the project assessment technique.

  16. MR angiography with a matched filter

    International Nuclear Information System (INIS)

    De Castro, J.B.; Riederer, S.J.; Lee, J.N.

    1987-01-01

    The technique of matched filtering was applied to a series of cine MR images. The filter was devised to yield a subtraction angiographic image in which direct-current components present in the cine series are removed and the signal-to-noise ratio (S/N) of the vascular structures is optimized. The S/N of a matched filter was compared with that of a simple subtraction, in which an image with high flow is subtracted from one with low flow. Experimentally, a range of results, from minimal improvement to significant (60%) improvement in S/N, was seen in the comparisons of matched-filtered subtraction with simple subtraction.
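
    A hedged sketch of the idea on synthetic data: the matched filter weights each cine frame by the expected flow waveform with zero-mean weights, so static (DC) tissue cancels while vascular signal adds coherently. The waveform, image geometry, and noise level below are assumptions of this illustration.

      import numpy as np

      rng = np.random.default_rng(5)
      n_frames, h, w = 16, 64, 64

      # Expected flow waveform across the cine series (the filter "template")
      waveform = np.sin(np.linspace(0, np.pi, n_frames)) ** 2

      # Synthetic cine series: static tissue + a vessel that follows the waveform
      frames = np.ones((n_frames, h, w)) * 10.0
      frames[:, 30:34, :] += 5.0 * waveform[:, None, None]    # "vessel" rows
      frames += 0.5 * rng.standard_normal(frames.shape)

      # Matched filter: zero-mean weights remove DC, shape matches the waveform
      weights = waveform - waveform.mean()
      weights /= np.linalg.norm(weights)                      # unit norm for S/N optimality
      angio = np.tensordot(weights, frames, axes=(0, 0))      # weighted sum over frames

      # Compare with simple subtraction: high-flow frame minus low-flow frame
      simple = frames[n_frames // 2] - frames[0]
      print(angio[31, 32], simple[31, 32])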

  17. Presentation Technique

    International Nuclear Information System (INIS)

    Froejmark, M.

    1992-10-01

    The report presents a wide, easily understandable description of presentation technique and man-machine communication. General fundamentals for the man-machine interface are illustrated, and the factors that affect the interface are described. A model is presented for describing the operator's work situation, based on three different levels in the operator's behaviour. The operator reacts routinely in the face of simple, known problems, and reacts in accordance with predetermined plans in the face of more complex, recognizable problems. Deep fundamental knowledge is necessary for truly complex questions. Today's technical status and future development have been studied. In the future, the operator interface will be based on standard software. Functions such as zooming, integration of video pictures, and sound reproduction will become common. Video walls may be expected to come into use in situations in which several persons simultaneously need access to the same information. A summary of the fundamental rules for the design of good picture ergonomics and design requirements for control rooms is included in the report. In conclusion, the report describes a presentation technique within the Distribution Automation and Demand Side Management area and analyses the know-how requirements within Vattenfall. If different systems are integrated, such as geographical information systems and operation monitoring systems, strict demands are made on the expertise of the users for achieving a user-friendly technique which is matched to the needs of the human being. (3 figs.)

  18. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it …

  19. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should …
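
    A compact sketch of such a multi-technique comparison using scikit-learn stand-ins; GP and EPR have no direct scikit-learn equivalents, a decision tree approximates M5, and the synthetic rainfall-runoff data are likewise an assumption of this illustration.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.linear_model import LinearRegression
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.svm import SVR
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(6)
      # Synthetic daily rainfall-runoff: runoff responds nonlinearly to rain and lagged rain
      rain = rng.gamma(2.0, 5.0, 1000)
      lag = np.roll(rain, 1)
      runoff = 0.3 * rain ** 0.8 + 0.2 * lag ** 0.8 + rng.normal(0, 0.5, 1000)
      X, y = np.column_stack([rain, lag]), runoff

      models = {
          "MLR": LinearRegression(),
          "ANN": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                                            random_state=0)),
          "SVM": make_pipeline(StandardScaler(), SVR()),
          "M5-like tree": DecisionTreeRegressor(max_depth=5, random_state=0),
          "K-nn": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
      }
      for name, m in models.items():
          scores = cross_val_score(m, X, y, cv=5, scoring="r2")
          print(f"{name}: R2 = {scores.mean():.3f} +/- {scores.std():.3f}")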

  20. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    Full Text Available A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example, control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system quickly running, and then continue to develop it in an agile manner. This article consists of two parts; the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which is appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on allowed states and transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents an outline of the structure of the model, and gives formal definitions for a substantial part of the model.
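
    A minimal sketch of the data-centric idea—a case as a sequence of states, with rules restricting the allowed transitions; the states and rules below are invented for illustration.

      # Hypothetical case states and transition rules for a data-centric process model
      ALLOWED = {
          ("registered", "under_review"),
          ("under_review", "approved"),
          ("under_review", "rejected"),
      }

      def is_valid_case(history):
          """A case is a sequence of states; rules restrict allowed transitions."""
          return all(step in ALLOWED for step in zip(history, history[1:]))

      print(is_valid_case(["registered", "under_review", "approved"]))   # True
      print(is_valid_case(["registered", "approved"]))                   # False: rule violated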

  1. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of a DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, a digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT's reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of a DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
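
    A hedged sketch of how a fault coverage parameter might enter an unavailability estimate; the two-term approximation (covered faults repaired immediately, uncovered faults latent until a periodic test) and all rates below are illustrative assumptions, not the paper's integrated model.

      # Unavailability of a repairable digital component versus fault coverage c
      lam = 1e-4      # failure rate per hour (assumed)
      mu = 1 / 8.0    # repair rate per hour, i.e. 8 h mean repair for covered faults
      T = 720.0       # periodic test interval in hours; uncovered faults wait for a test

      def unavailability(c):
          covered = c * lam / (lam + mu)          # detected by the FTT, repaired promptly
          uncovered = (1 - c) * lam * T / 2.0     # latent until the next periodic test
          return covered + uncovered

      for c in (0.0, 0.9, 0.99):
          print(f"coverage {c:.2f}: U = {unavailability(c):.2e}")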

  2. Schema matching and mapping

    CERN Document Server

    Bellahsene, Zohra; Rahm, Erhard

    2011-01-01

    Requiring heterogeneous information systems to cooperate and communicate has now become crucial, especially in application areas like e-business, Web-based mash-ups and the life sciences. Such cooperating systems have to automatically and efficiently match, exchange, transform and integrate large data sets from different sources and of different structure in order to enable seamless data exchange and transformation. The book edited by Bellahsene, Bonifati and Rahm provides an overview of the ways in which the schema and ontology matching and mapping tools have addressed the above requirements

  3. A Shell/3D Modeling Technique for the Analyses of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2001-01-01

    A shell/3D modeling technique was developed for which a local three-dimensional solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local three-dimensional model and the global structural model which has been meshed with plate or shell finite elements. Double Cantilever Beam (DCB), End Notched Flexure (ENF), and Single Leg Bending (SLB) specimens were modeled using the shell/3D technique to study the feasibility for pure mode I (DCB), mode II (ENF) and mixed mode I/II (SLB) cases. Mixed mode strain energy release rate distributions were computed across the width of the specimens using the virtual crack closure technique. Specimens with a unidirectional layup and with a multidirectional layup where the delamination is located between two non-zero degree plies were simulated. For a local three-dimensional model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
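
    The virtual crack closure technique referenced above recovers the energy release rate from crack-tip nodal forces and the relative displacements just behind the front; a one-node, mode I sketch with illustrative numbers follows (unit width, 2D idealization).

      # Virtual crack closure technique (VCCT), mode I, 2D sketch:
      # G_I = F_y * (v_upper - v_lower) / (2 * da * b), all values illustrative
      F_y = 120.0      # N, crack-tip nodal force normal to the crack plane
      dv = 2.0e-4      # m, relative opening displacement one element behind the tip
      da = 1.0e-3      # m, element length at the delamination front
      b = 1.0          # m, width represented by this node row (unit thickness)

      G_I = F_y * dv / (2.0 * da * b)
      print(f"Mode I energy release rate: {G_I:.1f} J/m^2")   # 12.0 J/m^2 here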

  4. Modelling techniques for underwater noise generated by tidal turbines in shallow water

    OpenAIRE

    Lloyd, Thomas P.; Turnock, Stephen R.; Humphrey, Victor F.

    2011-01-01

    The modelling of underwater noise sources and their potential impact on the marine environment is considered, focusing on tidal turbines in shallow water. The requirement for device noise prediction as part of environmental impact assessment is outlined and the limited amount of measurement data and modelling research identified. Following the identification of potential noise sources, the dominant flow-generated sources are modelled using empirical techniques. The predicted sound pressure lev...

  5. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Sreeramana Aithal

    2016-01-01

    When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse them in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper is a discussion of how to use an innovative analysing framework called the ABCD model on a given business model, business strategy, operational concept/idea or business system. Based on four constructs Advantages...

  6. Car sharing demand estimation and urban transport demand modelling using stated preference techniques

    OpenAIRE

    Catalano, Mario; Lo Casto, Barbara; Migliore, Marco

    2008-01-01

    The research deals with the use of the stated preference technique (SP) and transport demand modelling to analyse travel mode choice behaviour for commuting urban trips in Palermo, Italy. The principal aim of the study was the calibration of a demand model to forecast the modal split of the urban transport demand, allowing for the possibility of using innovative transport systems like car sharing and car pooling. In order to estimate the demand model parameters, a specific survey was carried ...
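
    Stated-preference mode-choice studies of this kind typically feed a multinomial logit model; a hedged sketch with invented utility coefficients and attribute values follows.

      import numpy as np

      # Hypothetical linear utilities for car, bus, and car-sharing alternatives:
      # V = ASC + beta_time * travel_time + beta_cost * cost (all values assumed)
      beta_time, beta_cost = -0.08, -0.4               # per minute, per euro
      asc = {"car": 0.0, "bus": -0.3, "car_share": -0.5}
      times = {"car": 20, "bus": 35, "car_share": 25}       # minutes
      costs = {"car": 4.0, "bus": 1.5, "car_share": 2.5}    # euros

      V = {m: asc[m] + beta_time * times[m] + beta_cost * costs[m] for m in asc}
      expV = {m: np.exp(v) for m, v in V.items()}
      total = sum(expV.values())
      shares = {m: expV[m] / total for m in expV}      # multinomial logit choice shares
      print(shares)                                    # predicted modal split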

  7. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  8. Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering

    Science.gov (United States)

    Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.

    2018-04-01

    Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
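
    A hedged one-dimensional sketch of the filtering idea; the Gaussian template and the synthetic noisy spectrum are assumptions, and this toy example works on an image-domain spectrum rather than the visibilities that the VISIBLE package actually filters.

      import numpy as np

      rng = np.random.default_rng(7)
      x = np.linspace(-50, 50, 1024)               # velocity axis (km/s, arbitrary)

      template = np.exp(-0.5 * (x / 3.0) ** 2)     # expected line shape (the kernel)
      # Weak, velocity-offset line buried in noise
      spectrum = 0.2 * np.roll(template, 40) + rng.normal(0, 0.3, x.size)

      # Matched filter: cross-correlate the data with the unit-norm template;
      # normalizing by the noise level puts the response in S/N units
      kernel = template / np.linalg.norm(template)
      response = np.correlate(spectrum, kernel, mode="same")
      snr = response / np.std(response[: x.size // 4])   # noise from a line-free region
      print("peak filter response S/N:", snr.max())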

  9. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) and complex-valued autoregressive moving average (CARMA) model using a complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.

  10. A novel CT acquisition and analysis technique for breathing motion modeling

    International Nuclear Information System (INIS)

    Low, Daniel A; White, Benjamin M; Lee, Percy P; Thomas, David H; Gaudio, Sergio; Jani, Shyam S; Wu, Xiao; Lamb, James M

    2013-01-01

    To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques. (fast track communication)

  11. Characteristic Evolution and Matching

    Directory of Open Access Journals (Sweden)

    Winicour Jeffrey

    2001-01-01

    Full Text Available I review the development of numerical evolution codes for general relativity based upon the characteristic initial value problem. Progress is traced from the early stage of 1D feasibility studies to current 3D codes that simulate binary black holes. A prime application of characteristic evolution is Cauchy-characteristic matching, which is also reviewed.

  12. Characteristic Evolution and Matching

    Directory of Open Access Journals (Sweden)

    Winicour Jeffrey

    1998-05-01

    Full Text Available We review the development of numerical evolution codes for general relativity based upon the characteristic initial value problem. Progress is traced from the early stage of 1D feasibility studies to current 3D black hole codes that run forever. A prime application of characteristic evolution is Cauchy-characteristic matching, which is also reviewed.

  13. Factorized Graph Matching.

    Science.gov (United States)

    Zhou, Feng; de la Torre, Fernando

    2015-11-19

    Graph matching (GM) is a fundamental problem in computer science, and it plays a central role in solving correspondence problems in computer vision. GM problems that incorporate pairwise constraints can be formulated as a quadratic assignment problem (QAP). Although widely used, solving the correspondence problem through GM has two main limitations: (1) the QAP is NP-hard and difficult to approximate; (2) GM algorithms do not incorporate geometric constraints between nodes that are natural in computer vision problems. To address the aforementioned problems, this paper proposes factorized graph matching (FGM). FGM factorizes the large pairwise affinity matrix into smaller matrices that encode the local structure of each graph and the pairwise affinity between edges. Four benefits follow from this factorization: (1) there is no need to compute the costly (in space and time) pairwise affinity matrix; (2) the factorization allows the use of a path-following optimization algorithm, which leads to improved optimization strategies and matching performance; (3) given the factorization, it becomes straightforward to incorporate geometric transformations (rigid and non-rigid) into the GM problem; (4) using a matrix formulation for the GM problem together with the factorization, it is easy to reveal commonalities and differences between different GM methods. The factorization also provides a clean connection with other matching algorithms such as iterative closest point. Experimental results on synthetic and real databases illustrate how FGM outperforms state-of-the-art algorithms for GM. The code is available at http://humansensing.cs.cmu.edu/fgm.

  14. Matching Supernovae to Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2016-12-01

    Gupta and collaborators developed a new automated algorithm for matching supernovae to their host galaxies. Their work builds on currently existing algorithms, makes use of information about the nearby galaxies, accounts for the uncertainty of the match, and even includes a machine-learning component to improve the matching accuracy. Gupta and collaborators test their matching algorithm on catalogs of galaxies and simulated supernova events to quantify how well the algorithm is able to accurately recover the true hosts. [Figure caption: the matching algorithm's accuracy (purity) as a function of the true supernova-host separation, the supernova redshift, the true host's brightness, and the true host's size; Gupta et al. 2016.] The authors find that when the basic algorithm is run on catalog data, it matches supernovae to their hosts with 91% accuracy. Including the machine-learning component, which is run after the initial matching algorithm, improves the accuracy of the matching to 97%. The encouraging results of this work, which was intended as a proof of concept, suggest that methods similar to this could prove very practical for tackling future survey data. And the method explored here has use beyond matching just supernovae to their host galaxies: it could also be applied to other extragalactic transients, such as gamma-ray bursts, tidal disruption events, or electromagnetic counterparts to gravitational-wave detections. Citation: Ravi R. Gupta et al 2016 AJ 152 154. doi:10.3847/0004-6256/152/6/154

  15. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
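
    A hedged sketch contrasting the two techniques for a two-unit series framework, assuming independent units with exponential up and repair times and no opportunistic maintenance (much simpler than the paper's three-state units).

      import numpy as np

      rng = np.random.default_rng(8)

      def unit_availability(lam, mu, n_cycles=100_000):
          """MC estimate of one repairable unit's availability: exponential
          up-times (failure rate lam) alternate with exponential repairs (rate mu)."""
          up = rng.exponential(1 / lam, n_cycles).sum()
          down = rng.exponential(1 / mu, n_cycles).sum()
          return up / (up + down)

      # Series configuration: the framework is available only when both units are up
      a1 = unit_availability(lam=0.01, mu=0.5)
      a2 = unit_availability(lam=0.02, mu=0.4)
      print("MC series availability:", a1 * a2)

      # Markov steady-state check for comparison: A = mu / (lam + mu) per unit
      print("Markov series availability:", (0.5 / 0.51) * (0.4 / 0.42))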

  16. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as functions of the process parameters. The combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  17. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, a plug-in linking it with the corresponding version of SPSS, and a propensity score matching package were installed. A PS Matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest-neighbor matching were achieved with the PS Matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in the form of matching graphs. Propensity score matching can be accomplished conveniently using SPSS software.
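
    The same two steps—propensity score estimation followed by nearest-neighbor matching—can be sketched outside SPSS; the Python/scikit-learn code below, with its synthetic confounded treatment assignment, is an illustrative assumption, not the PS Matching module itself.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(9)
      n = 500
      age = rng.normal(50, 10, n)                              # a single confounder
      treated = rng.random(n) < 1 / (1 + np.exp(-(age - 50) / 10))   # confounded assignment
      X = age.reshape(-1, 1)

      # Step 1: estimate propensity scores with logistic regression
      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

      # Step 2: greedy 1:1 nearest-neighbor matching on the propensity score
      t_idx = np.where(treated)[0]
      available = set(np.where(~treated)[0])
      pairs = []
      for i in t_idx:
          if not available:
              break
          j = min(available, key=lambda j: abs(ps[i] - ps[j]))  # closest unused control
          pairs.append((i, j))
          available.remove(j)
      print(f"{len(pairs)} matched pairs; mean |ps diff| =",
            np.mean([abs(ps[i] - ps[j]) for i, j in pairs]))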

  18. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals.

    Science.gov (United States)

    Potnis, Prashant R; Tsou, Nien-Ti; Huber, John E

    2011-02-16

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  19. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  20. Application of rapid prototyping techniques for modelling of anatomical structures in medical training and education.

    Science.gov (United States)

    Torres, K; Staśkiewicz, G; Śnieżyński, M; Drop, A; Maciejewski, R

    2011-02-01

    Rapid prototyping has become an innovative method of fast and cost-effective production of three-dimensional models for manufacturing. Wide access to advanced medical imaging methods allows application of this technique for medical training purposes. This paper presents the feasibility of rapid prototyping technologies: stereolithography, selective laser sintering, fused deposition modelling, and three-dimensional printing for medical education. Rapid prototyping techniques are a promising method for improving the anatomical education of medical students, and also a valuable source of training tools for medical specialists.

  1. A Shell/3D Modeling Technique for Delaminations in Composite Laminates

    Science.gov (United States)

    Krueger, Ronald

    1999-01-01

    A shell/3D modeling technique was developed for which a local solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provide a kinematically compatible interface between the local 3D model and the global structural model which has been meshed with plate or shell finite elements. For simple double cantilever beam (DCB), end notched flexure (ENF), and single leg bending (SLB) specimens, mixed mode energy release rate distributions were computed across the width from nonlinear finite element analyses using the virtual crack closure technique. The analyses served to test the accuracy of the shell/3D technique for the pure mode I case (DCB), mode II case (ENF) and a mixed mode I/II case (SLB). Specimens with a unidirectional layup where the delamination is located between two 0 plies, as well as a multidirectional layup where the delamination is located between two non-zero degree plies, were simulated. For a local 3D model extending to a minimum of about three specimen thicknesses in front of and behind the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.

  2. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN to model the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into the different segments corresponding to the different dynamics based on physical concepts is better than the soft decomposition employed using SOM.

  3. Integrated workflow for computer assisted history matching on a channelized reservoir

    NARCIS (Netherlands)

    Peters, E.; Wilschut, F.; Leeuwenburgh, O.; Hooff, P.M.E. van

    2011-01-01

    Increasingly, computer-assisted techniques are used for history matching reservoir models. Such methods will become indispensable in view of the increasing amount of information generated by intelligent wells, in which case manual interpretation becomes too time consuming. Also, with the increasing …

  4. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • The impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the associated rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated by data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) clustering by using the k-means clustering approach; (II) employing the Apriori algorithm to discover the association rules; (III) forecasting the wind speed according to the chaotic time series forecasting model; and (IV) correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm have powerful capacities in handling the forecasted wind speed values correction when the forecasted values do not match the classification discovered by the association rules.
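
    A hedged sketch of step (I), clustering meteorological records into regimes with k-means, with a comment on where steps (II)-(IV) would attach; the synthetic data and the per-regime statistics standing in for Apriori association rules are assumptions of this illustration.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(10)
      # Synthetic daily records: [wind speed, temperature, pressure anomaly]
      data = np.column_stack([
          rng.gamma(2.0, 3.0, 365),
          rng.normal(15, 8, 365),
          rng.normal(0, 5, 365),
      ])

      # Step (I): cluster days into weather regimes with k-means
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

      # Steps (II)-(IV), sketched: per-regime wind statistics stand in for the
      # mined association rules; a forecast falling outside its regime's typical
      # range would be corrected toward that range
      for k in range(3):
          ws = data[labels == k, 0]
          print(f"regime {k}: mean wind {ws.mean():.1f}, "
                f"range {ws.min():.1f}-{ws.max():.1f}")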

  5. INFORMATION SYSTEMS AUDIT CURRICULA CONTENT MATCHING

    OpenAIRE

    Vasile-Daniel CARDOȘ; Ildikó Réka CARDOȘ

    2014-01-01

    Financial and internal auditors must cope with the challenge of performing their mission in a technology-enhanced environment. In this article we match the information technology description found in the International Federation of Accountants (IFAC) and the Institute of Internal Auditors (IIA) curricula against the Model Curriculum issued by the Information Systems Audit and Control Association (ISACA). By reviewing these three curricula, we matched the content in the ISACA Model Curriculum wi...

  6. Characterization and fault diagnosis of PAFC cathode by EIS technique and a novel mathematical model approach

    Science.gov (United States)

    Choudhury, Suman Roy; Rengaswamy, Raghunathan

    Considerable ongoing research exists in the area of fuel cells for power and distributed power generation. Of the various types of fuel cells, phosphoric acid fuel cell (PAFC) is a mature technology and is under limited production in various parts of the world. Electrochemical impedance spectroscopy (EIS) is a powerful tool which can be used for characterization of PAFC cathodes. In EIS, the electrode response is analyzed against an equivalent electrical circuit for diagnostics. A shortcoming of this approach is that multiple circuits can match the same response and correlating the physical electrode parameters with the circuit components may become quite difficult. To avoid this, a mathematical model of the PAFC cathode that is easy to solve is developed. The relative value and position of the maximum phase angle with respect to frequency is proposed as a diagnostic marker. A diagnostics table based on this marker is developed using the simulation of the mathematical model and the results are experimentally verified.
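
    A hedged sketch of the proposed diagnostic marker: simulate the impedance of a simple Randles-type circuit (an assumption standing in for the paper's actual cathode model) and locate the maximum phase angle and its frequency.

      import numpy as np

      # Randles-type circuit: ohmic resistance in series with a charge-transfer
      # resistance in parallel with a double-layer capacitance (illustrative values)
      R_ohm, R_ct, C_dl = 0.05, 0.5, 0.1       # ohms, ohms, farads
      freq = np.logspace(-2, 4, 400)           # Hz
      w = 2 * np.pi * freq

      Z = R_ohm + R_ct / (1 + 1j * w * R_ct * C_dl)
      phase = np.degrees(np.angle(Z))          # negative for capacitive behaviour

      # The marker: the relative value and frequency position of the maximum
      # (most negative) phase angle
      i = np.argmin(phase)
      print(f"max phase angle {abs(phase[i]):.1f} deg at {freq[i]:.2f} Hz")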

  7. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following …

  8. Hybrid models for hydrological forecasting : Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following …

  9. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    Science.gov (United States)

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method to ameliorate the co-morbidities of morbidly obese patients with BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, beside the costs of the devices. To report a new technique for internal gastric plication using an intragastric single-port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single-port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) after the procedure, the residual volume was measured. Sleeve gastrectomy was also performed in a further ten pigs, and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. The technique produced a mean gastric volume reduction of 51%, compared with a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single-port device required few skills to perform, had low operative time and achieved a good reduction (51%) of gastric volume in an in vitro experimental model. [Portuguese abstract, translated: Bariatric surgery is currently the most effective method to ameliorate the co-morbidities resulting from morbid obesity with BMI above 35 kg/m2. Endoscopic techniques were developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, beside the costs. To report a new technique for internal gastric plication …]

  10. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers, and many techniques for predicting electricity market prices have been proposed in recent years. However, the electricity price is a complex, volatile signal with many spikes. Most electricity price forecasting techniques focus on normal price prediction, whereas price spike forecasting is a different and more complex task with two main aspects: predicting the occurrence of a price spike and predicting its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes wavelet-domain and time-domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed the selected inputs and predicts price spike occurrence. The efficiency of the whole proposed method is evaluated on real data from the Queensland and PJM electricity markets. (author)
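
    The record does not spell out the selection criterion itself; the sketch below shows one plausible reading, a greedy relevancy/redundancy (mRMR-style) selection using mutual information on discretized candidate inputs. All data and names here are hypothetical.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_select(X, y, n_select, n_bins=10):
    """Greedy selection: high mutual information with the spike label y,
    low mutual information with already-selected inputs."""
    Xd = np.stack([np.digitize(col, np.histogram(col, n_bins)[1][:-1])
                   for col in X.T], axis=1)            # discretize each column
    relevancy = np.array([mutual_info_score(Xd[:, j], y)
                          for j in range(Xd.shape[1])])
    selected = [int(np.argmax(relevancy))]
    while len(selected) < n_select:
        scores = [relevancy[j] - np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                          for s in selected])
                  if j not in selected else -np.inf
                  for j in range(Xd.shape[1])]
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                               # candidate inputs
y = (X[:, 2] + 0.1 * rng.normal(size=500) > 0).astype(int)  # spike labels
print(mrmr_select(X, y, n_select=3))                        # picks column 2 first
```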

  11. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN) and Support Vector Machines (SVM), for LUC modeling, to compare the three techniques, and to find the appropriate data representation. The ML techniques are applied to the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
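
    A minimal sketch of such a comparison (synthetic attributes stand in for the Belgrade geospatial data sets, and the paper's exact model settings are not reproduced): the three classifier families are trained and scored with Cohen's kappa, a common agreement measure.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

# Stand-in data: per-cell attributes (e.g. current class, neighbourhood mix,
# distance to roads) and the land use class at the next time step.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    kappa = cohen_kappa_score(y_te, model.predict(X_te))
    print(f"{name}: kappa = {kappa:.3f}")
```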

  12. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  13. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  14. A review of techniques for spatial modeling in geographical, conservation and landscape genetics.

    Science.gov (United States)

    Diniz-Filho, José Alexandre Felizola; Nabout, João Carlos; de Campos Telles, Mariana Pires; Soares, Thannya Nascimento; Rangel, Thiago Fernando L V B

    2009-04-01

    Most evolutionary processes occur in a spatial context, and several spatial analysis techniques have been employed in an exploratory context. However, spatial autocorrelation can also perturb significance tests when genetic data are modeled as a function of explanatory variables using standard correlation and regression techniques. In such cases, more complex models incorporating the effects of autocorrelation must be used. Here we review those models and compare their relative performance in a simple simulation, in which spatial patterns in allele frequencies were generated by a balance between random variation within populations and spatially structured gene flow. Notwithstanding the somewhat idiosyncratic behavior of the techniques evaluated, it is clear that spatial autocorrelation affects Type I errors and that standard linear regression does not provide minimum-variance estimators. Owing to its flexibility, the principal coordinates of neighbor matrices (PCNM) approach and related eigenvector mapping techniques seem to be the best approaches to spatial regression. We hope that this review of the spatial regression techniques commonly used in biology and ecology will help population geneticists address more complex regression problems throughout geographic space.
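
    A minimal numerical sketch of the PCNM / eigenvector-mapping idea (synthetic coordinates and allele frequencies, not the simulation used in the paper): spatial eigenvectors are extracted from a truncated distance matrix and added as covariates in an ordinary regression.

```python
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(60, 2))                 # population coordinates
env = rng.normal(size=60)                              # explanatory variable
freq = 0.5 + 0.1 * env + 0.05 * rng.normal(size=60)    # allele frequency

D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
t = np.quantile(D[D > 0], 0.1)                         # truncation distance
W = np.where(D <= t, D, 4 * t)                         # PCNM-style truncation
np.fill_diagonal(W, 0)

n = len(W)                                             # double-centre, then
H = np.eye(n) - np.ones((n, n)) / n                    # eigendecompose as in PCoA
vals, vecs = np.linalg.eigh(-0.5 * H @ (W ** 2) @ H)
order = np.argsort(vals)[::-1]
pcnm = vecs[:, order[:10]]                             # ten broadest-scale axes

X = np.column_stack([np.ones(n), env, pcnm])           # OLS with spatial covariates
beta, *_ = np.linalg.lstsq(X, freq, rcond=None)
print(f"environmental slope, spatially adjusted: {beta[1]:.3f}")
```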

  15. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good-quality, geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
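
    The normal-constrained node shift can be sketched as follows (synthetic cylinder data; a real model would use the FE surface nodes, their outward normals, and the CT-derived target vertices — all names here are hypothetical):

```python
import numpy as np

def morph_nodes(src_nodes, src_normals, tgt_vertices):
    """Shift each exterior source node, constrained along its outward normal,
    to the target vertex lying closest to that normal line."""
    morphed = src_nodes.copy()
    for i, (p, nvec) in enumerate(zip(src_nodes, src_normals)):
        nvec = nvec / np.linalg.norm(nvec)
        d = tgt_vertices - p                    # vectors to every target vertex
        along = d @ nvec                        # signed distance along the normal
        off = np.linalg.norm(d - np.outer(along, nvec), axis=1)
        j = np.argmin(off)                      # vertex nearest to the normal line
        morphed[i] = p + along[j] * nvec        # move only along the normal
    return morphed

# Synthetic demo: unit-cylinder "source" surface morphed to a 20% larger target.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
src = np.column_stack([np.cos(theta), np.sin(theta), rng.uniform(0, 1, 200)])
normals = src * np.array([1.0, 1.0, 0.0])       # radial normals
tgt = 1.2 * src + 0.01 * rng.normal(size=src.shape)
r = np.linalg.norm(morph_nodes(src, normals, tgt)[:, :2], axis=1)
print(f"mean radius after morphing: {r.mean():.3f} (target ~1.2)")
```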

  16. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  17. New techniques for the analysis of manual control systems. [mathematical models of human operator behavior

    Science.gov (United States)

    Bekey, G. A.

    1971-01-01

    Studies are summarized on the application of advanced analytical and computational methods to the development of mathematical models of human controllers in multiaxis manual control systems. Specific accomplishments include the following: (1) The development of analytical and computer methods for the measurement of random parameters in linear models of human operators. (2) Discrete models of human operator behavior in a multiple display situation were developed. (3) Sensitivity techniques were developed which make possible the identification of unknown sampling intervals in linear systems. (4) The adaptive behavior of human operators following particular classes of vehicle failures was studied and a model structure proposed.

  18. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    This paper proposes a methodology for modeling photovoltaic (PV) systems that considers their behavior in both direct and reverse operating modes under mismatching conditions. The methodology is based on the inflection-points technique, with a linear approximation to model the bypass diode and a simplified PV model. The resulting mathematical model makes it possible to evaluate the energetic performance of a PV system with short simulation times, even for large systems. In addition, the methodology can estimate the condition of modules affected by partial shading, since the power they dissipate while operating in the second quadrant can be computed.
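
    A minimal sketch of a simplified module model with a linearized bypass diode (all parameter values are illustrative, and the paper's inflection-point formulation itself is not reproduced here):

```python
import numpy as np

# Simplified single-diode module plus a piecewise-linear bypass diode.
Isc, Io, Vt, Rs, Rsh = 8.0, 1e-9, 1.8, 0.3, 150.0   # module-level parameters
Vbp, Rbp = -0.7, 0.01     # linearized bypass diode: threshold and on-resistance

def module_current(v, g=1.0, iters=60):
    """Newton solve of I = g*Isc - Io*(exp((v+I*Rs)/Vt) - 1) - (v+I*Rs)/Rsh."""
    i = g * Isc
    for _ in range(iters):
        e = np.exp((v + i * Rs) / Vt)
        fval = g * Isc - Io * (e - 1) - (v + i * Rs) / Rsh - i
        dfdi = -Io * e * Rs / Vt - Rs / Rsh - 1
        i -= fval / dfdi
    return i

def bypass_current(v):
    """Bypass conducts linearly once the module voltage drops below Vbp."""
    return (Vbp - v) / Rbp if v < Vbp else 0.0

# Shaded module (g = 0.3) driven into reverse voltage by its series neighbours:
for v in (-1.0, 0.0, 20.0, 35.0):
    print(v, round(module_current(v, g=0.3) + bypass_current(v), 3))
```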

  19. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Chemical manufacturing and the process industry require innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. The process model was identified using the ARMAX technique, and a neural model predictive controller (NMPC) was designed for the identified model. Controller performance was evaluated using MATLAB software, comparing the NMPC with a Smith predictor controller and an IMC controller in terms of rise time, settling time, overshoot and ISE; the NMPC controller was found to be better suited to this process.
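
    The identification step can be sketched with a plain ARX least-squares fit (a full ARMAX fit additionally estimates a noise polynomial, typically by extended or iterative least squares; the plant and signals below are synthetic stand-ins for the tank data):

```python
import numpy as np

# ARX sketch: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + e[k]
rng = np.random.default_rng(3)
N = 500
u = rng.uniform(-1, 1, N)                       # input: inflow valve signal
y = np.zeros(N)
for k in range(2, N):                           # "true" plant generating the data
    y[k] = 1.4 * y[k-1] - 0.45 * y[k-2] + 0.5 * u[k-1] + 0.02 * rng.normal()

# Regressor matrix with rows [y[k-1], y[k-2], u[k-1]] for k = 2..N-1.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))   # ~[1.4, -0.45, 0.5]
```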

  20. Applications of the soft computing in the automated history matching

    Energy Technology Data Exchange (ETDEWEB)

    Silva, P.C.; Maschio, C.; Schiozer, D.J. [Unicamp (Brazil)

    2006-07-01

    Reservoir management is a research field in petroleum engineering that optimizes reservoir performance based on environmental, political, economic and technological criteria. Reservoir simulation is based on geological models that simulate fluid flow, and these models must be constantly corrected to reproduce the observed production behaviour. The history-matching process is controlled by the comparison of production data, well test data and measured data from simulations. Parametrization, objective function analysis, sensitivity analysis and uncertainty analysis are important steps in history matching. One of the main challenges facing automated history matching is to develop algorithms that find the optimal solution in multidimensional search spaces. Optimization algorithms can be either global optimizers, which can handle noisy multi-modal functions, or local optimizers, which cannot. The problem with global optimizers is the very large number of function calls they require, which is costly given long reservoir simulation times. For that reason, techniques such as least squares, thin plate splines, kriging and artificial neural networks (ANN) have been used as substitutes for reservoir simulators. This paper described the use of optimization algorithms to find optimal solutions in automated history matching. Several ANNs were used, including the generalized regression neural network, a fuzzy system with subtractive clustering and a radial basis network. The UNIPAR soft computing method was used along with a modified Hooke-Jeeves optimization method. Two case studies with synthetic and real reservoirs are examined. It was concluded that the combination of global and local optimization has the potential to improve the history matching process and that the use of substitute models can reduce computational effort. 15 refs., 11 figs.
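
    A minimal sketch of the local search component: a simplified Hooke-Jeeves pattern search minimizing a stand-in misfit function. In the workflow the record describes, the misfit would compare simulated and observed production data, with a trained surrogate such as an ANN replacing most simulator calls; here a cheap analytic function plays that role.

```python
import numpy as np

def misfit(x):
    """Stand-in for the history-matching objective (simulator or surrogate)."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2

def hooke_jeeves(f, x0, step=0.25, shrink=0.5, tol=1e-4):
    """Simplified Hooke-Jeeves: coordinate exploration plus pattern moves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step > tol:
        xt, ft = x.copy(), fx
        for j in range(len(x)):                 # exploratory moves per coordinate
            for s in (step, -step):
                trial = xt.copy()
                trial[j] += s
                fv = f(trial)
                if fv < ft:
                    xt, ft = trial, fv
                    break
        if ft < fx:                             # pattern move through improvement
            pattern = xt + (xt - x)
            fp = f(pattern)
            x, fx = (pattern, fp) if fp < ft else (xt, ft)
        else:
            step *= shrink                      # no improvement: refine the mesh
    return x, fx

x_best, f_best = hooke_jeeves(misfit, x0=[0.0, 0.0])
print(np.round(x_best, 3), round(f_best, 6))    # converges near (0.3, 0.7)
```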