WorldWideScience

Sample records for model matching techniques

  1. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  2. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  3. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
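
    The two-stage screening idea described here can be sketched generically. Below is a minimal Python sketch, assuming a symmetric random-walk proposal; it is not the paper's three-phase flow implementation, and `log_post_coarse` / `log_post_fine` are illustrative stand-ins for posterior evaluations driven by the coarse-scale and fine-scale simulators.

      import numpy as np

      rng = np.random.default_rng(0)

      def two_stage_mcmc(log_post_coarse, log_post_fine, x0, n_steps, step=0.1):
          # Stage 1 screens proposals with the cheap coarse posterior; the
          # expensive fine posterior is evaluated only for screened-in proposals.
          x = np.asarray(x0, dtype=float)
          lc_x, lf_x = log_post_coarse(x), log_post_fine(x)
          chain, fine_runs = [x.copy()], 1
          for _ in range(n_steps):
              y = x + step * rng.standard_normal(x.shape)  # symmetric proposal
              lc_y = log_post_coarse(y)
              if np.log(rng.uniform()) < lc_y - lc_x:      # stage-1 acceptance
                  lf_y = log_post_fine(y)
                  fine_runs += 1
                  # Stage-2 ratio corrects for the coarse/fine mismatch, so the
                  # chain still targets the fine-scale posterior exactly.
                  if np.log(rng.uniform()) < (lf_y - lf_x) - (lc_y - lc_x):
                      x, lc_x, lf_x = y, lc_y, lf_y
              chain.append(x.copy())
          return np.array(chain), fine_runs

    The stage-2 correction is what keeps the chain exact even when the coarse model is a poor approximation, which is why the approach need not rely on the proximity of the approximate and resolved models.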

  4. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.; Datta-Gupta, A.; Ma, X.; Mallick, B.

    2009-01-01

    the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.

  5. A new frequency matching technique for FRF-based model updating

    Science.gov (United States)

    Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng

    2017-05-01

    Frequency Response Function (FRF) residues have been widely used to update finite element models. As a form of raw measurement information, they offer rich data and avoid modal extraction errors. However, like other sensitivity-based methods, an FRF-based identification method must confront the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given measured frequency, directly evaluating the theoretical FRF at that frequency may lead to a huge difference between the theoretical and experimental FRFs, which amplifies the effects of measurement errors and damping. Hence, in the solution process, correctly selecting the frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for such frequency selection, based on the correlation of FRFs, is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For realism, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required by each approach to correctly update the analytical model is used as the criterion for comparison.
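
    A minimal sketch of this frequency-selection rule, assuming `freqs` holds candidate analysis frequencies, `H_theory` the theoretical FRF sampled at those frequencies, and `H_exp` the measured FRF value being matched (all names illustrative; per-DoF handling is omitted):

      import numpy as np

      def select_frequency(freqs, H_theory, H_exp):
          # Difference in order of magnitude between candidate theoretical FRF
          # values and the measured FRF value; pick the closest frequency.
          gap = np.abs(np.log10(np.abs(H_theory)) - np.log10(np.abs(H_exp)))
          return freqs[np.argmin(gap)]   # frequency used for this iteration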

  6. Role model and prototype matching

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    ...students' meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students' self-perceptions, prototype...

  7. Techniques Used in String Matching for Network Security

    OpenAIRE

    Jamuna Bhandari

    2014-01-01

    String matching, also known as pattern matching, is one of the primary concepts in network security. In this area, the effectiveness and efficiency of string matching algorithms are important for applications in network security such as network intrusion detection, virus detection, signature matching and web content filtering systems. This paper presents a brief review of some string matching techniques used for network security.
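
    As a toy illustration of signature matching: production intrusion detection engines use multi-pattern algorithms such as Aho-Corasick or Wu-Manber, but the idea can be shown with Python's built-in substring search (the signatures below are made up):

      def scan_payload(payload: bytes, signatures: dict) -> list:
          # Report which known signatures occur in a packet payload; the
          # built-in search stands in for the multi-pattern algorithms
          # surveyed in papers like this one.
          return [name for name, sig in signatures.items() if sig in payload]

      hits = scan_payload(b"GET /?cmd=cat%20/etc/passwd",
                          {"lfi-passwd": b"/etc/passwd", "cmd-param": b"cmd="})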

  8. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
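
    The simplest template matching instance treated in such texts is normalized cross-correlation over a grey-level image. A naive but correct sketch follows (real systems use FFT-based or library implementations such as OpenCV's matchTemplate; array names are illustrative):

      import numpy as np

      def match_template_ncc(image, template):
          # Normalized cross-correlation of a grey-level template at every
          # valid offset; the peak of the returned map is the best match.
          th, tw = template.shape
          t = template - template.mean()
          t_norm = np.sqrt((t ** 2).sum())
          out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  w = image[i:i + th, j:j + tw]
                  w = w - w.mean()
                  denom = np.sqrt((w ** 2).sum()) * t_norm
                  if denom > 0:
                      out[i, j] = (w * t).sum() / denom
          return out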

  9. 2D and 3D modeling of wave propagation in cold magnetized plasma near the Tore Supra ICRH antenna relying on the perfectly matched layer technique

    International Nuclear Information System (INIS)

    Jacquot, J; Colas, L; Clairet, F; Goniche, M; Hillairet, J; Lombard, G; Heuraux, S; Milanesio, D

    2013-01-01

    A novel method to simulate ion cyclotron wave coupling in the edge of a tokamak plasma with the finite element technique is presented. It is applied in the commercial software COMSOL Multiphysics. Its main features include the perfectly matched layer (PML) technique to emulate radiating boundary conditions beyond a critical cutoff layer for the fast wave (FW), full-wave propagation across the inhomogeneous cold peripheral plasma and a detailed description of the wave launcher geometry. The PML technique, while widely used in numerical simulations of wave propagation, has scarcely been used for magnetized plasmas, due to specificities of this gyrotropic material. A versatile PML formulation, valid for full dielectric tensors, is summarized and interpreted as wave propagation in an artificial medium. The behavior of this technique has been checked for plane waves on homogeneous plasmas. Wave reflection has been quantified and compared to analytical predictions. An incompatibility issue for adapting the PML for forward (FW) and backward (slow wave (SW)) propagating waves simultaneously has been evidenced. In a tokamak plasma, this critical issue is overcome by taking advantage of the inhomogeneous density profile to reflect the SW before it reaches the PML. The simulated coupling properties of a Tore Supra ion cyclotron resonance heating (ICRH) antenna have been compared to experimental values in a situation of good single-pass absorption. The necessary antenna elements to include in the geometry to recover the coupling properties obtained experimentally are also discussed. (paper)

  10. An improved perfectly matched layer in the eigenmode expansion technique

    DEFF Research Database (Denmark)

    Gregersen, Niels; Mørk, Jesper

    2008-01-01

    When employing the eigenmode expansion technique (EET), parasitic reflections at the boundary of the computational domain can be suppressed by introducing a perfectly matched layer (PML). However, the traditional PML suffers from an artificial field divergence limiting its usefulness. We propose...

  11. Weed identification using an automated active shape matching (AASM) technique

    DEFF Research Database (Denmark)

    C. Swain, Kishore; Nørremark, Michael; Jørgensen, Rasmus Nyholm

    2011-01-01

    Weed identification and control is a challenge for intercultural operations in agriculture. As an alternative to chemical pest control, a smart weed identification technique followed by a mechanical weed control system could be developed. The proposed smart identification technique works on the concept of 'active shape modelling' to identify weed and crop plants based on their morphology. The automated active shape matching system (AASM) technique consisted of: i) a Pixelink camera, ii) an LTI (Lehrstuhl für technische Informatik) image processing library, iii) a laptop PC with the Linux OS. A 2...

  12. Technique to match mantle and para-aortic fields

    International Nuclear Information System (INIS)

    Lutz, W.R.; Larsen, R.D.

    1983-01-01

    A technique is described to match the mantle and para-aortic fields used in the treatment of Hodgkin's disease, when the patient is treated alternately in the supine and prone positions. The approach is based on referencing the field edges to a point close to the vertebral column, where uncontrolled motion is minimal and where accurate matching is particularly important. Fiducial surface points are established in the simulation process to accomplish the objective. Dose distributions have been measured to study the combined effect of divergence differences, changes in body angulation and setup errors. Even with the most careful technique, the use of small cord blocks of 50% transmission is an advisable precaution for the posterior fields.

  13. History Matching: Towards Geologically Reasonable Models

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Cordua, Knud Skou; Mosegaard, Klaus

    This work focuses on the development of a new method for the history matching problem that, through a deterministic search, finds a geologically feasible solution. Complex geology is taken into account by evaluating multiple-point statistics from earth model prototypes - training images. Further, a function that measures similarity between statistics of a training image and statistics of any smooth model is introduced and its analytical gradient is computed. This allows us to apply any gradient-based method to the history matching problem and guide a solution until it satisfies both production data and complexity...

  14. On a special case of model matching

    Czech Academy of Sciences Publication Activity Database

    Zagalak, Petr

    2004-01-01

    Vol. 77, No. 2 (2004), pp. 164-172. ISSN 0020-7179. R&D Projects: GA ČR GA102/01/0608. Institutional research plan: CEZ:AV0Z1075907. Keywords: linear systems * state feedback * model matching. Subject RIV: BC - Control Systems Theory. Impact factor: 0.702, year: 2004

  15. Object matching using a locally affine invariant and linear programming techniques.

    Science.gov (United States)

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
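
    The affine-combination construction can be sketched directly. The sketch below assumes point arrays shaped (n, d); names are illustrative, and the unit-sum constraint is imposed here in the least-squares sense rather than exactly:

      import numpy as np

      def affine_weights(point, neighbors):
          # Weights w with point ≈ sum_i w_i * neighbor_i and sum(w) = 1;
          # the unit-sum constraint is appended as an extra equation and
          # the stacked system is solved by least squares.
          A = np.vstack([neighbors.T, np.ones(len(neighbors))])
          b = np.append(point, 1.0)
          w, *_ = np.linalg.lstsq(A, b, rcond=None)
          return w

      def match_penalty(target_point, target_neighbors, w):
          # Reconstruction error of a candidate match under the template's
          # weights; this is the disagreement term the paper penalizes.
          return np.linalg.norm(target_neighbors.T @ w - target_point)

    In the paper's formulation, these reconstruction errors enter a linear objective solved by linear programming; the least-squares step above only builds the weights.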

  16. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, template matching techniques like Dynamic Time Warping (DTW), statistical pattern matching techniques such as the Hidden Markov Model (HMM) and Gaussian Mixture Models (GMM), and machine learning techniques such as Neural Networks (NN), Support Vector Machines (SVM) and Decision Trees (DT) are most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made based on the experiments are presented in this paper. For the developed system, the highest word recognition accuracy is achieved with the HMM technique. It offered 100% accuracy during the training process and 97.92% during testing.
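
    Of the techniques listed, DTW is the most compact to illustrate. A minimal sketch, assuming two sequences of per-frame feature vectors such as MFCCs (the O(nm) dynamic program, without the usual band constraints):

      import numpy as np

      def dtw_distance(a, b):
          # Dynamic time warping between two feature sequences: the classic
          # template matching baseline named in the abstract above.
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]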

  17. An improved perfectly matched layer for the eigenmode expansion technique

    DEFF Research Database (Denmark)

    Gregersen, Niels; Mørk, Jesper

    2008-01-01

    ...be suppressed by introducing a perfectly matched layer (PML) using e.g. complex coordinate stretching of the cylinder radius. However, the traditional PML suffers from an artificial field divergence limiting its usefulness. We show that the choice of a constant cylinder radius leads to mode profiles...

  18. Modelling relationships between match events and match outcome in elite football.

    Science.gov (United States)

    Liu, Hongyou; Hopkins, Will G; Gómez, Miguel-Angel

    2016-08-01

    Identifying match events that are related to match outcome is an important task in football match analysis. Here we have used generalised mixed linear modelling to determine relationships of 16 football match events and 1 contextual variable (game location: home/away) with the match outcome. Statistics of 320 close matches (goal difference ≤ 2) of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Relationships were evaluated with magnitude-based inferences and were expressed as extra matches won or lost per 10 close matches for an increase of two within-team or between-team standard deviations (SD) of the match event (representing effects of changes in team values from match to match and of differences between average team values, respectively). There was a moderate positive within-team effect from shots on target (3.4 extra wins per 10 matches; 99% confidence limits ±1.0), and a small positive within-team effect from total shots (1.7 extra wins; ±1.0). Effects of most other match events were related to ball possession, which had a small negative within-team effect (1.2 extra losses; ±1.0) but a small positive between-team effect (1.7 extra wins; ±1.4). Game location showed a small positive within-team effect (1.9 extra wins; ±0.9). In analyses of nine combinations of team and opposition end-of-season rank (classified as high, medium, low), almost all between-team effects were unclear, while within-team effects varied depending on the strength of team and opposition. Some of these findings will be useful to coaches and performance analysts when planning training sessions and match tactics.

  19. Parikh Matching in the Streaming Model

    DEFF Research Database (Denmark)

    Lee, Lap-Kei; Lewenstein, Moshe; Zhang, Qin

    2012-01-01

    Let S be a string over an alphabet Σ = {σ1, σ2, …}. A Parikh-mapping maps a substring S′ of S to a |Σ|-length vector that contains, in location i of the vector, the count of σi in S′. Parikh matching refers to the problem of finding all substrings of a text T which match to a given input |Σ|-leng...
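
    A sketch of the offline (non-streaming) version of the problem, for which a sliding window of symbol counts suffices; the streaming algorithms in the record achieve this with far less memory. Here the pattern is given as a string whose Parikh vector is the query:

      from collections import Counter

      def parikh_matches(text, pattern):
          # Slide a window of length |pattern| over the text, updating its
          # symbol-count (Parikh) vector in O(1) amortized per step.
          m = len(pattern)
          target = Counter(pattern)
          if m > len(text):
              return []
          window = Counter(text[:m])
          hits = [0] if window == target else []
          for i in range(m, len(text)):
              window[text[i]] += 1
              window[text[i - m]] -= 1
              if window[text[i - m]] == 0:
                  del window[text[i - m]]   # keep the Counter comparison exact
              if window == target:
                  hits.append(i - m + 1)
          return hits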

  20. An accelerated image matching technique for UAV orthoimage registration

    Science.gov (United States)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
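
    The ABRISK implementation itself is not publicly available, but the underlying BRISK matching stage it accelerates can be reproduced with OpenCV. File names below are placeholders, and the "Sorting Ring" filter is not part of OpenCV:

      import cv2

      img1 = cv2.imread("uav_orthoimage.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("reference_image.png", cv2.IMREAD_GRAYSCALE)

      brisk = cv2.BRISK_create()
      kp1, des1 = brisk.detectAndCompute(img1, None)
      kp2, des2 = brisk.detectAndCompute(img2, None)

      # Binary descriptors are compared with the Hamming distance; the
      # cross-check discards asymmetric pairs before any spatial analysis
      # of corresponding control points.
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
      control_point_candidates = matches[:200]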

  1. An Efficient Metric of Automatic Weight Generation for Properties in Instance Matching Technique

    OpenAIRE

    Seddiqui, Md. Hanif; Nath, Rudra Pratap Deb; Aono, Masaki

    2015-01-01

    The proliferation of heterogeneous data sources of semantic knowledge bases intensifies the need for an automatic instance matching technique. However, the efficiency of instance matching is often influenced by the weight of a property associated with instances. Automatic weight generation is a non-trivial, yet important, task in an instance matching technique. Therefore, identifying an appropriate metric for generating weight for a property automatically is nevertheless a formidab...

  2. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Park, So Hyun; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or the friction factor in a pipe with a twisted tape, while some other studies focused on heat transfer performance enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape was limited, detailed flow field data were not obtainable so far. Thus, researchers mainly relied on numerical simulations to obtain flow field data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access. In addition, a novel refractive index matching technique was used to eliminate optical distortion. These two combined techniques made it possible to measure the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). In this study, the flow field in the test section was measured for various flow conditions and finally compared with numerically calculated data. Velocity fields in a pipe with a classic twisted tape were measured using a particle image velocimetry (PIV) system. To obtain undistorted particle images, a novel optical technique, refractive index matching, was used, and it was shown that high-quality images can be obtained with this experimental equipment. The velocity data from the PIV were compared with the CFD simulations.

  3. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique

    International Nuclear Information System (INIS)

    Song, Min Seop; Park, So Hyun; Kim, Eung Soo

    2014-01-01

    Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or the friction factor in a pipe with a twisted tape, while some other studies focused on heat transfer performance enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape was limited, detailed flow field data were not obtainable so far. Thus, researchers mainly relied on numerical simulations to obtain flow field data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access. In addition, a novel refractive index matching technique was used to eliminate optical distortion. These two combined techniques made it possible to measure the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). In this study, the flow field in the test section was measured for various flow conditions and finally compared with numerically calculated data. Velocity fields in a pipe with a classic twisted tape were measured using a particle image velocimetry (PIV) system. To obtain undistorted particle images, a novel optical technique, refractive index matching, was used, and it was shown that high-quality images can be obtained with this experimental equipment. The velocity data from the PIV were compared with the CFD simulations.

  4. Alarm handling systems and techniques developed to match operator tasks

    Energy Technology Data Exchange (ETDEWEB)

    Bye, A; Moum, B R [Institutt for Energiteknikk, Halden (Norway). OECD Halden Reaktor Projekt

    1997-09-01

    This paper covers alarm handling methods and techniques explored at the Halden Project, and describes current status on the research activities on alarm systems. Alarm systems are often designed by application of a bottom-up strategy, generating alarms at component level. If no structuring of the alarms is applied, this may result in alarm avalanches in major plant disturbances, causing cognitive overload of the operator. An alarm structuring module should be designed using a top-down approach, analysing operator's tasks, plant states, events and disturbances. One of the operator's main tasks during plant disturbances is status identification, including determination of plant status and detection of plant anomalies. The main support of this is provided through the alarm systems, the process formats, the trends and possible diagnosis systems. The alarm system should both physically and conceptually be integrated with all these systems. 9 refs, 5 figs.

  5. Alarm handling systems and techniques developed to match operator tasks

    International Nuclear Information System (INIS)

    Bye, A.; Moum, B.R.

    1997-01-01

    This paper covers alarm handling methods and techniques explored at the Halden Project, and describes current status on the research activities on alarm systems. Alarm systems are often designed by application of a bottom-up strategy, generating alarms at component level. If no structuring of the alarms is applied, this may result in alarm avalanches in major plant disturbances, causing cognitive overload of the operator. An alarm structuring module should be designed using a top-down approach, analysing operator's tasks, plant states, events and disturbances. One of the operator's main tasks during plant disturbances is status identification, including determination of plant status and detection of plant anomalies. The main support of this is provided through the alarm systems, the process formats, the trends and possible diagnosis systems. The alarm system should both physically and conceptually be integrated with all these systems. 9 refs, 5 figs

  6. The abrasive blasting technique. Matching the waste minimisation precept

    International Nuclear Information System (INIS)

    Welbers, Philipp; Noll, Thomas; Braehler, Georg; Sohnius, Bern

    2010-01-01

    Nowadays, the main challenges in the nuclear industry are, besides the development and design of new facilities, the dismantling of outlived nuclear installations and the subsequent waste handling. Not only Germany but all countries and institutions involved in our business face similar problems: a large quantity of slightly contaminated waste, equipment and civil structures arises inevitably during operation and, especially, during dismantling. This waste occurs in huge amounts due to its bulky nature, e.g. pipework. Storage of bulky items is very expensive and would not be compatible with the waste minimisation precept. Treatment in an ecologically correct and economically beneficial way is the key factor in dealing with this waste. This means decontamination of the waste up to clearance levels where possible. A suitable solution is the abrasive blasting technique. (orig.)

  7. Generic Energy Matching Model and Figure of Matching Algorithm for Combined Renewable Energy Systems

    Directory of Open Access Journals (Sweden)

    J.C. Brezet

    2009-08-01

    In this paper, the Energy Matching Model and Figure of Matching Algorithm, which originally were dedicated only to photovoltaic (PV) systems [1], are extended towards a model and algorithm suitable for combined systems that result from the integration of two or more renewable energy sources into one. The systems under investigation range from mobile portable devices up to the large renewable energy system conceivably to be applied at the Afsluitdijk (Closure Dike) in the north of the Netherlands. The Afsluitdijk is the major dam in the Netherlands, damming off the Zuiderzee, a salt water inlet of the North Sea, and turning it into the fresh water lake of the IJsselmeer. The energy chain of power supplies based on a combination of renewable energy sources can be modeled by using one generic Energy Matching Model as a starting point.

  8. The application of computer color matching techniques to the matching of target colors in a food substrate: a first step in the development of foods with customized appearance.

    Science.gov (United States)

    Kim, Sandra; Golding, Matt; Archer, Richard H

    2012-06-01

    A predictive color matching model based on the colorimetric technique was developed and used to calculate the concentrations of primary food dyes needed in a model food substrate to match a set of standard tile colors. This research is the first stage in the development of novel three-dimensional (3D) foods in which color images or designs can be rapidly reproduced in 3D form. Absorption coefficients were derived for each dye, from a concentration series in the model substrate, a microwave-baked cake. When used in a linear, additive blending model these coefficients were able to predict cake color from selected dye blends to within 3 ΔE*(ab,10) color difference units, or within the limit of a visually acceptable match. Absorption coefficients were converted to pseudo X₁₀, Y₁₀, and Z₁₀ tri-stimulus values (X₁₀(P), Y₁₀(P), Z₁₀(P)) for colorimetric matching. The Allen algorithm was used to calculate dye concentrations to match the X₁₀(P), Y₁₀(P), and Z₁₀(P) values of each tile color. Several recipes for each color were computed with the tile specular component included or excluded, and tested in the cake. Some tile colors proved out-of-gamut, limited by legal dye concentrations; these were scaled to within legal range. Actual differences suggest reasonable visual matches could be achieved for within-gamut tile colors. The Allen algorithm, with appropriate adjustments of concentration outputs, could provide a sufficiently rapid and accurate calculation tool for 3D color food printing. The predictive color matching approach shows potential for use in a novel embodiment of 3D food printing in which a color image or design could be rendered within a food matrix through the selective blending of primary dyes to reproduce each color element. The on-demand nature of this food application requires rapid color outputs which could be provided by the color matching technique, currently used in nonfood industries, rather than by empirical food
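
    The additive blending model at the heart of this approach amounts to a small constrained least-squares inversion. A sketch under assumed, placeholder absorption data follows; the paper itself uses the Allen algorithm with measured coefficients, so everything numeric here is illustrative:

      import numpy as np
      from scipy.optimize import nnls

      # Columns: per-unit-concentration contribution of each primary dye to
      # the pseudo tri-stimulus values (placeholder numbers, not the paper's
      # measured absorption coefficients).
      K = np.array([[0.90, 0.10, 0.20],
                    [0.30, 0.80, 0.10],
                    [0.10, 0.20, 0.70]])
      target = np.array([0.50, 0.40, 0.30])  # tile color in the same space

      # Additive model K @ c ≈ target with non-negative concentrations c.
      concentrations, residual = nnls(K, target)
      concentrations = np.minimum(concentrations, 1.0)  # cap to a legal range

    Capping (or rescaling) out-of-range concentrations mirrors the paper's handling of out-of-gamut tile colors limited by legal dye levels.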

  9. Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2015-01-01

    This paper is concerned with the topic of gravity matching aided inertial navigation technology using a Kalman filter. The dynamic state space model for the Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation, while the local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways. First, the marginalization technique is employed to exploit the conditionally linear substructure and reduce the computational load; specifically, the number of needed sigma points is reduced from 15 to 5 when this technique is used. Second, a robust technique based on the Chi-square test is employed to make the filter insensitive to uncertainties in the constructed observation model. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.
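
    The "9-point surface interpolation" observation model can plausibly be read as fitting a quadratic surface to a 3 x 3 neighborhood of gravity map nodes; the exact basis used in the paper is not stated in this excerpt, so the sketch below is only one reasonable reading, with all names illustrative:

      import numpy as np

      def local_gravity(p, grid_xy, grid_g):
          # Fit a quadratic surface g(x, y) to the 9 surrounding map nodes
          # (grid_xy: 9x2 coordinates, grid_g: 9 gravity values) and
          # evaluate it at the predicted position p = (x, y).
          x, y = grid_xy[:, 0], grid_xy[:, 1]
          A = np.column_stack([np.ones(9), x, y, x * y, x ** 2, y ** 2])
          coef, *_ = np.linalg.lstsq(A, grid_g, rcond=None)  # 6 coefficients
          px, py = p
          return coef @ np.array([1.0, px, py, px * py, px ** 2, py ** 2])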

  10. Anticipated growth and business cycles in matching models

    NARCIS (Netherlands)

    den Haan, W.J.; Kaltenbrunner, G.

    2009-01-01

    In a business cycle model that incorporates a standard matching framework, employment increases in response to news shocks, even though the wealth effect associated with the increase in expected productivity reduces labor force participation. The reason is that the matching friction induces

  11. Marriage and Divorce in a Model of Matching

    OpenAIRE

    Mumcu, Ayse; Saglam, Ismail

    2006-01-01

    We study the problem of marriage formation and marital dissolution in a two-period model of matching, extending the matching-with-bargaining framework of Crawford and Rochford (1986). We run simulations to find the effects of the alimony rate, the legal cost of divorce, initial endowments, and couple and single productivity parameters on payoffs and marital status in the society.

  12. A review of the Match technique as applied to AASE-2/EASOE and SOLVE/THESEO 2000

    Directory of Open Access Journals (Sweden)

    G. A. Morris

    2005-01-01

    We apply the NASA Goddard Trajectory Model to data from a series of ozonesondes to derive ozone loss rates in the lower stratosphere for the AASE-2/EASOE mission (January-March 1992) and for the SOLVE/THESEO 2000 mission (January-March 2000) in an approach similar to Match. Ozone loss rates are computed by comparing the ozone concentrations provided by ozonesondes launched at the beginning and end of the trajectories connecting the launches. We investigate the sensitivity of the Match results to the various parameters used to reject potential matches in the original Match technique. While these filters effectively eliminate from consideration 80% of the matched sonde pairs and >99% of matched observations in our study, we conclude that only a filter based on potential vorticity changes along the calculated back trajectories seems warranted. Our study also demonstrates that the ozone loss rates estimated in Match can vary by up to a factor of two depending upon the precise trajectory paths calculated for each trajectory. As a result, the statistical uncertainties published with previous Match results might need to be augmented by an additional systematic error. The sensitivity to the trajectory path is particularly pronounced in the month of January, for which the largest ozone loss rate discrepancies between photochemical models and Match are found. For most of the two study periods, our ozone loss rates agree with those previously published. Notable exceptions are found for January 1992 at 475 K and late February/early March 2000 at 450 K, both periods during which we generally find smaller loss rates than the previous Match studies. Integrated ozone loss rates estimated by Match in both of those years compare well with those found in numerous other studies and in a potential vorticity/potential temperature approach shown previously and in this paper. Finally, we suggest an alternate approach to Match using trajectory mapping. This approach uses

  13. Money Creation in a Random Matching Model

    OpenAIRE

    Alexei Deviatov

    2006-01-01

    I study money creation in versions of the Trejos-Wright (1995) and Shi (1995) models with indivisible money and individual holdings bounded at two units. I work with the same class of policies as in Deviatov and Wallace (2001), who study money creation in that model. However, I consider an alternative notion of implementability: the ex ante pairwise core. I compute a set of numerical examples to determine whether money creation is beneficial. I find beneficial effects of money creation if indiv...

  14. A coupled piezoelectric–electromagnetic energy harvesting technique for achieving increased power output through damping matching

    International Nuclear Information System (INIS)

    Challa, Vinod R; Prasad, M G; Fisher, Frank T

    2009-01-01

    Vibration energy harvesting is being pursued as a means to power wireless sensors and ultra-low power autonomous devices. From a design standpoint, matching the electrical damping induced by the energy harvesting mechanism to the mechanical damping in the system is necessary for maximum efficiency. In this work two independent energy harvesting techniques are coupled to provide higher electrical damping within the system. Here the coupled energy harvesting device consists of a primary piezoelectric energy harvesting device to which an electromagnetic component is added to better match the total electrical damping to the mechanical damping in the system. The first coupled device has a resonance frequency of 21.6 Hz and generates a peak power output of ∼332 µW, compared to 257 and 244 µW obtained from the optimized, stand-alone piezoelectric and electromagnetic energy harvesting devices, respectively, resulting in a 30% increase in power output. A theoretical model has been developed which closely agrees with the experimental results. A second coupled device, which utilizes the d33 piezoelectric mode, shows a 65% increase in power output in comparison to the corresponding stand-alone, single harvesting mode devices. This work illustrates the design considerations and limitations that one must consider to enhance device performance through the coupling of multiple harvesting mechanisms within a single energy harvesting device.
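
    For context, the damping-matching rule quoted above is the standard result for a linear, base-excited harvester (the Williams-Yates model); it is offered here as background, not as the paper's own derivation. With proof mass m, base excitation amplitude Y, natural frequency \omega_n, and electrical/mechanical damping ratios \zeta_e, \zeta_m, the electrical power at resonance is

      P_e = \frac{m\,\zeta_e\,Y^2\,\omega_n^3}{4\,(\zeta_e + \zeta_m)^2},
      \qquad
      \frac{\partial P_e}{\partial \zeta_e} = 0 \;\Longrightarrow\; \zeta_e = \zeta_m

    so adding an electromagnetic branch that raises the total electrical damping toward the mechanical damping increases the harvested power, which is the coupling rationale described above.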

  15. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Energy Technology Data Exchange (ETDEWEB)

    Stubberud, Peter A., E-mail: stubber@ee.unlv.edu [Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, Las Vegas, NV 89154 (United States); Stubberud, Stephen C., E-mail: scstubberud@ieee.org [Oakridge Technology, San Diego, CA 92121 (United States); Stubberud, Allen R., E-mail: stubberud@att.net [Department of Electrical Engineering and Computer Science, University of California, Irvine, Irvine, CA 92697 (United States)

    2014-12-10

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design

  16. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    International Nuclear Information System (INIS)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-01-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design

  17. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    Science.gov (United States)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-12-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT
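
    Across the three records above, the core DSMT idea is random switching among nominally identical sensors followed by pass-band filtering. A caricature in Python, with all signal parameters assumed and no claim to match the papers' analytic treatment:

      import numpy as np
      from scipy.signal import butter, filtfilt

      rng = np.random.default_rng(1)

      def dsmt_combine(gyro_outputs, fs, passband_hz):
          # Randomly switch among N nominally identical gyro outputs sample
          # by sample; coefficient-error noise is spread across the spectrum,
          # and a low-pass filter then removes the part outside the gyro
          # passband.
          n_gyros, n_samples = gyro_outputs.shape
          pick = rng.integers(0, n_gyros, size=n_samples)   # random selector
          switched = gyro_outputs[pick, np.arange(n_samples)]
          b, a = butter(4, passband_hz / (fs / 2))  # 4th-order Butterworth
          return filtfilt(b, a, switched)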

  18. A technique to obtain a multiparameter radar rainfall algorithm using the probability matching procedure

    International Nuclear Information System (INIS)

    Gorgucci, E.; Scarchilli, G.

    1997-01-01

    The natural cumulative distributions of rainfall observed by a network of rain gauges and a multiparameter radar are matched to derive multiparameter radar algorithms for rainfall estimation. The use of multiparameter radar measurements in a statistical framework to estimate rainfall is presented in this paper. The techniques developed in this paper are applied to the radar and rain gauge measurements of rainfall observed in central Florida and central Italy. Conventional pointwise estimates of rainfall are also compared. The probability matching procedure, when applied to the radar and surface measurements, shows that multiparameter radar algorithms can match the probability distribution function better than reflectivity-based algorithms. It is also shown that the multiparameter radar algorithm derived by matching the cumulative distribution function of rainfall provides more accurate estimates of rainfall on the ground in comparison to any conventional reflectivity-based algorithm.
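
    Probability matching reduces to equating empirical quantiles of the two distributions. A sketch, with array names illustrative (a single radar variable is shown; the multiparameter case matches a combined estimator the same way):

      import numpy as np

      def probability_matched_curve(radar_var, gauge_rate, n_q=100):
          # Equate cumulative distributions: the q-th quantile of the radar
          # variable is mapped to the q-th quantile of the gauge rain rate.
          q = np.linspace(0.01, 0.99, n_q)
          return np.quantile(radar_var, q), np.quantile(gauge_rate, q)

      # Usage (historical radar samples and co-located gauge rates):
      # radar_q, rain_q = probability_matched_curve(radar_samples, gauge_rates)
      # rate_estimate = np.interp(new_radar_values, radar_q, rain_q)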

  19. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of a neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned a distance based on the Kinect technology, which can be used by the robot to determine the navigation path, along with obstacle detection applications.

  20. Combining machine learning and matching techniques to improve causal inference in program evaluation.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    Program evaluations often utilize various matching approaches to emulate the randomization process for group assignment in experimental studies. Typically, the matching strategy is implemented, and then covariate balance is assessed before estimating treatment effects. This paper introduces a novel analytic framework utilizing a machine learning algorithm called optimal discriminant analysis (ODA) for assessing covariate balance and estimating treatment effects, once the matching strategy has been implemented. This framework holds several key advantages over the conventional approach: application to any variable metric and number of groups; insensitivity to skewed data or outliers; and use of accuracy measures applicable to all prognostic analyses. Moreover, ODA accepts analytic weights, thereby extending the methodology to any study design where weights are used for covariate adjustment or more precise (differential) outcome measurement. One-to-one matching on the propensity score was used as the matching strategy. Covariate balance was assessed using standardized difference in means (conventional approach) and measures of classification accuracy (ODA). Treatment effects were estimated using ordinary least squares regression and ODA. Using empirical data, ODA produced results highly consistent with those obtained via the conventional methodology for assessing covariate balance and estimating treatment effects. When ODA is combined with matching techniques within a treatment effects framework, the results are consistent with conventional approaches. However, given that it provides additional dimensions and robustness to the analysis versus what can currently be achieved using conventional approaches, ODA offers an appealing alternative. © 2016 John Wiley & Sons, Ltd.
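
    The matching strategy described, 1:1 nearest-neighbor matching on the propensity score, can be sketched as follows; ODA itself is not reproduced here, and the logistic propensity model is the conventional choice rather than anything specific to this paper:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def match_one_to_one(X, treated):
          # Propensity score from a logistic model of treatment on the
          # covariates X, then greedy 1:1 nearest-neighbor matching
          # without replacement.
          ps = LogisticRegression(max_iter=1000).fit(X, treated)
          ps = ps.predict_proba(X)[:, 1]
          controls = list(np.flatnonzero(treated == 0))
          pairs = []
          for t in np.flatnonzero(treated == 1):
              if not controls:
                  break
              j = int(np.argmin(np.abs(ps[controls] - ps[t])))
              pairs.append((t, controls.pop(j)))    # closest remaining control
          return pairs  # indices of (treated, matched control)

    Covariate balance would then be assessed on the matched pairs, by standardized mean differences in the conventional workflow or by classification accuracy in the ODA framework.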

  1. Equilibrium Price Dispersion in a Matching Model with Divisible Money

    NARCIS (Netherlands)

    Kamiya, K.; Sato, T.

    2002-01-01

    The main purpose of this paper is to show that, for any given parameter values, an equilibrium with dispersed prices (two-price equilibrium) exists in a simple matching model with divisible money presented by Green and Zhou (1998). We also show that our two-price equilibrium is unique in certain

  2. Electron/photon matched field technique for treatment of orbital disease

    International Nuclear Information System (INIS)

    Arthur, Douglas W.; Zwicker, Robert D.; Garmon, Pamela W.; Huang, David T.; Schmidt-Ullrich, Rupert K.

    1997-01-01

    Purpose: A number of approaches have been described in the literature for irradiation of malignant and benign diseases of the orbit. Techniques described to date do not deliver a homogeneous dose to the orbital contents while sparing the cornea and lens of excessive dose. This is a result of the geometry encountered in this region and the fact that the target volume, which includes the periorbital and retroorbital tissues but excludes the cornea, anterior chamber, and lens, cannot be readily accommodated by photon beams alone. To improve the dose distribution for these treatments, we have developed a technique that combines a low-energy electron field carefully matched with modified photon fields to achieve acceptable dose coverage and uniformity. Methods and Materials: An anterior electron field and a lateral photon field setup is used to encompass the target volume. Modification of these fields permits accurate matching as well as conformation of the dose distribution to the orbit. A flat-surfaced wax compensator assures uniform electron penetration across the field, and a sunken lead alloy eye block prevents excessive dose to the central structures of the anterior segment. The anterior edge of the photon field is modified by broadening the penumbra using a form of pseudodynamic collimation. Direct measurements using film and ion chamber dosimetry were used to study the characteristics of the fall-off region of the electron field and the penumbra of the photon fields. From the data collected, the technique for accurate field matching and dose uniformity was generated. Results: The isodose curves produced with this treatment technique demonstrate homogeneous dose coverage of the orbit, including the paralenticular region, and sufficient dose sparing of the anterior segment. The posterior lens accumulates less than 40% of the prescribed dose, and the lateral aspect of the lens receives less than 30%. A dose variation in the match region of ±12% is confronted when

  3. Stability analysis of resistive MHD modes via a new numerical matching technique

    International Nuclear Information System (INIS)

    Furukawa, M.; Tokuda, S.; Zheng, L.-J.

    2009-01-01

    Full text: The asymptotic matching technique is one of the principal methods for calculating the linear stability of resistive magnetohydrodynamic (MHD) modes such as tearing modes. In applying the asymptotic method, the plasma region is divided into two regions: a thin inner layer around the mode-resonant surface and ideal MHD regions outside the layer. If we try to solve this asymptotic matching problem numerically, we meet practical difficulties. Firstly, the inertia-less ideal MHD equation, or the Newcomb equation, has a regular singular point at the mode-resonant surface, leading to the so-called big and small solutions. Since the big solution is not square-integrable, it needs sophisticated treatment. Even if such a treatment is applied, the matching data, or the ratio of the small solution to the big one, has been revealed by numerical experiments to be sensitive to local MHD equilibrium accuracy and grid structure at the mode-resonant surface. Secondly, one of the independent solutions in the inner layer, which should be matched onto the ideal MHD solution, is not square-integrable. The response formalism has been adopted to resolve this problem. In the present paper, we propose a new method for computing the linear stability of resistive MHD modes via a matching technique, where the plasma region is divided into ideal MHD regions and an inner region with finite width. The matching technique using an inner region with finite width was recently developed for ideal MHD modes in cylindrical geometry, and good performance was shown. Our method extends this idea to resistive MHD modes. In the inner region, the low-beta reduced MHD equations are solved, and the solution is matched onto the solution of the Newcomb equation by using boundary conditions such that the parallel electric field vanishes properly as it approaches the computational boundaries. If we use the inner region with finite width, the practical difficulties raised above can be avoided from the beginning.

  4. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Fsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
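
    The classical orthogonal matching pursuit step that NOMP generalizes is compact. The sketch below is plain linear OMP, not the paper's ensemble-gradient nonlinear variant; `D` is the (overcomplete) dictionary with unit-norm columns and `y` the data vector:

      import numpy as np

      def omp(D, y, n_nonzero):
          # Greedy step: pick the dictionary atom most correlated with the
          # residual, then re-fit all selected coefficients by least squares.
          support, coef, residual = [], None, y.copy()
          for _ in range(n_nonzero):
              k = int(np.argmax(np.abs(D.T @ residual)))
              if k not in support:
                  support.append(k)       # augment the support
              coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coef
          x = np.zeros(D.shape[1])
          x[support] = coef
          return x

    In the paper, this greedy selection is wrapped in nonlinear iterations with gradients approximated by ISEM, and Tikhonov regularization replaces the plain least-squares re-fit.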

  5. Enhanced Map-Matching Algorithm with a Hidden Markov Model for Mobile Phone Positioning

    Directory of Open Access Journals (Sweden)

    An Luo

    2017-10-01

    Numerous map-matching techniques have been developed to improve positioning, using Global Positioning System (GPS) data and other sensors. However, most existing map-matching algorithms process GPS data with high sampling rates, to achieve a higher correct rate and strong universality. This paper introduces a novel map-matching algorithm based on a hidden Markov model (HMM) for GPS positioning and mobile phone positioning with a low sampling rate. The HMM is a statistical model well known for providing solutions to temporal recognition applications such as text and speech recognition. In this work, the hidden Markov chain model was built to establish a map-matching process, using the geometric data, the topology matrix of road links in the road network and a refined quad-tree data structure. HMM-based map-matching exploits the Viterbi algorithm to find the optimized road link sequence, which consists of the hidden states in the HMM. The HMM-based map-matching algorithm is validated on a vehicle trajectory using GPS and mobile phone data. The results show a significant improvement for mobile phone positioning and for GPS data at both high and low sampling rates.
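
    The Viterbi decoding at the core of HMM map-matching is a few lines once emission and transition log-probabilities have been built from the road network; both matrices are assumed as given inputs here, with the distance-based emission model and topology-based transition model left to the reader:

      import numpy as np

      def viterbi_map_match(emission_logp, transition_logp):
          # emission_logp[t, s]: log-likelihood of GPS fix t given road link s
          # transition_logp[s, s2]: log-probability of moving from link s to
          # link s2, derived from road topology. Returns the most likely
          # sequence of road links (the hidden states).
          T, S = emission_logp.shape
          score = emission_logp[0].copy()
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              cand = score[:, None] + transition_logp      # S x S candidates
              back[t] = np.argmax(cand, axis=0)            # best predecessor
              score = cand[back[t], np.arange(S)] + emission_logp[t]
          path = [int(np.argmax(score))]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]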

  6. Analysis of terrain map matching using multisensing techniques for applications to autonomous vehicle navigation

    Science.gov (United States)

    Page, Lance; Shen, C. N.

    1991-01-01

    This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.
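
    The matching step can be pictured as a search for the vantage point that best reproduces the measured skyline (a hypothetical Python sketch: skyline_from_map stands in for modelling the skyline from the global map, and an exhaustive candidate search replaces the paper's actual matching procedure):

    ```python
    import numpy as np

    def locate_vantage(measured_skyline, candidates, skyline_from_map):
        """measured_skyline: elevation angle sampled at fixed azimuths (1D array).
        candidates: iterable of (x, y, z) trial positions.
        skyline_from_map: hypothetical helper that models the skyline seen from
        (x, y, z) on the global map, at the same azimuth samples."""
        best, best_cost = None, np.inf
        for x, y, z in candidates:
            modeled = skyline_from_map(x, y, z)
            cost = np.sum((measured_skyline - modeled) ** 2)  # least-squares fit
            if cost < best_cost:
                best, best_cost = (x, y, z), cost
        return best
    ```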

  7. Improving Image Matching by Reducing Surface Reflections Using Polarising Filter Techniques

    Science.gov (United States)

    Conen, N.; Hastedt, H.; Kahmen, O.; Luhmann, T.

    2018-05-01

    In dense stereo matching applications, surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflexions, polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. However, the filter may influence the camera's orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis are conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without a polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with the polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud can be clearly improved when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed, and the special reflection properties of metallic surfaces are presented.

  8. Anesthesia Technique and Mortality after Total Hip or Knee Arthroplasty: A Retrospective, Propensity Score-matched Cohort Study.

    Science.gov (United States)

    Perlas, Anahi; Chan, Vincent W S; Beattie, Scott

    2016-10-01

    This propensity score-matched cohort study evaluates the effect of anesthetic technique on 30-day mortality after total hip or knee arthroplasty. All patients who had hip or knee arthroplasty between January 1, 2003, and December 31, 2014, were evaluated. The principal exposure was spinal versus general anesthesia. The primary outcome was 30-day mortality. Secondary outcomes were (1) perioperative myocardial infarction; (2) a composite of major adverse cardiac events that includes cardiac arrest, myocardial infarction, or newly diagnosed arrhythmia; (3) pulmonary embolism; (4) major blood loss; (5) hospital length of stay; and (6) operating room procedure time. A propensity score-matched-pair analysis was performed using a nonparsimonious logistic regression model of regional anesthetic use. We identified 10,868 patients, of whom 8,553 had spinal anesthesia and 2,315 had general anesthesia. Ninety-two percent (n = 2,135) of the patients who had general anesthesia were matched to similar patients who did not have general anesthesia. In the matched cohort, the 30-day mortality rate was 0.19% (n = 4) in the spinal anesthesia group and 0.8% (n = 17) in the general anesthesia group (risk ratio, 0.42; 95% CI, 0.21 to 0.83; P = 0.0045). Spinal anesthesia was also associated with a shorter hospital length of stay (5.7 vs. 6.6 days). These findings suggest an association between spinal anesthesia and lower 30-day mortality, as well as a shorter hospital length of stay, after elective joint replacement surgery.
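
    Matched-pair construction of this general kind can be sketched as nearest-neighbour matching on estimated propensity scores (a minimal Python illustration, matching with replacement; the study's covariates, nonparsimonious model and matching procedure are not reproduced):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def match_pairs(X, treated):
        """X: (n, p) covariates; treated: boolean numpy array (the exposure).
        Returns (treated_idx, control_idx) pairs matched on propensity score."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
        # nearest control in propensity-score space for each treated subject
        nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
        _, j = nn.kneighbors(ps[t_idx].reshape(-1, 1))
        return list(zip(t_idx, c_idx[j.ravel()]))
    ```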

  9. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  10. A new registration method with voxel-matching technique for temporal subtraction images

    Science.gov (United States)

    Itai, Yoshinori; Kim, Hyoungseop; Ishikawa, Seiji; Katsuragawa, Shigehiko; Doi, Kunio

    2008-03-01

    A temporal subtraction image, which is obtained by subtraction of a previous image from a current one, can be used for enhancing interval changes on medical images by removing most of normal structures. One of the important problems in temporal subtraction is that subtraction images commonly include artifacts created by slight differences in the size, shape, and/or location of anatomical structures. In this paper, we developed a new registration method with voxel-matching technique for substantially removing the subtraction artifacts on the temporal subtraction image obtained from multiple-detector computed tomography (MDCT). With this technique, the voxel value in a warped (or non-warped) previous image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, which would be closest (identical or nearly equal) to the voxel value in the corresponding location in the current image. Our new method was examined on 16 clinical cases with MDCT images. Preliminary results indicated that interval changes on the subtraction images were enhanced considerably, with a substantial reduction of misregistration artifacts. The temporal subtraction images obtained by use of the voxel-matching technique would be very useful for radiologists in the detection of interval changes on MDCT images.
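
    The voxel-matching substitution itself is simple; a minimal Python sketch (kernel size and border handling are simplifying assumptions, the previous image is assumed already warped, and the loops are left unvectorized for clarity):

    ```python
    import numpy as np

    def voxel_match(prev_warped, curr, r=1):
        """For each voxel, replace the warped previous value with the value in a
        (2r+1)^3 kernel of prev_warped closest to the current voxel value."""
        out = prev_warped.copy()
        nz, ny, nx = curr.shape
        for z in range(r, nz - r):
            for y in range(r, ny - r):
                for x in range(r, nx - r):
                    kernel = prev_warped[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1]
                    out[z, y, x] = kernel.flat[np.abs(kernel - curr[z, y, x]).argmin()]
        return out

    # subtraction = curr - voxel_match(prev_warped, curr)  # artifact-reduced image
    ```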

  11. The Robust Control Mixer Method for Reconfigurable Control Design By Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, Mogens; Verhagen, M.

    2001-01-01

    This paper proposes a robust reconfigurable control synthesis method based on the combination of the control mixer method and robust H∞ control techniques through the model-matching strategy. The control mixer modules are extended from the conventional matrix form into the LTI system form. By regarding the nominal control system as the desired model, an augmented control system is constructed through the model-matching formulation, such that current robust control techniques can be used to synthesize these dynamical modules. An extension of this method that addresses performance recovery in addition to functionality recovery is also discussed under this framework. Compared with the conventional control mixer method, the proposed method considers the reconfigured system's stability, performance and robustness simultaneously. Finally, the proposed method is illustrated by a case study.

  12. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.

  13. MODELING CONTROLLED ASYNCHRONOUS ELECTRIC DRIVES WITH MATCHING REDUCERS AND TRANSFORMERS

    Directory of Open Access Journals (Sweden)

    V. S. Petrushin

    2015-04-01

    Full Text Available Purpose. To develop mathematical models of speed-controlled induction electric drives that jointly consider transformers, motors and loads, as well as matching reducers and transformers, in both static and dynamic regimes, for the analysis of their operating characteristics. Methodology. The mathematical modelling accounts for the functional, mass, dimensional and cost indexes of reducers and transformers, which allows the engineering and economic aspects of speed-controlled induction electric drives to be observed. The mathematical models used to examine the transient electromagnetic and electromechanical processes are based on systems of nonlinear differential equations with nonlinear coefficients (the parameters of the motor equivalent circuits), which vary at each operating point, owing in part to saturation of the magnetic system and current displacement in the rotor winding of the induction motor. To raise the level of adequacy of the models, iron losses in the magnetic circuit as well as additional and mechanical losses are considered. Results. Several speed-controlled induction electric drives, differing in their components but working on loads of equal character, magnitude and required control range, were modelled. Using families of characteristics, including mechanical ones, at various regulating parameters, on which the performances of the load mechanism are superimposed, the adjusting characteristics were obtained, representing the dependences of electrical, energy and thermal quantities on the angular speed of the motors. Originality. The proposed complex models of speed-controlled induction electric drives with matching reducers and transformers enable a well-founded selection of drive components. They can also be used as design models in the development of speed-controlled induction motors. Practical value. Operating characteristics of various speed-controlled induction electric drives...

  14. A New Model for a Carpool Matching Service.

    Directory of Open Access Journals (Sweden)

    Jizhe Xia

    Full Text Available Carpooling is an effective means of reducing traffic. A carpool team shares a vehicle for their commute, which reduces the number of vehicles on the road during rush-hour periods. Carpooling is officially sanctioned by most governments and is supported by the construction of high-occupancy vehicle lanes. A number of carpooling services have been designed to match commuters into carpool teams, but it is known that the determination of optimal carpool teams is a combinatorially complex problem, and therefore technological solutions are difficult to achieve. In this paper, a model for carpool matching services is proposed, and both optimal and heuristic approaches are tested to find solutions for that model. The results show that different solution approaches are preferred over different ranges of problem instances. Most importantly, it is demonstrated that a new formulation and associated solution procedures can permit the determination of optimal carpool teams and routes. An instantiation of the model is presented (using the street network of Guangzhou city, China) to demonstrate how carpool teams can be determined.

  15. Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases.

    Science.gov (United States)

    Kelley, Brendan P; Klochko, Chad; Halabi, Safwan; Siegal, Daniel

    2016-06-01

    Retrospective data mining has tremendous potential in research but is time and labor intensive. Current data mining software contains many advanced search features but is limited in its ability to identify patients who meet multiple complex independent search criteria. Simple keyword and Boolean search techniques are ineffective when more complex searches are required, or when a search for multiple mutually inclusive variables becomes important. This is particularly true when trying to identify patients with a set of specific radiologic findings or proximity in time across multiple different imaging modalities. Another challenge that arises in retrospective data mining is that much variation still exists in how image findings are described in radiology reports. We present an algorithmic approach to solve this problem and describe a specific use case scenario in which we applied our technique to a real-world data set in order to identify patients who matched several independent variables in our institution's picture archiving and communication systems (PACS) database.

  16. IMPROVING IMAGE MATCHING BY REDUCING SURFACE REFLECTIONS USING POLARISING FILTER TECHNIQUES

    Directory of Open Access Journals (Sweden)

    N. Conen

    2018-05-01

    Full Text Available In dense stereo matching applications, surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflexions, polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. However, the filter may influence the camera’s orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis are conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without a polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with the polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud can be clearly improved when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed, and the special reflection properties of metallic surfaces are presented.

  17. Automated image-matching technique for comparative diagnosis of the liver on CT examination

    International Nuclear Information System (INIS)

    Okumura, Eiichiro; Sanada, Shigeru; Suzuki, Masayuki; Tsushima, Yoshito; Matsui, Osamu

    2005-01-01

    When interpreting enhanced computed tomography (CT) images of the upper abdomen, radiologists visually select a set of images of the same anatomical positions from two or more CT image series (i.e., non-enhanced and contrast-enhanced CT images at arterial and delayed phase) to depict and to characterize any abnormalities. The same process is also necessary to create subtraction images by computer. We have developed an automated image selection system using a template-matching technique that allows the recognition of image sets at the same anatomical position from two CT image series. Using the template-matching technique, we compared several anatomical structures in each CT image at the same anatomical position. As the position of the liver may shift with respiratory movement, not only the shape of the liver but also the gallbladder and other prominent structures included in the CT images were compared to allow appropriate selection of a set of CT images. This novel technique was applied to 11 upper abdominal CT examinations. In CT images with a slice thickness of 7.0 or 7.5 mm, the percentage of image sets selected correctly by the automated procedure was 86.6±15.3% per case. In CT images with a slice thickness of 1.25 mm, the percentages of correct selection of image sets by the automated procedure were 79.4±12.4% (non-enhanced and arterial-phase CT images) and 86.4±10.1% (arterial- and delayed-phase CT images). This automated method is useful for assisting in interpreting CT images and in creating digital subtraction images. (author)
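
    Such slice selection can be sketched with normalized cross-correlation of corresponding regions (an illustrative Python simplification; the actual system compares several anatomical structures, not a single crop):

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized 2D patches."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    def best_matching_slice(template, series):
        """template: 2D region (e.g., around the liver) from one CT series;
        series: list of 2D slices from the other series, same crop applied.
        Returns the index and score of the best anatomical match."""
        scores = [ncc(template, s) for s in series]
        return int(np.argmax(scores)), max(scores)
    ```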

  18. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    Science.gov (United States)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single underlying problem: determining the relationship that links representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method causes the patient minimal discomfort while keeping the resulting errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
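
    A generic building block for surface matching is the least-squares rigid fit between corresponding point sets (a Kabsch/SVD sketch in Python, illustrating the technique class rather than the authors' specific algorithm):

    ```python
    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t mapping points P onto
        their matched counterparts Q (both (n, 3)), via the Kabsch/SVD method."""
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)                # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cq - R @ cp

    # apply: aligned = (R @ P.T).T + t
    ```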

  19. Mouse models for gastric cancer: Matching models to biological questions

    Science.gov (United States)

    Poh, Ashleigh R; O'Donoghue, Robert J J

    2016-01-01

    Abstract Gastric cancer is the third leading cause of cancer‐related mortality worldwide. This is in part due to the asymptomatic nature of the disease, which often results in late‐stage diagnosis, at which point there are limited treatment options. Even when treated successfully, gastric cancer patients have a high risk of tumor recurrence and acquired drug resistance. It is vital to gain a better understanding of the molecular mechanisms underlying gastric cancer pathogenesis to facilitate the design of new‐targeted therapies that may improve patient survival. A number of chemically and genetically engineered mouse models of gastric cancer have provided significant insight into the contribution of genetic and environmental factors to disease onset and progression. This review outlines the strengths and limitations of current mouse models of gastric cancer and their relevance to the pre‐clinical development of new therapeutics. PMID:26809278

  20. The comparison of Co-60 and 4MV photons matching dosimetry during half-beam technique

    International Nuclear Information System (INIS)

    Cakir, Aydin; Bilge, Hatice; Dadasbilge, Alpar; Kuecuecuek, Halil; Okutan, Murat; Merdan Fayda, Emre

    2005-01-01

    In this phantom study, we compared the field-matching dosimetry of half-beam blocking with Co-60 and asymmetric collimation with 4 MV photons during craniospinal irradiation. The dose distributions are compared and discussed. First, gaps of different sizes were left between the cranial and spinal field borders. Second, the fields were overlapped by the same amounts. We irradiated films located in water-equivalent solid phantoms with Co-60 and 4 MV photon beams. This study indicates that field placement errors within ±1 mm are acceptable for both Co-60 and 4 MV photon energies during craniospinal irradiation with the half-beam block technique; within these limits the dose variations stay within ±5%. However, setup errors larger than 1 mm are unacceptable for both asymmetric collimation of 4 MV photons and half-blocking of Co-60.

  1. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Based on the huge amount of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling which make subdivision surfaces even more useful. Subdivision surfaces have reclaimed attention several years ago after their application in full-featured 3D animation movies, such as Toy Story. Since then and due to their attractive properties an ever i...

  2. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  3. IMPROVED TOPOGRAPHIC MODELS VIA CONCURRENT AIRBORNE LIDAR AND DENSE IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    G. Mandlburger

    2017-09-01

    Full Text Available Modern airborne sensors integrate laser scanners and digital cameras for capturing topographic data at high spatial resolution. The capability of penetrating vegetation through small openings in the foliage and the high ranging precision in the cm range have made airborne LiDAR the prime terrain acquisition technique. In recent years dense image matching evolved rapidly and meanwhile outperforms laser scanning in terms of the achievable spatial resolution of the derived surface models. In our contribution we analyze the inherent properties and review the typical processing chains of both acquisition techniques. In addition, we present potential synergies of jointly processing image and laser data, with emphasis on sensor orientation and point cloud fusion for digital surface model derivation. Test data were concurrently acquired with the RIEGL LMS-Q1560 sensor over the city of Melk, Austria, in January 2016 and served as the basis for testing innovative processing strategies. We demonstrate that (i) systematic effects in the resulting scanned and matched 3D point clouds can be minimized based on a hybrid orientation procedure, (ii) systematic differences between the individual point clouds are observable at penetrable, vegetated surfaces due to the different measurement principles, and (iii) improved digital surface models can be derived by combining the higher density of the matching point cloud and the higher reliability of the LiDAR point cloud, especially in the narrow alleys and courtyards of the study site, a medieval city.

  4. A dynamic supraclavicular field-matching technique for head-and-neck cancer patients treated with IMRT

    International Nuclear Information System (INIS)

    Duan, Jun; Shen Sui; Spencer, Sharon A.; Ahmed, Raef S.; Popple, Richard A.; Ye, Sung-Joon; Brezovich, Ivan A.

    2004-01-01

    Purpose: The conventional single-isocenter and half-beam (SIHB) technique for matching supraclavicular fields with head-and-neck (HN) intensity-modulated radiotherapy (IMRT) fields is subject to substantial dose inhomogeneities from imperfect accelerator jaw/MLC calibration. It also limits the isocenter location and restricts the useful field size for IMRT. We propose a dynamic field-matching technique to overcome these limitations. Methods and materials: The proposed dynamic field-matching technique makes use of wedge junctions for the abutment of supraclavicular and HN IMRT fields. The supraclavicular field was shaped with a multileaf collimator (MLC), which was orientated such that the leaves traveled along the superoinferior direction. The leaves that defined the superior field border moved continuously during treatment from 1.5 cm below to 1.5 cm above the conventional match line to generate a 3-cm-wide wedge-shaped junction. The HN IMRT fields were optimized by taking into account the dose contribution from the supraclavicular field to the junction area, which generates a complementary wedge to produce a smooth junction in the abutment region. This technique was evaluated on a polystyrene phantom and 10 HN cancer patients. Treatment plans were generated for the phantom and the 10 patients. Dose profiles across the abutment region were measured in the phantom on films. For patient plans, dose profiles that passed through the center of the neck lymph nodes were calculated using the proposed technique and the SIHB technique, and dose uniformity in the abutment region was compared. Field mismatches of ±1 mm and ±2 mm because of imperfect jaw/MLC calibration were simulated, and the resulting dose inhomogeneities were studied for the two techniques with film measurements and patient plans. Three-dimensional volumetric doses were analyzed, and equivalent uniform doses (EUD) were computed. The effect of field mismatches on EUD was compared for the two matching techniques.

  5. Text Character Extraction Implementation from Captured Handwritten Image to Text Conversionusing Template Matching Technique

    Directory of Open Access Journals (Sweden)

    Barate Seema

    2016-01-01

    Full Text Available Images contain various types of useful information that should be extracted whenever required. Various algorithms and methods have been proposed to extract text from a given image, enabling users to access the text in any image. Variations in text size, style, orientation and alignment, together with low image contrast and composite backgrounds, make text extraction difficult. An application that extracts and recognizes text accurately in real time can serve many important applications, such as document analysis, vehicle license plate extraction and text-based image indexing, many of which have become realities in recent years. To address these problems, we develop an application that converts an image into text using techniques such as bounding boxes, the HSV model, blob analysis, template matching and template generation.

  6. Uniform stable conformal convolutional perfectly matched layer for enlarged cell technique conformal finite-difference time-domain method

    International Nuclear Information System (INIS)

    Wang Yue; Wang Jian-Guo; Chen Zai-Gao

    2015-01-01

    Based on a conformal construction of the physical model in a three-dimensional Cartesian grid, an integral-based conformal convolutional perfectly matched layer (CPML) is given for solving the truncation problem of the open port when the enlarged cell technique conformal finite-difference time-domain (ECT-CFDTD) method is used to simulate wave propagation inside a perfect electric conductor (PEC) waveguide. The algorithm has the same numerical stability as the ECT-CFDTD method. For long-time propagation problems of an evanescent wave in a waveguide, several numerical simulations are performed to analyze the reflection error by sweeping the constitutive parameters of the integral-based conformal CPML. Our numerical results show that the integral-based conformal CPML can be used to efficiently truncate the open port of the waveguide. (paper)

  7. Fingerprint Matching by Thin-plate Spline Modelling of Elastic Deformations

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2003-01-01

    This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching minutiae...
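
    The thin-plate spline warp at the heart of such a model can be sketched with SciPy's RBF interpolator (an assumption, since the paper predates this API; the control points below are made-up minutiae pairs):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Made-up matched minutiae: source positions and their elastically
    # displaced counterparts in the second fingerprint.
    src = np.array([[10., 10.], [80., 15.], [50., 60.], [20., 85.], [90., 90.]])
    dst = src + np.array([[2., 1.], [-1., 3.], [0., -2.], [3., 0.], [-2., -1.]])

    # TPS warp: maps any point of the first print into the frame of the second.
    tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
    print(tps(np.array([[55., 55.]])))   # warped location of a query minutia
    ```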

  8. A matching-allele model explains host resistance to parasites.

    Science.gov (United States)

    Luijckx, Pepijn; Fienberg, Harris; Duneau, David; Ebert, Dieter

    2013-06-17

    The maintenance of genetic variation and sex despite its costs has long puzzled biologists. A popular idea, the Red Queen Theory, is that under rapid antagonistic coevolution between hosts and their parasites, the formation of new rare host genotypes through sex can be advantageous as it creates host genotypes to which the prevailing parasite is not adapted. For host-parasite coevolution to lead to an ongoing advantage for rare genotypes, parasites should infect specific host genotypes and hosts should resist specific parasite genotypes. The most prominent genetics capturing such specificity are matching-allele models (MAMs), which have the key feature that resistance for two parasite genotypes can reverse by switching one allele at one host locus. Despite the lack of empirical support, MAMs have played a central role in the theoretical development of antagonistic coevolution, local adaptation, speciation, and sexual selection. Using genetic crosses, we show that resistance of the crustacean Daphnia magna against the parasitic bacterium Pasteuria ramosa follows a MAM. Simulation results show that the observed genetics can explain the maintenance of genetic variation and contribute to the maintenance of sex in the facultatively sexual host as predicted by the Red Queen Theory. Copyright © 2013 Elsevier Ltd. All rights reserved.
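
    The defining structure of a MAM is its diagonal infection matrix; a minimal sketch with illustrative genotypes (not the actual Daphnia-Pasteuria loci):

    ```python
    import numpy as np

    genotypes = ["AB", "Ab", "aB", "ab"]      # two loci, two alleles each
    def infects(parasite, host):
        return parasite == host               # strict matching-allele rule

    # Identity-like infection matrix: switching one allele at one host locus
    # reverses resistance against two parasite genotypes (the key MAM feature).
    M = np.array([[int(infects(p, h)) for h in genotypes] for p in genotypes])
    print(M)
    ```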

  9. Renewal of Road Networks Using Map-matching Technique of Trajectories

    Directory of Open Access Journals (Sweden)

    WU Tao

    2017-04-01

    Full Text Available Road networks with complete and accurate information, as one of the key foundations of the Smart City, are significant in fields such as urban planning, traffic management and public travel. However, the long production cycle of road network data based on traditional surveying methods often leaves the data inconsistent with the latest situation. Recently, the positioning techniques ubiquitously used in mobile devices have gradually come into focus for scholars at home and abroad. Currently, most approaches that generate or update road networks from mobile location information compute directly on GPS trajectory data with various algorithms, which consumes substantial computational resources when massive GPS data covering large-scale areas are involved. For this reason, we propose a spiral update strategy for road network data based on map-matching technology, which follows an "identify→analyze→extract→update" process. The main idea is to detect condemned road segments of the existing road network with the help of an HMM for each trajectory input, and to repair them, on the local scale, by extracting new road information from the trajectory data. The proposed approach avoids computing on the entire trajectory dataset for road segments. Instead, it updates the existing road network data by focusing on the minimum range of potentially condemned segments. We evaluated the performance of our proposals using GPS traces collected from taxis and OpenStreetMap (OSM) road networks covering urban areas of Wuhan City.

  10. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions. The goal of this work is to analyse the adequacy of that model, which allows estimating the distribution of a user's processing time for record versions and the distribution of the record-version count. The second variant of the model was used, in which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between the sequential updates of the current client. To verify the model adequacy, a real experiment was conducted in a cloud cluster. The cluster contains 10 virtual nodes provided by DigitalOcean Company, with Ubuntu Server 14.04 as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak 2.0 and later provide the "dotted version vectors" (DVV) option, which is an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the database will not exceed the number of clients operating on a record in parallel, which is very important for conducting the experiments. The application was developed using the Java client library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record-update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ counters, and saves the processed record in the database while old versions are deleted. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and evaluates the results of processing. If a conflict emerges because of simultaneous updates of the RZ record, the client obtains all versions of that record...
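
    The version-conflict detection underlying this workflow rests on vector-clock dominance; a generic Python sketch (illustrative only, not Riak's actual DVV implementation):

    ```python
    def descends(a, b):
        """True if clock `a` dominates (descends from) clock `b`."""
        return all(a.get(k, 0) >= v for k, v in b.items())

    def concurrent(a, b):
        """Neither clock dominates: the versions are siblings, both kept."""
        return not descends(a, b) and not descends(b, a)

    v1 = {"client1": 2, "client2": 1}
    v2 = {"client1": 1, "client2": 2}
    print(concurrent(v1, v2))   # True: conflicting versions to reconcile
    ```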

  11. Data Matching Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection

    CERN Document Server

    Christen, Peter

    2012-01-01

    Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of data matching.

  12. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix form of the conventional control mixer concept into an LTI dynamic system form. The H∞ control technique is employed for these dynamic module designs after an augmented control system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when certain conditions are satisfied. To illustrate the effectiveness of the proposed method, a robot system subjected to failures is used to demonstrate...

  13. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation and enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.

  14. Generating Models of a Matched Formula with a Polynomial Delay

    Czech Academy of Sciences Publication Activity Database

    Savický, Petr; Kučera, P.

    2016-01-01

    Roč. 56, č. 6 (2016), s. 379-402 ISSN 1076-9757 R&D Projects: GA ČR GBP202/12/G061 Grant - others:GA ČR(CZ) GA15-15511S Institutional support: RVO:67985807 Keywords : conjunctive normal form * matched formula * pure literal satisfiable formula Subject RIV: BA - General Mathematics Impact factor: 2.284, year: 2016

  15. Business models for open innovation: Matching heterogeneous open innovation strategies with business model dimensions

    OpenAIRE

    Saebi, Tina; Foss, Nicolai Juul

    2015-01-01

    This is the author's version of the article: "Business models for open innovation: Matching heterogeneous open innovation strategies with business model dimensions", European Management Journal, Volume 33, Issue 3, June 2015, Pages 201–213. Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies' business models are not attuned to open strategies. Ac...

  16. Circuit and Measurement Technique for Radiation Induced Drift in Precision Capacitance Matching

    Science.gov (United States)

    Prasad, Sudheer; Shankar, Krishnamurthy Ganapathy

    2013-04-01

    In the design of radiation-tolerant precision ADCs targeted at the space market, a matched capacitor array is crucial. The drift of capacitance ratios due to radiation should be small enough not to cause linearity errors. Conventional methods for measuring capacitor matching may not achieve the desired level of accuracy due to radiation-induced gain errors in the measurement circuits. In this work, we present a circuit and method for measuring capacitance ratio drift to a very high accuracy (< 1 ppm), unaffected by radiation levels up to 150 krad.

  17. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League

    NARCIS (Netherlands)

    Koopman, S.J.; Lit, R.

    2015-01-01

    Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results.
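
    The bivariate Poisson backbone of such a model can be sketched via the classic common-shock construction (a static Python illustration with made-up intensities; in the paper the intensities additionally evolve stochastically over time):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    l1, l2, l3 = 1.4, 1.1, 0.2            # illustrative intensities, not estimates
    x3 = rng.poisson(l3, 10000)           # common shock shared by both scores
    home = rng.poisson(l1, 10000) + x3    # home goals: X = X1 + X3
    away = rng.poisson(l2, 10000) + x3    # away goals: Y = X2 + X3
    print(np.corrcoef(home, away)[0, 1])  # positive correlation from the shared shock
    ```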

  18. A match-mismatch test of a stage model of behaviour change in tobacco smoking

    NARCIS (Netherlands)

    Dijkstra, A; Conijn, B; De Vries, H

    Aims: An innovation offered by stage models of behaviour change is that of stage-matched interventions. Match-mismatch studies are the primary test of this idea but also the primary test of the validity of stage models. This study aimed at conducting such a test among tobacco smokers using the Social...

  19. Scientist Role Models in the Classroom: How Important Is Gender Matching?

    Science.gov (United States)

    Conner, Laura D. Carsten; Danielson, Jennifer

    2016-01-01

    Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used…

  20. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    ...concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of role models...

  1. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo, E-mail: kes7741@snu.ac.kr

    2015-04-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of the optical measurements such as PIV and LDV have been successfully minimized using various combinations of the model materials and the working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective layer sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, the SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on the mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range 1.453–1.555 and 2.37–6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that the MIR can be successfully achieved at the refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation for nuclear thermal-hydraulic research.

  2. Matching-index-of-refraction of transparent 3D printing models for flow visualization

    International Nuclear Information System (INIS)

    Song, Min Seop; Choi, Hae Yoon; Seong, Jee Hyun; Kim, Eung Soo

    2015-01-01

    Matching-index-of-refraction (MIR) has been used for obtaining high-quality flow visualization data for fundamental nuclear thermal-hydraulic research. By this method, distortions of the optical measurements such as PIV and LDV have been successfully minimized using various combinations of the model materials and the working fluids. This study investigated a novel 3D printing technology for manufacturing models and an oil-based working fluid for matching the refractive indices. Transparent test samples were fabricated by various rapid prototyping methods including selective layer sintering (SLS), stereolithography (SLA), and vacuum casting. As a result, the SLA direct 3D printing was evaluated to be the most suitable for flow visualization considering manufacturability, transparency, and refractive index. In order to match the refractive indices of the 3D printing models, a working fluid was developed based on the mixture of herb essential oils, which exhibit high refractive index, high transparency, high density, low viscosity, low toxicity, and low price. The refractive index and viscosity of the working fluid range 1.453–1.555 and 2.37–6.94 cP, respectively. In order to validate the MIR method, a simple test using a twisted prism made by the SLA technique and the oil mixture (anise and light mineral oil) was conducted. The experimental results show that the MIR can be successfully achieved at the refractive index of 1.51, and the proposed MIR method is expected to be widely used for flow visualization studies and CFD validation for nuclear thermal-hydraulic research.

  3. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article presents research on the differences between business process modelling techniques; for each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  4. Transanal pullthrough for Hirschsprung disease: matched case-control comparison of Soave and Swenson techniques.

    Science.gov (United States)

    Nasr, Ahmed; Haricharan, Ramanath N; Gamarnik, Julie; Langer, Jacob C

    2014-05-01

    Both the Swenson and the Soave procedures have been adapted to a transanal approach. The purpose of this study was to compare outcomes following the transanal Swenson and Soave procedures using a matched case control analysis. A retrospective chart review was performed to identify all transanal Soave and Swenson pullthroughs done at 2 tertiary care children's hospitals between 2000 and 2010. Patients were matched for gestational age, mean weight at time of the operation, level of aganglionosis, and presence of co-morbidities. Student's t-test and chi-squared analysis were performed. Fifty-four patients (Soave 27, Swenson 27) had adequate data for matching and analysis. Mean follow-up was 4±1.6 years and 3.2 ±2.7 years for the Soave and Swenson groups, respectively. No significant differences in mean operating time (Soave:191±55, Swenson:167±61 min, p=0.6), overall hospital stay (6±4 vs 7.8±5 days, p=0.7), and number with intra-operative complications (3 vs 4, p=1.0), post-operative obstructive symptoms (6 vs 9, p=0.5), enterocolitis episodes (4 vs 4, p=1.0), or fecal incontinence (0 vs 2, p=0.4) were noted. After controlling for potential confounders, there were no significant differences in the short and intermediate term outcome between transanal Soave and transanal Swenson pullthrough procedures. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    Energy Technology Data Exchange (ETDEWEB)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.; Czekala, Ian; Bailey, Vanessa P.; Follette, Katherine B. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA, 94305 (United States); Wang, Jason J.; Rosa, Robert J. De; Duchêne, Gaspard [Astronomy Department, University of California, Berkeley CA, 94720 (United States); Pueyo, Laurent [Space Telescope Science Institute, Baltimore, MD, 21218 (United States); Marley, Mark S. [NASA Ames Research Center, Mountain View, CA, 94035 (United States); Arriaga, Pauline; Fitzgerald, Michael P. [Department of Physics and Astronomy, University of California, Los Angeles, CA, 90095 (United States); Barman, Travis [Lunar and Planetary Laboratory, University of Arizona, Tucson AZ, 85721 (United States); Bulger, Joanna [Subaru Telescope, NAOJ, 650 North A’ohoku Place, Hilo, HI 96720 (United States); Chilcote, Jeffrey [Dunlap Institute for Astronomy and Astrophysics, University of Toronto, Toronto, ON, M5S 3H4 (Canada); Cotten, Tara [Department of Physics and Astronomy, University of Georgia, Athens, GA, 30602 (United States); Doyon, Rene [Institut de Recherche sur les Exoplanètes, Départment de Physique, Université de Montréal, Montréal QC, H3C 3J7 (Canada); Gerard, Benjamin L. [University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 5C2 (Canada); Goodsell, Stephen J., E-mail: jruffio@stanford.edu [Gemini Observatory, 670 N. A’ohoku Place, Hilo, HI, 96720 (United States); and others

    2017-06-10

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale, accounting for both planet completeness and false-positive rate. We show that the new forward-model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
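
    Stripped of the KLIP forward modelling, the matched-filter detection statistic itself is compact (a bare-bones Python sketch with per-pixel noise whitening; all names are illustrative):

    ```python
    import numpy as np

    def matched_filter_snr(data, template, noise_var):
        """data, template: flattened image patches at a candidate location;
        noise_var: per-pixel noise variance used for whitening.
        Returns the matched-filter estimate of the detection S/N."""
        w = 1.0 / noise_var                      # whitening weights
        signal = np.sum(w * template * data)     # whitened correlation
        norm = np.sqrt(np.sum(w * template ** 2))
        return signal / norm
    ```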

  6. Weak Memory Models with Matching Axiomatic and Operational Definitions

    OpenAIRE

    Zhang, Sizhuo; Vijayaraghavan, Muralidaran; Lustig, Dan; Arvind

    2017-01-01

    Memory consistency models are notorious for being difficult to define precisely, to reason about, and to verify. More than a decade of effort has gone into nailing down the definitions of the ARM and IBM Power memory models, and yet there still remain aspects of those models which (perhaps surprisingly) remain unresolved to this day. In response to these complexities, there has been somewhat of a recent trend in the (general-purpose) architecture community to limit new memory models to being ...

  7. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: (1) feature extraction, (2) similarity measure and matching, and (3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and the optimal matches are then determined by maximizing a matching cost that encodes contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable with the proposed registration approach, offering an alternative to the labour-intensive manual registration process.
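
    The plain, context-free form of geometric hashing can be sketched as follows (a toy 2D Python illustration; quantization choices, basis selection and the paper's contextual matching cost are simplified away):

    ```python
    import numpy as np
    from collections import defaultdict

    def build_table(model_pts, q=0.25):
        """Index each model corner by its coordinates in the frame spanned by
        every ordered pair of model corners (a similarity-invariant key)."""
        P = np.asarray(model_pts, float)
        table = defaultdict(set)
        for i in range(len(P)):
            for j in range(len(P)):
                if i == j:
                    continue
                u = P[j] - P[i]
                B = np.linalg.inv(np.stack([u, [-u[1], u[0]]], axis=1))
                for k in range(len(P)):
                    if k not in (i, j):
                        table[tuple(np.round(B @ (P[k] - P[i]) / q))].add((i, j))
        return table

    def vote(table, img_pts, i, j, q=0.25):
        """Vote for the model basis most consistent with image basis (i, j)."""
        P = np.asarray(img_pts, float)
        u = P[j] - P[i]
        B = np.linalg.inv(np.stack([u, [-u[1], u[0]]], axis=1))
        votes = defaultdict(int)
        for k in range(len(P)):
            if k not in (i, j):
                for basis in table.get(tuple(np.round(B @ (P[k] - P[i]) / q)), ()):
                    votes[basis] += 1
        return max(votes, key=votes.get) if votes else None
    ```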

  8. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and the closed-loop model-based reservoir management concept can play an important role here. In this concept, measured data...

  9. Chaotic Planning Solutions in the Textbook Model of Labor Market Search and Matching

    NARCIS (Netherlands)

    Bhattacharya, J.; Bunzel, H.

    2003-01-01

    This paper demonstrates that cyclical and chaotic planning solutions are possible in the standard textbook model of search and matching in labor markets. More specifically, it takes a discrete-time adaptation of the continuous-time matching economy described in Pissarides (1990, 2001), and computes...

  10. An investigation of matched index of refraction technique and its application in optical measurements of fluid flow

    Science.gov (United States)

    Amini, Noushin; Hassan, Yassin A.

    2012-12-01

    Optical distortions caused by non-uniformities of the refractive index within the measurement volume is a major impediment for all laser diagnostic imaging techniques applied in experimental fluid dynamic studies. Matching the refractive indices of the working fluid and the test section walls and interfaces provides an effective solution to this problem. The experimental set-ups designed to be used along with laser imaging techniques are typically constructed of transparent solid materials. In this investigation, different types of aqueous salt solutions and various organic fluids are studied for refractive index matching with acrylic and fused quartz, which are commonly used in construction of the test sections. One aqueous CaCl2·2H2O solution (63 % by weight) and two organic fluids, Dibutyl Phthalate and P-Cymene, are suggested for refractive index matching with fused quartz and acrylic, respectively. Moreover, the temperature dependence of the refractive indices of these fluids is investigated, and the Thermooptic Constant is calculated for each fluid. Finally, the fluid viscosity for different shear rates is measured as a function of temperature and is applied to characterize the physical behavior of the proposed fluids.

  11. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

    Full Text Available The present article relates in general to digital demodulation of Binary Frequency Shift Keying (BFSK). The objective of the present research is to obtain a new processing method for demodulating BFSK signals in order to reduce hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a sampler-correlator and curve segmentation blocks to obtain a digital receiver for proper demodulation of the received signal. The proposed solution is shown to strongly reduce hardware complexity. This part presents the proposed solution in terms of its analytical expressions and covers in detail the elements needed to properly configure the system. A second part presents the FPGA implementation of the system and simulation results that validate the overall performance.
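
    A matched-filter (correlator) BFSK decision can be sketched by comparing noncoherent correlation energies against the two tones (sample rate, bit rate and tone frequencies below are illustrative assumptions, not the article's parameters):

    ```python
    import numpy as np

    fs, rb = 8000, 100                    # samples/s and bits/s (assumptions)
    f0, f1 = 1000, 2000                   # tone frequencies for bits 0 and 1
    t = np.arange(fs // rb) / fs          # one symbol period

    def demod_bit(x):
        """Correlate the symbol against sine/cosine of each tone (noncoherent)."""
        e0 = np.dot(x, np.sin(2*np.pi*f0*t))**2 + np.dot(x, np.cos(2*np.pi*f0*t))**2
        e1 = np.dot(x, np.sin(2*np.pi*f1*t))**2 + np.dot(x, np.cos(2*np.pi*f1*t))**2
        return int(e1 > e0)               # larger correlation energy wins

    print(demod_bit(np.sin(2 * np.pi * f1 * t)))   # a clean f1 tone -> 1
    ```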

  12. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    Science.gov (United States)

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    An ontology captures and elucidates the concepts of an information domain that are common to a group of users. Incorporating ontologies into information retrieval is a natural way to improve the search for the relevant information users require. Matching keywords against a historical or domain-specific corpus is central in recent approaches to finding the best match for a specific input query. This research presents a better querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is converted into first-order predicate logic, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the text when performing semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. Semantic matching between the queries and the information domain is used to discover the best match and to speed up the retrieval process. In conclusion, the hybrid ontology in the semantic web retrieves documents more effectively than a standard ontology.

  13. Conventional QT Variability Measurement vs. Template Matching Techniques: Comparison of Performance Using Simulated and Real ECG

    Science.gov (United States)

    Baumert, Mathias; Starc, Vito; Porta, Alberto

    2012-01-01

    Increased beat-to-beat variability in the QT interval (QTV) of ECG has been associated with increased risk for sudden cardiac death, but its measurement is technically challenging and currently not standardized. The aim of this study was to investigate the performance of commonly used beat-to-beat QT interval measurement algorithms. Three different methods (conventional, template stretching and template time shifting) were subjected to simulated data featuring typical ECG recording issues (broadband noise, baseline wander, amplitude modulation) and real short-term ECG of patients before and after infusion of sotalol, a QT interval prolonging drug. Among the three algorithms, the conventional algorithm was most susceptible to noise whereas the template time shifting algorithm showed superior overall performance on simulated and real ECG. None of the algorithms was able to detect increased beat-to-beat QT interval variability after sotalol infusion despite marked prolongation of the average QT interval. The QTV estimates of all three algorithms were inversely correlated with the amplitude of the T wave. In conclusion, template matching algorithms, in particular the time shifting algorithm, are recommended for beat-to-beat variability measurement of QT interval in body surface ECG. Recording noise, T wave amplitude and the beat-rejection strategy are important factors of QTV measurement and require further investigation. PMID:22860030
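
    As a rough, simplified illustration of the template time-shifting idea (not the authors' exact algorithm), the Python sketch below slides a fixed template against each beat and records the best-fitting lag; the series of per-beat lags then plays the role of the beat-to-beat QT deviation. The synthetic Gaussian "T waves" and all parameters are assumptions for demonstration.

    ```python
    import numpy as np

    # Find the shift (in samples) that best aligns a template with each beat.
    def best_shift(beat, template, max_shift=20):
        errors = []
        for lag in range(-max_shift, max_shift + 1):
            seg = np.roll(beat, -lag)[:len(template)]
            errors.append(np.sum((seg - template) ** 2))
        return int(np.argmin(errors)) - max_shift

    # Synthetic demo: Gaussian "T waves" jittered by known lags.
    x = np.arange(200)
    template = np.exp(-0.5 * ((x[:120] - 60) / 8.0) ** 2)
    true_lags = [0, 3, -2, 5]
    beats = [np.exp(-0.5 * ((x - 60 - d) / 8.0) ** 2) for d in true_lags]
    print([best_shift(b, template) for b in beats])   # recovers [0, 3, -2, 5]
    ```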

  14. Conventional QT variability measurement vs. template matching techniques: comparison of performance using simulated and real ECG.

    Directory of Open Access Journals (Sweden)

    Mathias Baumert

    Full Text Available Increased beat-to-beat variability in the QT interval (QTV) of ECG has been associated with increased risk for sudden cardiac death, but its measurement is technically challenging and currently not standardized. The aim of this study was to investigate the performance of commonly used beat-to-beat QT interval measurement algorithms. Three different methods (conventional, template stretching and template time shifting) were subjected to simulated data featuring typical ECG recording issues (broadband noise, baseline wander, amplitude modulation) and real short-term ECG of patients before and after infusion of sotalol, a QT interval prolonging drug. Among the three algorithms, the conventional algorithm was most susceptible to noise whereas the template time shifting algorithm showed superior overall performance on simulated and real ECG. None of the algorithms was able to detect increased beat-to-beat QT interval variability after sotalol infusion despite marked prolongation of the average QT interval. The QTV estimates of all three algorithms were inversely correlated with the amplitude of the T wave. In conclusion, template matching algorithms, in particular the time shifting algorithm, are recommended for beat-to-beat variability measurement of QT interval in body surface ECG. Recording noise, T wave amplitude and the beat-rejection strategy are important factors of QTV measurement and require further investigation.

  15. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  16. Novel method for the production of spin-aligned RI beams in projectile fragmentation reaction with the dispersion matching technique

    Energy Technology Data Exchange (ETDEWEB)

    Ichikawa, Y., E-mail: yuichikawa@phys.titech.ac.jp [Tokyo Institute of Technology, Department of Physics (Japan); Ueno, H. [RIKEN Nishina Center (Japan); Ishii, Y. [Tokyo Institute of Technology, Department of Physics (Japan); Furukawa, T. [Tokyo Metropolitan University, Department of Physics (Japan); Yoshimi, A. [Okayama University, Research Core for Extreme Quantum World (Japan); Kameda, D.; Watanabe, H.; Aoi, N. [RIKEN Nishina Center (Japan); Asahi, K. [Tokyo Institute of Technology, Department of Physics (Japan); Balabanski, D. L. [Bulgarian Academy of Sciences, Institute for Nuclear Research and Nuclear Energy (Bulgaria); Chevrier, R.; Daugas, J. M. [CEA, DAM, DIF (France); Fukuda, N. [RIKEN Nishina Center (Japan); Georgiev, G. [CSNSM, IN2P3-CNRS, Universite Paris-sud (France); Hayashi, H.; Iijima, H. [Tokyo Institute of Technology, Department of Physics (Japan); Inabe, N. [RIKEN Nishina Center (Japan); Inoue, T. [Tokyo Institute of Technology, Department of Physics (Japan); Ishihara, M.; Kubo, T. [RIKEN Nishina Center (Japan); and others

    2013-05-15

    A novel method to produce spin-aligned rare-isotope (RI) beams has been developed: two-step projectile fragmentation combined with a dispersion-matching technique. The method was verified in an experiment at the RIKEN RIBF, where an RI beam of 32Al with a spin alignment of 8(1)% was successfully produced from a primary beam of 48Ca, with 33Al as an intermediate nucleus. The figure of merit of the present method was found to be improved by a factor larger than 50 compared with a conventional method employing single-step projectile fragmentation.

  17. TECHNIQUES AND TACTICS IN BASKETBALL ACCORDING TO THE INTENSITY IN OFFICIAL MATCHES

    Directory of Open Access Journals (Sweden)

    José Francisco Daniel

    Full Text Available ABSTRACT Introduction: Basketball is characterized as an intermittent sport in which the high intensity of actions currently stands out, demanding for sport performance the optimal and homogeneous development of physical, technical, tactical, psychological and intellectual components. In this sense, understanding the game through the technical and tactical actions performed, and knowing the body's responses, are important for the planning, monitoring and control of training. Objective: The aim of this study was to describe the intensity of basketball tactical actions and the relationships between technical actions and intensity during the different game periods (GP). Methods: Ten athletes of the Brazilian male basketball elite participated in this study (27.60±5.54 years, 192.62±7.63 cm, 91.60±11.51 kg, 10.66±4.11% of body fat) in six official matches of the National Basketball League (LNB, Brazil). Anthropometric measures and motor tests were performed, and tactical (defensive, offensive and transition), technical [number of actions (SN) and efficiency ratio (ER)] and physical [percentage of lactate threshold heart rate (%HRthr)] actions were correlated. Spearman's correlation coefficient was used between SN, ER and %HRthr. Results: The main results point to: (1) a positive and significant relationship (except in the 4th GP) between SN, ER and %HRthr; (2) tactical actions presented HR near the lactate threshold, with the apparently highest median for transitions (107.4 %HRthr). Conclusion: The game is intense, with moments at peak HR, but the median is slightly above HRthr, which is where the best relationship between SN and ER occurs.

  18. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    Science.gov (United States)

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
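
    The workhorse of history matching is the implausibility measure, which rules out an input setting when the emulator's prediction lies too far from the observed value relative to all recognized variances. The sketch below shows only that test; the paper's variance-emulation idea corresponds to making the stochastic-model variance itself a function of the inputs, reduced here to a supplied array of hypothetical values.

    ```python
    import numpy as np

    # Implausibility of candidate inputs given one observed target.
    def implausibility(z_obs, mean_em, var_em, var_obs, var_stochastic):
        return np.abs(z_obs - mean_em) / np.sqrt(var_em + var_obs + var_stochastic)

    # Hypothetical emulator output over five candidate input settings.
    mean_em = np.array([0.10, 0.15, 0.22, 0.30, 0.41])      # emulator means
    var_em = np.array([0.001, 0.002, 0.001, 0.003, 0.002])  # emulator variances
    var_sto = np.array([0.004, 0.004, 0.005, 0.006, 0.008]) # emulated model variance
    I = implausibility(z_obs=0.20, mean_em=mean_em, var_em=var_em,
                       var_obs=0.001, var_stochastic=var_sto)
    print(I.round(2), I < 3.0)   # the conventional 3-sigma cutoff
    ```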

  19. Is There a Purchase Limit on Regional Growth? A Quasi-experimental Evaluation of Investment Grants Using Matching Techniques

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel; Paloyo, Alfredo R.; Alecke, Björn

    2015-01-01

    In this article, we apply recent advances in quasi-experimental estimation methods to analyze the effectiveness of Germany’s large-scale regional policy instrument, the joint Federal Government/State Programme “Gemeinschaftsaufgabe Verbesserung der regionalen Wirtschaftsstruktur” (GRW), which is a means to foster labor-productivity growth in lagging regions. In particular, adopting binary and generalized propensity-score matching methods, our results indicate that the GRW can be generally considered effective. However, we find evidence for a nonlinear relationship between GRW funding and regional … of matching techniques in regional data settings. Overall, however, the matching approach can still be considered of great value for regional policy analysis and should be the subject of future research efforts in the field of empirical regional science.
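
    A minimal sketch of the binary propensity-score matching step named in the abstract, run on simulated data (the GRW data are not reproduced here): estimate P(treated | covariates) with a logistic model, match each treated unit to the nearest-propensity control, and average the outcome differences.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 3))                           # regional covariates
    treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # selection on X
    y = X @ [0.5, 0.2, -0.1] + 0.8 * treated + rng.normal(size=400)

    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
    att = (y[t_idx] - y[matches]).mean()                    # effect on the treated
    print(f"estimated ATT = {att:.2f} (true effect 0.8)")
    ```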

  20. Matching Index-of-Refraction for 3D Printing Model Using Mixture of Herb Essential Oil and Light Mineral Oil

    International Nuclear Information System (INIS)

    Song, Min Seop; Choi, Hae Yoon; Kim, Eung Soo

    2013-01-01

    This study has extensively investigated the emerging 3-D printing technologies for use with MIR-based flow field visualization methods such as PIV and LDV. As a result, a mixture of herb essential oil and light mineral oil has been evaluated to be a great working fluid due to its adequate properties. Using this combination, RIs between 1.45 and 1.55 can be accurately matched, a range that covers most transparent materials. In conclusion, the proposed MIR method is expected to provide large flexibility in model materials and geometries for laser-based optical measurements. Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV) are the two major optical technologies used for flow field visualization in recent fundamental thermal-hydraulics research. These techniques require minimizing optical distortions to enable high-quality data. Therefore, matching the index of refraction (MIR) between model materials and working fluids is an essential part of minimizing measurement uncertainty. This paper proposes to use 3-D printing technology for manufacturing models for MIR-based optical measurements. Because of the large flexibility in geometries and materials of 3-D printing, its application is expected to provide tremendous advantages over traditional MIR-based optical measurements. This study focuses on the 3-D printed models and investigates their optical properties, transparent printing techniques, and index-matching fluids.

  1. Matching Index-of-Refraction for 3D Printing Model Using Mixture of Herb Essential Oil and Light Mineral Oil

    Energy Technology Data Exchange (ETDEWEB)

    Song, Min Seop; Choi, Hae Yoon; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2013-10-15

    This study has extensively investigated the emerging 3-D printing technologies for use with MIR-based flow field visualization methods such as PIV and LDV. As a result, a mixture of herb essential oil and light mineral oil has been evaluated to be a great working fluid due to its adequate properties. Using this combination, RIs between 1.45 and 1.55 can be accurately matched, a range that covers most transparent materials. In conclusion, the proposed MIR method is expected to provide large flexibility in model materials and geometries for laser-based optical measurements. Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV) are the two major optical technologies used for flow field visualization in recent fundamental thermal-hydraulics research. These techniques require minimizing optical distortions to enable high-quality data. Therefore, matching the index of refraction (MIR) between model materials and working fluids is an essential part of minimizing measurement uncertainty. This paper proposes to use 3-D printing technology for manufacturing models for MIR-based optical measurements. Because of the large flexibility in geometries and materials of 3-D printing, its application is expected to provide tremendous advantages over traditional MIR-based optical measurements. This study focuses on the 3-D printed models and investigates their optical properties, transparent printing techniques, and index-matching fluids.
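
    The index-matching arithmetic behind such fluid mixtures can be sketched with a simple linear volume-fraction blending rule; real mixtures deviate from linearity, which is why the tuning reported above is empirical, and the indices used below are illustrative.

    ```python
    # Pick a two-fluid mixing ratio to hit a target refractive index,
    # assuming linear blending: f * n_a + (1 - f) * n_b = n_target.
    def mix_fraction(n_target, n_a, n_b):
        f = (n_target - n_b) / (n_a - n_b)
        if not 0.0 <= f <= 1.0:
            raise ValueError("target index not reachable with this fluid pair")
        return f

    # Hypothetical pair bracketing an acrylic-like target of n = 1.490.
    f = mix_fraction(1.490, n_a=1.520, n_b=1.460)
    print(f"use {f:.1%} of fluid A and {1 - f:.1%} of fluid B")
    ```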

  2. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  3. eMatchSite: sequence order-independent structure alignments of ligand binding pockets in protein models.

    Directory of Open Access Journals (Sweden)

    Michal Brylinski

    2014-09-01

    Full Text Available Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques; however, to achieve high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome-scale applications, where only structure models of varying quality are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4-9% lower than that obtained using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.

  4. Production Efficiency and Market Orientation in Food Crops in North West Ethiopia: Application of Matching Technique for Impact Assessment.

    Directory of Open Access Journals (Sweden)

    Habtamu Yesigat Ayenew

    Full Text Available Agricultural technologies developed by national and international research institutions have not been benefiting the rural population of Ethiopia to the extent desired. As a response, integrated agricultural extension approaches have been proposed as a key strategy to transform the smallholder farming sector. The Improving Productivity and Market Success (IPMS) of Ethiopian Farmers project is one of the development projects that integrates productivity-enhancing technological schemes with a market development model. This paper explores the impact of the project intervention on smallholder farmers' wellbeing. To test the research hypothesis of whether the project brought a significant change in the input use, marketed surplus, efficiency and income of farm households, we use cross-section data from 200 smallholder farmers in Northwest Ethiopia, collected through a multi-stage sampling procedure. To control for self-selection on observable characteristics of the farm households, we employ Propensity Score Matching (PSM). We finally use Data Envelopment Analysis (DEA) techniques to estimate the technical efficiency of farm households. The outcome of the research is in line with the premise that participation of the household in the IPMS project improves purchased input use, marketed surplus, farm efficiency and the overall gain from farming. The participant households on average employ more purchased agricultural inputs and gain higher gross margins from production activities compared to non-participant households. The non-participant households on average supply less output (measured both in monetary terms and as a proportion of total produce) to the market compared to their participant counterparts. Except for the technical efficiency of potato production, project participant households are better off in production efficiency than their non-participant counterparts. We verified the idea that Improving Productivity and Market

  5. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.
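
    The plain geometric-hashing core that the CGH method builds on can be sketched as follows: encode points relative to every ordered point-pair basis, then let image points vote for model bases that reproduce the same quantized invariant coordinates. The contextual cues of the paper are omitted, and the point sets and quantization step are illustrative assumptions.

    ```python
    import numpy as np
    from collections import defaultdict
    from itertools import permutations

    def basis_coords(pts, i, j):
        """Coordinates of the remaining points in the frame of basis i -> j."""
        b0, b1 = pts[i], pts[j]
        u = b1 - b0
        v = np.array([-u[1], u[0]])                 # perpendicular axis
        M = np.stack([u, v], axis=1)
        rel = np.delete(pts, [i, j], axis=0) - b0   # skip the basis points
        return np.linalg.solve(M, rel.T).T          # similarity-invariant coords

    def build_table(model_pts, q=0.25):
        table = defaultdict(list)
        for i, j in permutations(range(len(model_pts)), 2):
            for c in basis_coords(model_pts, i, j):
                table[tuple(np.round(c / q))].append((i, j))
        return table

    def vote(table, img_pts, q=0.25):
        votes = defaultdict(int)
        for i, j in permutations(range(len(img_pts)), 2):
            for c in basis_coords(img_pts, i, j):
                for basis in table.get(tuple(np.round(c / q)), ()):
                    votes[basis] += 1
        return max(votes, key=votes.get) if votes else None

    model = np.array([[0., 0.], [4., 0.], [5., 3.], [1., 4.]])
    image = model * 1.5 + np.array([10.0, -2.0])    # scaled + translated copy
    print(vote(build_table(model), image))          # a strongly supported basis
    ```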

  6. Adaptive technique for matching the spectral response in skin lesions' images

    International Nuclear Information System (INIS)

    Pavlova, P; Borisova, E; Avramov, L; Pavlova, E

    2015-01-01

    The suggested technique is a subsequent stage of data extraction from diffuse reflectance spectra and images of diseased tissue, with the final aim of skin cancer diagnostics. Our previous work allows us to extract patterns for some types of skin cancer as a ratio between spectra obtained from healthy and diseased tissue in the 380-780 nm region. The authenticity of the patterns depends on the tested point within the lesion area, so the resulting diagnosis can only be assigned with some probability. In this work, two adaptations are implemented to localize the pixels of the lesion image whose reflectance spectrum corresponds to a pattern: the first adapts the standard to the individual patient, and the second translates the white-point basis of the spectrum to the relative white point of the image. Since the reflectance spectra and the image pixels refer to different white points, a correction of the compared colours is needed; this is done using a standard method for chromatic adaptation. The technique follows the steps below: (1) calculation of the colorimetric XYZ parameters for the initial white point, fixed by the reflectance spectrum from healthy tissue; (2) calculation of the XYZ parameters for the distant white point on the basis of the image of non-diseased tissue; (3) transformation of the XYZ parameters of the test spectrum by the obtained matrix; (4) finding the RGB values of the XYZ parameters of the test spectrum according to sRGB. Finally, the pixels of the lesion image corresponding to a colour from the test spectrum and a particular diagnostic pattern are marked with a specific colour.
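
    The chromatic adaptation step referred to above is commonly implemented as a von Kries-style transform in a cone-response space. The Python sketch below uses the published Bradford matrix; the white points are common illustrative values, not necessarily those used by the authors.

    ```python
    import numpy as np

    BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                         [-0.7502,  1.7135,  0.0367],
                         [ 0.0389, -0.0685,  1.0296]])

    def adapt(xyz, white_src, white_dst):
        rho_src = BRADFORD @ white_src      # cone responses of source white
        rho_dst = BRADFORD @ white_dst      # cone responses of target white
        gain = np.diag(rho_dst / rho_src)   # von Kries scaling
        M = np.linalg.inv(BRADFORD) @ gain @ BRADFORD
        return M @ xyz

    d65 = np.array([95.047, 100.0, 108.883])    # standard daylight white
    warm = np.array([109.85, 100.0, 35.58])     # illuminant-A-like white
    print(adapt(np.array([41.24, 21.26, 1.93]), warm, d65).round(2))
    ```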

  7. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    Science.gov (United States)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Effective speckle de-noising techniques are currently scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since the shape of the laser pulse itself normally changes little, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to make the algorithm self-adaptive, the local mean square error (MSE) is defined as the criterion for assessing the iteration results. The experimental results demonstrate that the self-adaptive pulse-matching ICA (PM-ICA) method can effectively decrease speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  8. Wages, Training, and Job Turnover in a Search-Matching Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Nielsen, Michael Svarer

    1999-01-01

    In this paper we extend a job search-matching model with firm-specific investments in training developed by Mortensen (1998) to allow for different offer arrival rates in employment and unemployment. The model by Mortensen changes the original wage posting model (Burdett and Mortensen, 1998) in two...

  9. Wideband simulation of earthquake ground motion by a spectrum-matching, multiple-pulse technique

    International Nuclear Information System (INIS)

    Gusev, A.; Pavlov, V.

    2006-04-01

    To simulate earthquake ground motion, we combine a multiple-point stochastic earthquake fault model and a suite of Green functions. Conceptually, our source model generalizes the classic one of Haskell (1966). At any time instant, slip occurs over a narrow strip that sweeps the fault area at a (spatially variable) velocity. This behavior defines seismic signals at lower frequencies (LF) and describes directivity effects. The high-frequency (HF) behavior of the source signal is defined by the local slip history, assumed to be a short segment of pulsed noise. For calculations, this model is discretized as a grid of point subsources. Subsource moment-rate time histories, in their LF part, are smooth pulses whose duration equals the rise time. In their HF part, they are segments of non-Gaussian noise of similar duration. The spectral content of the subsource time histories is adjusted so that the summed far-field signal follows a certain predetermined spectral scaling law. The results of the simulation depend on random seeds and on the particular values of such parameters as: stress drop; the average and dispersion parameter of the rupture velocity; the rupture nucleation point; the slip zone width/rise time; the wavenumber-spectrum parameter defining the final slip function; the degrees of non-Gaussianity of the random slip rate in time and of the random final slip in space; and more. To calculate ground motion at a site, Green functions are calculated for each subsource-site pair, then convolved with the subsource time functions and finally summed over subsources. The original Green function calculator for a layered, weakly inelastic medium is of the discrete-wavenumber kind, with no intrinsic limitations with respect to layer thickness or bandwidth. The simulation package can generate example motions, or be used to study uncertainties of the predicted motion. As a test, realistic analogues of recorded motions in the epicentral zone of the 1994 Northridge, California earthquake were synthesized, and related uncertainties were
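
    Structurally, the final synthesis step described above reduces to a convolve-and-sum over point subsources. The sketch below shows only that skeleton: random placeholders stand in for the stochastic moment-rate functions and the discrete-wavenumber Green functions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sub, n_t = 16, 512
    # Placeholder subsource moment-rate histories and Green functions.
    source_tf = rng.standard_normal((n_sub, n_t)) * np.hanning(n_t)
    greens = rng.standard_normal((n_sub, n_t)) * np.exp(-np.arange(n_t) / 50.0)

    def synthesize(source_tf, greens):
        motion = np.zeros(source_tf.shape[1] * 2 - 1)
        for s, g in zip(source_tf, greens):
            motion += np.convolve(s, g)     # per-subsource contribution
        return motion

    u = synthesize(source_tf, greens)
    print(u.shape, float(np.abs(u).max()))
    ```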

  10. A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities

    Science.gov (United States)

    Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.

    2017-12-01

    The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims to enrich the content of spatial entities and to attach spatial location information to text chunks. In the data fusion field, matching spatial entities with their corresponding describing text chunks is of broad significance. However, most traditional matching methods rely fully on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn a similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between the vectors, the model makes matching pair vectors as close as possible and, through supervised learning, pushes mismatched pairs as far apart as possible. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM can effectively capture the matching characteristics between text chunks and spatial entities, and achieves state-of-the-art performance.
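
    A shape-only sketch of the Siamese arrangement follows: both inputs pass through the same embedding branch, and cosine similarity scores the match. A random projection stands in for the learned network, and the supervised metric-learning step (pulling matches together, pushing mismatches apart) is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 300)) / np.sqrt(300)   # shared branch weights

    def embed(x):
        return np.tanh(W @ x)                           # one shared branch

    def match_score(text_vec, entity_vec):
        a, b = embed(text_vec), embed(entity_vec)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    chunk = rng.standard_normal(300)                    # text-chunk features
    entity_close = chunk + 0.1 * rng.standard_normal(300)
    entity_far = rng.standard_normal(300)
    print(match_score(chunk, entity_close), match_score(chunk, entity_far))
    ```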

  11. Adiabatic perturbations in pre-big bang models: Matching conditions and scale invariance

    International Nuclear Information System (INIS)

    Durrer, Ruth; Vernizzi, Filippo

    2002-01-01

    At low energy, the four-dimensional effective action of the ekpyrotic model of the universe is equivalent to a slightly modified version of the pre-big bang model. We discuss cosmological perturbations in these models. In particular we address the issue of matching the perturbations from a collapsing to an expanding phase. We show that, under certain physically motivated and quite generic assumptions on the high energy corrections, one obtains n=0 for the spectrum of scalar perturbations in the original pre-big bang model (with a vanishing potential). With the same assumptions, when an exponential potential for the dilaton is included, a scale invariant spectrum (n=1) of adiabatic scalar perturbations is produced under very generic matching conditions, both in a modified pre-big bang and ekpyrotic scenario. We also derive the resulting spectrum for arbitrary power law scale factors matched to a radiation-dominated era

  12. Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.

    Science.gov (United States)

    Salama, Mhd Suhyb; Su, Zhongbo

    2010-01-01

    A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixels and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in a heterogeneous region. The method is generic and applicable to different sensors.

  13. Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors

    Directory of Open Access Journals (Sweden)

    Mhd. Suhyb Salama

    2010-08-01

    Full Text Available A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixels and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in a heterogeneous region. The method is generic and applicable to different sensors.

  14. A high-resolution processing technique for improving the energy of weak signal based on matching pursuit

    Directory of Open Access Journals (Sweden)

    Shuyan Wang

    2016-05-01

    Full Text Available This paper proposes a new method, based on matching pursuit, to improve the resolution of the seismic signal and to compensate the energy of weak seismic signals. With a dictionary of Morlet wavelets, the matching pursuit algorithm can decompose a seismic trace into a series of wavelets. We abstract complex-trace attributes from analytical expressions to shrink the search ranges of amplitude, frequency and phase. In addition, considering the level of correlation between constituent wavelets and the average wavelet extracted from well-seismic calibration, we can obtain the search range of scale, an important adaptive parameter that controls the wavelet's width in time and its bandwidth in frequency. Hence, the efficiency of selecting proper wavelets is improved by first making a preliminary estimate and then refining a local search range. After removal of noise wavelets, the useful wavelets are first processed with an adaptive spectral whitening technique and then integrated. This approach can simultaneously improve the resolution of the seismic signal and enhance the energy of weak wavelets. Application to real seismic data shows that the method has good application prospects.
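
    The greedy loop at the heart of matching pursuit can be sketched as below: repeatedly pick the dictionary atom with the largest inner product with the residual and subtract its projection. The small Morlet-style dictionary and all parameters are illustrative; the paper's attribute-guided search-range shrinking and spectral whitening are omitted.

    ```python
    import numpy as np

    n = 256
    t = np.arange(n)

    def morlet(center, freq, scale):
        g = np.exp(-0.5 * ((t - center) / scale) ** 2) \
            * np.cos(2 * np.pi * freq * (t - center))
        return g / np.linalg.norm(g)

    atoms = [morlet(c, f, s) for c in range(0, n, 8)
             for f in (0.03, 0.06, 0.12) for s in (8, 16)]
    D = np.array(atoms)                              # dictionary matrix

    def matching_pursuit(x, n_iter=5):
        residual, parts = x.copy(), []
        for _ in range(n_iter):
            k = int(np.argmax(np.abs(D @ residual)))  # best-matching atom
            a = float(D[k] @ residual)                # its coefficient
            residual -= a * D[k]
            parts.append((k, a))
        return parts, residual

    signal = 2.0 * morlet(96, 0.06, 16) + 0.5 * morlet(184, 0.12, 8)
    parts, res = matching_pursuit(signal)
    print(parts[0], float(np.linalg.norm(res)))       # residual shrinks fast
    ```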

  15. Crystallographic study of grain refinement in aluminum alloys using the edge-to-edge matching model

    International Nuclear Information System (INIS)

    Zhang, M.-X.; Kelly, P.M.; Easton, M.A.; Taylor, J.A.

    2005-01-01

    The edge-to-edge matching model for describing the interfacial crystallographic characteristics between two phases that are related by reproducible orientation relationships has been applied to the typical grain refiners in aluminum alloys. Excellent atomic matching between Al3Ti nucleating substrates, known to be effective nucleation sites for primary Al, and the Al matrix in both close packed directions and close packed planes containing these directions has been identified. The crystallographic features of the grain refiner and the Al matrix are very consistent with the edge-to-edge matching model. For three other typical grain refiners for Al alloys, TiC (when a = 0.4328 nm), TiB2 and AlB2, the matching only occurs between the close packed directions in both phases and between the second close packed plane of the Al matrix and the second close packed plane of the refiners. According to the model, it is predicted that Al3Ti is a more powerful nucleating substrate for Al alloys than TiC, TiB2 and AlB2. This agrees with previous experimental results. The present work shows that the edge-to-edge matching model has the potential to be a powerful tool in discovering new and more powerful grain refiners for Al alloys.

  16. Multicollinearity in associations between multiple environmental features and body weight and abdominal fat: using matching techniques to assess whether the associations are separable.

    Science.gov (United States)

    Leal, Cinira; Bean, Kathy; Thomas, Frédérique; Chaix, Basile

    2012-06-01

    Because of the strong correlations among neighborhoods' characteristics, it is not clear whether the associations of specific environmental exposures (e.g., densities of physical features and services) with obesity can be disentangled. Using data from the RECORD (Residential Environment and Coronary Heart Disease) Cohort Study (Paris, France, 2007-2008), the authors investigated whether neighborhood characteristics related to the sociodemographic, physical, service-related, and social-interactional environments were associated with body mass index and waist circumference. The authors developed an original neighborhood characteristic-matching technique (analyses within pairs of participants similarly exposed to an environmental variable) to assess whether or not these associations could be disentangled. After adjustment for individual/neighborhood socioeconomic variables, body mass index/waist circumference was negatively associated with characteristics of the physical/service environments reflecting higher densities (e.g., proportion of built surface, densities of shops selling fruits/vegetables, and restaurants). Multiple adjustment models and the neighborhood characteristic-matching technique were unable to identify which of these neighborhood variables were driving the associations because of high correlations between the environmental variables. Overall, beyond the socioeconomic environment, the physical and service environments may be associated with weight status, but it is difficult to disentangle the effects of strongly correlated environmental dimensions, even if they imply different causal mechanisms and interventions.

  17. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for the generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
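
    A toy one-dimensional illustration of the frequency matching loop: cell values are flipped whenever a flip brings the image's histogram of small patterns closer to that of the training image. The pattern size, image length and greedy update rule are simplifications for demonstration, not the paper's full multi-point-statistics formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def pattern_hist(img):
        """Frequencies of all 3-cell binary patterns."""
        h = np.zeros(8)
        for i in range(len(img) - 2):
            h[img[i] * 4 + img[i + 1] * 2 + img[i + 2]] += 1
        return h / h.sum()

    def mismatch(img, target):
        return float(np.abs(pattern_hist(img) - target).sum())

    train = (np.sin(np.arange(400) / 5.0) > 0).astype(int)   # training image
    target = pattern_hist(train)
    img = rng.integers(0, 2, 400)                            # random start
    for _ in range(4000):                                    # greedy updates
        i = rng.integers(0, 400)
        trial = img.copy()
        trial[i] ^= 1
        if mismatch(trial, target) <= mismatch(img, target):
            img = trial
    print(round(mismatch(img, target), 3))                   # close to 0
    ```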

  18. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  19. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images, because they are incapable of dealing with the distortion. In order to solve this problem, a new voting algorithm is proposed based on a spherical model and the D-Nets algorithm. Because the spherical-based keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, and a comparison is made among the proposed algorithm, SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more true matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  20. Oncoplastic round block technique has comparable operative parameters as standard wide local excision: a matched case-control study.

    Science.gov (United States)

    Lim, Geok-Hoon; Allen, John Carson; Ng, Ruey Pyng

    2017-08-01

    Although oncoplastic breast surgery is used to resect larger tumors with lower re-excision rates compared to standard wide local excision (sWLE), criticisms of oncoplastic surgery include a longer (albeit well-concealed) scar, longer operating time and hospital stay, and increased risk of complications. The round block technique has been reported to be very suitable for patients with relatively small breasts and minimal ptosis. We aim to determine whether the round block technique results in operative parameters comparable to sWLE. Breast cancer patients who underwent a round block procedure from 1st May 2014 to 31st January 2016 were included in the study. These patients were then matched for the type of axillary procedure, on a one-to-one basis, with breast cancer patients who had undergone sWLE from 1st August 2011 to 31st January 2016. The operative parameters of the two groups were compared. 22 patients were included in the study. Patient demographics and histologic parameters were similar in the two groups. No complications were reported in either group. The mean operating time was 122 and 114 minutes in the round block and sWLE groups, respectively (P=0.64). Length of stay was similar in the two groups (P=0.11). Round block patients had better cosmesis and lower re-excision rates. A higher rate of recurrence was observed in the sWLE group. The round block technique has operative parameters comparable to sWLE with no evidence of increased complications. The lower re-excision rate and better cosmesis observed in the round block patients suggest that the round block technique is not only comparable in general, but may have advantages over sWLE in selected cases.

  1. Analytical modelling of waveguide mode launchers for matched feed reflector systems

    DEFF Research Database (Denmark)

    Palvig, Michael Forum; Breinbjerg, Olav; Meincke, Peter

    2016-01-01

    Matched feed horns aim to cancel cross polarization generated in offset reflector systems. An analytical method for predicting the mode spectrum generated by inclusions in such horns, e.g. stubs and pins, is presented. The theory is based on the reciprocity theorem, with the inclusions represented by current sources. The model is supported by Method of Moments calculations in GRASP, and very good agreement is seen. The model gives rise to many interesting observations and ideas for new or improved mode launchers for matched feeds.

  2. An algebraic method to develop well-posed PML models Absorbing layers, perfectly matched layers, linearized Euler equations

    International Nuclear Information System (INIS)

    Rahmouni, Adib N.

    2004-01-01

    In 1994, Berenger [Journal of Computational Physics 114 (1994) 185] proposed a new layer method for electromagnetism: the perfectly matched layer (PML). This new method is based on the truncation of the computational domain by a layer which absorbs waves regardless of their frequency and angle of incidence. Unfortunately, the technique proposed by Berenger (loc. cit.) leads to a system which has lost the most important properties of the original one: strong hyperbolicity and symmetry. We present in this paper an algebraic technique leading to the well-known PML model [IEEE Transactions on Antennas and Propagation 44 (1996) 1630] for the linearized Euler equations which is strongly well-posed, preserves the advantages of the initial method, and retains symmetry. The technique proposed in this paper can be extended to various hyperbolic problems.

  3. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

    Models with high dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...

  4. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used to update numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match the experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In the updating process, a response parameter of the structure has to be chosen, which helps to correlate the developed numerical model with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close match between the experimental and numerical models can be achieved.
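
    A compact sketch of the firefly loop applied to model updating follows, written in Python rather than the MATLAB used in the study: each firefly is a candidate parameter vector, brightness is the inverse of the mismatch with the measured response, and dimmer fireflies move toward brighter ones. The surrogate response function and all constants are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    measured = np.array([1.2, 0.7])             # e.g. target response values

    def response(p):                            # surrogate for an FE model
        return np.array([p[0] + 0.5 * p[1], p[0] * p[1]])

    def cost(p):                                # mismatch = darkness
        return float(np.sum((response(p) - measured) ** 2))

    n, beta0, gamma, alpha = 20, 1.0, 1.0, 0.05
    pop = rng.uniform(0, 2, size=(n, 2))        # initial fireflies
    for _ in range(200):
        f = np.array([cost(p) for p in pop])
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                 # j is brighter: attract i
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=2)
    best = pop[np.argmin([cost(p) for p in pop])]
    print(best.round(3), round(cost(best), 5))
    ```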

  5. The match-mismatch model of emotion processing styles and emotion regulation strategies in fibromyalgia.

    NARCIS (Netherlands)

    Geenen, R.; Ooijen-van der Linden, L. van; Lumley, M.A.; Bijlsma, J.W.J.; Middendorp, H. van

    2012-01-01

    OBJECTIVE: Individuals differ in their style of processing emotions (e.g., experiencing affects intensely or being alexithymic) and their strategy of regulating emotions (e.g., expressing or reappraising). A match-mismatch model of emotion processing styles and emotion regulation strategies is

  6. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and helps guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between the two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  7. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  8. Research on vehicles and cargos matching model based on virtual logistics platform

    Science.gov (United States)

    Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan

    2018-04-01

    The highway less-than-truckload (LTL) vehicle and cargo matching problem is a joint optimization problem of vehicle routing and loading, and a hot issue in operational research. Based on the demands of a virtual logistics platform, this article sets up a matching model between idle vehicles and transportation orders for highway LTL transportation and designs a corresponding genetic algorithm. The algorithm is then implemented in Java. The simulation results show that the solution is satisfactory.

  9. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce a keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker achieves satisfactory performance in a wide range of challenging tracking scenarios.

  10. Dynamic Modeling of Starting Aerodynamics and Stage Matching in an Axi-Centrifugal Compressor

    Science.gov (United States)

    Wilkes, Kevin; OBrien, Walter F.; Owen, A. Karl

    1996-01-01

    A DYNamic Turbine Engine Compressor Code (DYNTECC) has been modified to model speed transients from 0-100% of compressor design speed. The impetus for this enhancement was to investigate stage matching and stalling behavior during a start sequence as compared to rotating stall events above ground idle. The model can simulate speed and throttle excursions simultaneously as well as time varying bleed flow schedules. Results of a start simulation are presented and compared to experimental data obtained from an axi-centrifugal turboshaft engine and companion compressor rig. Stage by stage comparisons reveal the front stages to be operating in or near rotating stall through most of the start sequence. The model matches the starting operating line quite well in the forward stages with deviations appearing in the rearward stages near the start bleed. Overall, the performance of the model is very promising and adds significantly to the dynamic simulation capabilities of DYNTECC.

  11. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  12. Tuning the climate sensitivity of a global model to match 20th Century warming

    Science.gov (United States)

    Mauritsen, T.; Roeckner, E.

    2015-12-01

    A climate model's ability to reproduce observed historical warming is sometimes viewed as a measure of quality. Yet, for practical reasons, historical warming cannot be considered a purely empirical result of the modelling efforts, because the desired result is known in advance and so is a potential target of tuning. Here we explain how the latest edition of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1.2) atmospheric model (ECHAM6.3), the MPI model to be used during CMIP6, had its climate sensitivity systematically tuned to about 3 K. This was deliberately done in order to improve the match to observed 20th Century warming over the previous model generation (MPI-ESM, ECHAM6.1), which warmed too much and had a sensitivity of 3.5 K. In the process we identified several controls on model cloud feedback that confirm recently proposed hypotheses concerning trade-wind cumulus and high-latitude mixed-phase clouds. We then evaluate the model's fidelity with centennial global warming and discuss the relative importance of climate sensitivity, forcing and ocean heat uptake efficiency in determining the response, as well as possible systematic biases. The activity of targeting historical warming during model development is polarizing the modeling community, with 35 percent of modelers stating that 20th Century warming was rated very important to decisive, whereas 30 percent would not consider it at all. Likewise, opinions diverge as to which measures are legitimate means for improving the model match to observed warming. These results are from a survey conducted in conjunction with the first WCRP Workshop on Model Tuning in fall 2014, answered by 23 modelers. We argue that tuning or constructing models to match observed warming to some extent is practically unavoidable, and as such, in many cases might as well be done explicitly. For modeling groups that have the capability to tune both their aerosol forcing and climate sensitivity there is now a unique

  13. Cross-species genomics matches driver mutations and cell compartments to model ependymoma

    Science.gov (United States)

    Johnson, Robert A.; Wright, Karen D.; Poppleton, Helen; Mohankumar, Kumarasamypet M.; Finkelstein, David; Pounds, Stanley B.; Rand, Vikki; Leary, Sarah E.S.; White, Elsie; Eden, Christopher; Hogg, Twala; Northcott, Paul; Mack, Stephen; Neale, Geoffrey; Wang, Yong-Dong; Coyle, Beth; Atkinson, Jennifer; DeWire, Mariko; Kranenburg, Tanya A.; Gillespie, Yancey; Allen, Jeffrey C.; Merchant, Thomas; Boop, Fredrick A.; Sanford, Robert. A.; Gajjar, Amar; Ellison, David W.; Taylor, Michael D.; Grundy, Richard G.; Gilbertson, Richard J.

    2010-01-01

    Understanding the biology that underlies histologically similar but molecularly distinct subgroups of cancer has proven difficult since their defining genetic alterations are often numerous, and the cellular origins of most cancers remain unknown1–3. We sought to decipher this heterogeneity by integrating matched genetic alterations and candidate cells of origin to generate accurate disease models. First, we identified subgroups of human ependymoma, a form of neural tumor that arises throughout the central nervous system (CNS). Subgroup specific alterations included amplifications and homozygous deletions of genes not yet implicated in ependymoma. To select cellular compartments most likely to give rise to subgroups of ependymoma, we matched the transcriptomes of human tumors to those of mouse neural stem cells (NSCs), isolated from different regions of the CNS at different developmental stages, with an intact or deleted Ink4a/Arf locus. The transcriptome of human cerebral ependymomas with amplified EPHB2 and deleted INK4A/ARF matched only that of embryonic cerebral Ink4a/Arf−/− NSCs. Remarkably, activation of Ephb2 signaling in these, but not other NSCs, generated the first mouse model of ependymoma, which is highly penetrant and accurately models the histology and transcriptome of one subgroup of human cerebral tumor. Further comparative analysis of matched mouse and human tumors revealed selective deregulation in the expression and copy number of genes that control synaptogenesis, pinpointing disruption of this pathway as a critical event in the production of this ependymoma subgroup. Our data demonstrate the power of cross-species genomics to meticulously match subgroup specific driver mutations with cellular compartments to model and interrogate cancer subgroups. PMID:20639864

  14. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  15. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
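
    As an illustrative aside, the coarse-to-fine idea behind such hierarchical matching can be sketched in a few lines. The stand-in below estimates a constant relative shift between two images by phase correlation on progressively finer decimations; the actual SETSM algorithm works in object space with RPC sensor models and sub-pixel refinement, so the image-space formulation, level count, and function names here are assumptions for illustration only.

```python
import numpy as np

def phase_corr_shift(ref, tgt):
    """Integer translation of tgt relative to ref via phase correlation."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(tgt)
    r = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]          # unwrap shifts past the array midpoint
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return np.array([dy, dx], dtype=float)

def hierarchical_shift(ref, tgt, levels=4):
    """Coarse-to-fine bias estimate: match decimated copies first, then
    compensate the running estimate and refine at each finer level."""
    shift = np.zeros(2)
    for lvl in reversed(range(levels)):
        s = 2 ** lvl
        a, b = ref[::s, ::s], tgt[::s, ::s]
        b = np.roll(b, (-(shift / s)).astype(int), axis=(0, 1))  # undo estimate
        shift += s * phase_corr_shift(a, b)
    return shift
```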

  16. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  17. [Application of an improved model of a job-matching platform for nurses].

    Science.gov (United States)

    Huang, Way-Ren; Lin, Chiou-Fen

    2015-04-01

    The three-month attrition rate for new nurses in Taiwan remains high. Many hospitals rely on traditional recruitment methods to find new nurses, yet it appears that their efficacy is less than ideal. To effectively solve this manpower shortage, a nursing resource platform is a project worth developing in the future. This study aimed to utilize a quality-improvement model to establish communication between hospitals and nursing students and create a customized employee-employer information-matching platform to help nursing students enter the workforce. This study was structured around a quality-improvement model and used current situation analysis, literature review, focus-group discussions, and process re-engineering to formulate necessary content for a job-matching platform for nursing. The concept of an academia-industry strategic alliance helped connect supply and demand within the same supply chain. The nurse job-matching platform created in this study provided job flexibility as well as job suitability assessments and continued follow-up and services for nurses after entering the workforce to provide more accurate matching of employers and employees. The academia-industry strategic alliance, job suitability, and long-term follow-up designed in this study are all new features in Taiwan's human resource service systems. The proposed human resource process re-engineering provides nursing students facing graduation with a professionally managed human resources platform. Allowing students to find an appropriate job prior to graduation will improve willingness to work and employee retention.

  18. Radiographic evaluation of the quality of root canal obturation of single-matched cone Gutta-percha root canal filling versus hot lateral technique

    Directory of Open Access Journals (Sweden)

    Randa Suleiman Obeidat

    2014-01-01

    Full Text Available Aim: The aim of this study is to evaluate radiographically the quality of root canal filling in mesiodistal and buccolingual views, comparing matched cone condensation with warm lateral Gutta-percha condensation using a System B heating instrument in a low-heat warm lateral condensation technique in vitro. Materials and Methods: A total of 40 mandibular premolars with straight single canals were divided into two groups of 20 each. The root canals were shaped by hand files and Revo-S rotary files to size (25, 0.06) at the end point, then filled with Gutta-percha cones and meta-seal sealer. In group A, a single matched cone technique was used to fill the root canals. In group B, hot lateral condensation using a System B instrument at 101°C was performed. Result: The result of this study showed no significant difference in density of Gutta-percha fill in the apical and coronal two-thirds when comparing matched cone root canal filling and the hot lateral technique (P > 0.05). The only significant difference (P < 0.05) was in the matched cone group between the buccolingual and mesiodistal views in the coronal two-thirds. Conclusion: Within the limitations of this study, the single matched cone technique gave as good a density in the apical one-third as the hot lateral technique, so it may be used for filling narrow canals. In the coronal two-thirds of the root canal, the single matched cone technique showed inferior density of root canal filling, which can be improved by using accessory Gutta-percha cones in wide canals.

  19. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
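
    A minimal sketch of the mixture idea, assuming a fixed transformation and isotropic residuals: inlier residuals follow a zero-mean Gaussian and outliers a uniform density over the image area, and EM alternates responsibilities with variance and mixing-weight updates. The paper additionally re-estimates a non-parametric transformation in an RKHS inside the same EM loop, which is omitted here.

```python
import numpy as np

def em_inlier_weights(residuals, area, gamma=0.9, n_iter=50):
    """Posterior inlier probabilities for putative matches under a
    Gaussian-inlier / uniform-outlier mixture, fitted by EM.
    residuals: (N, d) array of y_i - T(x_i); area: image area used for
    the uniform outlier density."""
    r2 = np.sum(residuals ** 2, axis=1)
    d = residuals.shape[1]
    sigma2 = np.mean(r2)                      # start with a large variance
    for _ in range(n_iter):
        # E-step: responsibility of the inlier component for each match
        p_in = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (d / 2)
        p_out = (1 - gamma) / area
        w = p_in / (p_in + p_out)
        # M-step: re-estimate variance and mixing weight from responsibilities
        sigma2 = np.sum(w * r2) / (d * np.sum(w) + 1e-12)
        gamma = np.mean(w)
    return w                                  # threshold, e.g. w > 0.5, for inliers
```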

  20. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
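
    The core loop is easy to caricature: solve a forward transport model with a candidate diffusivity plus an adjustable additive term, and minimize the profile misfit over that term. The sketch below uses a steady 1-D diffusion solve and scipy in place of FACETS::Core and DAKOTA; the geometry, source, and scalar (rather than spatially varying) additive diffusivity are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steady_profile(d_model, d_add, source, dx):
    """Steady 1-D diffusion solve d/dx[(D_model + D_add) dn/dx] = -S with
    n = 0 at both walls, via a dense tridiagonal finite-difference system."""
    d_tot = d_model + d_add
    n = len(source)
    face = 0.5 * (d_tot[:-1] + d_tot[1:])     # diffusivity at cell faces
    a = np.zeros((n, n))
    for i in range(1, n - 1):
        a[i, i - 1], a[i, i], a[i, i + 1] = face[i - 1], -(face[i - 1] + face[i]), face[i]
    a[0, 0] = a[-1, -1] = 1.0                 # Dirichlet boundaries
    rhs = -source * dx ** 2
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(a, rhs)

x = np.linspace(0.0, 1.0, 101); dx = x[1] - x[0]
d_model = np.full_like(x, 0.5)                # candidate transport model
source = np.exp(-((x - 0.5) / 0.1) ** 2)
n_exp = steady_profile(d_model, 0.3, source, dx)   # "experiment" hides D_add = 0.3

# Vary the additive diffusivity to best match the experimental profile; the
# residual D_add measures transport the candidate model fails to capture.
res = minimize_scalar(
    lambda d: np.sum((steady_profile(d_model, d, source, dx) - n_exp) ** 2),
    bounds=(0.0, 2.0), method="bounded")
print("recovered additive diffusivity:", round(res.x, 3))
```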

  1. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  2. Improving the precision of the keyword-matching pornographic text filtering method using a hybrid model.

    Science.gov (United States)

    Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong

    2004-09-01

    With the flood of pornographic information on the Internet, keeping people away from offensive content has become one of the most important research areas in network information security. Applications that can block or filter such information are in use. Approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content-based filtering will play an increasingly important role in filtering systems. Keyword matching is a content-based method widely used in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that its precision is not satisfactory, though its recall is rather high. Based on these results, a new pornographic text filtering model based on reconfirmation is put forward. Experiments showed that the model is practical, loses less recall than the single keyword matching method, and achieves higher precision.
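
    The hybrid structure amounts to a two-stage pipeline, sketched below under assumed names: a high-recall keyword pass flags candidate texts, and a second statistical scorer reconfirms them, recovering precision at a small cost in recall. The lexicon and scorer here are placeholders, not the paper's.

```python
import re

BLOCK_TERMS = {"badword1", "badword2"}        # placeholder keyword lexicon

def keyword_flag(text):
    """Stage 1: cheap, high-recall keyword matching."""
    tokens = set(re.findall(r"\w+", text.lower()))
    return bool(tokens & BLOCK_TERMS)

def hybrid_filter(text, scorer, threshold=0.5):
    """Stage 2 reconfirms flagged texts with a statistical classifier
    score in [0, 1], trading a little recall for much higher precision."""
    return keyword_flag(text) and scorer(text) >= threshold
```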

  3. In vivo kinematics of healthy male knees during squat and golf swing using image-matching techniques.

    Science.gov (United States)

    Murakami, Koji; Hamai, Satoshi; Okazaki, Ken; Ikebe, Satoru; Shimoto, Takeshi; Hara, Daisuke; Mizu-uchi, Hideki; Higaki, Hidehiko; Iwamoto, Yukihide

    2016-03-01

    Participation in specific activities requires complex ranges of knee movements and activity-dependent kinematics. The purpose of this study was to investigate dynamic knee kinematics during squat and golf swing using image-matching techniques. Five healthy males performed squats and golf swings under periodic X-ray imaging at 10 frames per second. We analyzed the in vivo three-dimensional kinematic parameters of subjects' knees, namely the tibiofemoral flexion angle, anteroposterior (AP) translation, and internal-external rotation, using serial X-ray images and computed tomography-derived, digitally reconstructed radiographs. During squat from 0° to 140° of flexion, the femur moved about 25 mm posteriorly and rotated 19° externally relative to the tibia. Screw-home movement near extension, bicondylar rollback between 20° and 120° of flexion, and medial pivot motion at further flexion were observed. During golf swing, the leading and trailing knees (the left and right knees, respectively, in a right-handed golfer) showed approximately five millimeters and four millimeters of AP translation with 18° and 26° of axial rotation, respectively. A central pivot motion from set-up to the top of the backswing, lateral pivot motion from the top to ball impact, and medial pivot motion from impact to the end of follow-through were observed. The medial pivot motion was not always observed during both activities, but a large range of axial rotation with bilateral condylar AP translations occurs during golf swing. This finding has important implications regarding the amount of acceptable AP translation and axial rotation at low flexion in replaced knees. Level of evidence: IV. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  5. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    Directory of Open Access Journals (Sweden)

    Eva Lykkegaard

    2016-04-01

    Full Text Available Previous research has found that young people's prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-month university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students' meetings with the role models affected their thoughts concerning STEM students and attending university. In real-life role-model meetings, the usual self-to-prototype matching process was shown to extend to a more complex three-way matching process between students' self-perceptions, prototype images and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models, and that prototype images tied to traditional school subjects were more resistant to change than those tied to unfamiliar subjects.

  6. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  7. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models, and possible improvements over them through 3D modeling, are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly regarding floodplain elevation differences and vertical roughness within grids, were found; these can be improved through a 3D model. A 3D model was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D floodplain model should be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.

  8. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    Science.gov (United States)

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-04-10

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal/mol.
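
    For a model whose forces are linear in its parameters, force matching reduces to a linear least-squares problem, as in the sketch below; the actual DFTB repulsive-potential fitting is more involved, so the design-matrix formulation is an illustrative assumption.

```python
import numpy as np

def force_match(design_matrices, ref_forces):
    """Least-squares force matching: find parameters p minimizing
    sum_i |A_i p - F_i|^2, where A_i (3N x n_params) maps the model
    parameters to the forces on configuration i and F_i holds the
    reference (e.g., DFT) forces for that configuration."""
    a = np.vstack(design_matrices)
    f = np.concatenate([np.ravel(fi) for fi in ref_forces])
    p, *_ = np.linalg.lstsq(a, f, rcond=None)
    return p
```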

  9. Matched-Filter Thermography

    Directory of Open Access Journals (Sweden)

    Nima Tabatabaei

    2018-04-01

    Full Text Available Conventional infrared thermography techniques, including pulsed and lock-in thermography, have shown great potential for non-destructive evaluation of a broad spectrum of materials, spanning from metals to polymers to biological tissues. However, the performance of these techniques is often limited by the diffuse nature of thermal-wave fields, resulting in an inherent compromise between inspection depth and depth resolution. Recently, matched-filter thermography has been introduced as a means of overcoming this classic limitation, enabling depth-resolved subsurface thermal imaging and improving axial/depth resolution. This paper reviews the basic principles and experimental results of matched-filter thermography: first, mathematical and signal-processing concepts related to matched filtering and pulse compression are discussed. Next, theoretical modeling of thermal-wave responses to matched-filter thermography using two categories of pulse compression techniques (linear frequency modulation and binary phase coding) is reviewed. Key experimental results from the literature, demonstrating the maintenance of axial resolution while inspecting deep into opaque and turbid media, are also presented and discussed. Finally, the concept of thermal coherence tomography, for deconvolution of thermal responses of axially superposed sources and creation of depth-selective images in a diffusion-wave field, is reviewed.
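
    The essence of pulse compression can be shown with a linear-frequency-modulated excitation: cross-correlating the measured response with the known chirp collapses it into a sharp peak whose lag encodes depth. The sampling rate, chirp band, delay, and noise level below are arbitrary illustrative assumptions, not values from the reviewed experiments.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 5.0, 1.0 / fs)
excitation = chirp(t, f0=0.1, f1=10.0, t1=t[-1], method="linear")

# Hypothetical measurement: a delayed, attenuated copy of the excitation
# plus noise, standing in for the photothermal response of a buried feature.
shift = int(0.8 * fs)                         # true delay: 0.8 s
response = np.zeros_like(excitation)
response[shift:] = 0.2 * excitation[:excitation.size - shift]
response += 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Matched filtering = cross-correlation with the known excitation; pulse
# compression collapses the spread response into a sharp peak whose lag
# serves as a proxy for depth.
cc = correlate(response, excitation, mode="full")
lag_s = (np.argmax(cc) - (t.size - 1)) / fs
print(f"recovered delay: {lag_s:.2f} s")
```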

  10. Matching of experimental and statistical-model thermonuclear reaction rates at high temperatures

    International Nuclear Information System (INIS)

    Newton, J. R.; Longland, R.; Iliadis, C.

    2008-01-01

    We address the problem of extrapolating experimental thermonuclear reaction rates toward high stellar temperatures (T>1 GK) by using statistical model (Hauser-Feshbach) results. Reliable reaction rates at such temperatures are required for studies of advanced stellar burning stages, supernovae, and x-ray bursts. Generally accepted methods are based on the concept of a Gamow peak. We follow recent ideas that emphasized the fundamental shortcomings of the Gamow peak concept for narrow resonances at high stellar temperatures. Our new method defines the effective thermonuclear energy range (ETER) by using the 8th, 50th, and 92nd percentiles of the cumulative distribution of fractional resonant reaction rate contributions. This definition is unambiguous and has a straightforward probability interpretation. The ETER is used to define a temperature at which Hauser-Feshbach rates can be matched to experimental rates. This matching temperature is usually much higher than previous estimates that employed the Gamow peak concept. We suggest that an increased matching temperature provides more reliable extrapolated reaction rates since Hauser-Feshbach results are more trustworthy the higher the temperature. Our ideas are applied to 21 (p,γ), (p,α), and (α,γ) reactions on A=20-40 target nuclei. For many of the cases studied here, our extrapolated reaction rates at high temperatures differ significantly from those obtained using the Gamow peak concept.
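
    Under the narrow-resonance approximation, the ETER percentiles can be computed directly from the fractional contribution of each resonance to the rate, roughly proportional to its strength times a Boltzmann factor. The sketch below implements the percentile definition with that simplified weighting; the paper's treatment of the rate integrand is more complete, and the example resonances are made up.

```python
import numpy as np

def eter(e_res_kev, omega_gamma, t9):
    """Effective thermonuclear energy range: the 8th/50th/92nd percentiles
    of the cumulative fractional rate contribution over resonance energy,
    with each narrow resonance weighted ~ omega_gamma * exp(-E_r / kT)."""
    kt = 86.17 * t9                            # kT in keV for T9 in GK
    e = np.asarray(e_res_kev, dtype=float)
    contrib = np.asarray(omega_gamma, dtype=float) * np.exp(-e / kt)
    order = np.argsort(e)
    cdf = np.cumsum(contrib[order]) / contrib.sum()
    return tuple(e[order][np.searchsorted(cdf, q)] for q in (0.08, 0.50, 0.92))

# Example with made-up resonances (energies in keV, arbitrary strengths):
print(eter([300, 600, 900, 1400, 2000], [1e-3, 0.05, 0.4, 1.2, 0.8], t9=2.0))
```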

  11. A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions

    International Nuclear Information System (INIS)

    Griffond, J; Soulard, O; Souffland, D

    2010-01-01

    To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.

  12. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complexity of information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain similarity matrices on both large and small scales, combining double hierarchical structural constraints from the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  13. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available in the market. For our purpose, it acts as an interface to generate structural models of particular engine parts or assemblies, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is relatively new. The basic idea is to produce a full and accurate 3D structural model by tracing multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  14. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    Science.gov (United States)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary, even though cell sizes are allowed to grow toward the boundaries owing to the diffusive propagation of the electromagnetic field. Compared with the conventional Dirichlet boundary, the PML boundary is preferable, as the modelling area of interest can be restricted to the target region: only a few surrounding absorbing layers effectively suppress the artificial boundary effect without loss of numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet boundary, the modelling areas for these two different geophysical data sets collected from the same survey area can be the same, which is convenient for joint-inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling using a staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet boundary, as well as advantages in computational time and memory: for the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet boundary.
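
    The heart of the CFS-PML is a complex coordinate-stretching factor applied to spatial derivatives inside the absorbing layers, commonly written as s = κ + σ/(α + iω) with graded profiles. The sketch below shows one conventional polynomial grading; the exponent and the maxima are tuning assumptions, not values from this study.

```python
import numpy as np

def cfs_pml_stretch(depth, width, omega, kappa_max=5.0, sigma_max=1.0, alpha_max=0.1):
    """CFS-PML complex stretching factor s = kappa + sigma/(alpha + i*omega)
    with polynomial grading across the layer; depth in [0, width] measures
    distance into the absorbing region."""
    g = (depth / width) ** 3                   # cubic grading toward the edge
    kappa = 1.0 + (kappa_max - 1.0) * g
    sigma = sigma_max * g
    alpha = alpha_max * (1.0 - depth / width)  # alpha largest at the interface
    return kappa + sigma / (alpha + 1j * omega)

# In a staggered finite-difference code, 1/s scales the derivative terms of
# cells inside the PML; interior cells keep s = 1 (no stretching).
depths = np.linspace(0.0, 100.0, 5)            # metres into a 100 m layer
print(cfs_pml_stretch(depths, 100.0, omega=2 * np.pi * 0.25))
```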

  15. IMLS-SLAM: scan-to-model matching based on 3D data

    OpenAIRE

    Deschaud, Jean-Emmanuel

    2018-01-01

    The Simultaneous Localization And Mapping (SLAM) problem has been well studied in the robotics community, especially using mono, stereo cameras or depth sensors. 3D depth sensors, such as Velodyne LiDAR, have proved in the last 10 years to be very useful to perceive the environment in autonomous driving, but few methods exist that directly use these 3D data for odometry. We present a new low-drift SLAM algorithm based only on 3D LiDAR data. Our method relies on a scan-to-model matching framew...

  16. mr. A C++ library for the matching and running of the Standard Model parameters

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Veretin, Oleg L.; Pikelner, Andrey F.; Joint Institute for Nuclear Research, Dubna

    2016-01-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.
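
    Although mr itself works at three- and four-loop precision, the structure of the running step can be illustrated with a one-loop toy: integrate the renormalization group equation for a coupling between two scales. The sketch below runs the strong coupling from the Z mass upward; it is a pedagogical stand-in, not the library's algorithm.

```python
import numpy as np
from scipy.integrate import solve_ivp

def run_alpha_s(alpha_mz, mz=91.1876, mu=1000.0, nf=5):
    """One-loop QCD running of alpha_s from the Z mass to scale mu (GeV):
    d alpha / d ln(mu) = -(beta0 / 2 pi) * alpha^2, beta0 = 11 - 2*nf/3."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    rge = lambda t, a: -(beta0 / (2.0 * np.pi)) * a ** 2
    sol = solve_ivp(rge, [np.log(mz), np.log(mu)], [alpha_mz], rtol=1e-10)
    return sol.y[0, -1]

print(run_alpha_s(0.1181))   # roughly 0.087 at 1 TeV at one loop
```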

  17. mr. A C++ library for the matching and running of the Standard Model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kniehl, Bernd A.; Veretin, Oleg L. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Pikelner, Andrey F. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Joint Institute for Nuclear Research, Dubna (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2016-01-15

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library.

  18. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by a mixture of two or three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most similar one in a search window on the reference image is measured by minimizing an extended Mahalanobis distance between the clusters of the mixtures. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.
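
    Block matching itself is a windowed search for the displacement minimizing a dissimilarity measure; the article's contribution is to replace the usual sum-of-absolute-differences cost with an extended Mahalanobis distance between EM-fitted Gaussian mixtures. The sketch below shows the search skeleton with a pluggable distance, defaulting to SAD; the mixture-based cost is abstracted behind the `dist` argument.

```python
import numpy as np

def block_match(cur, ref, bx, by, bsize=16, search=8, dist=None):
    """Return the motion vector (dy, dx) of the block at (by, bx) in `cur`
    that best matches `ref` within a +/-`search` pixel window. `dist`
    defaults to the sum of absolute differences; any block dissimilarity,
    such as a mixture-cluster Mahalanobis distance, can be plugged in."""
    if dist is None:
        dist = lambda a, b: np.abs(a - b).sum()
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] and x + bsize <= ref.shape[1]:
                d = dist(block, ref[y:y + bsize, x:x + bsize])
                if d < best:
                    best, best_mv = d, (dy, dx)
    return best_mv
```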

  19. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  20. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
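
    As a concrete example of the first modeling step named in the review, the quantized states of a heterostructure can be obtained by finite differences on the one-dimensional Schrödinger equation. The sketch below assumes a constant effective mass, hard-wall grid edges, and no Poisson coupling, all simplifications; the well and barrier numbers are illustrative.

```python
import numpy as np

def quantum_well_states(v_ev, dx_nm, m_eff=0.067, n_states=3):
    """Quantized states of a 1-D potential profile v(z) in eV on a uniform
    grid, by finite differences on -(hbar^2/2m*) psi'' + V psi = E psi.
    m_eff is in units of the free-electron mass (0.067 for GaAs)."""
    t = (0.0380998 / m_eff) / dx_nm ** 2       # hbar^2/(2 m_e) = 0.0381 eV nm^2
    n = len(v_ev)
    h = np.diag(v_ev + 2.0 * t) - t * (np.eye(n, k=1) + np.eye(n, k=-1))
    energies, wavefns = np.linalg.eigh(h)
    return energies[:n_states], wavefns[:, :n_states]

# Example: a 10-nm GaAs well between 0.3-eV barriers on a 0.1-nm grid.
z = np.arange(0.0, 30.0, 0.1)
v = np.where((z > 10.0) & (z < 20.0), 0.0, 0.3)
print(quantum_well_states(v, 0.1)[0])          # subband energies in eV
```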

  1. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  2. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    Science.gov (United States)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied in large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term based on GCPs is formulated by Gaussian mixture model, which strengths the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to bring out satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and behaves excellently on aerial images.

  3. Stroke Lesions in a Large Upper Limb Rehabilitation Trial Cohort Rarely Match Lesions in Common Preclinical Models

    Science.gov (United States)

    Edwardson, Matthew A.; Wang, Ximing; Liu, Brent; Ding, Li; Lane, Christianne J.; Park, Caron; Nelsen, Monica A.; Jones, Theresa A.; Wolf, Steven L.; Winstein, Carolee J.; Dromerick, Alexander W.

    2017-01-01

    Background: Stroke patients with mild-moderate upper extremity (UE) motor impairments and minimal sensory and cognitive deficits provide a useful model to study recovery and improve rehabilitation. Laboratory-based investigators use lesioning techniques for similar goals. Objective: Determine whether stroke lesions in a UE rehabilitation trial cohort match lesions from the preclinical stroke recovery models used to drive translational research. Methods: Clinical neuroimages from 297 participants enrolled in the Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) study were reviewed. Images were characterized based on lesion type (ischemic or hemorrhagic), volume, vascular territory, depth (cortical gray matter, cortical white matter, subcortical), old strokes, and leukoaraiosis. Lesions were compared with those of preclinical stroke models commonly used to study upper limb recovery. Results: Among the ischemic stroke participants, median infarct volume was 1.8 mL, with most lesions confined to subcortical structures (61%), including the anterior choroidal artery territory (30%) and the pons (23%). ICARE participants are not representative of stroke patients in general, but they represent a clinically and scientifically important subgroup. Compared to lesions in general stroke populations and widely studied animal models of recovery, ICARE participants had smaller, more subcortically based strokes. Improved preclinical-clinical translational efforts may require better alignment of lesions between preclinical and human stroke recovery models. PMID:28337932

  4. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case illustrates that
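
    For concreteness, the two estimator families compared in the paper can be caricatured on a matched panel data set as follows; the column names (unit, year, treated, post, forest) are hypothetical, the panel is assumed balanced, and standard errors are omitted.

```python
import pandas as pd

def matched_diff_in_means(df, outcome="forest", treat="treated", end_year=2015):
    """(1) Cross-sectional estimate on a matched sample: difference in
    mean outcomes between treated and control units in the end-line year."""
    end = df[df["year"] == end_year]
    return (end.loc[end[treat] == 1, outcome].mean()
            - end.loc[end[treat] == 0, outcome].mean())

def two_way_fe_did(df, outcome="forest", treat="treated", post="post"):
    """(2) Fixed-effects panel estimate: demean the outcome and the
    treated-x-post interaction by unit and by year (exact for a balanced
    panel), then regress one on the other. This removes time-invariant
    unobservable bias that matching alone cannot."""
    d = df.copy()
    d["tp"] = d[treat] * d[post]
    for col in (outcome, "tp"):
        d[col] = d[col] - d.groupby("unit")[col].transform("mean")
        d[col] = d[col] - d.groupby("year")[col].transform("mean")
    return (d["tp"] * d[outcome]).sum() / (d["tp"] ** 2).sum()
```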

  5. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Directory of Open Access Journals (Sweden)

    Kelly W Jones

    Full Text Available Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case

  6. Graph configuration model based evaluation of the education-occupation match.

    Science.gov (United States)

    Gadar, Laszlo; Abonyi, Janos

    2018-01-01

    To study education-occupation matching, we developed a bipartite network model of the education-to-work transition and a metric based on the graph configuration model. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlight the hierarchical and clustered structure of the career paths based on multi-resolution analysis of the graph modularity. The results of the cluster analysis can help policymakers fine-tune the fragmented program structure of higher education.
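
    One way to use the configuration model as an evaluation metric, sketched below under assumed data structures, is to compare the observed frequency of a given education-occupation pairing against degree-preserving randomizations of the bipartite edge list (stub rewiring); pairings that randomizations rarely match are over-represented in the data. The function name and toy edges are hypothetical.

```python
import random
from collections import Counter

def config_model_pvalue(edges, pair, n_rand=1000, seed=0):
    """Share of degree-preserving randomizations (stub rewiring of the
    bipartite education-occupation edge list) in which `pair` occurs at
    least as often as observed; small values flag over-represented
    matchings."""
    rng = random.Random(seed)
    observed = Counter(edges)[pair]
    left = [e for e, _ in edges]               # education stubs
    right = [o for _, o in edges]              # occupation stubs
    hits = 0
    for _ in range(n_rand):
        rng.shuffle(right)                     # rewire, degrees preserved
        if Counter(zip(left, right))[pair] >= observed:
            hits += 1
    return hits / n_rand

edges = [("CS", "developer")] * 30 + [("CS", "clerk")] * 5 + [("law", "clerk")] * 25
print(config_model_pvalue(edges, ("CS", "developer")))
```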

  7. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  8. Matching theory

    CERN Document Server

    Plummer, MD

    1986-01-01

    This study of matching theory deals with bipartite matching, network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.

  9. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
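
    A software rendering of the loop the hardware realizes: repeatedly fit a projective model (homography) to minimal four-point samples via the direct linear transform, score each fit by its inlier consensus, and refit on the best inlier set to refine the model and reject false matches. Thresholds and iteration counts below are illustrative assumptions, and at least four surviving inliers are assumed for the final refit.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform from >= 4 point pairs (Nx2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """RANSAC loop: fit minimal 4-point samples, score by inlier count,
    then refit on the best consensus set."""
    rng = np.random.default_rng(seed)
    best_inl = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        h = fit_homography(src[idx], dst[idx])
        proj = np.c_[src, np.ones(len(src))] @ h.T
        proj = proj[:, :2] / proj[:, 2:3]
        inl = np.linalg.norm(proj - dst, axis=1) < thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    # Refit on all inliers to refine the model and discard false matches.
    return fit_homography(src[best_inl], dst[best_inl]), best_inl
```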

  10. Automated side-chain model building and sequence assignment by template matching

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2002-01-01

    A method for automated macromolecular side-chain model building and for aligning the sequence to the map is described. An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer
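
    The alignment step can be sketched compactly: given the per-position amino-acid probability matrix produced by template matching, score every offset of the segment against the known sequence by summed log probabilities and keep matches that stand well clear of the runner-up. The margin-based confidence below is an illustrative proxy for the paper's Bayesian estimate, not its actual formula.

```python
import numpy as np

def best_alignment(prob_matrix, sequence, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Align a built main-chain segment to the protein sequence.
    prob_matrix: (n_residues x 20) per-position amino-acid probabilities
    from template matching. Returns the best offset and its log-probability
    margin over the runner-up as a crude confidence measure."""
    idx = {a: i for i, a in enumerate(alphabet)}
    n, m = len(prob_matrix), len(sequence)
    scores = np.array([
        np.sum(np.log(prob_matrix[np.arange(n),
                                  [idx[sequence[off + i]] for i in range(n)]] + 1e-12))
        for off in range(m - n + 1)])
    order = np.argsort(scores)[::-1]
    margin = scores[order[0]] - scores[order[1]] if scores.size > 1 else np.inf
    return int(order[0]), float(margin)
```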

  11. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques

    Science.gov (United States)

    Jones, Kelly W.; Lewis, David J.

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented—from protected areas to payments for ecosystem services (PES)—to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing ‘matching’ to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods—an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators—due to the presence of unobservable bias—that lead to differences in conclusions about effectiveness. The Ecuador case

  12. Automated side-chain model building and sequence assignment by template matching.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.

  13. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, building on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
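
    The LHS idea is compact enough to sketch. The following is a minimal illustration of Latin Hypercube Sampling over a parameter box, one candidate vector per parallel filter model; the function and variable names are ours, not GRAPE's, and the stratification follows the textbook construction.

```python
import numpy as np

def latin_hypercube(n_models, bounds, rng):
    """Place one parameter sample per stratum in each dimension (standard LHS).
    bounds: list of (low, high) per parameter; names are illustrative."""
    d = len(bounds)
    samples = np.empty((n_models, d))
    for j, (lo, hi) in enumerate(bounds):
        # Stratify [0, 1) into n_models slices, jitter within each, then shuffle
        # per dimension; the model count never grows with the dimension d.
        strata = (np.arange(n_models) + rng.random(n_models)) / n_models
        samples[:, j] = lo + (hi - lo) * rng.permutation(strata)
    return samples

rng = np.random.default_rng(42)
# e.g. 8 parallel filter models over three hypothetical physical parameters
print(latin_hypercube(8, [(0.1, 1.0), (5.0, 50.0), (0.0, 0.2)], rng))
```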

  14. Improvement of temporal and dynamic subtraction images on abdominal CT using 3D global image matching and nonlinear image warping techniques

    International Nuclear Information System (INIS)

    Okumura, E; Sanada, S; Suzuki, M; Takemura, A; Matsui, O

    2007-01-01

Accurate registration of the corresponding non-enhanced and arterial-phase CT images is necessary to create temporal and dynamic subtraction images for the enhancement of subtle abnormalities. However, respiratory movement causes misregistration at the periphery of the liver. To reduce these misregistration errors, we developed a temporal and dynamic subtraction technique to enhance small HCC by 3D global matching and nonlinear image warping techniques. The study population consisted of 21 patients with HCC. Using the 3D global matching and nonlinear image warping technique, we registered current and previous arterial-phase CT images or current non-enhanced and arterial-phase CT images obtained in the same position. The temporal subtraction image was obtained by subtracting the previous arterial-phase CT image from the warped current arterial-phase CT image. The dynamic subtraction image was obtained by the subtraction of the current non-enhanced CT image from the warped current arterial-phase CT image. The percentage of fair or superior temporal subtraction images increased from 52.4% to 95.2% using the new technique, while on the dynamic subtraction images, the percentage increased from 66.6% to 95.2%. The new subtraction technique may facilitate the diagnosis of subtle HCC based on the superior ability of these subtraction images to show nodular and/or ring enhancement.
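
    The core of the pipeline is registration followed by voxelwise subtraction. A deliberately simplified 2D sketch is given below: a brute-force integer shift stands in for the 3D global matching and the nonlinear warping stage is omitted, so only the overall subtraction logic is illustrated (synthetic images, assumed function names).

```python
import numpy as np
from scipy import ndimage

def dynamic_subtraction(non_enhanced, arterial, search=5):
    """Align the arterial image to the non-enhanced one by a brute-force
    integer shift (stand-in for 3D global matching; elastic warping omitted),
    then subtract to expose enhancement."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.mean((np.roll(arterial, (dy, dx), axis=(0, 1)) - non_enhanced) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return np.roll(arterial, best, axis=(0, 1)) - non_enhanced

pre = np.zeros((64, 64)); pre[20:40, 20:40] = 100.0    # synthetic liver
post = ndimage.shift(pre, (2, -3), order=0)            # breathing misregistration
post[28:32, 28:32] += 50.0                             # enhancing nodule
print("peak enhancement:", dynamic_subtraction(pre, post).max())
```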

  15. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.

  16. Sulphur simulations for East Asia using the MATCH model with meteorological data from ECMWF

    Energy Technology Data Exchange (ETDEWEB)

    Engardt, Magnuz

    2000-03-01

As part of a model intercomparison exercise, with participants from a number of Asian, European and American institutes, sulphur transport and conversion calculations were conducted over an East Asian domain for 2 different months in 1993. All participants used the same emission inventory and simulated concentration and deposition at a number of prescribed geographic locations. The participants were asked to run their respective model both with standard parameters and with a set of given parameters, in order to examine the different behaviour of the models. The study included comparison with measured data and model-to-model intercomparisons, notably source-receptor relationships. We hereby describe the MATCH model, used in the study, and report some typical results. We find that although the standard and the prescribed set of model parameters differed significantly in terms of sulphur conversion and wet scavenging rate, the resulting atmospheric concentrations and surface depositions only change marginally. We show that it is often more critical to choose a representative gridbox value than to select a parameter from the suite available. The modelled, near-surface, atmospheric concentration of sulphur in eastern China is typically 5-10 µg S m⁻³, with large areas exceeding 20 µg S m⁻³. In southern Japan the values range from 2-5 µg S m⁻³. Atmospheric SO₂ dominates over sulphate near the emission regions, while sulphate concentrations are higher over e.g. the western Pacific. The sulphur deposition exceeds several g sulphur m⁻² year⁻¹ in large areas of China. Southern Japan receives 0.3-1 g S m⁻² year⁻¹. In January, the total wet deposition roughly equals the dry deposition; in May, when it rains more in the domain, total wet deposition is ca. 50% larger than total dry deposition.

  17. Automated main-chain model building by template matching and iterative fragment extension

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2003-01-01

A method for automated macromolecular main-chain model building is described. An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.

  18. Comparison of self-written waveguide techniques and bulk index matching for low-loss polymer waveguide interconnects

    Science.gov (United States)

    Burrell, Derek; Middlebrook, Christopher

    2016-03-01

Polymer waveguides (PWGs) are used within photonic interconnects as inexpensive and versatile substitutes for traditional optical fibers. The PWGs are typically aligned to silica-based optical fibers for coupling. An epoxide elastomer is then applied and cured at the interface for index matching and rigid attachment. Self-written waveguides (SWWs) are proposed as an alternative to further reduce connection insertion loss (IL) and alleviate marginal misalignment issues. Elastomer material is deposited after the initial alignment, and SWWs are formed by injecting ultraviolet (UV) light into the fiber or waveguide. The coupled UV light cures a channel between the two differing structures. A suitable cladding layer can be applied after development. Factors such as longitudinal gap distance, UV cure time, input power level, polymer material selection and choice of solvent affect the resulting SWWs. Experimental data are compared between purely index-matched samples and those with SWWs at the fiber-PWG interface. Successfully fabricated SWWs reduce overall processing time and enable an effectively continuous low-loss rigid interconnect.

  19. Automated main-chain model building by template matching and iterative fragment extension.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C(alpha) positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 A. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
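
    The FFT-based template matching used to locate helices and strands rests on the correlation theorem. A 1D toy version is sketched below (the real procedure correlates 3D electron density against helix and strand templates); the signal, template and names are illustrative assumptions.

```python
import numpy as np

def fft_template_match(density, template):
    """Cross-correlate a (toy, 1D) density trace with a template via FFT,
    the same trick the abstract applies in 3D. Peaks in the correlation
    mark candidate template placements."""
    n = len(density)
    t = np.zeros(n)
    t[:len(template)] = template - template.mean()
    # Correlation theorem: corr = IFFT( FFT(density) * conj(FFT(template)) )
    return np.fft.irfft(np.fft.rfft(density) * np.conj(np.fft.rfft(t)), n)

rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, 4 * np.pi, 20))   # stand-in helical repeat
density = rng.normal(0, 0.3, 200)
density[120:140] += template                        # buried "helix"
print("template located at", fft_template_match(density, template).argmax())  # ~120
```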

  20. Spatially Explicit Modeling Reveals Cephalopod Distributions Match Contrasting Trophic Pathways in the Western Mediterranean Sea.

    Directory of Open Access Journals (Sweden)

    Patricia Puerta

Full Text Available Populations of the same species can experience different responses to the environment throughout their distributional range as a result of spatial and temporal heterogeneity in habitat conditions. This highlights the importance of understanding the processes governing species distribution at local scales. However, research on species distribution often averages environmental covariates across large geographic areas, missing variability in population-environment interactions within geographically distinct regions. We used spatially explicit models to identify interactions between species and environmental (including chlorophyll a (Chla) and sea surface temperature (SST)) and trophic (prey density) conditions, along with processes governing the distribution of two cephalopods with contrasting life-histories (octopus and squid) across the western Mediterranean Sea. This approach is relevant for cephalopods, since their population dynamics are especially sensitive to variations in habitat conditions and rarely stable in abundance and location. The regional distributions of the two cephalopod species matched two different trophic pathways present in the western Mediterranean Sea, associated with the Gulf of Lion upwelling and the Ebro river discharges, respectively. The effects of the studied environmental and trophic conditions were spatially variant in both species, with usually stronger effects along their distributional boundaries. We identify areas where prey availability limited the abundance of cephalopod populations, as well as contrasting effects of temperature in the warmest regions. Despite distributional patterns matching productive areas, a general negative effect of Chla on cephalopod densities suggests that competition pressure is common in the study area. Additionally, results highlight the importance of trophic interactions, beyond other common environmental factors, in shaping the distribution of cephalopod populations. Our study presents

  1. A knowledge based approach to matching human neurodegenerative disease and animal models

    Directory of Open Access Journals (Sweden)

    Maryann E Martone

    2013-05-01

    Full Text Available Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology and an associated Phenotype Knowledge Base using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework and qualities are drawn from the Phenotype and Trait Ontology. We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal

  2. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  3. Elastic Minutiae Matching by Means of Thin-Plate Spline Models

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2002-01-01

    This paper presents a novel minutiae matching method that deals with elastic distortions by normalizing the shape of the test fingerprint with respect to the template. The method first determines possible matching minutiae pairs by means of comparing local neighborhoods of the minutiae. Next a
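
    A minimal sketch of the thin-plate spline normalization step is given below, assuming matched minutiae pairs are already available. It solves the standard TPS linear system with U(r) = r² log r and applies the warp; it is not the authors' implementation, and all names and coordinates are illustrative.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src -> dst control points
    (e.g. minutiae pairs found by local neighbourhood comparison)."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-12), 0.0)   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)            # spline weights + affine part

def tps_apply(params, src, pts):
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-12), 0.0)
    return U @ params[:len(src)] + np.hstack([np.ones((len(pts), 1)), pts]) @ params[len(src):]

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
dst = src + np.array([[.02, 0], [0, .03], [-.01, 0], [0, 0], [.05, -.02]])  # elastic distortion
params = tps_fit(src, dst)
print(tps_apply(params, src, src))          # control points land on dst (up to rounding)
```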

  4. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    Science.gov (United States)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

Matching soil grid unit resolution with polygon unit map scale is important for minimizing the uncertainty of regional soil organic carbon (SOC) pool simulation, since both strongly influence that uncertainty. A series of soil grid units at varying cell sizes was derived from soil polygon units at the six map scales of 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai lake region of China. Soil units in both formats were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices of surface paddy soils simulated with the DNDC, soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS), were attributed from all these soil polygon and grid units, respectively. Relative to the four index values (IV) of the parent polygon units, the variation of an index value (VIV, %) in the grid units was used to assess dataset accuracy and redundancy, which reflect uncertainty in the simulation of SOC. Optimal soil grid unit resolutions were generated and suggested for the DNDC simulation of the regional SOC pool, matching the soil polygon unit map scales, respectively. With the optimal raster resolution, the soil grid unit dataset can hold the same accuracy as its parent polygon unit dataset without any redundancy, when the VIV indices were used as assessment criteria. A quadratic regression model y = -8.0 × 10-6x2 + 0.228x + 0.211 (R2 = 0.9994, p < 0.05) was revealed, which describes the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). This knowledge may serve for grid partitioning of regions focused on the investigation and simulation of SOC pool dynamics at a given map scale.
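
    The VIV criterion used above is a simple relative deviation. A one-line sketch, with made-up numbers and an assumed sign convention, is shown below.

```python
def viv(iv_grid, iv_polygon):
    """Variation of an index value (VIV, %) of a soil grid dataset relative to
    its parent polygon dataset; signed convention assumed for the sketch."""
    return 100.0 * (iv_grid - iv_polygon) / iv_polygon

# e.g. total SOC stocks simulated on grid vs. polygon units (made-up numbers)
print(f"VIV(SOCS) = {viv(41.8, 42.5):+.1f}%")
```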

  5. Detection, modeling and matching of pleural thickenings from CT data towards an early diagnosis of malignant pleural mesothelioma

    Science.gov (United States)

    Chaisaowong, Kraisorn; Kraus, Thomas

    2014-03-01

Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While an early diagnosis plays the key role in an early treatment, and therefore in helping to reduce morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems to automatically assess pleural mesothelioma have been reported worldwide. In this paper, an image analysis pipeline to automatically detect pleural thickenings and measure their volume is described. We first delineate automatically the pleural contour in the CT images. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step, based on a probabilistic Hounsfield unit model of pleural plaques, then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on volumetry of the 3D model, created by a mesh construction algorithm followed by Laplace-Beltrami eigenfunction expansion surface smoothing. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data is carried out, based on semi-automatic lung registration, towards the assessment of their growth rate. With these methods, a new computer-assisted diagnosis system is presented in order to assure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma in its early stage.

  6. Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis

    Science.gov (United States)

    Jacobson, Seth A.; Marzari, Francesco; Rossi, Alessandro; Scheeres, Daniel J.

    2016-10-01

    From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis is consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. These conclusions rest on the asteroid rotation model of Marzari et al. ([2011]Icarus, 214, 622-631), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis, described in detail within, and the binary evolution model of Jacobson et al. ([2011a] Icarus, 214, 161-178) and Jacobson et al. ([2011b] The Astrophysical Journal Letters, 736, L19). Our complete asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, steady-state mass ratio fraction and the contact binary fraction. We find that in order for the model to best match observations, rotational fission produces high mass ratio (> 0.2) binary components with four to eight times the frequency as low mass ratio (<0.2) components, where the mass ratio is the mass of the secondary component divided by the mass of the primary component. This is consistent with post-rotational fission binary system mass ratio being drawn from either a flat or a positive and shallow distribution, since the high mass ratio bin is four times the size of the low mass ratio bin; this is in contrast to the observed steady-state binary mass ratio, which has a negative and steep distribution. This can be understood in the context of the BYORP-tidal equilibrium hypothesis, which predicts that low mass ratio binaries survive for a significantly

  7. History matching of transient pressure build-up in a simulation model using adjoint method

    Energy Technology Data Exchange (ETDEWEB)

    Ajala, I.; Haekal, Rachmat; Ganzer, L. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany); Almuallim, H. [Firmsoft Technologies, Inc., Calgary, AB (Canada); Schulze-Riegert, R. [SPT Group GmbH, Hamburg (Germany)

    2013-08-01

    The aim of this work is the efficient and computer-assisted history-matching of pressure build-up and pressure derivatives by small modification to reservoir rock properties on a grid by grid level. (orig.)

  8. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  9. Stability Analysis of Positive Polynomial Fuzzy-Model-Based Control Systems with Time Delay under Imperfect Premise Matching

    OpenAIRE

    Li, Xiaomiao; Lam, Hak Keung; Song, Ge; Liu, Fucai

    2017-01-01

This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which are formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own set of premise membership functions. A sum-of-squares (SOS)-based stability analysis...

  10. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

Full Text Available Abstract Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002-2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36) (p < 0.005 for all). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
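
    A hurdle model of the kind used here factors utilization into a logistic "any visit" part and a zero-truncated count part. The sketch below fits such a model by maximum likelihood on simulated claims-like data; all coefficients and names are invented for illustration, not estimates from the WDS/GH data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(7)
n = 2000
diabetes = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), diabetes])

# Simulated utilization: diabetes lowers the odds of any visit (hurdle) and
# shifts the number of services among visitors (made-up coefficients).
visit = rng.random(n) < expit(X @ np.array([0.8, -0.3]))
lam = np.exp(X @ np.array([1.2, 0.2]))
counts = np.zeros(n, dtype=int)
pos = visit.copy()
while pos.any():                        # rejection-sample zero-truncated Poisson
    counts[pos] = rng.poisson(lam[pos])
    pos &= counts == 0

def negloglik(theta):
    p = expit(X @ theta[:2])            # hurdle part: P(any visit)
    l = np.exp(X @ theta[2:])           # count part: service intensity
    zero = counts == 0
    ll = np.log1p(-p[zero]).sum()       # P(y = 0) = 1 - p
    y, lz, pz = counts[~zero], l[~zero], p[~zero]
    ll += (np.log(pz) + y * np.log(lz) - lz - gammaln(y + 1)
           - np.log1p(-np.exp(-lz))).sum()   # zero-truncated Poisson
    return -ll

fit = minimize(negloglik, np.zeros(4), method="BFGS")
print("odds ratio, any visit (diabetes):", round(float(np.exp(fit.x[1])), 2))
print("rate ratio, services | visit:   ", round(float(np.exp(fit.x[3])), 2))
```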

  11. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

This book covers structural modeling by the finite element method in three chapters. Chapter 1, finite element idealization: introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and antisymmetry, modeling guidelines, local analysis, example, references. Chapter 2, static analysis: structural geometry, finite element models, analysis procedure, modeling guidelines, references. Chapter 3, dynamic analysis: models for dynamic analysis, dynamic analysis procedures, modeling guidelines.

  12. Early outcome in renal transplantation from large donors to small and size-matched recipients - a porcine experimental model

    DEFF Research Database (Denmark)

    Ravlo, Kristian; Chhoden, Tashi; Søndergaard, Peter

    2012-01-01

Kidney transplantation from a large donor to a small recipient, as in pediatric transplantation, is associated with an increased risk of thrombosis and DGF. We established a porcine model for renal transplantation from an adult donor to a small or size-matched recipient with a high risk of DGF and studied GFR, RPP using MRI, and markers of kidney injury within 10 h after transplantation. After induction of BD, kidneys were removed from ∼63-kg donors and kept in cold storage for ∼22 h until transplanted into small (∼15 kg, n = 8) or size-matched (n = 8) recipients. A reduction in GFR was observed in small recipients within 60 min after reperfusion. Interestingly, this was associated with a significant reduction in medullary RPP, while there was no significant change in the size-matched recipients. No difference was observed in urinary NGAL excretion between the groups. A significantly higher level...

  13. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

In order to move beyond simplified covariance-based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
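
    The frequency matching idea, comparing pattern frequency distributions (marginals) of a training image and a candidate model, can be sketched compactly. The toy below counts binary 2×2 patterns and measures their mismatch with a total-variation distance; the original method uses a chi-square-type dissimilarity, so this is an illustrative stand-in.

```python
import numpy as np

def pattern_histogram(img, size=2):
    """Frequency distribution of binary size-by-size patterns: the marginal
    statistic compared between training image and subsurface model."""
    h, w = img.shape
    counts = np.zeros(2 ** (size * size))
    weights = 2 ** np.arange(size * size)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            code = int(img[i:i + size, j:j + size].ravel() @ weights)
            counts[code] += 1
    return counts / counts.sum()

def frequency_mismatch(model, training, size=2):
    f, g = pattern_histogram(model, size), pattern_histogram(training, size)
    return 0.5 * np.abs(f - g).sum()    # total-variation distance between marginals

rng = np.random.default_rng(5)
training = (rng.random((60, 60)) < 0.3).astype(int)
candidate = (rng.random((60, 60)) < 0.5).astype(int)
print("mismatch:", frequency_mismatch(candidate, training).round(3))
```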

  14. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying precedence execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  15. Too Much Matching: A Social Relations Model Enhancement of the Pairing Game

    Science.gov (United States)

    Eastwick, Paul W.; Buck, April A.

    2014-01-01

    The Pairing Game is a popular classroom demonstration that illustrates how people select romantic partners who approximate their own desirability. However, this game produces matching correlations that greatly exceed the correlations that characterize actual romantic pairings, perhaps because the game does not incorporate the social relations…

BIOMECHANICAL MODEL OF THE GOLF SWING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2011-08-01

Full Text Available Golf is an extremely complex game which depends on a number of interconnected factors. One of the most important elements is undoubtedly the golf swing technique. High performance of the golf swing technique is generated by: the level of motor abilities, a high degree of movement control, the level of movement structure stabilisation, morphological characteristics, inter- and intra-muscular coordination, motivation, and concentration. The golf swing technique was investigated using the biomechanical analysis method. Kinematic parameters were registered using two synchronised high-speed cameras at a frequency of 2,000 Hz. The sample of subjects consisted of three professional golf players. The study results showed a relatively high variability of the swing technique. The maximum velocity of the ball after a wood swing ranged between 227 and 233 km/h. The velocity of the ball after an iron swing was lower by 10 km/h on average. The elevation angle of the ball ranged from 11.7 to 15.3 degrees. In the final phase of the golf swing, i.e. the downswing, the trunk rotators play the key role.

  17. Optimization of technique factors for a silicon diode array full-field digital mammography system and comparison to screen-film mammography with matched average glandular dose

    International Nuclear Information System (INIS)

    Berns, Eric A.; Hendrick, R. Edward; Cutter, Gary R.

    2003-01-01

    Contrast-detail experiments were performed to optimize technique factors for the detection of low-contrast lesions using a silicon diode array full-field digital mammography (FFDM) system under the conditions of a matched average glandular dose (AGD) for different techniques. Optimization was performed for compressed breast thickness from 2 to 8 cm. FFDM results were compared to screen-film mammography (SFM) at each breast thickness. Four contrast-detail (CD) images were acquired on a SFM unit with optimal techniques at 2, 4, 6, and 8 cm breast thicknesses. The AGD for each breast thickness was calculated based on half-value layer (HVL) and entrance exposure measurements on the SFM unit. A computer algorithm was developed and used to determine FFDM beam current (mAs) that matched AGD between FFDM and SFM at each thickness, while varying target, filter, and peak kilovoltage (kVp) across the full range available for the FFDM unit. CD images were then acquired on FFDM for kVp values from 23-35 for a molybdenum-molybdenum (Mo-Mo), 23-40 for a molybdenum-rhodium (Mo-Rh), and 25-49 for a rhodium-rhodium (Rh-Rh) target-filter under the constraint of matching the AGD from screen-film for each breast thickness (2, 4, 6, and 8 cm). CD images were scored independently for SFM and each FFDM technique by six readers. CD scores were analyzed to assess trends as a function of target-filter and kVp and were compared to SFM at each breast thickness. For 2 cm thick breasts, optimal FFDM CD scores occurred at the lowest possible kVp setting for each target-filter, with significant decreases in FFDM CD scores as kVp was increased under the constraint of matched AGD. For 2 cm breasts, optimal FFDM CD scores were not significantly different from SFM CD scores. For 4-8 cm breasts, optimum FFDM CD scores were superior to SFM CD scores. For 4 cm breasts, FFDM CD scores decreased as kVp increased for each target-filter combination. For 6 cm breasts, CD scores decreased slightly as k

  18. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  19. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

-registration, depth correction, etc.) each geophysical profile was evaluated against the core data. Applying traditional geophysical techniques, the best profiles were inverted using the core data, creating two-dimensional (2-D) stratigraphic regolith models for each transect, which were evaluated using independent validation. Next, in a test of an alternative method borrowed from digital soil mapping, the best preprocessed geophysical profiles were co-registered and stratigraphic models for each property were created using multivariate environmental correlation. After independent validation, the quality of these models was compared with that of the traditionally derived 2-D inverted models. Finally, the best overall stratigraphic models were used in conjunction with local environmental data (e.g. geology, geochemistry, terrain, soils) to create conceptual regolith hillslope models for each transect, highlighting important features and processes, e.g. morphology, hydropedology and weathering characteristics. Results are presented with recommendations regarding the use of geophysics in modelling regolith stratigraphy at fine scales.

  20. Tackle technique and tackle-related injuries in high-level South African Rugby Union under-18 players: real-match video analysis.

    Science.gov (United States)

    Burger, Nicholas; Lambert, Michael I; Viljoen, Wayne; Brown, James C; Readhead, Clint; Hendricks, Sharief

    2016-08-01

The high injury rate associated with rugby union is primarily due to the tackle, and poor contact technique has been identified as a risk factor for injury. We aimed to determine whether the tackle technique proficiency scores were different in injurious tackles versus tackles that did not result in injury using real-match scenarios in high-level youth rugby union. Injury surveillance was conducted at the under-18 Craven Week tournaments (2011-2013). Tackle-related injury information was used to identify injury events in the match video footage and non-injury events were identified for the injured player cohort. Injury and non-injury events were scored for technique proficiency and Cohen's effect sizes were calculated and the Student t test (p<0.05) was performed to compare injury versus non-injury scores. The overall mean score for front-on ball-carrier proficiency was 7.17±1.90 and 9.02±2.15 for injury and non-injury tackle events, respectively (effect size=moderate; p<0.05). The overall mean score for side/behind ball-carrier proficiency was 4.09±2.12 and 7.68±1.72 for injury and non-injury tackle events, respectively (effect size=large; p<0.01). The overall mean score for front-on tackler proficiency was 7.00±1.95 and 9.35±2.56 for injury and non-injury tackle events, respectively (effect size=moderate; p<0.05). The overall mean score for side/behind tackler proficiency was 5.47±1.60 and 8.14±1.75 for injury and non-injury tackle events, respectively (effect size=large; p<0.01). Higher overall mean and criterion-specific tackle-related technique scores were associated with a non-injury outcome. The ability to perform well during tackle events may decrease the risk of injury and may manifest in superior performance.

  1. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Correct state estimation improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS), with the application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
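
    As a flavor of the EKF side of the comparison, the sketch below fuses a noisy gyro rate with an accelerometer-derived angle in a one-state Kalman filter. A real AHRS estimates full 3D attitude plus sensor biases, so this is a toy stand-in with assumed noise parameters and names.

```python
import numpy as np

def kalman_attitude(gyro_rate, accel_angle, dt=0.01, q=1e-4, r=1e-2):
    """Minimal one-state Kalman filter: predict the angle by integrating the
    gyro rate, correct with the accelerometer (gravity-vector) angle."""
    theta, P = 0.0, 1.0
    out = []
    for w, z in zip(gyro_rate, accel_angle):
        theta += w * dt              # predict with the rate measurement
        P += q
        K = P / (P + r)              # Kalman gain for the angle measurement
        theta += K * (z - theta)
        P *= (1 - K)
        out.append(theta)
    return np.array(out)

t = np.arange(0, 5, 0.01)
true = 0.5 * np.sin(0.8 * t)
rng = np.random.default_rng(2)
gyro = np.gradient(true, t) + rng.normal(0, 0.05, t.size)   # noisy rate
accel = true + rng.normal(0, 0.1, t.size)                   # noisy angle
est = kalman_attitude(gyro, accel)
print("RMS error:", np.sqrt(np.mean((est - true) ** 2)).round(4))
```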

  2. Matching achievement contexts with implicit theories to maximize motivation after failure: a congruence model.

    Science.gov (United States)

    El-Alayli, Amani

    2006-12-01

    Previous research has shown that matching person variables with achievement contexts can produce the best motivational outcomes. The current study examines whether this is also true when matching entity and incremental beliefs with the appropriate motivational climate. Participants were led to believe that a personal attribute was fixed (entity belief) or malleable (incremental belief). After thinking that they failed a test that assessed the attribute, participants performed a second (related) task in a context that facilitated the pursuit of either performance or learning goals. Participants were expected to exhibit greater effort on the second task in the congruent conditions (entity belief plus performance goal climate and incremental belief plus learning goal climate) than in the incongruent conditions. These results were obtained, but only for participants who either valued competence on the attribute or had high achievement motivation. Results are discussed in terms of developing strategies for optimizing motivation in achievement settings.

  3. A method for matching the refractive index and kinematic viscosity of a blood analog for flow visualization in hydraulic cardiovascular models.

    Science.gov (United States)

    Nguyen, T T; Biadillah, Y; Mongrain, R; Brunette, J; Tardif, J C; Bertrand, O F

    2004-08-01

In this work, we propose a simple method to simultaneously match the refractive index and kinematic viscosity of a circulating blood analog in hydraulic models for optical flow measurement techniques (PIV, PMFV, LDA, and LIF). The method is based on the determination of the volumetric proportions and temperature at which two transparent miscible liquids should be mixed to reproduce the targeted fluid characteristics. The temperature dependence models are a linear relation for the refractive index and an Arrhenius relation for the dynamic viscosity of each liquid. The dynamic viscosity of the mixture is then represented with a Grunberg-Nissan model of type 1. Experimental tests for acrylic and blood viscosity were found to be in very good agreement with the targeted values (measured refractive index of 1.486 and kinematic viscosity of 3.454 mm²/s against targeted values of 1.47 and 3.300 mm²/s).
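
    The two-equation solve behind the method can be sketched directly: given each liquid's linear n(T) and Arrhenius μ(T) models and a Grunberg-Nissan (type 1) mixing rule, find the volume fraction and temperature matching both targets. Every coefficient below is invented for illustration, the interaction term is set to zero, and volume fractions stand in for mole fractions, so only the structure of the solve is meaningful.

```python
import numpy as np
from scipy.optimize import fsolve

# Two hypothetical miscible liquids; all coefficients are made up.
liquids = [
    # (n0,    dn/dT,   A [Pa s],  B [K],  density [kg/m^3])
    (1.490, -2.0e-4, 2.44e-6, 2000.0, 1100.0),
    (1.452, -2.0e-4, 4.27e-7, 3000.0, 1200.0),
]

def mixture(frac, T_celsius):
    TK = T_celsius + 273.15
    n = mu_log = rho = 0.0
    for phi, (n0, dn, A, B, dens) in zip((frac, 1.0 - frac), liquids):
        n += phi * (n0 + dn * T_celsius)            # linear n(T), volume-weighted
        mu_log += phi * np.log(A * np.exp(B / TK))  # Grunberg-Nissan type 1,
        rho += phi * dens                           # interaction term = 0
    return n, np.exp(mu_log) / rho                  # kinematic visc. = mu / rho

def residuals(v, n_target=1.470, nu_target=3.30e-6):
    n, nu = mixture(*v)
    return [n - n_target, (nu - nu_target) / nu_target]

frac, T = fsolve(residuals, x0=[0.5, 20.0])
print(f"mix {frac:.2f} / {1 - frac:.2f} by volume at {T:.1f} deg C")
```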

  4. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. Rat plays a very important role in the bone field especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45+/-8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were performed by injecting directly into the SD rat femur with a needle for inoculation with SD tumor cells. In the first step of the experiment, 2x10(5) to 1x10(6) UMR106 cells in 50 microl were injected intraosseously into median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsia and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counter stained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8x10(5) tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsia findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. Tumor growth curve showed that orthotopically implanted tumors expanded vigorously with time-lapse, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  5. Dynamic model reduction: An overview of available techniques with application to power systems

    Directory of Open Access Journals (Sweden)

    Đukić Savo D.

    2012-01-01

Full Text Available This paper summarises the model reduction techniques used for the reduction of large-scale linear and nonlinear dynamic models described by the differential and algebraic equations that are commonly used in control theory. The groups of methods discussed in this paper for reduction of linear dynamic models are based on singular perturbation analysis, modal analysis, singular value decomposition, moment matching, and combinations of singular value decomposition and moment matching. Among the nonlinear dynamic model reduction methods, proper orthogonal decomposition, the trajectory piecewise linear method, balancing-based methods, reduction by optimising system matrices, and projection from a linearised model are described. Part of the paper is devoted to the techniques commonly used for reduction (equivalencing) of large-scale power systems, which are based on coherency, synchrony, singular perturbation analysis, modal analysis and identification. Two of the most interesting of the described techniques are applied to the reduction of the commonly used New England 10-generator, 39-bus test power system.
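
    Of the linear techniques surveyed, moment matching is the easiest to sketch. The code below projects a random stable system onto a Krylov subspace with a one-sided Arnoldi iteration, which matches the leading Markov parameters (moments about s = ∞); it is a minimal illustration, not any of the surveyed implementations.

```python
import numpy as np

def moment_matching_reduction(A, B, C, k):
    """Project (A, B, C) onto the order-k Krylov subspace
    span{B, AB, ..., A^(k-1)B}; the reduced model matches the first k
    Markov parameters C A^i B of the full system."""
    n = len(B)
    V = np.zeros((n, k))
    v = B / np.linalg.norm(B)
    for j in range(k):
        V[:, j] = v
        w = A @ v
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # Gram-Schmidt step
        v = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ B, C @ V

rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))     # stable dense test system
B, C = rng.normal(size=n), rng.normal(size=n)
Ar, Br, Cr = moment_matching_reduction(A, B, C, k=8)
# Quick sanity check on the DC gains C (-A)^(-1) B (approximate, since the
# moments are matched at s = infinity rather than s = 0):
print((C @ np.linalg.solve(-A, B)).round(3), (Cr @ np.linalg.solve(-Ar, Br)).round(3))
```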

  6. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    Science.gov (United States)

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.

  7. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  8. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnel models are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail and in the Appendix selected numerical data is given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  9. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability for a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance in terms of a receiver operating characteristic curve showed a high performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment.
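
    The matching step reduces to a search over small shifts for the best normalized cross-correlation, compared against a threshold near 0.80. A 2D sketch on synthetic images is given below; the rotation search of the original method is omitted, and all names and noise levels are assumptions.

```python
import numpy as np

def best_correlation(previous, current, max_shift=4):
    """Search small shifts of the current radiograph against the previous one
    and return the highest normalized cross-correlation."""
    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            best = max(best, ncc(previous, np.roll(current, (dy, dx), axis=(0, 1))))
    return best

rng = np.random.default_rng(9)
prev_img = rng.random((64, 64))
same_patient = np.roll(prev_img, (2, -1), axis=(0, 1)) + 0.05 * rng.random((64, 64))
other_patient = rng.random((64, 64))
THRESHOLD = 0.80   # near the histogram overlap point reported in the abstract
for name, img in [("same", same_patient), ("different", other_patient)]:
    c = best_correlation(prev_img, img)
    print(f"{name}: r = {c:.2f}", "-> warn" if c < THRESHOLD else "-> OK")
```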

  10. Application of Convolution Perfectly Matched Layer in MRTD scattering model for non-spherical aerosol particles and its performance analysis

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Jiang, Zidong; Liu, Lei; Chen, Ming

    2017-10-01

The performance of the absorbing boundary condition (ABC) is an important factor influencing the simulation accuracy of the MRTD (Multi-Resolution Time-Domain) scattering model for non-spherical aerosol particles. To this end, the Convolution Perfectly Matched Layer (CPML), an excellent ABC in the FDTD scheme, is generalized and applied to the MRTD scattering model developed by our team. In this model, the time domain is discretized by an exponential differential scheme, and the discretization of the space domain is implemented by the Galerkin principle. To evaluate the performance of CPML, its simulation results are compared with those of BPML (Berenger's Perfectly Matched Layer) and ADE-PML (Perfectly Matched Layer with Auxiliary Differential Equation) for spherical and non-spherical particles, and their simulation errors are analyzed as well. The simulation results show that, for scattering phase matrices, the performance of CPML is better than that of BPML; the computational accuracy of CPML is comparable to that of ADE-PML on the whole, but at scattering angles where phase matrix elements fluctuate sharply, the performance of CPML is slightly better than that of ADE-PML. After the orientation averaging process, the differences among the results of different ABCs are reduced to some extent. It can also be found that ABCs have a much weaker influence on integral scattering parameters (such as extinction and absorption efficiencies) than on scattering phase matrices; this phenomenon can be explained by the error averaging in the numerical volume integration.

  11. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources.

    Science.gov (United States)

    Sudhir, Dass; Bandyopadhyay, M; Chakraborty, A

    2016-02-01

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly high power operation where the plasma load may vary for different reasons (e.g., pressure and power), online tuning of the impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio-activation, and maintenance issues, power probes are not allowed to be incorporated in the ion source design for plasma characterization. Therefore, for characterization and impedance matching, more remote schemes are envisaged. Two such schemes, based on the air-core transformer model of the inductively coupled plasma (ICP), have been suggested by the same authors [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. However, to account for the influence of the RF field interaction with the plasma in determining its impedance, a physics code HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)] is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.
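
    As a rough illustration of the air-core transformer picture, the plasma can be treated as a lossy one-turn secondary whose impedance is reflected into the RF coil; the matching network must then match the resulting coil impedance. The component values below are invented for the example, and the formula is the textbook transformer reflection, not the coupled HELIC model of the paper.

```python
import numpy as np

# Air-core transformer model of an ICP: plasma = lossy one-turn secondary.
f = 13.56e6                          # RF drive frequency (Hz)
w = 2 * np.pi * f
Lc, Lp, M = 4e-6, 0.8e-6, 0.9e-6     # coil / plasma inductance, mutual (H); assumed
Rp = 2.0                             # plasma resistance (ohm); assumed

Z_reflected = (w * M) ** 2 / (Rp + 1j * w * Lp)   # secondary reflected into primary
Z_coil = 1j * w * Lc + Z_reflected                # what the matching network sees
print(Z_coil)
```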

  12. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out for the past two years, concerning some different ways for achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  13. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  14. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction

    Science.gov (United States)

    Iwaszczuk, Dorota; Stilla, Uwe

    2017-10-01

    Thermal infrared (TIR) imaging is widely used in thermal inspections of buildings, as it pictures damaged and weak spots in the insulation of the building hull. Such inspection of large-scale areas can be carried out by combining TIR imagery with 3D building models. This combination can be achieved via texture mapping. Automation of texture mapping avoids time-consuming imaging and manual analysis of each face independently. It also provides a spatial reference for façade structures extracted from the thermal textures. In order to capture all faces, including roofs, façades, and façades in inner courtyards, an oblique-looking camera mounted on a flying platform is used. Direct geo-referencing is usually not sufficient for precise texture extraction. In addition, 3D building models themselves have uncertain geometry. In this paper, therefore, a methodology for co-registration of uncertain 3D building models with airborne oblique-view images is presented. For this purpose, a line-based model-to-image matching is developed, in which the uncertainties of the 3D building model, as well as of the image features, are considered. Matched linear features are used for the refinement of the exterior orientation parameters of the camera in order to ensure optimal co-registration. Moreover, this study investigates whether line tracking through the image sequence supports the matching. The accuracy of the extraction and the quality of the textures are assessed. For this purpose, appropriate quality measures are developed. The tests showed good results on co-registration, particularly in cases where tracking between neighboring frames had been applied.

  15. Development of a computerized method for identifying the posteroanterior and lateral views of chest radiographs by use of a template matching technique

    International Nuclear Information System (INIS)

    Arimura, Hidetaka; Katsuragawa, Shigehiko; Li Qiang; Ishida, Takayuki; Doi, Kunio

    2002-01-01

    In picture archiving and communications systems (PACS) or digital archiving systems, the information on the posteroanterior (PA) and lateral views for chest radiographs is often not recorded or is recorded incorrectly. However, it is necessary to identify the PA or lateral view correctly and automatically for quantitative analysis of chest images for computer-aided diagnosis. Our purpose in this study was to develop a computerized method for correctly identifying either PA or lateral views of chest radiographs. Our approach is to examine the similarity of a chest image with templates that represent the average chest images of the PA or lateral view for various types of patients. By use of a template matching technique with nine template images for patients of different size in two steps, correlation values were obtained for determining whether a chest image is either a PA or a lateral view. The templates for PA and lateral views were prepared from 447 PA and 200 lateral chest images. For a validation test, this scheme was applied to 1,000 test images consisting of 500 PA and 500 lateral chest radiographs, which are different from training cases. In the first step, 924 (92.4%) of the cases were correctly identified by comparison of the correlation values obtained with the three templates for medium-size patients. In the second step, the correlation values with the six templates for small and large patients were compared, and all of the remaining unidentifiable cases were identified correctly
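
    The two-step decision logic, comparing correlations against the medium-size templates first and falling back to the small/large templates only when needed, can be sketched as follows. This is a hedged illustration: the margin rule, the data layout, and the toy images are assumptions, not details from the paper.

```python
import numpy as np

def correlation(image, template):
    """Normalized cross-correlation between an image and a template."""
    a = (image - image.mean()) / (image.std() + 1e-12)
    b = (template - template.mean()) / (template.std() + 1e-12)
    return float((a * b).mean())

def classify_view(image, medium_templates, small_large_templates, margin=0.05):
    """Two-step PA/lateral identification from average-chest templates.

    Each argument maps 'PA' / 'LAT' to a list of template images
    (step 1: medium-size patients; step 2: small and large patients).
    """
    def scores(templates):
        return {view: max(correlation(image, t) for t in ts)
                for view, ts in templates.items()}
    s = scores(medium_templates)
    if abs(s['PA'] - s['LAT']) < margin:      # step 1 inconclusive
        s = scores(small_large_templates)     # step 2 resolves the rest
    return max(s, key=s.get)

rng = np.random.default_rng(0)
pa_t, lat_t = rng.random((32, 32)), rng.random((32, 32))
img = pa_t + 0.1 * rng.random((32, 32))       # resembles the PA template
step1 = {'PA': [pa_t], 'LAT': [lat_t]}
print(classify_view(img, step1, step1))
```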

  16. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange rates forecasting is, and has been, a challenging task in finance. Statistical and econometrical models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters technique and the Additive Holt-Winters technique, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
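
    As a concrete example of the simplest of these techniques, simple exponential smoothing recursively averages each new observation with the previous smoothed value; the final smoothed value serves as the one-step-ahead forecast. The rates below are invented for illustration, and the smoothing constant alpha = 0.3 is an arbitrary choice.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series; its last value is the next-day forecast."""
    smoothed = [series[0]]                    # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

rates = [4.21, 4.23, 4.22, 4.25, 4.27, 4.26, 4.30]   # toy daily exchange rates
print(simple_exponential_smoothing(rates)[-1])        # one-step-ahead forecast
```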

  17. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  18. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

    Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances, in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.
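
    The flavor of the piecewise linear representation and the feature distance of pattern can be conveyed with a toy one-dimensional sketch: keep the local extreme points, then compare segment slopes. This is only loosely in the spirit of the paper's multidimensional PLR and FDP; the slope-only distance below is a deliberate simplification, and the series are invented.

```python
import numpy as np

def plr_keypoints(series):
    """Piecewise linear representation: keep endpoints and local extrema."""
    s = np.asarray(series, float)
    idx = [0]
    for i in range(1, len(s) - 1):
        if (s[i] - s[i-1]) * (s[i+1] - s[i]) < 0:   # slope changes sign
            idx.append(i)
    idx.append(len(s) - 1)
    return np.array(idx), s[idx]

def feature_distance(s1, s2):
    """Crude pattern distance: mean absolute difference of PLR segment slopes."""
    def slopes(series):
        t, v = plr_keypoints(series)
        return np.diff(v) / np.diff(t)
    a, b = slopes(s1), slopes(s2)
    n = min(len(a), len(b))
    return float(np.abs(a[:n] - b[:n]).mean())

print(feature_distance([0, 1, 3, 2, 4, 6, 5], [0, 1, 3, 2, 5, 7, 6]))
```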

  19. Matching Organs

    Science.gov (United States)


  20. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Matsunobu, Y; Shiotsuki, K [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka (Japan); Morishita, J [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, JP (Japan)

    2015-06-15

    Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater compared to those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body.

  2. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects in an urban area, such as buildings, trees, vegetation, and other man-made features. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city that contains the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images with close-range photogrammetry, DSMs, and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic, and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). The paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models, and closes with conclusions, a short justification and analysis, and present trends in 3D city modeling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques plays a major role in creating a virtual 3D city model; each technique and method has some advantages and some drawbacks. Photo-realistic, scalable, geo-referenced point-cloud models are a modern trend for virtual 3D city models.

  3. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman. Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  4. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  5. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    DEFF Research Database (Denmark)

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan

    2002-01-01

    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales...

  6. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for the treatment of short bowel syndrome. The modification of Bianchi's technique is an alternative; the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report that has studied both techniques, so we carried out the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). Both groups were compared in relation to operating time, difficulties in technique, cost, intestinal lengthening, and anastomosis diameter. There was no statistical difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost, and technique difficulties were lower in group B. Anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost, and technical issues.

  7. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    International Nuclear Information System (INIS)

    Sardanyes, Josep; Sole, Ricard V.

    2008-01-01

    A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities, with a strength of interaction influenced by a pair of haploid diallelic loci, is studied with a continuous-time deterministic model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations.
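
    For readers who want to see the oscillatory regime, the underlying predator-prey skeleton can be integrated in a few lines. The genotype-dependent interaction strengths of the paper are collapsed here into fixed constants, so this sketch reproduces only the classical Lotka-Volterra cycling, not the Red Queen genetics; all rates are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.5, 0.5, 1.0    # growth, predation, conversion, mortality rates

def rhs(t, y):
    prey, pred = y
    return [a * prey - b * prey * pred,     # prey growth minus predation losses
            c * prey * pred - d * pred]     # predator growth minus mortality

sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 1.0], max_step=0.05)
print(sol.y[:, -1])   # densities keep cycling rather than settling to a point
```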

  8. MODEL PEMBELAJARAN MAKE A MATCH DAN PENGARUNHYA TERHADAP HASIL BELAJAR EKONOMI DI SMAN 14 PADANG

    Directory of Open Access Journals (Sweden)

    sri wahyuni

    2016-10-01

    Full Text Available This study aims to examine the effect of the Make a Match learning model on the economics learning outcomes of class X students at SMAN 14 Padang. This is a classroom action research (CAR) study, in which an action is applied within a single class. The action was given to remedy a situation in which student activity during the learning process was observed to be low, leading to poor learning outcomes. The research site was SMAN 14 Padang, Jalan Indarung Karang Putih, and the object of the study was class X1. The research design was a spiral model, in which each cycle consists of the following steps: planning, i.e., the preparations made for carrying out the CAR. Based on the data above, the Make a Match learning model can improve students' economics learning outcomes at SMAN 14 Padang. The following conclusions can be drawn: student learning outcomes improved from cycle one through cycles two and three, and with the application of the Make a Match learning model there is a significant effect of the model on the economics learning outcomes of class X1 students at SMAN 14 Padang. This learning model can therefore be used in subsequent economics instruction.

  9. Alternative Payment Models Should Risk-Adjust for Conversion Total Hip Arthroplasty: A Propensity Score-Matched Study.

    Science.gov (United States)

    McLawhorn, Alexander S; Schairer, William W; Schwarzkopf, Ran; Halsey, David A; Iorio, Richard; Padgett, Douglas E

    2017-12-06

    For Medicare beneficiaries, hospital reimbursement for nonrevision hip arthroplasty is anchored to either diagnosis-related group code 469 or 470. Under alternative payment models, reimbursement for care episodes is not further risk-adjusted. This study's purpose was to compare outcomes of primary total hip arthroplasty (THA) vs conversion THA to explore the rationale for risk adjustment for conversion procedures. All primary and conversion THAs from 2007 to 2014, excluding acute hip fractures and cancer patients, were identified in the National Surgical Quality Improvement Program database. Conversion and primary THA patients were matched 1:1 using propensity scores, based on preoperative covariates. Multivariable logistic regressions evaluated associations between conversion THA and 30-day outcomes. A total of 2018 conversions were matched to 2018 primaries. There were no differences in preoperative covariates. Conversions had longer operative times (148 vs 95 minutes). As reimbursement models shift toward bundled payment paradigms, conversion THA appears to be a procedure for which risk adjustment is appropriate. Copyright © 2017 Elsevier Inc. All rights reserved.
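
    The 1:1 propensity score matching step can be sketched as follows: fit a logistic model of treatment (here, conversion vs. primary THA) on preoperative covariates, then greedily pair each treated case with the nearest unmatched control. This is a minimal sketch with simulated data; the covariates and the greedy without-replacement rule are assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated):
    """1:1 greedy nearest-neighbor matching on the propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in np.flatnonzero(treated == 1):
        if not controls:
            break
        j = min(controls, key=lambda k: abs(ps[k] - ps[i]))  # closest control
        pairs.append((i, j))
        controls.remove(j)                                    # match without replacement
    return pairs

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                        # simulated preoperative covariates
treated = (X[:, 0] + rng.normal(size=400) > 0.5).astype(int)
print(len(propensity_match(X, treated)), "matched pairs")
```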

  10. Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2014-01-01

    Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data...... events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads...... to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re...
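
    A one-dimensional toy version makes the pruning idea concrete: count occurrences of the conditioning data event in the training image, and whenever the event has zero occurrences, drop the farthest conditioning datum and count again. This is a sketch of the pruned-mixture behavior only, not the snesim implementation; the training image and fallback probability are invented.

```python
def mps_prob(ti, event, offsets):
    """P(next value = 1 | conditioning event) counted from a 1-D training image.

    event[k] is the required value at distance offsets[k] behind the
    current cell; the farthest datum is pruned on zero-count events.
    """
    event, offsets = list(event), list(offsets)
    while True:
        hits = ones = 0
        for i in range(max(offsets, default=0), len(ti)):
            if all(ti[i - off] == v for off, v in zip(offsets, event)):
                hits += 1
                ones += ti[i]
        if hits > 0 or not event:
            return ones / hits if hits else 0.5   # uninformed fallback
        event.pop()                                # prune the farthest datum...
        offsets.pop()                              # ...and retry with less data

ti = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
print(mps_prob(ti, event=[1, 1], offsets=[1, 2]))  # P(x_i=1 | x_{i-1}=1, x_{i-2}=1)
```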

  11. PENGARUH MODEL PEMBELAJARAN KOOPERATIF MAKE A MATCH BERBANTUAN SLIDE SHARE TERHADAP HASIL BELAJAR KOGNITIF IPS DAN KETERAMPILAN SOSIAL

    Directory of Open Access Journals (Sweden)

    Udin Cahya Ari Prastya

    2016-08-01

    Full Text Available This research is conducted due to the problems faced by the fifth graders of Ampelgading 01 Public Elementary School. They find difficulties in understanding social science subject, indicated by students’ learning outcomes. Only 5% students of class pass the Minimum Passing Criteria of 70. Teacher-centered learning decreases the interaction between teachers and students and students with students, which related to the development of social skills such. Therefore, interactive learning model is needed to build good classroom atmosphere and improve students’ interactions. One model of interactive learning is a Make a Match.This research used quantitative and quasi-experiment methods, Quasi-experimental design used is nonequivalent control group design, using independent t-test assisted with SPSS 16 software for data analysis.The research result presents following the treatment in experimental class using cooperative teaching model ‘Make a Match’ using slide share, average grade of posttest obtained from control group is 66,15 while the experimental class gained the average of 75,18; control class obtained social skills scores with the average of 45 and 61 for experimental class. t test result indicates the cognitive learning measured from gain score of pretest and posttest have significant value of 0.000 and social skills shows significant value of 0.000. It is known that 0.000> 0.05, indicates that is related to the effect of cooperative teaching model ‘Make a Match’ using slide share to the social science cognitive and social skill. Pelaksanaan penelitian ini dikarenakan adanya masalah yang dihadapi oleh siswa kelas V di SDN Ampelgading 01. Meraka merasa kesulitas dalam memahami materi mata pelajaran IPS, hal ini dibuktikan dengan nilai hasil belajar siswa yang mendapatkan nilai di atas KKM dengan nilai KKM 70 hanya 5% dari jumlah total keseluruan siswa. Pembelajaran guru yang bersifat aksi menimbulkan tidak adanya interaksi antara

  12. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  13. Addressing diverse learner preferences and intelligences with emerging technologies: Matching models to online opportunities

    Directory of Open Access Journals (Sweden)

    Ke Zhang

    2009-03-01

    Full Text Available This paper critically reviews various learning preferences and human intelligence theories and models, with a particular focus on the implications for online learning. It highlights a few key models (Gardner's multiple intelligences, Fleming and Mills' VARK model, Honey and Mumford's Learning Styles, and Kolb's Experiential Learning Model) and attempts to link them to trends and opportunities in online learning with emerging technologies. By intersecting such models with online technologies, it offers instructors and instructional designers across educational sectors and situations new ways to think about addressing diverse learner needs, backgrounds, and expectations. Learning technologies are important for effective teaching, as are theories and models of learning. We argue that even greater power can be derived from connections between the theories, models, and learning technologies.

  14. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  15. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
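
    Of the techniques listed, spatial interpolation is the easiest to illustrate. Below is a minimal inverse-distance-weighted (IDW) interpolator, one common choice for turning scattered borehole picks of a geological interface into a gridded surface; the sample coordinates and elevations are invented, and the paper does not prescribe IDW specifically.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted interpolation at a single query location."""
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):                 # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Borehole elevations (m) of a geological interface - hypothetical data.
xy = [(0, 0), (10, 0), (0, 10), (10, 10)]
z = [100.0, 102.0, 98.0, 101.0]
print(idw(xy, z, np.array([5.0, 5.0])))   # interpolated elevation at the center
```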

  16. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, developed in MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil (~6000 km²), with wavy relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25°W, 15.45°S and 47.33°W, 16.06°S. The numerical example for the study area shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was computed by the geometrical leveling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).

  17. Do projections from bioclimatic envelope models and climate change metrics match?

    DEFF Research Database (Denmark)

    Garcia, Raquel A.; Cabeza, Mar; Altwegg, Res

    2016-01-01

    ...as indicators of the exposure of species to climate change. Here, we investigate whether these two approaches provide qualitatively similar indications about where biodiversity is potentially most exposed to climate change. Location: Sub-Saharan Africa. Methods: We compared a range of climate change metrics... for sub-Saharan Africa with ensembles of bioclimatic envelope models for 2723 species of amphibians, snakes, mammals and birds. For each taxonomic group, we performed three comparisons between the two approaches: (1) is projected change in local climatic suitability (models) greater in grid cells... between the two approaches was found for all taxonomic groups, although it was stronger for species with a narrower climatic envelope breadth. Main conclusions: For sub-Saharan African vertebrates, projected patterns of exposure to climate change given by climate change metrics alone were qualitatively...

  18. The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies

    DEFF Research Database (Denmark)

    Scheike, Thomas; Martinussen, T; Zhang, MJ

    2011-01-01

    ...leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach where one would use the expectation-maximization (EM) algorithm cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator whose large sample properties can be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.

  19. 3D Model of Al Zubarah Fortress in Qatar - Terrestrial Laser Scanning vs. Dense Image Matching

    Science.gov (United States)

    Kersten, T.; Mechelke, K.; Maziull, L.

    2015-02-01

    In September 2011 the fortress Al Zubarah, built in 1938 as a typical Arabic fortress and restored in 1987 as a museum, was recorded by the HafenCity University Hamburg using terrestrial laser scanning with the IMAGER 5006h and digital photogrammetry for the Qatar Museum Authority within the framework of the Qatar Islamic Archaeology and Heritage Project. One goal of the object recording was to provide detailed 2D/3D documentation of the fortress. This was used to complete specific detailed restoration work in the recent years. From the registered laser scanning point clouds several cuttings and 2D plans were generated as well as a 3D surface model by triangle meshing. Additionally, point clouds and surface models were automatically generated from digital imagery from a Nikon D70 using the open-source software Bundler/PMVS2, free software VisualSFM, Autodesk Web Service 123D Catch beta, and low-cost software Agisoft PhotoScan. These outputs were compared with the results from terrestrial laser scanning. The point clouds and surface models derived from imagery could not achieve the same quality of geometrical accuracy as laser scanning (i.e. 1-2 cm).

  20. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  1. When growth and photosynthesis don't match: implications for carbon balance models

    Science.gov (United States)

    Medlyn, B.; Mahmud, K.; Duursma, R.; Pfautsch, S.; Campany, C.

    2017-12-01

    Most models of terrestrial plant growth are based on the principle of carbon balance: that growth can be predicted from net uptake of carbon via photosynthesis. A key criticism leveled at these models by plant physiologists is that there are many circumstances in which plant growth appears to be independent of photosynthesis: for example, during the onset of drought, or with rising atmospheric CO2 concentration. A crucial problem for terrestrial carbon cycle models is to develop better representations of plant carbon balance when there is a mismatch between growth and photosynthesis. Here we present two studies providing insight into this mismatch. In the first, effects of root restriction on plant growth were examined by comparing Eucalyptus tereticornis seedlings growing in containers of varying sizes with freely-rooted seedlings. Root restriction caused a reduction in photosynthesis, but this reduction was insufficient to explain the even larger reduction observed in growth. We applied data assimilation to a simple carbon balance model to quantify the response of carbon balance as a whole in this experiment. We inferred that, in addition to photosynthesis, there are significant effects of root restriction on growth respiration, carbon allocation, and carbohydrate utilization. The second study was carried out at the EucFACE Free-Air CO2 Enrichment experiment. At this experiment, photosynthesis of the overstorey trees is increased with enriched CO2, but there is no significant effect on above-ground productivity. These mature trees have reached their maximum height but are at significant risk of canopy loss through disturbance, and we hypothesized that additional carbon taken up through photosynthesis is preferentially allocated to storage rather than growth. We tested this hypothesis by measuring stemwood non-structural carbohydrates (NSC) during a psyllid outbreak that completely defoliated the canopy in 2015. There was a significant drawdown of NSC during

  2. Model and Simulation of a Tunable Birefringent Fiber Using Capillaries Filled with Liquid Ethanol for Magnetic Quasiphase Matching In-Fiber Isolator

    Directory of Open Access Journals (Sweden)

    Clint Zeringue

    2010-01-01

    Full Text Available A technique to tune a magnetic quasi-phase matching in-fiber isolator through the application of stress induced by two mutually orthogonal capillary tubes filled with liquid ethanol is investigated numerically. The results show that it is possible to “tune” the birefringence in these fibers over a limited range depending on the temperature at which the ethanol is loaded into the capillaries. Over this tuning range, the thermal sensitivity of the birefringence is an order-of-magnitude lower than conventional fibers, making this technique well suited for magnetic quasi-phase matching.

  3. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques are used to analyze individual characteristics or organizational effectiveness, like SWOT analysis, SWOC analysis, PEST analysis, etc. These techniques provide an easy and systematic way of identifying the various issues affecting a system and provide an opportunity for further development. Whereas these provide a broad-based assessment of individual institutions and systems, they suffer limitations when applied to a business context. The success of any business model depends on ...

  4. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
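
    The surrogate-data idea can be shown in miniature: let a stand-in "simulation program" generate the utility bill from a known true input, calibrate against that bill, and then score the calibration on the three figures of merit. Everything below, the one-parameter building model and the 15% retrofit effect, is invented solely to illustrate the test harness, not taken from the paper.

```python
def building_model(infiltration, retrofit=False):
    """Stand-in energy model: annual kWh from one uncertain parameter."""
    base = 12000 + 8000 * infiltration
    return base * (0.85 if retrofit else 1.0)   # assumed 15% retrofit savings

true_inf = 0.6
surrogate_bill = building_model(true_inf)        # synthetic "utility data"

# "Calibrate" by brute-force search over the uncertain parameter.
candidates = [i / 100 for i in range(101)]
cal_inf = min(candidates, key=lambda v: abs(building_model(v) - surrogate_bill))

# Figures of merit: savings accuracy, parameter closure, goodness of fit.
true_savings = surrogate_bill - building_model(true_inf, retrofit=True)
pred_savings = building_model(cal_inf) - building_model(cal_inf, retrofit=True)
fit_error = abs(building_model(cal_inf) - surrogate_bill)
print(cal_inf, round(true_savings, 1), round(pred_savings, 1), fit_error)
```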

  5. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to the development of transient capabilities throughout the model, increasing the fidelity of the physics model, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. The lessening of computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  6. A Two-Phase Model for Trade Matching and Price Setting in Double Auction Water Markets

    Science.gov (United States)

    Xu, Tingting; Zheng, Hang; Zhao, Jianshi; Liu, Yicheng; Tang, Pingzhong; Yang, Y. C. Ethan; Wang, Zhongjing

    2018-04-01

    Delivery in water markets is generally operated by agencies through channel systems, which imposes physical and institutional market constraints. Many water markets allow water users to post selling and buying requests on a board. However, water users may not be able to choose efficiently when the information (including the constraints) becomes complex. This study proposes an innovative two-phase model to address this problem based on practical experience in China. The first phase seeks and determines the optimal assignment that maximizes the incremental improvement of the system's social welfare according to the bids and asks in the water market. The second phase sets appropriate prices under constraints. Applying this model to China's Xiying Irrigation District shows that it can improve social welfare more than the current "pool exchange" method can. Within the second phase, we evaluate three objective functions (minimum variance, threshold-based balance, and two-sided balance), which represent different managerial goals. The threshold-based balance function should be preferred by most users, while the two-sided balance should be preferred by players who post extreme prices.
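
    The first (trade matching) phase reduces, in its simplest unconstrained form, to pairing the highest remaining bid with the lowest remaining ask while the trade still creates surplus. The sketch below ignores the channel-capacity and institutional constraints that the paper's model handles, and the posted prices are invented.

```python
def match_trades(bids, asks):
    """Welfare-maximizing assignment for unit trades (no network constraints)."""
    bids = sorted(bids, reverse=True)     # buyers' willingness to pay
    asks = sorted(asks)                   # sellers' reservation prices
    pairs, welfare = [], 0.0
    for b, a in zip(bids, asks):
        if b < a:                         # no further welfare-improving trade
            break
        pairs.append((b, a))
        welfare += b - a                  # surplus created by this trade
    return pairs, welfare

bids = [9.0, 7.5, 6.0, 4.0]               # posted buying requests
asks = [3.0, 5.0, 6.5, 8.0]               # posted selling requests
print(match_trades(bids, asks))            # two trades; prices set in phase two
```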

  7. Producer-decomposer matching in a simple model ecosystem: A network coevolutionary approach to ecosystem organization

    International Nuclear Information System (INIS)

    Higashi, Masahiko; Yamamura, Norio; Nakajima, Hisao; Abe, Takuya

    1993-01-01

    The present note is concerned with how the ecosystem maintains its energy and matter processes, and how those processes change throughout ecological and geological time; or how the constituent biota of an ecosystem maintain their life, and how ecological (species) succession and biological evolution proceed within an ecosystem. To advance further Tansky's (1976) approach to ecosystem organization, which investigated the characteristic properties of the developmental process of a model ecosystem by applying Margalef's (1968) maximum maturity principle to derive its long-term change, we seek a course for deriving the macroscopic trends along the organization process of an ecosystem as a consequence of the interactions among its biotic components and their modification of ecological traits. Using a simple ecosystem model consisting of four aggregated components ("compartments") connected by nutrient flows, we investigate how a change in the value of a parameter alters the network pattern of flows and stocks, even causing a change in the value of another parameter, which in turn brings about further change in the network pattern and values of some (possibly original) parameters. The continuation of this chain reaction involving feedbacks constitutes a possible mechanism for the "coevolution" or "matching" among flows, stocks, and parameters.

  8. Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis

    Directory of Open Access Journals (Sweden)

    Benn eMacdonald

    2015-11-01

    Full Text Available Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs, is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants, and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
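
    A self-contained sketch of gradient matching for the one-parameter ODE dx/dt = -θx: smooth noisy observations with a spline, then pick θ so that the ODE right-hand side best matches the spline's tangent slopes, with the ODE never solved explicitly. The data, the smoothing factor, and the scalar optimizer are choices made for the example, not the benchmarks of the review.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 40)
x_obs = np.exp(-0.7 * t) + 0.02 * rng.normal(size=t.size)   # true theta = 0.7

spline = UnivariateSpline(t, x_obs, s=0.01)    # step 1: interpolate the data
slopes = spline.derivative()(t)                # tangent slopes of the interpolant

def mismatch(theta):
    """Distance between interpolant slopes and the ODE right-hand side."""
    return float(np.sum((slopes + theta * spline(t)) ** 2))

theta_hat = minimize_scalar(mismatch, bounds=(0.01, 5.0), method='bounded').x
print(round(theta_hat, 3))                     # close to 0.7, no ODE integration
```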

  9. Decision Support Model for User Submission Approval Energy Partners Candidate Using Profile Matching Method and Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Moedjiono Moedjiono

    2016-11-01

    Full Text Available In the field of services, customer satisfaction is a very important factor that determines the success of an enterprise. In the field of outsourcing, the indicator of customer satisfaction is that the required labor is delivered in a timely manner and has a level of quality in accordance with the terms proposed by the customer. To provide the best talent to customers, the recruitment and selection team must perform a series of tests with a variety of methods to match the job criteria given by the user with the criteria of the candidates, in order to support an increase in the candidate pass rate at the user-approval stage. For this purpose, the authors conducted a study using observation, interviews, and document reviews of the candidate recruitment process, so as to provide recommendations of candidates of the highest quality for delivery to the user at the approval stage. The authors put forward a decision support model supported by the profile matching method and the Analytical Hierarchy Process (AHP) for problem solving. The final results of this study can be used to support decisions that improve the effectiveness of delivering quality candidates, increase customer satisfaction, lower costs, and improve the gross operational margin of the company.
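
    One common formulation of profile matching scores each candidate by the gap between their factor values and the job profile, converts gaps to weights via a lookup table, and blends core and secondary factors. The gap table, the 60/40 blend, and the factor names below are illustrative assumptions, not the paper's exact configuration.

```python
# Gap-to-weight table often used in profile matching (assumed values).
GAP_SCORE = {0: 5.0, 1: 4.5, -1: 4.0, 2: 3.5, -2: 3.0, 3: 2.5, -3: 2.0}

def profile_match(candidate, target, core, w_core=0.6):
    """Score a candidate against a job profile via gap-weighted factors."""
    scores = {k: GAP_SCORE[candidate[k] - target[k]] for k in target}
    core_avg = sum(scores[k] for k in core) / len(core)
    secondary = [k for k in target if k not in core]
    sec_avg = sum(scores[k] for k in secondary) / len(secondary)
    return w_core * core_avg + (1 - w_core) * sec_avg

target = {"skill": 4, "attitude": 3, "experience": 3}   # hypothetical job profile
cand = {"skill": 3, "attitude": 4, "experience": 3}
print(profile_match(cand, target, core=["skill"]))       # higher = better fit
```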

  10. Three-dimensional analysis of accuracy of patient-matched instrumentation in total knee arthroplasty: Evaluation of intraoperative techniques and postoperative alignment.

    Science.gov (United States)

    Kuwashima, Umito; Mizu-Uchi, Hideki; Okazaki, Ken; Hamai, Satoshi; Akasaki, Yukio; Murakami, Koji; Nakashima, Yasuharu

    2017-11-01

    The accuracy of patient-matched instrumentation (PMI) remains controversial, even though many surgeons follow the manufacturers' recommendations. The purpose of this study was to evaluate the accuracy of intraoperative procedures and the postoperative alignment of the femoral side using PMI with three-dimensional (3D) analysis. Eighteen knees that underwent total knee arthroplasty using MRI-based PMI were assessed. Intraoperative alignment and bone resection errors of the femoral side were evaluated with a CT-based navigation system. A conventional adjustable guide was used to compare cartilage data with that derived by PMI intraoperatively. Postoperative alignment was assessed using a 3D coordinate system with computer-assisted design software. We also measured the postoperative alignments obtained using conventional alignment guides with the 3D evaluation. Intraoperative coronal alignment with PMI was 90.9° ± 1.6°. Seventeen knees (94.4%) were within 3° of the optimal alignment. Intraoperative rotational alignment of the femoral guide position of PMI was 0.2° ± 1.6° compared with the adjustable guide, with 17 knees (94.4%) differing by 3° or less between the two methods. Maximum differences in coronal and rotational alignment before and after bone cutting were 2.0° and 2.8°, respectively. Postoperative coronal and rotational alignments were 89.4° ± 1.8° and -1.1° ± 1.3°, respectively. In both alignments, 94.4% of cases were within 3° of the optimal value. The PMI group had fewer outliers than the conventional group in rotational alignment (p = 0.018). Our 3D analysis provided evidence that the PMI system resulted in reasonably satisfactory alignments both intraoperatively and postoperatively. Surgeons should be aware that certain surgical techniques, including bone cutting, and the associated errors may affect postoperative alignment despite accurate PMI positioning. Copyright © 2017 The Japanese Orthopaedic Association.

  11. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    Science.gov (United States)

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, is an important measure to determine the quantity of food that is produced, where it is produced, and when it is produced (e.g., seasonality). Furthermore, croplands are known as water guzzlers, consuming anywhere between 70% and 90% of all human water use globally. Given these facts and the increase of the global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping, year after year and/or season after season, is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year after year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at the continental level and (b) a rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands and rich history of agriculture, and yet nonexistent routinely generated yearly cropland products using multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014), produced using QSMT, was used as a knowledge base to train and develop the ACCA algorithm, which was then applied to the MODIS time-series data for the years 2000-2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) versus RCP2014 provided an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer's accuracies varying...
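
    One simple quantitative spectral matching measure is the correlation between a pixel's NDVI time series and an ideal class signature, with the pixel labeled by its best-matching class. The signatures and pixel values below are hypothetical, and the real study works with 16 years of 16-day MODIS composites and a richer set of similarity measures.

```python
import numpy as np

def spectral_correlation(pixel_ts, class_ts):
    """Correlation similarity between a pixel NDVI series and a class signature."""
    a = pixel_ts - pixel_ts.mean()
    b = class_ts - class_ts.mean()
    return float((a @ b) / (np.sqrt((a @ a) * (b @ b)) + 1e-12))

classes = {                                   # hypothetical seasonal signatures
    "irrigated double crop": np.array([.30, .50, .70, .60, .40, .60, .70, .50]),
    "rainfed single crop":   np.array([.20, .30, .50, .60, .50, .30, .20, .20]),
    "cropland fallow":       np.array([.20, .20, .25, .20, .20, .20, .20, .20]),
}
pixel = np.array([.28, .45, .68, .62, .42, .55, .68, .50])
print(max(classes, key=lambda c: spectral_correlation(pixel, classes[c])))
```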

  12. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. We demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and we show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  13. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures ... a lower bound of t_u·t_q = Ω(lg^{d-1} n). For ball range searching, we get a lower bound of t_u·t_q = Ω(n^{1-1/d}). The highest previous lower bound proved in the group model does not exceed Ω((lg n/lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds...

  14. Frequency domain finite-element and spectral-element acoustic wave modeling using absorbing boundaries and perfectly matched layer

    Science.gov (United States)

    Rahimi Dalkhani, Amin; Javaherian, Abdolrahim; Mahdavi Basir, Hadi

    2018-04-01

    Wave propagation modeling, a vital tool in seismology, can be done via several different numerical methods, among them the finite-difference, finite-element, and spectral-element methods (FDM, FEM and SEM). Some advanced applications in seismic exploration benefit from frequency-domain modeling. Given the flexibility required for complex geological models and for handling the free-surface boundary condition, we studied the frequency-domain acoustic wave equation using FEM and SEM. The results demonstrated that frequency-domain FEM and SEM achieve good accuracy and numerical efficiency with second-order interpolation polynomials. Furthermore, we developed the second-order Clayton and Engquist absorbing boundary condition (CE-ABC2) and compared it with the perfectly matched layer (PML) for frequency-domain FEM and SEM. Unlike the PML method, CE-ABC2 does not add any computational cost to the modeling beyond assembling the boundary matrices. As a result, CE-ABC2 is more efficient than PML for frequency-domain acoustic wave propagation modeling, especially when the computational cost is high and high-level absorbing performance is unnecessary.
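
    The practical difference between the two boundary treatments is easy to see in a one-dimensional sketch: a first-order absorbing condition only adds an impedance term to two boundary entries of the assembled Helmholtz system, whereas a PML would require padding the domain with extra absorbing elements. The minimal FEM sketch below uses the first-order condition in 1D (the paper's CE-ABC2 is its second-order 2D counterpart) and illustrative values throughout:

        import numpy as np

        # 1D frequency-domain acoustic (Helmholtz) FEM: (K - k^2 M) u = f,
        # with a first-order absorbing (impedance) term -i*k at each end.
        c, freq = 2000.0, 25.0               # velocity [m/s], frequency [Hz] (assumed)
        k = 2 * np.pi * freq / c             # wavenumber
        L, n_el = 2000.0, 400                # domain length, number of elements
        h, n_nd = L / n_el, n_el + 1

        K = np.zeros((n_nd, n_nd), dtype=complex)
        M = np.zeros((n_nd, n_nd), dtype=complex)
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h    # element stiffness
        me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent element mass
        for e in range(n_el):                             # assembly
            idx = np.ix_([e, e + 1], [e, e + 1])
            K[idx] += ke
            M[idx] += me

        A = K - k**2 * M
        A[0, 0] += -1j * k                   # absorbing boundary terms: only two
        A[-1, -1] += -1j * k                 # diagonal entries, no PML padding

        f = np.zeros(n_nd, dtype=complex)
        f[n_nd // 2] = 1.0                   # point source at the domain center
        u = np.linalg.solve(A, f)            # complex-valued pressure field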

  15. Fast group matching for MR fingerprinting reconstruction.

    Science.gov (United States)

    Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L

    2015-08-01

    MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1 , T2 , proton density, and B0 , the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1000 MRF samples, and image matrix of 128 × 128, GRM was able to map MR parameters within 2s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources. © 2014 Wiley Periodicals, Inc.
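
    The two-stage structure of the algorithm, a cheap group signature test that discards most of the dictionary followed by a finer search within the surviving groups, can be sketched as follows. This sketch uses k-means clustering and plain inner products in place of the paper's specific grouping and group-PCA evaluation; shapes and parameters are hypothetical:

        import numpy as np
        from sklearn.cluster import KMeans

        def group_match(D, x, n_groups=64, keep=4):
            """Two-stage dictionary match: prune groups by centroid correlation,
            then search only the surviving groups. D: (n_atoms, T), x: (T,)."""
            Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
            labels = KMeans(n_clusters=n_groups, n_init=4,
                            random_state=0).fit_predict(Dn)
            cents = np.stack([Dn[labels == g].mean(axis=0) for g in range(n_groups)])
            cents /= np.linalg.norm(cents, axis=1, keepdims=True)

            xn = x / np.linalg.norm(x)
            best_groups = np.argsort(cents @ xn)[-keep:]         # cheap signature test
            cand = np.flatnonzero(np.isin(labels, best_groups))  # surviving atoms
            scores = Dn[cand] @ xn                               # full inner products
            return cand[np.argmax(scores)]                       # best dictionary index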

  16. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods.

  17. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
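
    The linear skeleton of the algorithm, greedily adding the basis function most correlated with the residual and then re-solving on the augmented support with Tikhonov regularization, looks as follows; the actual NOMP replaces the exact correlations and least-squares solves with ISEM-based stochastic gradient approximations for a nonlinear forward model:

        import numpy as np

        def omp_tikhonov(A, y, n_iter=10, lam=1e-3):
            """Greedy matching pursuit: at each iteration add the basis function
            most correlated with the residual, then re-solve on the support with
            Tikhonov regularization. A: (m, n) dictionary, y: (m,) data."""
            support, x = [], np.zeros(A.shape[1])
            r = y.copy()
            for _ in range(n_iter):
                j = int(np.argmax(np.abs(A.T @ r)))       # most correlated atom
                if j not in support:
                    support.append(j)
                As = A[:, support]
                xs = np.linalg.solve(As.T @ As + lam * np.eye(len(support)),
                                     As.T @ y)            # regularized solve
                r = y - As @ xs                           # update the residual
            x[support] = xs
            return x, support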

  18. Using crosswell data to enhance history matching

    KAUST Repository

    Ravanelli, Fabio M.

    2014-01-01

    One of the most challenging tasks in the oil industry is the production of reliable reservoir forecast models. Due to different sources of uncertainties in the numerical models and inputs, reservoir simulations are often only crude approximations of reality. This problem is mitigated by conditioning the model with data through data assimilation, a process known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use of time-lapse data and advanced data assimilation techniques. One of the most promising data assimilation techniques employed in the industry is the ensemble Kalman filter (EnKF) because of its ability to deal with non-linear models at reasonable computational cost. In this paper we study the use of crosswell seismic data as an alternative to 4D seismic surveys in areas where it is not possible to re-shoot seismic. A synthetic reservoir model is used in a history matching study designed to better estimate porosity and permeability distributions and improve the quality of the model for predicting future field performance. This study is divided into three parts: first, the use of production data only is evaluated (the baseline for benchmarking); second, the benefits of using production and 4D seismic data are assessed; finally, a new conceptual idea is proposed to obtain time-lapse information for history matching. The use of crosswell time-lapse seismic tomography to map velocities in the interwell region is demonstrated as a potential tool to ensure survey reproducibility and low acquisition cost when compared with full-scale surface surveys. Our numerical simulations show that the proposed method provides promising history matching results, leading to estimation error reductions similar to those obtained with conventionally history-matched surface seismic data.
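
    For reference, the EnKF analysis step that conditions the ensemble on production, surface seismic or crosswell data is compact. A minimal stochastic-EnKF sketch with perturbed observations, under assumed array shapes and a diagonal observation-error covariance:

        import numpy as np

        def enkf_update(X, d_obs, obs_op, R_std, rng):
            """Stochastic EnKF analysis step with perturbed observations.
            X: (n_state, n_ens) forecast ensemble; d_obs: (n_obs,) measured data;
            obs_op maps the ensemble to predicted data of shape (n_obs, n_ens)."""
            n_obs, n_ens = d_obs.size, X.shape[1]
            D = obs_op(X)                                  # predicted data
            Xp = X - X.mean(axis=1, keepdims=True)         # state anomalies
            Dp = D - D.mean(axis=1, keepdims=True)         # data anomalies
            C_xd = Xp @ Dp.T / (n_ens - 1)
            C_dd = Dp @ Dp.T / (n_ens - 1)
            Rm = R_std**2 * np.eye(n_obs)                  # observation-error cov.
            K = C_xd @ np.linalg.inv(C_dd + Rm)            # Kalman gain
            pert = rng.normal(0.0, R_std, size=(n_obs, n_ens))
            return X + K @ (d_obs[:, None] + pert - D)     # updated ensemble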

  19. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed separately, rather than evaluating a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  20. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, derived using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  1. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and that the model can easily be applied to both manufacturing and service industries.
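
    A toy version of such a selection model can be written as a binary program in a few lines. The technique names, gains, costs and budget below are hypothetical placeholders for the paper's fifty-four techniques and case-study data; the sketch uses the PuLP modeling library:

        from pulp import LpProblem, LpMaximize, LpVariable, lpSum

        # Pick a combination of improvement techniques maximizing total
        # productivity gain under a budget (illustrative numbers only).
        gains = {"training": 4.0, "automation": 9.0, "layout": 5.0, "maintenance": 3.0}
        costs = {"training": 2.0, "automation": 7.0, "layout": 4.0, "maintenance": 1.5}
        budget = 9.0

        x = LpVariable.dicts("select", gains, cat="Binary")
        prob = LpProblem("technique_selection", LpMaximize)
        prob += lpSum(gains[t] * x[t] for t in gains)            # linear productivity
        prob += lpSum(costs[t] * x[t] for t in gains) <= budget  # budget constraint
        prob.solve()
        chosen = [t for t in gains if x[t].value() > 0.5]        # selected techniques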

  2. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model by an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of unilateral segments of the carotid arteries were damaged with a home-made corneous guiding wire. Twenty-four carotid artery stenosis models were thus created. DSA examination was performed on postprocedural weeks 2, 4, 8 and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with the endovascular method, with variation of pathologic characteristics and hemodynamic changes similar to those in humans. The model is useful for further research involving new techniques and new materials for interventional treatment. (authors)

  3. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...

  4. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  5. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  6. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  7. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for this finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  8. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)

    1994-12-31

    The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co{sup III} porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  9. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    This study shows that chemometric techniques and modelling are an excellent tool for API assessment, air pollution source identification and apportionment, and can inform the design of an API monitoring network for effective air pollution resources management. Keywords: air pollutant index; chemometric; ANN; ...

  10. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analysis by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run in much less time (microseconds instead of hours/days).
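
    The surrogate idea can be illustrated in miniature: fit an emulator on a small number of expensive simulator runs, then query it at many points almost for free. A sketch with a Gaussian-process surrogate, where expensive_simulation is a placeholder for a RISMC physics code and all sizes are assumptions:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def expensive_simulation(x):
            """Placeholder physics; in practice each call is hours or days."""
            return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

        rng = np.random.default_rng(1)
        X_train = rng.uniform(0, 1, size=(40, 2))       # limited simulation budget
        y_train = expensive_simulation(X_train)

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                      normalize_y=True).fit(X_train, y_train)

        X_query = rng.uniform(0, 1, size=(10_000, 2))   # cheap to evaluate now
        y_pred, y_std = gp.predict(X_query, return_std=True)  # mean + uncertainty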

  11. Modelled hydraulic redistribution by sunflower (Helianthus annuus L.) matches observed data only after including night-time transpiration.

    Science.gov (United States)

    Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele

    2014-04-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.

  12. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  13. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10¹⁴ g/cm³, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of the influence of fundamental interactions on the microscopic structure of the matter. (author)

  14. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  15. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions to the operation of grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses the model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...

  16. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
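
    The basic diagnostic is easy to sketch: order the residuals by a covariate, accumulate them, and compare the resulting process against realizations of a suitable null. The sketch below uses random sign-flipping of the residuals as a simple stand-in for the paper's zero-mean Gaussian-process construction:

        import numpy as np

        def cumulative_residual_process(x_cov, residuals, n_null=1000, seed=0):
            """Cumulative sum of residuals over a covariate, with a null
            reference built by random sign-flipping (a stand-in for the
            simulated Gaussian processes of the paper)."""
            order = np.argsort(x_cov)
            W_obs = np.cumsum(residuals[order])
            rng = np.random.default_rng(seed)
            stat_null = np.empty(n_null)
            for b in range(n_null):
                signs = rng.choice([-1.0, 1.0], size=residuals.size)
                stat_null[b] = np.max(np.abs(np.cumsum(signs * residuals[order])))
            # A large sup|W| relative to the null flags model misspecification
            p_value = np.mean(stat_null >= np.max(np.abs(W_obs)))
            return W_obs, p_value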

  17. Effect of Prophylactic Antifungal Protocols on the Prognosis of Liver Transplantation: A Propensity Score Matching and Multistate Model Approach

    Directory of Open Access Journals (Sweden)

    Yi-Chan Chen

    2016-01-01

    Full Text Available Background. Whether routine antifungal prophylaxis decreases posttransplantation fungal infections in patients receiving orthotopic liver transplantation (OLT) remains unclear. This study aimed to determine the effectiveness of antifungal prophylaxis for patients receiving OLT. Patients and Methods. This is a retrospective analysis of a database at Chang Gung Memorial Hospital. We have been administering routine antibiotic and prophylactic antifungal regimens to recipients with high model for end-stage liver disease scores (>20) since 2009. After propensity score matching, 402 patients were enrolled. We conducted a multistate model to analyze the cumulative hazards, probability of fungal infections, and risk factors. Results. The cumulative hazards and transition probability of "transplantation to fungal infection" were lower in the prophylaxis group. The incidence rate of fungal infection after OLT decreased from 18.9% to 11.4% (p=0.052); overall mortality improved from 40.8% to 23.4% (p<0.001). In the "transplantation to fungal infection" transition, prophylaxis was significantly associated with reduced hazards for fungal infection (hazard ratio: 0.57, 95% confidence interval: 0.34–0.96, p=0.033). Massive ascites, cadaver transplantation, and older age were significantly associated with higher risks for mortality. Conclusion. Prophylactic antifungal regimens in high-risk recipients might decrease the incidence of posttransplant fungal infections.

  18. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
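
    The two numerical ingredients named here, Iterative Proportional Fitting of a two-sex age distribution to one-sex marginals and the harmonic mean of the male and female target totals, can be sketched as follows (array shapes and the seed matrix are assumptions):

        import numpy as np

        def ipf(seed, row_targets, col_targets, tol=1e-9, max_iter=500):
            """Iterative Proportional Fitting: scale a seed matrix (here an age
            preference matrix) until its marginals match the one-sex targets."""
            M = seed.astype(float).copy()
            for _ in range(max_iter):
                M *= (row_targets / M.sum(axis=1))[:, None]   # fit male marginal
                M *= (col_targets / M.sum(axis=0))[None, :]   # fit female marginal
                if np.allclose(M.sum(axis=1), row_targets, rtol=tol):
                    return M
            return M

        def harmonic_total(target_men, target_women):
            """Total new marriages as the harmonic mean of the one-sex targets."""
            return 2.0 * target_men * target_women / (target_men + target_women)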

  19. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are presented: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying corresponding locations in the two images are explained. Methods for effectively constraining the search for the correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification based on the test images used for verification of stereo matching algorithms is supplied.

  20. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145, entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'', and is an initial part of a program for estimating runoff from Central Anatolian watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for the Guvenc basin; 2) the modification of the SCS model to be applied first to the Guvenc basin and then to other basins of Central Anatolia for predicting surface runoff from gaged and ungaged watersheds; and 3) the use of environmental isotope techniques to define the basin components of the streamflow of the Guvenc basin. 31 refs, figs and tabs

  1. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysms suitable for evaluating endovascular devices for interventional therapy. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, and embolization with a detachable balloon was then performed at its origination. DSA examinations were performed 1, 2 and 3 days after the procedure. Results: Six aneurysm models were successfully created in six dogs, with the mean width and height of the aneurysms decreasing over 3 days. Conclusions: This canine aneurysm model reproduces the size and shape of human cerebral bifurcation saccular aneurysms on DSA images and is suitable for exploring endovascular devices for aneurysmal therapy. The procedure is quick, reliable and reproducible. (authors)

  2. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization are an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on the computer, whereas desktop systems require long processing times and heavyweight approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare the 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from the detail (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  3. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization are an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on the computer, whereas desktop systems require long processing times and heavyweight approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare the 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from the detail (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  4. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  5. [Preparation of simulated craniocerebral models via three-dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    Three dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips, as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which made it possible to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.

  6. Matching the reaction-diffusion simulation to dynamic [18F]FMISO PET measurements in tumors: extension to a flow-limited oxygen-dependent model.

    Science.gov (United States)

    Shi, Kuangyu; Bayer, Christine; Gaertner, Florian C; Astner, Sabrina T; Wilkens, Jan J; Nüsslin, Fridtjof; Vaupel, Peter; Ziegler, Sibylle I

    2017-02-01

    Positron-emission tomography (PET) with hypoxia-specific tracers provides a noninvasive method to assess the tumor oxygenation status. Reaction-diffusion models have advantages in revealing the quantitative relation between in vivo imaging and the tumor microenvironment. However, there has been no quantitative comparison of simulation results with real PET measurements. The lack of experimental support hampers further applications of computational simulation models. This study aims to compare simulation results with a preclinical [18F]FMISO PET study and to optimize the reaction-diffusion model accordingly. Nude mice with xenografted human squamous cell carcinomas (CAL33) were investigated with a 2 h dynamic [18F]FMISO PET followed by immunofluorescence staining using the hypoxia marker pimonidazole and the endothelium marker CD31. A large data pool of tumor time-activity curves (TAC) was simulated for each mouse by feeding the arterial input function (AIF) extracted from experiments into the model with different configurations of the tumor microenvironment. A measured TAC was considered to match a simulated TAC when the difference metric was below a certain, noise-dependent threshold. As an extension to the well-established Kelly model, a flow-limited oxygen-dependent (FLOD) model was developed to improve the matching between measurements and simulations. The matching rate between the simulated TACs of the Kelly model and the mouse PET data ranged from 0 to 28.1% (on average 9.8%). By modifying the Kelly model to an FLOD model, the matching rate between the simulation and the PET measurements could be improved to 41.2–84.8% (on average 64.4%). Using a simulation data pool and a matching strategy, we were able to compare the simulated temporal course of dynamic PET with in vivo measurements. By modifying the Kelly model to a FLOD model, the computational simulation was able to approach the dynamic [18F]FMISO measurements in the investigated

  7. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate skin fluorescence spectra.
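
    The underlying Monte Carlo machinery, exponential free paths, weight deposition at each interaction, and re-sampling of the propagation direction, reduces to a short sketch. The homogeneous slab and coefficients below are illustrative; the paper's model adds layered skin optics and a collagen-following fluorophore distribution:

        import numpy as np

        rng = np.random.default_rng(7)
        mu_a, mu_s = 0.1, 10.0                 # absorption/scattering [1/mm] (assumed)
        mu_t, slab = mu_a + mu_s, 2.0          # total coefficient, slab depth [mm]
        deposited = np.zeros(100)              # absorbed-weight depth histogram

        for _ in range(20_000):
            z, cos_t, w = 0.0, 1.0, 1.0        # photon enters at the surface
            while w > 1e-4:
                z += cos_t * (-np.log(rng.random()) / mu_t)  # exponential step
                if z <= 0.0 or z >= slab:
                    break                       # photon leaves the slab
                bin_ = int(z / slab * deposited.size)
                deposited[bin_] += w * mu_a / mu_t  # deposit absorbed fraction
                w *= mu_s / mu_t                # survival weight after interaction
                cos_t = rng.uniform(-1.0, 1.0)  # isotropic direction cosine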

  8. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  9. Fractured reservoir history matching improved based on artificial intelligent

    Directory of Open Access Journals (Sweden)

    Sayyed Hadi Riazi

    2016-12-01

    Full Text Available In this paper, a new robust approach based on the Least Squares Support Vector Machine (LSSVM) as a proxy model is used for automatic fractured reservoir history matching. The proxy model is built to model the history-match objective function (mismatch values) based on the history data of the field. This model is then used to minimize the objective function through Particle Swarm Optimization (PSO) and the Imperialist Competitive Algorithm (ICA). In automatic history matching, sensitivity analysis is often performed on the full simulation model. In this work, to obtain a new range of the uncertain parameters (matching parameters) in which the objective function has a minimum value, sensitivity analysis is also performed on the proxy model. By applying the modified ranges to the optimization methods, optimization of the objective function is faster and the outputs of the optimization methods (matching parameters) are produced in less time and with high precision. This procedure leads to matching the history of the field with a set of reservoir parameters. The final sets of parameters are then applied to the full simulation model to validate the technique. The obtained results show that the present procedure is effective for the history matching process due to its robust dependability and fast convergence speed. Due to its high speed and need for only small data sets, LSSVM is a good tool for building a proxy model. A comparison of PSO and ICA shows that PSO is less time-consuming and more effective.
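
    The proxy-plus-optimizer workflow has a compact skeleton: train a kernel regression on (matching parameters, mismatch) pairs gathered from full simulation runs, then minimize the proxy with a swarm. The sketch below uses scikit-learn's SVR as a stand-in for LSSVM and a synthetic objective in place of real simulation output:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        params = rng.uniform(0, 1, size=(200, 5))        # sampled matching parameters
        mismatch = ((params - 0.3) ** 2).sum(axis=1)     # placeholder objective values
        proxy = SVR(kernel="rbf", C=100.0).fit(params, mismatch)

        # Minimal particle swarm minimizing the cheap proxy instead of the simulator
        n_part, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
        x = rng.uniform(0, 1, (n_part, dim))
        v = np.zeros((n_part, dim))
        pbest, pval = x.copy(), proxy.predict(x)
        for _ in range(100):
            gbest = pbest[np.argmin(pval)]
            v = (w * v + c1 * rng.random((n_part, dim)) * (pbest - x)
                       + c2 * rng.random((n_part, dim)) * (gbest - x))
            x = np.clip(x + v, 0, 1)
            f = proxy.predict(x)
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
        # gbest now holds proxy-optimal parameters to validate on the full simulator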

  10. Detection and Counting of Orchard Trees from Vhr Images Using a Geometrical-Optical Model and Marked Template Matching

    Science.gov (United States)

    Maillard, Philippe; Gomes, Marília F.

    2016-06-01

    This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types, all obtained from the GoogleEarth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
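
    The matching step itself can be sketched with normalized cross-correlation against a synthetic crown template. The template below (a shaded disk) is only a crude stand-in for the paper's geometrical-optical model, and the threshold and sizes are assumptions:

        import numpy as np
        import cv2

        def crown_template(size=21, radius=8, sun_az=np.pi / 4):
            """Synthetic tree-crown template: a bright disk with an illumination
            gradient, a crude stand-in for the geometrical-optical model."""
            y, x = np.mgrid[:size, :size] - size // 2
            disk = (x**2 + y**2 <= radius**2)
            shading = 0.5 + 0.5 * (np.cos(sun_az) * x + np.sin(sun_az) * y) / radius
            return (disk * np.clip(shading, 0.0, 1.0)).astype(np.float32)

        def detect_trees(image, threshold=0.6):
            """Normalized cross-correlation, then local maxima above the threshold
            are counted as trees. `image` is a float32 grayscale array."""
            score = cv2.matchTemplate(image, crown_template(), cv2.TM_CCOEFF_NORMED)
            dil = cv2.dilate(score, np.ones((9, 9), np.uint8))
            peaks = (score > threshold) & (score == dil)   # non-maximum suppression
            return np.argwhere(peaks)                      # (row, col) detections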

  11. Using medical history embedded in biometrics medical card for user identity authentication: privacy preserving authentication model by features matching.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan

    2012-01-01

    Many forms of biometrics have been proposed and studied for biometric authentication. Recently, researchers have been looking into longitudinal pattern matching that is based on more than just a single biometric; data from a user's activities are used to characterise the identity of the user. In this paper we advocate a novel type of authentication using a user's medical history, which can be electronically stored in a biometric security card. This is a sequel to our previous work on defining an abstract format for the medical data to be queried and tested upon authentication. The challenge to overcome is preserving the user's privacy by choosing only the useful features from the medical data for use in authentication. The features should contain less sensitive elements while being implicitly related to the target illness. Therefore, exchanging questions and answers about a few carefully chosen features in an open channel would not easily or directly expose the illness, yet it can verify by inference whether the user has a record of it stored in his smart card. The design of a privacy-preserving model based on backward inference is introduced in this paper. Some live medical data are used in experiments for validation and demonstration.

  12. Using Medical History Embedded in Biometrics Medical Card for User Identity Authentication: Privacy Preserving Authentication Model by Features Matching

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2012-01-01

    Full Text Available Many forms of biometrics have been proposed and studied for biometric authentication. Recently, researchers have been looking into longitudinal pattern matching that is based on more than just a single biometric; data from a user's activities are used to characterise the identity of the user. In this paper we advocate a novel type of authentication using a user's medical history, which can be electronically stored in a biometric security card. This is a sequel to our previous work on defining an abstract format for the medical data to be queried and tested upon authentication. The challenge to overcome is preserving the user's privacy by choosing only the useful features from the medical data for use in authentication. The features should contain less sensitive elements while being implicitly related to the target illness. Therefore, exchanging questions and answers about a few carefully chosen features in an open channel would not easily or directly expose the illness, yet it can verify by inference whether the user has a record of it stored in his smart card. The design of a privacy-preserving model based on backward inference is introduced in this paper. Some live medical data are used in experiments for validation and demonstration.

  13. Matching by Monotonic Tone Mapping.

    Science.gov (United States)

    Kovacs, Gyorgy

    2018-06-01

    In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative of conventional measures in problems where possible tone mappings are close to monotonic.

  14. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
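
    At the center of this framework is a simple update: the level-set function is advected with a normal velocity derived from the shape derivative of the misfit. A generic explicit step is sketched below; the velocity field here is an abstract input, whereas in the paper it comes from the gradient-based or Levenberg-Marquardt construction:

        import numpy as np

        def level_set_step(phi, velocity, dt):
            """One explicit update of the level-set equation
            phi_t + V |grad phi| = 0, with central-difference gradients."""
            gy, gx = np.gradient(phi)
            grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
            return phi - dt * velocity * grad_norm

        def facies_mask(phi):
            """The current facies estimate is the region where phi < 0; iterating
            the step with a misfit-decreasing velocity deforms this region."""
            return phi < 0.0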

  15. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  16. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Science.gov (United States)

    Mocellin, Simone; Shrager, Jeff; Scolyer, Richard; Pasquali, Sandro; Verdi, Daunia; Marincola, Francesco M; Briarava, Marta; Gobbel, Randy; Rossi, Carlo; Nitti, Donato

    2010-08-10

    The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. Our aim is to present a computational method designed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic, since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method developed to fully exploit the available knowledge on cancer biology with the

  17. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  18. Valorisation of urban elements through 3D models generated from image matching point clouds and augmented reality visualization based in mobile platforms

    Science.gov (United States)

    Marques, Luís.; Roca Cladera, Josep; Tenedório, José António

    2017-10-01

    The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. Two fundamental factors underlie this progress. First, image matching algorithms have been optimised and the software supporting these techniques has been continually developed. Second, the emergent paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is highly relevant for urbanists: such models constitute digital archives of urban elements and are especially useful for enriching maps and databases, reconstructing and analysing objects and areas through time, building and recreating scenarios, and implementing intuitive methods of interaction. These characteristics support, for example, greater public participation, creating a fully collaborative solution system for envisioning processes, simulations and results. This paper is organized around two main topics. The first deals with the technical modelling of data obtained from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures yields 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality on mobile platforms, which allows the city's origins and their relation to the present city morphology to be understood, and solutions, processes and simulations to be (en)visioned, making it possible for agents in several domains to ground their decisions (and understand them), achieving a faster and wider consensus.

  19. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which results from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique

  20. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at applying soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machines (ε-RSVM) and fuzzy weighted ε-RSVM models that accept five input parameters have been developed. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, fuzzy weighted Support Vector Regression (SVR) combined with fuzzy partitioning has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
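
    For readers unfamiliar with the modelling pipeline described here, the following is a minimal sketch of ε-SVR under 5-fold cross-validation using scikit-learn. The synthetic data, kernel, and hyperparameters are illustrative assumptions, not the authors' data or settings.

```python
# Minimal sketch of epsilon-SVR with 5-fold cross-validation, in the spirit of
# the approach described above. Data and hyperparameters are invented.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((200, 5))            # five input parameters per sample
y = X @ np.array([0.4, 0.3, 0.1, 0.15, 0.05]) + 0.05 * rng.standard_normal(200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R^2 scores:", np.round(scores, 3))
```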

  1. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently desired in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to aid understanding of the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result was a six-layer hierarchy whose top node was explanation of the entire mammography procedure. Male technologists were identified as a negative factor. Factors concerned with explanation occupied the upper nodes. Particular attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.
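
    The core computational step in ISM, deriving a reachability matrix from pairwise influence relations and peeling off hierarchy levels, is compact enough to sketch. The four-factor adjacency matrix below is a made-up illustration, not the study's 14 factors.

```python
# Minimal ISM sketch: Boolean transitive closure (Warshall's algorithm) of a
# direct-influence matrix, followed by level partitioning. Data are invented.
import numpy as np

A = np.array([[0, 1, 0, 0],              # A[i, j] = 1 if factor i influences j
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

R = A | np.eye(len(A), dtype=bool)       # reachability, including self
for k in range(len(A)):                  # Warshall: close under transitivity
    R = R | (R[:, [k]] & R[[k], :])

def ism_levels(R):
    """Partition factors into hierarchy levels (top level first)."""
    remaining = set(range(len(R)))
    levels = []
    while remaining:
        level = [i for i in remaining
                 if {j for j in remaining if R[i, j]}      # reachability set
                 <= {j for j in remaining if R[j, i]}]     # antecedent set
        levels.append(level)
        remaining -= set(level)
    return levels

print(ism_levels(R))   # [[3], [2], [1], [0]] for this simple chain
```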

  2. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  3. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of the performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  4. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  5. Ontology-based composition and matching for dynamic service coordination

    OpenAIRE

    Pahl, Claus; Gacitua-Decar, Veronica; Wang, MingXue; Yapa Bandara, Kosala

    2011-01-01

    Service engineering needs to address integration problems allowing services to collaborate and coordinate. The need to address dynamic automated changes - caused by on-demand environments and changing requirements - can be addressed through service coordination based on ontology-based composition and matching techniques. Our solution to composition and matching utilises a service coordination space that acts as a passive infrastructure for collaboration. We discuss the information models an...

  6. A new cerebral vasospasm model established with endovascular puncture technique

    International Nuclear Information System (INIS)

    Tu Jianfei; Liu Yizhi; Ji Jiansong; Zhao Zhongwei

    2011-01-01

    Objective: To investigate the method of establishing cerebral vasospasm (CVS) models in rabbits by using the endovascular puncture technique. Methods: The endovascular puncture procedure was performed in 78 New Zealand white rabbits to produce subarachnoid hemorrhage (SAH). The surviving rabbits were randomly divided into seven groups (3 h, 12 h, 1 d, 2 d, 3 d, 7 d and 14 d), with five rabbits per group in both the study (SAH) group and the control group. Cerebral CT scanning was carried out in all rabbits both before and after the operation. The inner diameter and wall thickness of both the posterior communicating artery (PcoA) and the basilar artery (BA) were determined after the animals were sacrificed, and the results were analyzed. Results: Of the 78 experimental rabbits, the CVS model was successfully established in 45, including 35 in the SAH group and 10 in the control group. The technical success rate was 57.7%. Twelve hours after the procedure, the inner diameters of the PcoA and BA in the SAH group were decreased by 45.6% and 52.3%, respectively, compared with the control group. The vascular narrowing showed biphasic changes: the inner diameter markedly decreased again on the 7th day, when the decrease reached its peak at 31.2% and 48.6%, respectively. Conclusion: The endovascular puncture technique is an effective method for establishing CVS models in rabbits. The death rate of experimental animals can be decreased if new interventional material is used and the manipulation is performed carefully. (authors)

  7. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and 'smart' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  8. Universal or Specific? A Modeling-Based Comparison of Broad-Spectrum Influenza Vaccines against Conventional, Strain-Matched Vaccines.

    Directory of Open Access Journals (Sweden)

    Rahul Subramanian

    2016-12-01

    Despite the availability of vaccines, influenza remains a major public health challenge. A key reason is the virus's capacity for immune escape: ongoing evolution allows the continual circulation of seasonal influenza, while novel influenza viruses invade the human population to cause a pandemic every few decades. Current vaccines have to be updated continually to keep up to date with this antigenic change, but emerging 'universal' vaccines, targeting more conserved components of the influenza virus, offer the potential to act across all influenza A strains and subtypes. Influenza vaccination programmes around the world are steadily increasing in their population coverage. In future, how might intensive, routine immunization with novel vaccines compare against similar mass programmes utilizing conventional vaccines? Specifically, how might novel and conventional vaccines compare, in terms of cumulative incidence and rates of antigenic evolution of seasonal influenza? What are their potential implications for the impact of pandemic emergence? Here we present a new mathematical model, capturing both transmission dynamics and antigenic evolution of influenza in a simple framework, to explore these questions. We find that, even when matched by per-dose efficacy, universal vaccines could dampen population-level transmission over several seasons to a greater extent than conventional vaccines. Moreover, by lowering opportunities for cross-protective immunity in the population, conventional vaccines could allow the increased spread of a novel pandemic strain. Conversely, universal vaccines could mitigate both seasonal and pandemic spread. However, where it is not possible to maintain annual, intensive vaccination coverage, the duration and breadth of immunity raised by universal vaccines are critical determinants of their performance relative to conventional vaccines. In future, conventional and novel vaccines are likely to play complementary roles in
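
    The transmission-dynamics core of such a model can be sketched with a compartmental system. Below is a minimal SIR model with a constant vaccination rate, assuming illustrative parameter values; the paper's actual framework additionally couples transmission to antigenic evolution and strain structure.

```python
# Minimal SIR-with-vaccination sketch. Parameter values and the constant
# vaccination rate are illustrative assumptions, not the paper's model.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, nu = 0.3, 0.1, 0.01   # transmission, recovery, vaccination rates (1/day)

def sir_vax(t, y):
    s, i, r = y
    return [-beta * s * i - nu * s,        # susceptibles lost to infection + vaccination
            beta * s * i - gamma * i,      # infectives
            gamma * i + nu * s]            # recovered + vaccinated

sol = solve_ivp(sir_vax, (0, 365), [0.99, 0.01, 0.0])
print("final susceptible fraction:", round(sol.y[0, -1], 3))
```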

  9. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model of size 600 x 600 x 600. We give the central part of our computer program (for a CDC Cyber 76), which will also be helpful in simulations of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, with M(t = 0) = 1 initially.
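
    For orientation, a plain single-spin-flip Metropolis simulation of the Ising model looks as follows. This sketch is two-dimensional and omits the multispin coding (packing many spins into one machine word) that is the paper's actual contribution; lattice size and temperature are illustrative.

```python
# Minimal 2D Metropolis sketch of the kinetic Ising model (J = 1, k_B = 1).
# The paper works in 3D with multispin coding; this is the textbook version.
import numpy as np

rng = np.random.default_rng(1)
L, steps = 64, 50
T = 1.4 * 2.269                             # 1.4 * T_c for the 2D model
spins = np.ones((L, L), dtype=int)          # start fully magnetized, M(t=0) = 1

for _ in range(steps * L * L):              # one "step" = L*L attempted flips
    i, j = rng.integers(L, size=2)
    nb = (spins[(i+1) % L, j] + spins[(i-1) % L, j]
          + spins[i, (j+1) % L] + spins[i, (j-1) % L])
    dE = 2 * spins[i, j] * nb               # energy change for flipping this spin
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

print("magnetization per spin:", spins.mean())
```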

  10. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  11. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  12. An Incentive Theory of Matching

    OpenAIRE

    Brown, Alessio J. G.; Merkl, Christian; Snower, Dennis J.

    2010-01-01

    This paper examines the labour market matching process by distinguishing its two component stages: the contact stage, in which job searchers make contact with employers, and the selection stage, in which they decide whether to match. We construct a theoretical model explaining two-sided selection through microeconomic incentives. Firms face adjustment costs in responding to heterogeneous variations in the characteristics of workers and jobs. Matches and separations are described through firms'...

  13. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  14. A Continuous Dynamic Traffic Assignment Model From Plate Scanning Technique

    Energy Technology Data Exchange (ETDEWEB)

    Rivas, A.; Gallego, I.; Sanchez-Cambronero, S.; Ruiz-Ripoll, L.; Barba, R.M.

    2016-07-01

    This paper presents a methodology for the dynamic estimation of traffic flows on all links of a network from observable field data, assuming the first-in-first-out (FIFO) hypothesis. The traffic flow intensities recorded at the exits of the scanned links are propagated to obtain the flow waves on unscanned links. To that end, the model calculates the flow-cost functions from information registered with the plate scanning technique. The model also addresses the concern about the quality of the flow-cost function parameters for replicating real traffic flow behaviour: it includes a new algorithm that adjusts the parameter values to link characteristics when their quality is questionable. This requires an a priori study of the locations of the scanning devices, so that all path flows can be identified and travel times measured on all links. A synthetic network is used to illustrate the proposed method and to prove its usefulness and feasibility. (Author)

  15. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Directory of Open Access Journals (Sweden)

    Simone Mocellin

    BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method

  16. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  17. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  18. Translation Techniques

    OpenAIRE

    Marcia Pinheiro

    2015-01-01

    In this paper, we discuss three translation techniques: literal, cultural, and artistic. Literal translation is a well-known technique, which means that it is quite easy to find sources on the topic. Cultural and artistic translation may be new terms. Whilst cultural translation focuses on matching contexts, artistic translation focuses on matching reactions. Because literal translation matches only words, it is not hard to find situations in which we should not use this technique.  Because a...

  19. Study of hydrogen-molecule guests in type II clathrate hydrates using a force-matched potential model parameterised from ab initio molecular dynamics

    Science.gov (United States)

    Burnham, Christian J.; Futera, Zdenek; English, Niall J.

    2018-03-01

    The force-matching method has been applied to parameterise an empirical potential model for water-water and water-hydrogen intermolecular interactions for use in clathrate-hydrate simulations containing hydrogen guest molecules. The underlying reference simulations constituted ab initio molecular dynamics (AIMD) of clathrate hydrates with various occupations of hydrogen-molecule guests. It is shown that the resultant model is able to reproduce AIMD-derived free-energy curves for the movement of a tagged hydrogen molecule between the water cages that make up the clathrate, thus giving us confidence in the model. Furthermore, with the aid of an umbrella-sampling algorithm, we calculate barrier heights for the force-matched model, yielding the free-energy barrier for a tagged molecule to move between cages. The barrier heights are reasonably large, being on the order of 30 kJ/mol, and are consistent with our previous studies with empirical models [C. J. Burnham and N. J. English, J. Phys. Chem. C 120, 16561 (2016) and C. J. Burnham et al., Phys. Chem. Chem. Phys. 19, 717 (2017)]. Our results are in opposition to the literature, which claims that this system may have very low barrier heights. We also compare results to that using the more ad hoc empirical model of Alavi et al. [J. Chem. Phys. 123, 024507 (2005)] and find that this model does very well when judged against the force-matched and ab initio simulation data.
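
    The essence of force matching, fitting potential parameters so that model forces reproduce ab initio reference forces, can be sketched with a toy pair potential that is linear in its parameters, which reduces the fit to ordinary least squares. The basis functions and synthetic data below are illustrative assumptions, unrelated to the actual water-hydrogen model.

```python
# Toy force-matching sketch: fit pair-force coefficients so that model forces
# reproduce reference (e.g., AIMD) forces, F(r) = c1*r**-7 + c2*r**-4.
# The model is linear in c1, c2, so the fit is least squares. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
r = rng.uniform(0.9, 2.5, size=500)                 # sampled pair distances
F_ref = 6.0 * r**-7 - 3.0 * r**-4 + 0.01 * rng.standard_normal(r.size)

A = np.column_stack([r**-7, r**-4])                 # force basis functions
coef, *_ = np.linalg.lstsq(A, F_ref, rcond=None)
print("fitted coefficients:", np.round(coef, 3))    # ~ [6.0, -3.0]
```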

  20. Pattern recognition and string matching

    CERN Document Server

    Cheng, Xiuzhen

    2002-01-01

    The research and development of pattern recognition have proven to be of importance in science, technology, and human activity. Many useful concepts and tools from different disciplines have been employed in pattern recognition. Among them is string matching, which receives much theoretical and practical attention. String matching is also an important topic in combinatorial optimization. This book is devoted to recent advances in pattern recognition and string matching. It consists of twenty-eight chapters written by different authors, addressing a broad range of topics such as those from classification, matching, mining, feature selection, and applications. Each chapter is self-contained, and presents either novel methodological approaches or applications of existing theories and techniques. The aim, intent, and motivation for publishing this book is to provide a reference tool for the increasing number of readers who depend upon pattern recognition or string matching in some way. This includes student...

  1. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. This potential exposure is measured in terms of the probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessments of the probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
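
    As a schematic of Bayesian updating of a default probability, the beta-binomial sketch below folds observed defaults into a prior. All numbers are invented, and the article's framework (combining several agencies and their historical accuracy) is considerably richer than this.

```python
# Schematic Bayesian update of a default probability (beta-binomial).
# Prior pseudo-counts and the observed cohort are invented for illustration.
import numpy as np

alpha, beta_ = 2.0, 98.0            # prior: rating implies roughly 2% default rate
defaults, firms = 5, 120            # newly observed cohort

alpha_post = alpha + defaults
beta_post = beta_ + (firms - defaults)
mean = alpha_post / (alpha_post + beta_post)

# 90% credible interval via Monte Carlo sampling of the posterior
samples = np.random.default_rng(3).beta(alpha_post, beta_post, 100_000)
lo, hi = np.quantile(samples, [0.05, 0.95])
print(f"posterior default probability: {mean:.3f} (90% CI {lo:.3f}-{hi:.3f})")
```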

  2. Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    Science.gov (United States)

    Clancy, Thomas C.; Gates, Thomas S.

    2005-01-01

    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of introducing direct chemical bonding between the polymer matrix and the nanotube and hence provides a load-transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined, and the mechanical properties of these models are calculated using standard techniques.

  3. Probabilistic Matching of Deidentified Data From a Trauma Registry and a Traumatic Brain Injury Model System Center: A Follow-up Validation Study.

    Science.gov (United States)

    Kumar, Raj G; Wang, Zhensheng; Kesinger, Matthew R; Newman, Mark; Huynh, Toan T; Niemeier, Janet P; Sperry, Jason L; Wagner, Amy K

    2018-04-01

    In a previous study, individuals from a single Traumatic Brain Injury Model Systems center and trauma center were matched using a novel probabilistic matching algorithm. The Traumatic Brain Injury Model Systems is a multicenter prospective cohort study containing more than 14,000 participants with traumatic brain injury, following them from inpatient rehabilitation to the community over the remainder of their lifetime. The National Trauma Databank is the largest aggregation of trauma data in the United States, including more than 6 million records. Linking these two databases offers a broad range of opportunities to explore research questions not otherwise possible. Our objective was to refine and validate the previous protocol at another independent center. An algorithm generation and a validation data set were created; potential matches were blocked by age, sex, and year of injury, and a total probabilistic weight was calculated based on 12 common data fields. Validity metrics were calculated using a minimum probabilistic weight of 3. The positive predictive value was 98.2% and 97.4% and sensitivity was 74.1% and 76.3% in the algorithm generation and validation sets, respectively. These metrics were similar to the previous study. Future work will apply the refined probabilistic matching algorithm to the Traumatic Brain Injury Model Systems and the National Trauma Databank to generate a merged data set for clinical traumatic brain injury research use.
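
    A minimal sketch of the weighting logic in this style of probabilistic (Fellegi-Sunter) record linkage is shown below. The fields, m/u probabilities, and example records are illustrative stand-ins for the 12 fields used in the study; only the weight threshold of 3 comes from the abstract.

```python
# Fellegi-Sunter style matching sketch: each compared field contributes
# log2(m/u) if it agrees or log2((1-m)/(1-u)) if it disagrees, and pairs
# whose total weight exceeds a threshold are declared matches.
import math

M_U = {                      # (m, u): P(agree | match), P(agree | non-match)
    "dob": (0.98, 0.01),
    "admit_date": (0.95, 0.02),
    "zip": (0.90, 0.05),
}
THRESHOLD = 3.0              # minimum total weight to accept a match

def total_weight(rec_a, rec_b):
    w = 0.0
    for field, (m, u) in M_U.items():
        if rec_a[field] == rec_b[field]:
            w += math.log2(m / u)
        else:
            w += math.log2((1 - m) / (1 - u))
    return w

a = {"dob": "1980-05-04", "admit_date": "2015-07-01", "zip": "15213"}
b = {"dob": "1980-05-04", "admit_date": "2015-07-01", "zip": "15212"}
w = total_weight(a, b)
print(f"weight = {w:.2f} -> {'match' if w >= THRESHOLD else 'non-match'}")
```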

  4. Platform pricing in matching markets

    NARCIS (Netherlands)

    Goos, M.; van Cayseele, P.; Willekens, B.

    2011-01-01

    This paper develops a simple model of monopoly platform pricing accounting for two pertinent features of matching markets. 1) The trading process is characterized by search and matching frictions implying limits to positive cross-side network effects and the presence of own-side congestion.

  5. Assessment of vulnerable plaque composition by matching the deformation of a parametric plaque model to measured plaque deformation.

    Science.gov (United States)

    Baldewsing, Radj A; Schaar, Johannes A; Mastik, Frits; Oomens, Cees W J; van der Steen, Antonius F W

    2005-04-01

    Intravascular ultrasound (IVUS) elastography visualizes local radial strain of arteries in so-called elastograms to detect rupture-prone plaques. However, due to the unknown arterial stress distribution, these elastograms cannot be directly interpreted as an image of morphology and material composition. To overcome this limitation we have developed a method that reconstructs a Young's modulus image from an elastogram. This method is especially suited for thin-cap fibroatheromas (TCFAs), i.e., plaques with a media region containing a lipid pool covered by a cap. Reconstruction is done by a minimization algorithm that matches the strain image output, calculated with a parametric finite element model (PFEM) representation of a TCFA, to an elastogram by iteratively updating the PFEM geometry and material parameters. The geometry parameters delineate the TCFA media, lipid pool, and cap regions by circles. The material parameter for each region is a Young's modulus: EM, EL, and EC, respectively. The method was successfully tested on computer-simulated TCFAs (n = 2), one defined by circles, the other by tracing TCFA histology, and additionally on a physical phantom (n = 1) having a stiff wall (measured EM = 16.8 kPa) with an eccentric soft region (measured EL = 4.2 kPa). Finally, it was applied to human coronary plaques in vitro (n = 1) and in vivo (n = 1). The corresponding simulated and measured elastograms of these plaques showed radial strain values from 0% up to 2% at pressure differentials of 20, 20, 1, 20, and 1 mmHg, respectively. The used/reconstructed Young's moduli (in kPa) were: for the circular plaque EL = 50/66, EM = 1500/1484, EC = 2000/2047; for the traced plaque EL = 25/1, EM = 1000/1148, EC = 1500/1491; for the phantom EL = 4.2/4, EM = 16.8/16; for the in vitro plaque EL = n.a./29, EM = n.a./647, EC = n.a./1784; and for the in vivo plaque EL = n.a./2, EM = n.a./188, EC = n.a./188.

  6. Three-dimensional biomechanical properties of human vocal folds: Parameter optimization of a numerical model to match in vitro dynamics

    Science.gov (United States)

    Yang, Anxiong; Berry, David A.; Kaltenbacher, Manfred; Döllinger, Michael

    2012-01-01

    The human voice signal originates from the vibrations of the two vocal folds within the larynx. The interactions of several intrinsic laryngeal muscles adduct and shape the vocal folds to facilitate vibration in response to airflow. Three-dimensional vocal fold dynamics are extracted from in vitro hemilarynx experiments and fitted by a numerical three-dimensional multi-mass model (3DM) using an optimization procedure. In this work, the 3DM dynamics are optimized over 24 experimental data sets to estimate biomechanical vocal fold properties during phonation. Accuracy of the optimization is verified by low normalized error (0.13 ± 0.02), high correlation (83% ± 2%), and reproducible subglottal pressure values. The optimized 3DM parameters yielded biomechanical variations in tissue properties along the vocal fold surface, including variations in both the local mass and stiffness of the vocal folds; that is, both mass and stiffness increased along the superior-to-inferior direction. These variations were statistically analyzed under different experimental conditions (e.g., an increase in tension as a function of vocal fold elongation, and an increase in stiffness and a decrease in mass as a function of glottal airflow). The study showed that physiologically relevant vocal fold tissue properties, which cannot be directly measured during in vivo human phonation, can be captured using this 3D-modeling technique. PMID:22352511

  7. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the ability to identify adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information was found to be largely duplicative of other metrics. While equal weights are applied here, the weights could be altered to favour preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
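
    A weighted tally over per-metric rankings can be sketched in a few lines. The models, metric subset, and equal weights below are invented for illustration and cover only a few of the metrics listed above.

```python
# Minimal multi-metric ranking tally: rank each model per metric (1 = best),
# weight and sum the ranks, and the lowest total wins. Data are invented.
import numpy as np
from scipy.stats import kendalltau, pearsonr

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
preds = {"modelA": np.array([1.1, 2.2, 2.8, 4.3, 4.9]),
         "modelB": np.array([0.7, 2.6, 3.5, 3.6, 5.4])}

def metrics(p):
    return {"mae": np.mean(np.abs(p - obs)),          # lower is better
            "bias": abs(np.mean(p - obs)),            # lower is better
            "neg_pearson": -pearsonr(p, obs)[0],      # negated so lower is better
            "neg_tau": -kendalltau(p, obs)[0]}

table = {name: metrics(p) for name, p in preds.items()}
weights = {m: 1.0 for m in next(iter(table.values()))}   # equal weights

tally = {name: 0.0 for name in table}
for m, w in weights.items():
    order = sorted(table, key=lambda n: table[n][m])
    for rank, name in enumerate(order, start=1):
        tally[name] += w * rank
print(sorted(tally.items(), key=lambda kv: kv[1]))       # best model first
```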

  8. VLF surface-impedance modelling techniques for coal exploration

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.; Thiel, D.; O' Keefe, S. [Central Queensland University, Rockhampton, Qld. (Australia). Faculty of Engineering and Physical Systems

    2000-10-01

    New and efficient computational techniques are required for geophysical investigations of coal. This will allow automated inverse analysis procedures to be used for interpretation of field data. In this paper, a number of methods of modelling electromagnetic surface impedance measurements are reviewed, particularly as applied to typical coal seam geology found in the Bowen Basin. At present, the Impedance method and the finite-difference time-domain (FDTD) method appear to offer viable solutions although both have problems. The Impedance method is currently slightly inaccurate, and the FDTD method has large computational demands. In this paper both methods are described and results are presented for a number of geological targets. 17 refs., 14 figs.

  9. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control systems approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical-system analogy based on an active suspension, and a stability analysis is provided via Lyapunov's direct method.
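
    The receding-horizon idea behind MPC, optimize a finite-horizon cost, apply only the first control move, then re-solve, can be sketched for a scalar linear system. The dynamics and weights below are illustrative assumptions, not the paper's demand/pricing model.

```python
# Minimal receding-horizon (MPC) sketch for x[k+1] = a*x[k] + b*u[k]: a
# finite-horizon quadratic cost is minimized in closed form via least squares
# and only the first control move is applied each step.
import numpy as np

a, b, N, R = 0.9, 0.5, 10, 0.1          # dynamics, horizon, control weight

def mpc_step(x0):
    # Stacked predictions: x = F*x0 + G*u with u = (u_0, ..., u_{N-1})
    F = np.array([a**k for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a**(i - j) * b
    # Minimize sum x_k^2 + R*u_k^2  ->  least squares on [G; sqrt(R)*I]
    A = np.vstack([G, np.sqrt(R) * np.eye(N)])
    rhs = np.concatenate([-F * x0, np.zeros(N)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]                          # apply only the first move

x = 5.0
for _ in range(20):
    x = a * x + b * mpc_step(x)
print("state after 20 steps:", round(x, 4))
```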

  10. Improving Student Learning Activity Using the Make a Match Model in Mathematics in Grade V of SDN 050687 Sawit Seberang

    Directory of Open Access Journals (Sweden)

    Daitin Tarigan

    2014-06-01

    This study aims to determine student learning activity in Mathematics, on the topic of converting fractions into percent and decimal forms and vice versa, using the make a match model in grade V of SD Negeri 050687 Sawit Seberang, academic year 2013/2014. The study is a classroom action research (CAR) project in which teacher and student activity observation sheets were used to collect data. Based on the data analysis, in meeting I of cycle I the teacher activity score was 82.14 (criterion: good) and student learning activity was rated active. The intervention was continued through cycle II. In meeting II of cycle II the teacher activity score was 96.42 (criterion: very good) and classical learning activity was rated very active. From these results it can be concluded that the intervention succeeded, because the student learning activity indicator and the proportion of students classified as active reached 80%. The use of the make a match model can therefore improve student learning activity in Mathematics, on converting fractions into percent and decimal forms, in grade V of SD Negeri 050687 Sawit Seberang. Keywords: Make a Match model; student learning activity.

  11. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  12. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin by presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
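
    As a concrete illustration of the diffuse-interface idea, the sketch below integrates a 1D Allen-Cahn equation, one of the simplest phase-field models, with explicit finite differences. Grid, time step and interface-width parameter are illustrative choices.

```python
# Minimal 1D Allen-Cahn phase-field sketch: an order parameter phi in [-1, 1]
# separates two phases, and the diffuse interface relaxes under
# dphi/dt = eps^2 * d2phi/dx2 + phi - phi^3 (explicit finite differences).
import numpy as np

n, dx, dt, eps = 200, 0.05, 1e-4, 0.05
x = np.linspace(0, (n - 1) * dx, n)
phi = np.tanh((x - x.mean()) / (np.sqrt(2) * eps))   # initial diffuse interface

for _ in range(5000):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    phi += dt * (eps**2 * lap + phi - phi**3)
    phi[0], phi[-1] = -1.0, 1.0      # pin the two bulk phases at the ends

print("interface width (|phi| < 0.9):", np.sum(np.abs(phi) < 0.9) * dx)
```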

  13. Toward Practical Secure Stable Matching

    Directory of Open Access Journals (Sweden)

    Riazi M. Sadegh

    2017-01-01

    The Stable Matching (SM) algorithm has been deployed in many real-world scenarios including the National Residency Matching Program (NRMP) and financial applications such as matching of suppliers and consumers in capital markets. Since these applications typically involve highly sensitive information such as the underlying preference lists, their current implementations rely on trusted third parties. This paper introduces the first provably secure and scalable implementation of SM based on Yao's garbled circuit protocol and Oblivious RAM (ORAM). Our scheme can securely compute a stable match for 8k pairs four orders of magnitude faster than the previously best known method. We achieve this by introducing a compact and efficient sub-linear size circuit. We further decrease the computation cost by three orders of magnitude by proposing a novel technique to avoid unnecessary iterations in the SM algorithm. We evaluate our implementation for several problem sizes and plan to publish it as open-source.
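
    For reference, the plaintext algorithm that the secure protocol computes under encryption is classic Gale-Shapley deferred acceptance; the preference lists below are illustrative.

```python
# Classic Gale-Shapley stable matching (plaintext version of the SM algorithm).
def gale_shapley(proposer_prefs, responder_prefs):
    """Each prefs dict maps a name to a preference list, best first."""
    rank = {r: {name: i for i, name in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    free = list(proposer_prefs)              # proposers without a partner
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                             # responder -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:                 # responder is free: accept
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])          # responder trades up
            engaged[r] = p
        else:
            free.append(p)                   # rejected: propose again later
    return {p: r for r, p in engaged.items()}

prefs_a = {"a1": ["b1", "b2"], "a2": ["b1", "b2"]}
prefs_b = {"b1": ["a2", "a1"], "b2": ["a1", "a2"]}
print(gale_shapley(prefs_a, prefs_b))        # {'a2': 'b1', 'a1': 'b2'}
```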

  14. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification

  15. Using crosswell data to enhance history matching

    KAUST Repository

    Ravanelli, Fabio M.; Hoteit, Ibrahim

    2014-01-01

    of the reality. This problem is mitigated by conditioning the model with data through data assimilation, a process known in the oil industry as history matching. Several recent advances are being used to improve history matching reliability, notably the use

  16. REDUCING UNCERTAINTIES IN MODEL PREDICTIONS VIA HISTORY MATCHING OF CO2 MIGRATION AND REACTIVE TRANSPORT MODELING OF CO2 FATE AT THE SLEIPNER PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Chen

    2015-03-31

    An important question for the Carbon Capture, Storage, and Utility program is "can we adequately predict the CO2 plume migration?" For tracking CO2 plume development, the Sleipner project in the Norwegian North Sea provides more time-lapse seismic monitoring data than any other site, but significant uncertainties still exist for some of the reservoir parameters. In Part I, we assessed model uncertainties by applying two multi-phase compositional simulators to the Sleipner Benchmark model for the uppermost layer (Layer 9) of the Utsira Sand and calibrated our model against the time-lapse seismic monitoring data for the site from 1999 to 2010. An approximate match with the observed plume was achieved by introducing lateral permeability anisotropy, adding CH4 into the CO2 stream, and adjusting the reservoir temperatures. Model-predicted gas saturation, CO2 accumulation thickness, and CO2 solubility in brine (none of which were used as calibration metrics) were all comparable with the interpretations of the seismic data in the literature. In Parts II and III, we evaluated the uncertainties of the predicted long-term CO2 fate up to 10,000 years, due to uncertain reaction kinetics. Under four scenarios of the kinetic rate laws, the temporal and spatial evolution of CO2 partitioning into the four trapping mechanisms (hydrodynamic/structural, solubility, residual/capillary, and mineral) was simulated with ToughReact, taking into account the CO2-brine-rock reactions and the multi-phase reactive flow and mass transport. Modeling results show that different rate laws for mineral dissolution and precipitation reactions resulted in different predicted amounts of CO2 trapped by carbonate minerals, with scenarios of the conventional linear rate law for feldspar dissolution having twice as much mineral trapping (21% of the injected CO2) as scenarios with a Burch-type or Alekseyev et al.-type rate law for feldspar dissolution (11%). So far, most reactive transport modeling (RTM) studies for

  17. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    Science.gov (United States)

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  18. Matching of motor-sensory modality in the rodent femoral nerve model shows no enhanced effect on peripheral nerve regeneration

    Science.gov (United States)

    Kawamura, David H.; Johnson, Philip J.; Moore, Amy M.; Magill, Christina K.; Hunter, Daniel A.; Ray, Wilson Z.; Tung, Thomas HH.; Mackinnon, Susan E.

    2010-01-01

    The treatment of peripheral nerve injuries with nerve gaps largely consists of autologous nerve grafting utilizing sensory nerve donors. Underlying this clinical practice is the assumption that sensory autografts provide a suitable substrate for motoneuron regeneration, thereby facilitating motor endplate reinnervation and functional recovery. This study examined the role of nerve graft modality on axonal regeneration, comparing motor nerve regeneration through motor, sensory, and mixed nerve isografts in the Lewis rat. A total of 100 rats underwent grafting of the motor or sensory branch of the femoral nerve with histomorphometric analysis performed after 5, 6, or 7 weeks. Analysis demonstrated similar nerve regeneration in motor, sensory, and mixed nerve grafts at all three time points. These data indicate that matching of motor-sensory modality in the rat femoral nerve does not confer improved axonal regeneration through nerve isografts. PMID:20122927

  19. MATCHING IN INFORMAL FINANCIAL INSTITUTIONS.

    Science.gov (United States)

    Eeckhout, Jan; Munshi, Kaivan

    2010-09-01

    This paper analyzes an informal financial institution that brings heterogeneous agents together in groups. We analyze decentralized matching into these groups, and the equilibrium composition of participants that consequently arises. We find that participants sort remarkably well across the competing groups, and that they re-sort immediately following an unexpected exogenous regulatory change. These findings suggest that the competitive matching model might have applicability and bite in other settings where matching is an important equilibrium phenomenon. (JEL: O12, O17, G20, D40).

  20. Modelling of 3D fractured geological systems - technique and application

    Science.gov (United States)

    Cacace, M.; Scheck-Wenderoth, M.; Cherubini, Y.; Kaiser, B. O.; Bloecher, G.

    2011-12-01

    All rocks in the earth's crust are fractured to some extent. Faults and fractures are important in different scientific and industrial fields, comprising engineering, geotechnical and hydrogeological applications. Many petroleum, gas, geothermal and water supply reservoirs form in faulted and fractured geological systems. Additionally, faults and fractures may control the transport of chemical contaminants into and through the subsurface. Depending on their origin and orientation with respect to the recent and palaeo stress field, as well as on the overall kinematics of chemical processes occurring within them, faults and fractures can act either as hydraulic conductors, providing preferential pathways for fluid flow, or as barriers preventing flow across them. The main challenge in modelling processes occurring in fractured rocks is related to the way of describing the heterogeneities of such geological systems. Flow paths are controlled by the geometry of faults and their open void space. To correctly simulate these processes an adequate 3D mesh is a basic requirement. Unfortunately, the representation of realistic 3D geological environments is limited by the complexity of embedded fracture networks, often resulting in oversimplified models of the natural system. A technical description of an improved method to integrate generic dipping structures (representing faults and fractures) into a 3D porous medium is put forward. The automated mesh generation algorithm is composed of various existing routines from computational geometry (e.g. 2D-3D projection, interpolation, intersection, convex hull calculation) and meshing (e.g. triangulation in 2D and tetrahedralization in 3D). All routines have been combined in an automated software framework and the robustness of the approach has been tested and verified. These techniques and methods can be applied to fractured porous media including fault systems and therefore find wide applications in different geo-energy related
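    To make the meshing chain concrete, the sketch below strings together the same kinds of computational-geometry routines (projection, convex hull, 2D triangulation, 3D tetrahedralization) using SciPy; it is an illustration under an assumed geometry, not the authors' software framework.

        # Minimal sketch of the meshing chain described above, assuming a
        # dipping structure that is single-valued over the xy-plane.
        import numpy as np
        from scipy.spatial import ConvexHull, Delaunay

        rng = np.random.default_rng(0)

        # Points sampled on a generic dipping structure (plane z = 0.5x + 0.2y)
        xy = rng.uniform(0.0, 1.0, size=(200, 2))
        z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1]
        fault_pts = np.column_stack([xy, z])

        # Step 1: 3D -> 2D projection (trivial here: drop z)
        proj = fault_pts[:, :2]

        # Step 2: bound the surface mesh by the convex hull of the projection
        hull = ConvexHull(proj)

        # Step 3: triangulate in 2D; the triangles index the original 3D
        # points, so lifting the surface mesh back to 3D is immediate
        tri = Delaunay(proj)
        print(len(tri.simplices), "triangles,", len(hull.vertices), "hull vertices")

        # Step 4: tetrahedralize the embedding volume so the dipping structure
        # can be integrated into a 3D porous-medium mesh
        volume_pts = np.vstack([fault_pts, rng.uniform(0, 1, size=(300, 3))])
        tet = Delaunay(volume_pts)
        print(len(tet.simplices), "tetrahedra in the embedding volume")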

  1. The effects of soil-structure interaction modeling techniques on in-structure response spectra

    International Nuclear Information System (INIS)

    Johnson, J.J.; Wesley, D.A.; Almajan, I.T.

    1977-01-01

    The structure considered for this investigation consisted of the reactor containment building (RCB) and prestressed concrete reactor vessel (PCRV) for a HTGR plant. A conventional lumped-mass dynamic model in three dimensions was used in the study. The horizontal and vertical response, which are uncoupled due to the symmetry of the structure, were determined for horizontal and vertical excitation. Five different site conditions ranging from competent rock to a soft soil site were considered. The simplified approach to the overall plant analysis utilized stiffness proportional composite damping with a limited amount of soil damping consistent with US NRC regulatory guidelines. Selected cases were also analyzed assuming a soil damping value approximating the theoretical value. The results from the simplified approach were compared to those determined by rigorously coupling the structure to a frequency independent half-space representation of the soil. Finally, equivalent modal damping ratios were found by matching the frequency response at a point within the coupled soil-structure system determined by solution of the coupled and uncoupled equations of motion. The basis for comparison of the aforementioned techniques was the response spectra at selected locations within the soil-structure system. Each of the five site conditions was analyzed and in-structure response spectra were generated. The response spectra were combined to form a design envelope which encompasses the entire range of site parameters. Both the design envelopes and the site-by-site results were compared

  2. Behavioral technique for workflow abstraction and matching

    NARCIS (Netherlands)

    Klai, K.; Ould Ahmed M'bareck, N.; Tata, S.; Dustdar, S.; Fiadeiro, J.L.; Sheth, A.

    2006-01-01

    This work is in line with the CoopFlow approach dedicated for workflow advertisement, interconnection, and cooperation in virtual organizations. In order to advertise workflows into a registry, we present in this paper a novel method to abstract behaviors of workflows into symbolic observation

  3. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity
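    The AIPS match metric itself is not reproduced in this record, but the single-affine-transformation assumption it rests on can be illustrated with a short least-squares sketch; the point sets and function names below are placeholders, not part of the AIPS toolkit.

        # Recover a 2D affine map [A | t] from matched feature points by
        # least squares: an illustration of the model class, not the AIPS metric.
        import numpy as np

        def fit_affine_2d(src, dst):
            """Return the 2x3 matrix [A | t] minimizing ||A*src + t - dst||^2."""
            n = len(src)
            M = np.zeros((2 * n, 6))      # each point gives an x' row and a y' row
            M[0::2, 0:2] = src
            M[0::2, 4] = 1.0
            M[1::2, 2:4] = src
            M[1::2, 5] = 1.0
            b = dst.reshape(-1)
            p, *_ = np.linalg.lstsq(M, b, rcond=None)
            return np.array([[p[0], p[1], p[4]],
                             [p[2], p[3], p[5]]])

        src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        true = np.array([[1.2, -0.3, 2.0], [0.3, 0.9, -1.0]])   # ground-truth affine
        dst = src @ true[:, :2].T + true[:, 2]
        print(np.allclose(fit_affine_2d(src, dst), true))        # True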

  4. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  5. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductiometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  6. Modelling skin penetration using the Laplace transform technique.

    Science.gov (United States)

    Anissimov, Y G; Watkinson, A

    2013-01-01

    The Laplace transform is a convenient mathematical tool for solving ordinary and partial differential equations. The application of this technique to problems arising in drug penetration through the skin is reviewed in this paper. © 2013 S. Karger AG, Basel.
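    As a minimal worked illustration of the technique (stated here for context, not taken from the review itself), consider diffusion across a membrane of thickness $h$ with diffusivity $D$ and zero initial concentration. The Laplace transform $\hat{c}(x,s) = \int_0^\infty c(x,t)\,e^{-st}\,dt$ turns the partial differential equation into an ordinary one:

        \frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2}
        \quad\xrightarrow{\;\mathcal{L}\;}\quad
        s\,\hat{c} = D\,\frac{d^2 \hat{c}}{dx^2},
        \qquad
        \hat{c}(x,s) = A(s)\,e^{x\sqrt{s/D}} + B(s)\,e^{-x\sqrt{s/D}} .

    The boundary conditions (e.g. constant donor concentration at $x=0$, a sink at $x=h$) determine $A$ and $B$, and the penetration flux follows by differentiation and inverse transformation, numerically or from tables.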

  7. Stinging Insect Matching Game

    Science.gov (United States)

    Stinging insects can ruin summer fun for those who are ... the difference between the different kinds of stinging insects in order to keep your summer safe and ...

  8. Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array

    Directory of Open Access Journals (Sweden)

    Park Sungchan

    2011-01-01

    There is a growing need in computer-vision applications for stereopsis, requiring not only accurate distance estimates but also a fast and compact physical implementation. Global energy-minimization techniques provide remarkably precise results but suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation, solving the memory-access problem between the large external memory and the massive processor array. Remarkable memory savings can be obtained with our memory-reduction scheme, and our new architecture is a systolic array; expanded across N cascaded chips, it can cope with a wide range of image resolutions. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image-processing algorithms with tiny, distributed memory resources, such as optical flow, image restoration, etc.

  9. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models which are applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs is discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box and species-conservation-of-mass models is given. Representative models are discussed with attention given to the assumptions, input data requirements, advantages, disadvantages and applicability of each
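    As a concrete illustration of the Gaussian plume class reviewed above, the sketch below evaluates the standard ground-reflected plume formula; the dispersion-coefficient curves are simplified placeholders rather than any specific stability-class scheme.

        # Ground-level concentration downwind of an elevated point source.
        import numpy as np

        def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=50.0):
            """Concentration (g/m^3) at (x, y, z) for emission rate Q (g/s),
            wind speed u (m/s) along x, and effective stack height H (m)."""
            sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)   # assumed dispersion
            sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)   # laws, illustrative
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image source
            return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        x = np.linspace(100.0, 5000.0, 50)            # downwind distances (m)
        c = gaussian_plume(x, y=0.0, z=0.0)
        print(float(x[np.argmax(c)]), float(c.max())) # location and value of peak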

  10. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for the different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise-suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  11. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  12. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance … the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger … the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region…

  13. Plasma focus matching conditions

    International Nuclear Information System (INIS)

    Soliman, H.M.; Masoud, M.M.; Elkhalafawy, T.A.

    1988-01-01

    Snow-plough and slug models have been used to obtain the optimum matching conditions of the plasma in the focus. The dimensions of the plasma focus device are: inner electrode radius = 2 cm, outer electrode radius = 5.5 cm, and length = 8 cm. It was found that a maximum magnetic energy of 12.26 kJ has to be delivered to a plasma focus whose density is 10¹⁹ cm⁻³, at a focusing time of 2.55 μs and with a total external inductance of 24.2 nH. The same method is used to evaluate the optimum matching conditions for the previous coaxial discharge system, which had inner electrode radius = 1.6 cm, outer electrode radius = 3.3 cm and length = 31.5 cm. These conditions are: charging voltage = 12 kV, condenser bank capacity = 430 μF, plasma focus density = 10¹⁹ cm⁻³, focusing time = 8 μs and total external inductance = 60.32 nH.

  14. A Study of Reverse-Worded Matched Item Pairs Using the Generalized Partial Credit and Nominal Response Models

    Science.gov (United States)

    Matlock Cole, Ki Lynn; Turner, Ronna C.; Gitchel, W. Dent

    2018-01-01

    The generalized partial credit model (GPCM) is often used for polytomous data; however, the nominal response model (NRM) allows for the investigation of how adjacent categories may discriminate differently when items are positively or negatively worded. Ten items from three different self-reported scales were used (anxiety, depression, and…

  15. Modeling seismic wave propagation across the European plate: structural models and numerical techniques, state-of-the-art and prospects

    Science.gov (United States)

    Morelli, Andrea; Danecek, Peter; Molinari, Irene; Postpischl, Luca; Schivardi, Renata; Serretti, Paola; Tondi, Maria Rosaria

    2010-05-01

    beneath the Alpine mobile belt, and fast lithospheric signatures under the two main Mediterranean subduction systems (Aegean and Tyrrhenian). We validate this new model through comparison of recorded seismograms with simulations based on numerical codes (SPECFEM3D). To ease and increase model usage, we also propose the adoption of a common exchange format for tomographic earth models based on JSON, a lightweight data-interchange format supported by most high-level programming languages, and provide tools for manipulating and visualising models, described in this standard format, in Google Earth and GEON IDV. In the next decade seismologists will be able to reap new possibilities offered by exciting progress in general computing power and algorithmic development in computational seismology. Structural models, still based on classical approaches and modeling just a few parameters in each seismogram, will benefit from emerging techniques - such as full waveform fitting and fully nonlinear inversion - that are now just showing their potential. This will require extensive availability of supercomputing resources to earth scientists in Europe, as a tool to match the planned new massive data flow. We need to make sure that the whole apparatus needed to fully exploit new data will be widely accessible. To maximize the development, for instance to enable us to promptly model ground shaking after a major earthquake, we will also need a better coordination framework that enables us to share and amalgamate the abundant local information on earth structure - most often available but difficult to retrieve, merge and use. Comprehensive knowledge of earth structure and of best practices to model wave propagation can by all means be considered an enabling technology for further geophysical progress.

  16. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  17. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee specific QoS requirement. The simulation tool drastically cuts the simulation ...

  18. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...

  19. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    Science.gov (United States)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs where we processed a subset of IPHAS data that have image source density peaks over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions of declination widths of one degree control the query run times. Use of an index strategy where the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence the pipeline achieved with processing IPHAS data is 25 s.
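    The pipeline's own SQL is not given in this record, but the idea of one-degree declination partitions with neighbour-zone positional look-ups can be sketched in a few lines; the identifiers, match radius and flat-sky small-angle distance below are assumptions for illustration.

        # Partition catalogue sources into one-degree declination strips and
        # match each new detection only against its own and adjacent strips.
        from collections import defaultdict
        import math

        MATCH_RADIUS_DEG = 1.0 / 3600.0          # 1 arcsec association radius

        def dec_zone(dec):
            return int(math.floor(dec))          # one-degree-wide partitions

        def build_index(catalogue):
            zones = defaultdict(list)
            for src_id, ra, dec in catalogue:
                zones[dec_zone(dec)].append((src_id, ra, dec))
            return zones

        def cross_match(detection, zones):
            ra, dec = detection
            best, best_d2 = None, MATCH_RADIUS_DEG**2
            for z in (dec_zone(dec) - 1, dec_zone(dec), dec_zone(dec) + 1):
                for src_id, cra, cdec in zones.get(z, ()):
                    # small-angle approximation with cos(dec) RA compression
                    d2 = ((ra - cra) * math.cos(math.radians(dec)))**2 \
                         + (dec - cdec)**2
                    if d2 < best_d2:
                        best, best_d2 = src_id, d2
            return best      # catalogued source id, or None -> new source

        catalogue = [(1, 150.00000, 30.00000), (2, 150.00100, 30.99990)]
        zones = build_index(catalogue)
        print(cross_match((150.00001, 30.00001), zones))   # -> 1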

  20. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  2. Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters

    Science.gov (United States)

    Barnier, G.; Dunham, E. M.

    2016-12-01

    Earthquake-induced tsunamis cause dramatic damages along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined though optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
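    As a minimal illustration of the filtering loop described (not the authors' implementation), the sketch below corrects a forecast with sparse gauge observations, recomputing the gain from the evolving covariance at every step; the identity dynamics and all dimensions are stand-ins for the real shallow-water propagator.

        # Linear Kalman filter assimilating pressure-gauge data into a 1D state.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 50, 5                     # grid points, ocean-bottom sensors

        F = np.eye(n)                    # placeholder dynamics (assumed)
        Q = 1e-4 * np.eye(n)             # model-error covariance
        H = np.zeros((m, n))             # observation operator: 5 gauge sites
        H[np.arange(m), np.linspace(5, 45, m, dtype=int)] = 1.0
        R = 1e-3 * np.eye(m)             # sensor-noise covariance

        x = np.zeros(n)                  # state: sea-surface height anomaly
        P = 0.1 * np.eye(n)

        truth = np.exp(-0.01 * (np.arange(n) - 25.0)**2)   # synthetic pulse
        for _ in range(20):
            # forecast step
            x, P = F @ x, F @ P @ F.T + Q
            # analysis step: the gain evolves with P, unlike static optimal
            # interpolation where the gain matrix is fixed in time
            y = H @ truth + rng.normal(scale=0.03, size=m)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (y - H @ x)
            P = (np.eye(n) - K @ H) @ P

        print(float(np.abs(H @ x - H @ truth).max()))  # error at the gauges shrinks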

  3. Best matching theory & applications

    CERN Document Server

    Moghaddam, Mohsen

    2017-01-01

    Mismatch or best match? This book demonstrates that best matching of individual entities to each other is essential to ensure smooth conduct and successful competitiveness in any distributed system, natural and artificial. Interactions must be optimized through best matching in planning and scheduling, enterprise network design, transportation and construction planning, recruitment, problem solving, selective assembly, team formation, sensor network design, and more. Fundamentals of best matching in distributed and collaborative systems are explained by providing: § Methodical analysis of various multidimensional best matching processes § Comprehensive taxonomy, comparing different best matching problems and processes § Systematic identification of systems’ hierarchy, nature of interactions, and distribution of decision-making and control functions § Practical formulation of solutions based on a library of best matching algorithms and protocols, ready for direct applications and apps development. Design...

  4. Matching Students to Schools

    Directory of Open Access Journals (Sweden)

    Dejan Trifunovic

    2017-08-01

    In this paper, we present the problem of matching students to schools by using different matching mechanisms. This market is specific, since public schools are free and the price mechanism cannot be used to determine the optimal allocation of children to schools. Therefore, it is necessary to use matching algorithms that mimic the market mechanism and enable us to determine the core of the cooperative game. We show that it is possible to apply cooperative game theory to matching problems. This review paper is based on illustrative examples aiming to compare matching algorithms in terms of the incentive compatibility, stability and efficiency of the matching. We also present some specific problems that may occur in matching, such as improving the quality of schools, favoring minority students, the limited length of the list of preferences, and generating strict priorities from weak priorities.

  5. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence techniques in the modeling of industrial processes.

  6. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of PF coil voltages required to start-up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included
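    The program's equations are not listed in this record, but the eigen-expansion solution of lumped-parameter loop equations of the form L dI/dt + R I = V can be sketched generically; the 3x3 inductance and resistance values below are illustrative assumptions, not CIT data.

        # Eigen-expansion solution of coupled circuit loop equations.
        import numpy as np

        L = np.array([[1.0, 0.2, 0.1],
                      [0.2, 0.8, 0.2],
                      [0.1, 0.2, 0.5]])      # mutual-inductance matrix (H)
        R = np.diag([0.5, 0.3, 1.0])         # loop resistances (ohm)
        V = np.array([10.0, 0.0, 0.0])       # constant applied voltages (V)

        A = -np.linalg.solve(L, R)           # dI/dt = A I + L^{-1} V
        lam, W = np.linalg.eig(A)            # decay modes of the coupled system
        I_ss = np.linalg.solve(R, V)         # steady-state currents

        def currents(t, I0=np.zeros(3)):
            # Expand the initial deviation from steady state in eigenmodes,
            # then evolve each mode independently: I(t) = I_ss + W e^{lam t} c
            c = np.linalg.solve(W, I0 - I_ss)
            return I_ss + (W * np.exp(lam * t)) @ c

        for t in (0.0, 0.5, 2.0, 10.0):
            print(t, np.round(currents(t).real, 4))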

  7. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition, with four additional chapters, presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary, including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  8. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  9. SU-D-BRE-03: Dosimetric Impact of In-Air Spot Size Variations for Commissioning a Room-Matched Beam Model for Pencil Beam Scanning Proton Therapy

    International Nuclear Information System (INIS)

    Zhang, Y; Giebeler, A; Mascia, A; Piskulich, F; Perles, L; Lepage, R; Dong, L

    2014-01-01

    Purpose: To quantitatively evaluate the dosimetric consequences of spot size variations and validate beam-matching criteria for commissioning a pencil beam model for multiple treatment rooms. Methods: A planning study was first conducted by simulating spot size variations to systematically evaluate the dosimetric impact of spot size variations in selected cases, which was used to establish the in-air spot size tolerance for beam-matching specifications. A beam model in the treatment planning system was created using in-air spot profiles acquired in one treatment room. These spot profiles were also acquired from another treatment room for assessing the actual spot size variations between the two treatment rooms. We created twenty-five test plans with targets of different sizes at different depths, and performed dose measurements along the entrance, proximal and distal target regions. The absolute doses at those locations were measured using ionization chambers in both treatment rooms, and were compared against the doses calculated by the beam model. Fifteen additional patient plans were also measured and included in our validation. Results: The beam model is relatively insensitive to spot size variations. With measured in-air spot size variations between the two treatment rooms averaging less than 15%, the average dose difference was −0.15% with a standard deviation of 0.40% for 55 measurement points within the target region; but the differences increased to 1.4% ± 1.1% in the entrance regions, which are more affected by in-air spot size variations. Overall, our single-room-based beam model in the treatment planning system agreed with measurements in both rooms to within 0.5% in the target region. For the fifteen patient cases, the agreement was within 1%. Conclusion: We have demonstrated that dosimetrically equivalent machines can be established when in-air spot size variations are within 15% between the two treatment rooms

  10. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  11. Matching the results of a theoretical model with failure rates obtained from a population of non-nuclear pressure vessels

    International Nuclear Information System (INIS)

    Harrop, L.P.

    1982-02-01

    Failure rates for non-nuclear pressure vessel populations are often regarded as showing a decrease with time. Empirical evidence can be cited which supports this view. On the other hand theoretical predictions of PWR type reactor pressure vessel failure rates have shown an increasing failure rate with time. It is shown that these two situations are not necessarily incompatible. If adjustments are made to the input data of the theoretical model to treat a non-nuclear pressure vessel population, the model can produce a failure rate which decreases with time. These adjustments are explained and the results obtained are shown. (author)

  12. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using rotation transform. For modelling and recognition process, geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
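    A toy version of the geometric-hashing stages named above (building a basis-invariant model table, then voting at recognition time) can be written compactly; the quantisation step, the similarity-invariant frame and the point sets below are illustrative assumptions, not the paper's 2D view models.

        # Geometric hashing in 2D: store model features in every basis frame,
        # then let a transformed scene vote for the basis it reproduces.
        from collections import defaultdict
        import numpy as np

        def invariant_coords(points, i, j):
            """Express all points in the frame defined by the basis pair (i, j)."""
            origin, bx = points[i], points[j] - points[i]
            by = np.array([-bx[1], bx[0]])          # perpendicular axis
            B = np.column_stack([bx, by])
            return np.linalg.solve(B, (points - origin).T).T.round(1)

        def build_table(model):
            table = defaultdict(list)
            for i in range(len(model)):
                for j in range(len(model)):
                    if i != j:
                        for coord in map(tuple, invariant_coords(model, i, j)):
                            table[coord].append((i, j))
            return table

        model = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.2], [0.6, 1.4]])
        table = build_table(model)

        theta, s, t = 0.7, 1.3, np.array([4.0, -2.0])
        Rm = s * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
        scene = model @ Rm.T + t            # rotated, scaled, translated copy

        votes = defaultdict(int)
        for coord in map(tuple, invariant_coords(scene, 0, 1)):
            for basis in table.get(coord, ()):
                votes[basis] += 1
        print(max(votes.items(), key=lambda kv: kv[1]))   # basis (0, 1) wins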

  13. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy operating costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  14. Longitudinal as well as age-matched assessments of bone changes in the mature ovariectomized rat model

    NARCIS (Netherlands)

    Leitner, M.M.; Tami, A.E.; Montavon, P.M.; Ito, K.

    2009-01-01

    In the past, bone loss in the ovariectomized (OVX) osteoporotic rat model has been monitored using in vitro micro-computed tomography (micro-CT) to assess bone structure (bone volume/total volume, BV/TV). The purpose of this study was to assess the importance of baseline control and sham groups in

  15. PENGARUH MODEL PEMBELAJARAN ASSURANCE, RELEVANCE, INTEREST, ASSESSMENT, SATISFACTION DENGAN STRATEGI ACTIVE LEARNING TIPE INDEX CARD MATCH TERHADAP KEMAMPUAN PEMECAHAN MASALAH MATEMATIK SISWA SMA

    Directory of Open Access Journals (Sweden)

    Frasticha Frasticha

    2016-08-01

    Problem solving is a mathematical activity that is difficult both to learn and to teach, so a learning model is needed that can have a positive effect on students' mathematical problem-solving ability. One model that can be used is the ARIAS learning model with the active learning strategy of the Index Card Match (ICM) type. This study aims to determine: (1) whether the ARIAS learning model with the ICM active learning strategy affects the mathematical problem-solving ability of senior high school students; (2) students' attitudes toward mathematics instruction using the ARIAS learning model with the ICM active learning strategy. The subjects were 38 students of class XI IPA 1 as the control class and 39 students of class XI IPA 2 as the experimental class at SMAN 19, Tangerang Regency, in the 2015-2016 academic year. The research method was a quasi-experimental design with a Nonequivalent Control Group, with cluster sampling as the sampling technique. Data were analysed using SPSS Statistics Version 22. Results: (1) the ARIAS learning model with the ICM active learning strategy affects the mathematical problem-solving ability of senior high school students and has a positive effect; (2) students' attitudes toward the ARIAS learning model with the ICM active learning strategy are positive. Keywords: Assurance Relevance Interest Assessment Satisfaction, Index Card Match, problem-solving ability

  16. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    Science.gov (United States)

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improved patient outcomes and lower costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe and on a woman was compared side by side. We found all surgical steps to be closely similar. The main limitations of this model are cost ($500/procedure), logistic problems (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training in vaginal hysterectomy.

  17. Development of mathematical techniques for the assimilation of remote sensing data into atmospheric models

    International Nuclear Information System (INIS)

    Seinfeld, J.H.

    1982-01-01

    The problem of the assimilation of remote sensing data into mathematical models of atmospheric pollutant species was investigated. The data assimilation problem is posed in terms of the matching of spatially integrated species burden measurements to the predicted three-dimensional concentration fields from atmospheric diffusion models. General conditions were derived for the reconstructability of atmospheric concentration distributions from data typical of remote sensing applications, and a computational algorithm (filter) for the processing of remote sensing data was developed

  19. Frequency doubling in poled polymers using anomalous dispersion phase-matching

    Energy Technology Data Exchange (ETDEWEB)

    Kowalczyk, T.C.; Singer, K.D. [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Physics; Cahill, P.A. [Sandia National Labs., Albuquerque, NM (United States)

    1995-10-01

    The authors report on second harmonic generation in a poled polymer waveguide using anomalous dispersion phase-matching. Blue light (λ = 407 nm) was produced by phase-matching the lowest-order fundamental and harmonic modes over a distance of 32 μm. The experimental conversion efficiency was η = 1.2 × 10⁻⁴, in agreement with theory. Additionally, they discuss a method of enhancing the conversion efficiency for second harmonic generation by using anomalous dispersion phase-matching to optimize Cerenkov second harmonic generation. The modeling shows that a combination of phase-matching techniques creates larger conversion efficiencies and reduces the critical fabrication requirements of the individual phase-matching techniques.
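    The underlying condition (standard nonlinear optics, stated here for context rather than taken from the paper) is that the phase mismatch between the fundamental and the second harmonic must vanish:

        \Delta k \;=\; k(2\omega) - 2\,k(\omega)
                 \;=\; \frac{2\omega}{c}\,\bigl[\,n(2\omega) - n(\omega)\,\bigr] \;=\; 0
        \quad\Longleftrightarrow\quad n(2\omega) = n(\omega).

    Normal material dispersion gives $n(2\omega) > n(\omega)$; placing the chromophore's absorption band between $\omega$ and $2\omega$ produces the anomalous dispersion that restores the equality in the poled polymer.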

  20. Probabilistic seismic history matching using binary images

    Science.gov (United States)

    Davolio, Alessandra; Schiozer, Denis Jose

    2018-02-01

    Currently, the goal of history-matching procedures is not only to provide a model matching any observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included into history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude or pressure, and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative to avoid these procedures is using binary images in SHM as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history matchings were conducted using: only well data, well and 4DS data, and only 4DS. The methodology is very flexible and successfully utilized the addition of binary images for seismic objective functions. Results proved the good convergence of the method in few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed on 4DS data. The use of binary images in SHM is relatively new with few examples in the literature. This work enriches this discussion by presenting a new
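    The record does not define the image-dissimilarity measure, so as one plausible choice in the spirit of the methodology, the sketch below scores shape agreement between binary anomaly maps as one minus the Jaccard similarity; the masks are placeholders, not Norne data.

        # Score the match between observed and simulated anomaly shapes.
        import numpy as np

        def binary_mismatch(observed, simulated):
            """1 - Jaccard similarity of two boolean masks (0 = identical shape)."""
            inter = np.logical_and(observed, simulated).sum()
            union = np.logical_or(observed, simulated).sum()
            return 1.0 - inter / union if union else 0.0

        obs = np.zeros((50, 50), dtype=bool)   # observed pressurized zone (placeholder)
        obs[10:30, 15:35] = True
        sim = np.zeros((50, 50), dtype=bool)   # zone predicted by one reservoir model
        sim[12:32, 15:35] = True
        print(round(binary_mismatch(obs, sim), 3))   # candidate objective-function term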

  1. Modeling rainfall-runoff process using soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    Rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measurements of independent variables of rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP) which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the model was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling rainfall-runoff process and is a viable alternative to other applied artificial intelligence and MLR time-series methods.
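    The goodness-of-fit measures listed above have standard definitions, collected here in minimal form; the sample arrays are placeholders, not the study's data.

        # RMSE, MAE, Nash-Sutcliffe efficiency, R^2 and scatter index.
        import numpy as np

        def fit_metrics(obs, sim):
            err = sim - obs
            rmse = float(np.sqrt(np.mean(err**2)))
            mae = float(np.mean(np.abs(err)))
            ce = float(1.0 - np.sum(err**2)
                       / np.sum((obs - obs.mean())**2))        # Nash-Sutcliffe
            r2 = float(np.corrcoef(obs, sim)[0, 1]**2)
            si = rmse / float(obs.mean())                      # scatter index
            return {"RMSE": rmse, "MAE": mae, "CE": ce, "R2": r2, "SI": si}

        obs = np.array([12.0, 30.5, 8.2, 55.1, 20.3])   # observed runoff (l/s)
        sim = np.array([10.8, 33.0, 9.0, 50.2, 22.1])   # simulated runoff (l/s)
        print(fit_metrics(obs, sim))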

  2. An Implementation of the Frequency Matching Method

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer

    During the last decade multiple-point statistics has become increasingly popular as a tool for incorporating complex prior information when solving inverse problems in geosciences. A variety of methods have been proposed, but often the implementation of these is not straightforward. One of these methods is the recently proposed Frequency Matching method to compute the maximum a posteriori model of an inverse problem where multiple-point statistics, learned from a training image, is used to formulate a closed form expression for an a priori probability density function. This paper discusses aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method...

  3. New Cosmic Center Universe Model Matches Eight of Big Bang's Major Predictions Without The F-L Paradigm

    CERN Document Server

    Gentry, R V

    2003-01-01

    Accompanying disproof of the F-L expansion paradigm eliminates the basis for expansion redshifts, which in turn eliminates the basis for the Cosmological Principle. The universe is not the same everywhere. Instead, the spherical symmetry of the cosmos demanded by the Hubble redshift relation proves the universe is isotropic about a nearby universal Center. This is the foundation of the relatively new Cosmic Center Universe (CCU) model, which accounts for, explains, or predicts: (i) the Hubble redshift relation, (ii) a CBR redshift relation that fits all current CBR measurements, (iii) the recently discovered velocity dipole distribution of radiogalaxies, (iv) the well-known time dilation of SNeIa light curves, (v) the Sunyaev-Zeldovich thermal effect, (vi) Olbers' paradox, (vii) SN dimming for z < 1, (viii) for z > 1 an enhanced brightness that fits SN 1997ff measurements, (ix) the existence of extreme redshift (z > 10) objects which, when observed, will further distinguish it from the big bang. The CCU model also plausibly expl...

  4. Virtual planning of complex head and neck reconstruction results in satisfactory match between real outcomes and virtual models.

    Science.gov (United States)

    Hanken, Henning; Schablowsky, Clemens; Smeets, Ralf; Heiland, Max; Sehner, Susanne; Riecke, Björn; Nourwali, Ibrahim; Vorwig, Oliver; Gröbe, Alexander; Al-Dam, Ahmed

    2015-04-01

    The reconstruction of large facial bony defects using microvascular transplants requires extensive surgery to achieve full rehabilitation of form and function. The purpose of this study is to measure the agreement between virtual plans and the actual results of maxillofacial reconstruction. This retrospective cohort study included 30 subjects receiving maxillofacial reconstruction with preoperative virtual planning. Parameters including defect size, position, angle and volume of the transplanted segments were compared between the virtual plan and the real outcome using paired t tests. A total of 63 bone segments were transplanted. The mean differences between the virtual planning and the postoperative situation were 1.17 mm for the defect sizes (95 % confidence interval (CI) -.21 to 2.56 mm; p = 0.094) and, for the resection planes, 1.69 mm (95 % CI 1.26-2.11; p = 0.033) and 10.16° (95 % CI 8.36°-11.96°; p …). Satisfactory postoperative results are the basis for an optimal functional and aesthetic reconstruction in a single surgical procedure. The technique should be further investigated in larger study populations and should be further improved.

  5. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter r_D. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple r_D values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different r_D parameters, principal component analysis (PCA) of the Hessian matrix has been performed. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to the model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid-block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which can be computationally costly and restricts the current approach to
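    The domain-delineation idea can be sketched in a few lines: eigen-decompose a Hessian of the history-matching objective and group grid blocks by the dominant, suitably scaled eigenvectors. The 6x6 Hessian below is a made-up stand-in, not reservoir data.

        # PCA of the Hessian to propose candidate reservoir domains.
        import numpy as np

        H = np.array([[4.0, 1.5, 0.2, 0.0, 0.0, 0.0],
                      [1.5, 3.5, 0.3, 0.0, 0.1, 0.0],
                      [0.2, 0.3, 0.8, 0.1, 0.0, 0.0],
                      [0.0, 0.0, 0.1, 2.5, 1.2, 0.2],
                      [0.0, 0.1, 0.0, 1.2, 2.0, 0.1],
                      [0.0, 0.0, 0.0, 0.2, 0.1, 0.3]])   # symmetric sensitivities

        vals, vecs = np.linalg.eigh(H)                    # ascending eigenvalues
        order = np.argsort(vals)[::-1]
        vals, vecs = vals[order], vecs[:, order]

        # Keep the leading components (most sensitive, least correlated)
        k = 2
        loadings = np.abs(vecs[:, :k]) * np.sqrt(vals[:k])  # scaled eigenvectors

        # Assign each grid block to the component loading on it most strongly:
        # candidate domains for separate r_D deformation parameters
        domains = np.argmax(loadings, axis=1)
        print(domains)        # e.g. blocks {0,1,2} vs {3,4,5}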

  6. Validation techniques of agent based modelling for geospatial simulations

    OpenAIRE

    Darvishi, M.; Ahmadi, G.

    2014-01-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that operate at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent...

  7. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires an understanding of the snow cover dynamics on a watershed. In our work, the problem of modeling snow cover depth is addressed using both available databases of hydro-meteorological observations and easily accessible scientific software, which allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on snow cover and surface meteorological parameters obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of snow cover depth is based on a set of freely distributed, modern machine learning models: Decision Trees, Adaptive Boosting, and Gradient Boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The targeted character of the learning process in gradient-boosting models, their ensemble nature, and the use of a held-out test sample in the learning procedure make this type of model a good and robust research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
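    A minimal version of the best-performing approach above, gradient boosting over decision trees predicting snow depth from daily meteorology, can be sketched with scikit-learn; the feature set and synthetic data are placeholders, not the study's observations.

        # Gradient boosting regression for snow depth (illustrative data).
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(42)
        n = 2000
        X = np.column_stack([
            rng.uniform(-30, 10, n),     # daily mean air temperature (degC)
            rng.uniform(0, 20, n),       # daily precipitation (mm)
            rng.integers(0, 365, n),     # day of year
        ])
        # Synthetic target: snow accumulates when cold and wet (illustration only)
        y = np.clip(-X[:, 0], 0, None) * 0.8 \
            + X[:, 1] * (X[:, 0] < 0) * 2.0 + rng.normal(0, 5, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                          max_depth=3, random_state=0)
        model.fit(X_tr, y_tr)
        print("MAE (cm):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))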

  8. Modeling and Control of Multivariable Process Using Intelligent Techniques

    Directory of Open Access Journals (Sweden)

    Subathra Balasubramanian

    2010-10-01

    Full Text Available For nonlinear dynamic systems, first-principles-based modeling and control are difficult to implement. In this study, a fuzzy controller and a recurrent fuzzy controller are developed for a MIMO process. A fuzzy logic controller is a model-free controller designed based on knowledge about the process. Two types of rule-based fuzzy models are available: the linguistic (Mamdani) model and the Takagi-Sugeno (TS) model. Of these two, the Takagi-Sugeno model has attracted the most attention. The application of fuzzy controllers is limited to static processes due to their feedforward structure. However, most real-time processes are dynamic and require the history of input/output data. In order to store past values a memory unit is needed, which is introduced by the recurrent structure. The proposed recurrent fuzzy structure is used to develop a controller for a two-tank heating process. Both controllers are designed and implemented in a real-time environment and their performance is compared.
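
    A minimal sketch of a Takagi-Sugeno rule evaluation may clarify the model type discussed above: each rule couples a fuzzy antecedent with a linear consequent, and the controller output is the firing-strength-weighted average of the consequents. The membership centres and gains below are invented for illustration, not taken from the study.

        import numpy as np

        def gauss(x, c, s):
            # Gaussian membership function centred at c with width s.
            return np.exp(-0.5 * ((x - c) / s) ** 2)

        def ts_control(error, d_error):
            # Rule antecedents: "error is negative" / "error is positive".
            w = np.array([gauss(error, -1.0, 0.7), gauss(error, 1.0, 0.7)])
            # Rule consequents: linear functions of the inputs
            # (gains are invented for illustration).
            u = np.array([-2.0 * error - 0.5 * d_error,
                          -1.0 * error - 0.2 * d_error])
            # Output: firing-strength-weighted average of the consequents.
            return float(w @ u / (w.sum() + 1e-12))

        print(ts_control(error=0.8, d_error=-0.1))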

  9. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W

    2001-03-01

    For the development of radioactive material transport packages, verification of the structural safety of a package against the free-drop impact accident should be carried out. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows for parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although the usage of LS-INGRID seems difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. Therefore, LS-INGRID has to be used with LS-DYNA. This report presents basic explanations of the structure and commands, basic modelling examples and advanced modelling with LS-INGRID, for use in the impact analysis of various packages. New users can easily build complex models, from the modelling itself to the loading and constraint conditions, by studying the basic examples presented in this report.

  10. Computational modelling of the HyperVapotron cooling technique

    Energy Technology Data Exchange (ETDEWEB)

    Milnes, Joseph, E-mail: Joe.Milnes@ccfe.ac.uk [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Burns, Alan [School of Process Material and Environmental Engineering, CFD Centre, University of Leeds, Leeds, LS2 9JT (United Kingdom); ANSYS UK, Milton Park, Oxfordshire (United Kingdom); Drikakis, Dimitris [Department of Engineering Physics, Cranfield University, Cranfield, MK43 0AL (United Kingdom)

    2012-09-15

    Highlights: • The heat transfer mechanisms within a HyperVapotron are examined. • A multiphase CFD model is developed. • Modelling choices for turbulence and wall boiling are evaluated. • Considerable improvements in accuracy are found compared to standard boiling models. • The model should enable significant virtual prototyping to be performed. - Abstract: Efficient heat transfer technologies are essential for magnetically confined fusion reactors; this applies to both the current generation of experimental reactors as well as future power plants. A number of High Heat Flux devices have therefore been developed specifically for this application. One of the most promising candidates is the HyperVapotron, a water-cooled device which relies on internal fins and boiling heat transfer to maximise the heat transfer capability. Over the past 30 years, numerous variations of the HyperVapotron have been built and tested at fusion research centres around the globe, resulting in devices that can now sustain heat fluxes in the region of 20-30 MW/m² in steady state. Until recently, there had been few attempts to model or understand the internal heat transfer mechanisms responsible for this exceptional performance, with the result that design improvements have traditionally been sought experimentally, which is both inefficient and costly. This paper presents a successful attempt to develop an engineering model of the HyperVapotron device using customisation of commercial Computational Fluid Dynamics software. To establish the most appropriate modelling choices, in-depth studies were performed examining the turbulence models (within the Reynolds Averaged Navier Stokes framework), near-wall methods, grid resolution and boiling submodels. Comparing the CFD solutions with HyperVapotron experimental data suggests that a RANS-based, multiphase

  11. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.

    Science.gov (United States)

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-01-01

    Vector-like quarks feature in a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to precise extraction of experimental limits on vector-like quarks thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.

  12. QCD next-to-leading order predictions matched to parton showers for vector-like quark models

    CERN Document Server

    Fuks, Benjamin

    2017-02-27

    Vector-like quarks feature in a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to precise extraction of experimental limits on vector-like quarks...

  13. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
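
    For readers unfamiliar with the mechanics, a minimal propensity-score-matching sketch on synthetic data is given below: a logistic model estimates the scores, and each treated unit is matched to the nearest control on that score. The data-generating process and the one-to-one matching choice are assumptions made for illustration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 3))          # observed covariates
        treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
        outcome = 2.0 * treated + X[:, 0] + rng.normal(size=1000)

        # 1. Estimate propensity scores with a logistic model.
        ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

        # 2. Match each treated unit to the nearest control on the score.
        t_idx = np.flatnonzero(treated == 1)
        c_idx = np.flatnonzero(treated == 0)
        nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx, None])
        _, match = nn.kneighbors(ps[t_idx, None])

        # 3. ATT = mean outcome difference across matched pairs.
        att = np.mean(outcome[t_idx] - outcome[c_idx[match.ravel()]])
        print("estimated ATT:", att)   # true effect is 2.0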

  14. APPLICATION OF COOPERATIVE LEARNING MODEL INDEX CARD MATCH TYPE IN IMPROVING STUDENT LEARNING RESULTS ON THE SUBJECT MATTER OF COMPOSITION FUNCTION AND INVERSE FUNCTION IN MAN 1 MATARAM

    Directory of Open Access Journals (Sweden)

    Syahrir Syahrir

    2017-12-01

    Full Text Available Lack of student response in learning mathematics is caused by student passivity during the learning process, so that students consider mathematics a difficult subject to understand. This research is Classroom Action Research (PTK) using 2 cycles, and its purpose is to examine how the implementation of cooperative learning of the index card match type improves student learning outcomes on the subject matter of composition function and inverse function in MAN 1 Mataram. The results of the analysis showed that cycle I obtained classical completeness of 78.79%, with an average student learning outcome score of 69.78 and an average student learning response in the Enough category; cycle II showed classical completeness of 87.89%, with an average student learning outcome score of 78.94 and an average student learning response in the Good category. It can therefore be concluded that implementation of the Cooperative Learning Model of the Index Card Match Type can improve student learning outcomes on the subject matter of composition function and inverse function.

  15. Anomalous dispersion enhanced Cerenkov phase-matching

    Energy Technology Data Exchange (ETDEWEB)

    Kowalczyk, T.C.; Singer, K.D. [Case Western Reserve Univ., Cleveland, OH (United States). Dept. of Physics; Cahill, P.A. [Sandia National Labs., Albuquerque, NM (United States)

    1993-11-01

    The authors report on a scheme for phase-matching second harmonic generation in polymer waveguides based on the use of anomalous dispersion to optimize Cerenkov phase matching. They have used the theoretical results of Hashizume et al. and Onda and Ito to design an optimum structure for phase-matched conversion. They have found that the use of anomalous dispersion in the design results in a 100-fold enhancement in the calculated conversion efficiency. This technique also overcomes the limitation of anomalous dispersion phase-matching which results from absorption at the second harmonic. Experiments are in progress to demonstrate these results.

  16. Territories typification technique with use of statistical models

    Science.gov (United States)

    Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.

    2018-05-01

    Typification of territories is required for the solution of many problems. The results of geological zoning obtained by means of various methods do not always agree. That is why the main goal of this research is to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using the complex multidimensional probabilistic indicator P_K as a criterion of the classification. The second criterion chosen is the multidimensional standard classified indicator Z. These can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators P_K and Z are in good correlation: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so each indicator can be used in geological-engineering zoning. The method suggested has been tested and a schematic zoning map has been drawn.

  17. Nuclear-fuel-cycle optimization: methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book presents methods applicable to analyzing fuel-cycle logistics and optimization as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle, from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables

  18. Simplified Model Surgery Technique for Segmental Maxillary Surgeries

    Directory of Open Access Journals (Sweden)

    Namit Nagar

    2011-01-01

    Full Text Available Model surgery is the dental cast version of cephalometric prediction of surgical results. Patients having vertical maxillary excess with prognathism invariably require a Le Fort I osteotomy with maxillary segmentation and maxillary first premolar extractions during surgery. Traditionally, model surgeries in these cases have been done by sawing the model through the first premolar interproximal area and removing that segment. This clinical innovation employed X-ray film strips as separators in the maxillary first premolar interproximal area. The method advocated is a time-saving procedure in which no special clinical or laboratory tools, such as a plaster saw (with accompanying plaster dust), are required, and reusable separators are made from old and discarded X-ray films.

  19. Matching global and regional distribution models of the recluse spider Loxosceles rufescens: to what extent do these reflect niche conservatism?

    Science.gov (United States)

    Taucare-Ríos, A; Nentwig, W; Bizama, G; Bustamante, R O

    2018-06-08

    The Mediterranean recluse spider, Loxosceles rufescens (Dufour, 1820) (Araneae: Sicariidae) is a cosmopolitan spider that has been introduced in many parts of the world. Its bite can be dangerous to humans. However, the potential distribution of this alien species, which is able to spread fairly quickly with human aid, is completely unknown. Using a combination of global and regional niche models, it is possible to analyse the spread of this species in relation to environmental conditions. This analysis found that the successful spreading of this species varies according to the region invaded. The majority of populations in Asia are stable and show niche conservatism, whereas in North America this spider is expected to be less successful in occupying niches that differ from those in its native region and that do not support its synanthropic way of living. © 2018 The Royal Entomological Society.

  20. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  1. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  2. Determining Plutonium Mass in Spent Fuel with Nondestructive Assay Techniques -- Preliminary Modeling Results Emphasizing Integration among Techniques

    International Nuclear Information System (INIS)

    Tobin, S.J.; Fensin, M.L.; Ludewigt, B.A.; Menlove, H.O.; Quiter, B.J.; Sandoval, N.P.; Swinhoe, M.T.; Thompson, S.J.

    2009-01-01

    There are a variety of motivations for quantifying Pu in spent (used) fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at reprocessing facilities, and providing quantitative input to burnup credit determination for repositories. For the purpose of determining the Pu mass in spent fuel assemblies, twelve NDA techniques were identified that provide information about the composition of an assembly. A key point motivating the present research path is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the elemental Pu mass of an assembly and (2) detecting the diversion of a significant number of pins. As such, the focus of this work is determining how to best integrate 2 or 3 techniques into a system that can quantify elemental Pu and to assess how well this system can detect material diversion. Furthermore, it is important economically to down-select among the various techniques before advancing to the experimental phase. In order to achieve this dual goal of integration and down-selection, a Monte Carlo library of PWR assemblies was created; it is described in another paper at Global 2009 (Fensin et al.). The research presented here emphasizes integration among techniques. An overview of a five-year research plan starting in 2009 is given. Preliminary modeling results from the Monte Carlo assembly library are presented for 3 NDA techniques: Delayed Neutrons, Differential Die-Away, and Nuclear Resonance Fluorescence. As part of the focus on integration, the concept of 'Pu isotopic correlation' is discussed, as well as the role of cooling time determination.

  3. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  4. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  5. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  6. Biliary System Architecture: Experimental Models and Visualization Techniques

    Czech Academy of Sciences Publication Activity Database

    Sarnová, Lenka; Gregor, Martin

    2017-01-01

    Roč. 66, č. 3 (2017), s. 383-390 ISSN 0862-8408 R&D Projects: GA MŠk(CZ) LQ1604; GA ČR GA15-23858S Institutional support: RVO:68378050 Keywords : Biliary system * Mouse model * Cholestasis * Visualisation * Morphology Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Cell biology Impact factor: 1.461, year: 2016

  7. Testing Model with "Check Technique" for Physics Education

    Science.gov (United States)

    Demir, Cihat

    2016-01-01

    Because the number, date and form of written tests are structured and teacher-oriented, they are considered to create fear and anxiety among students. It has been found necessary and important to form a testing model which will keep students away from test anxiety and allow them to learn only about the lesson. For this study,…

  8. Matching Games with Additive Externalities

    DEFF Research Database (Denmark)

    Branzei, Simina; Michalak, Tomasz; Rahwan, Talal

    2012-01-01

    Two-sided matchings are an important theoretical tool used to model markets and social interactions. In many real life problems the utility of an agent is influenced not only by their own choices, but also by the choices that other agents make. Such an influence is called an externality. Whereas ...

  9. An Implementation of Bigraph Matching

    DEFF Research Database (Denmark)

    Glenstrup, Arne John; Damgaard, Troels Christoffer; Birkedal, Lars

    We describe a provably sound and complete matching algorithm for bigraphical reactive systems. The algorithm has been implemented in our BPL Tool, a first implementation of bigraphical reactive systems. We describe the tool and present a concrete example of how it can be used to simulate a model...

  10. Data assimilation techniques and modelling uncertainty in geosciences

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available "You cannot step into the same river twice". Perhaps this ancient quote is the best phrase to describe the dynamic nature of the earth system. If we regard the earth as a several mixed systems, we want to know the state of the system at any time. The state could be time-evolving, complex (such as atmosphere or simple and finding the current state requires complete knowledge of all aspects of the system. On one hand, the Measurements (in situ and satellite data are often with errors and incomplete. On the other hand, the modelling cannot be exact; therefore, the optimal combination of the measurements with the model information is the best choice to estimate the true state of the system. Data assimilation (DA methods are powerful tools to combine observations and a numerical model. Actually, DA is an interaction between uncertainty analysis, physical modelling and mathematical algorithms. DA improves knowledge of the past, present or future system states. DA provides a forecast the state of complex systems and better scientific understanding of calibration, validation, data errors and their probability distributions. Nowadays, the high performance and capabilities of DA have led to extensive use of it in different sciences such as meteorology, oceanography, hydrology and nuclear cores. In this paper, after a brief overview of the DA history and a comparison with conventional statistical methods, investigated the accuracy and computational efficiency of two main classical algorithms of DA involving stochastic DA (BLUE and Kalman filter and variational DA (3D and 4D-Var, then evaluated quantification and modelling of the errors. Finally, some of DA applications in geosciences and the challenges facing the DA are discussed.

  11. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. The study distinguishes (1) UML-based modelling techniques, in which the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact on companies from increased availability of product knowledge and improved control...

  12. Semantic Data Matching: Principles and Performance

    Science.gov (United States)

    Deaton, Russell; Doan, Thao; Schweiger, Tom

    Automated and real-time management of customer relationships requires robust and intelligent data matching across widespread and diverse data sources. Simple string matching algorithms, such as dynamic programming, can handle typographical errors in the data, but are less able to match records that require contextual and experiential knowledge. Latent Semantic Indexing (LSI) (Berry et al.; Deerwester et al.) is a machine intelligence technique that can match data based upon higher-order structure, and is able to handle difficult problems, such as words that have different meanings but the same spelling, are synonymous, or have multiple meanings. Essentially, the technique matches records based upon context, mathematically quantifying when terms occur in the same record.
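
    A small sketch of the LSI step may help: a term-document matrix is projected onto a low-rank subspace via truncated SVD, and records are matched by cosine similarity in that subspace. The toy records, the character n-gram features and the rank are assumptions for illustration, not the authors' configuration.

        from sklearn.decomposition import TruncatedSVD
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        records = ["ACME Corp, 100 Main Street, Springfield",
                   "Acme Corporation, 100 Main St., Springfield",
                   "Widget Industries, 42 Elm Avenue, Shelbyville"]
        query = "ACME, Main Street Springfield"

        # Term-document matrix from character n-grams, then the low-rank
        # SVD projection that constitutes the LSI step.
        tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
        X = tfidf.fit_transform(records + [query])
        lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

        # Match the query to the record closest in the reduced space.
        sims = cosine_similarity(lsi[-1:], lsi[:-1]).ravel()
        print("best match:", records[sims.argmax()])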

  13. A Phase Matching, Adiabatic Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Lemery, Francois [Hamburg U.; Flöttmann, Klaus [DESY; Kärtner, Franz [CFEL, Hamburg; Piot, Philippe [Northern Illinois U.

    2017-05-01

    Tabletop accelerators are a thing of the future. Reducing their size will require scaling down electromagnetic wavelengths; however, without correspondingly high field gradients, particles will be more susceptible to phase-slippage – especially at low energy. We investigate how an adiabatically-tapered dielectric-lined waveguide could maintain phase-matching between the accelerating mode and electron bunch. We benchmark our simple model with CST and implement it into ASTRA; finally we provide a first glimpse into the beam dynamics in a phase-matching accelerator.

  14. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and Ionosonde drift velocities (hmf2 and fof2) has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low-latitude regions.
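
    The NN-PSO idea can be sketched as follows: a particle swarm searches the weight space of a small feed-forward network by minimizing training error, with no gradient computations. The network size, swarm parameters and synthetic S4-like data below are illustrative assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(7)
        X = rng.uniform(-1, 1, size=(200, 2))   # scaled stand-ins for foF2, hmF2
        y = 0.4 + 0.3 * np.tanh(2 * X[:, 0] - X[:, 1])  # synthetic S4-like target

        def forward(w, X):
            # Tiny 2-4-1 feed-forward net; w is a flat vector of 17 weights.
            W1, b1 = w[:8].reshape(2, 4), w[8:12]
            W2, b2 = w[12:16].reshape(4, 1), w[16]
            return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

        def mse(w):
            return np.mean((forward(w, X) - y) ** 2)

        # Plain global-best PSO over the 17 network parameters.
        n, dim = 30, 17
        pos = rng.normal(size=(n, dim))
        vel = np.zeros((n, dim))
        pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(300):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            f = np.array([mse(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        print("final training MSE:", mse(gbest))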

  15. Techniques for studies of unbinned model independent CP violation

    Energy Technology Data Exchange (ETDEWEB)

    Bedford, Nicholas; Weisser, Constantin; Parkes, Chris; Gersabeck, Marco; Brodzicka, Jolanta; Chen, Shanzhen [University of Manchester (United Kingdom)

    2016-07-01

    Charge-Parity (CP) violation is a known part of the Standard Model and has been observed and measured in both the B and K meson systems. The observed levels, however, are insufficient to explain the observed matter-antimatter asymmetry in the Universe, and so other sources need to be found. One area of current investigation is the D meson system, where predicted levels of CP violation are much lower than in the B and K meson systems. This means that more sensitive methods are required when searching for CP violation in this system. Several unbinned model independent methods have been proposed for this purpose, all of which need to be optimised and their sensitivities compared.

  16. An open data repository and a data processing software toolset of an equivalent Nordic grid model matched to historical electricity market data.

    Science.gov (United States)

    Vanfretti, Luigi; Olsen, Svein H; Arava, V S Narasimham; Laera, Giuseppe; Bidadfar, Ali; Rabuzin, Tin; Jakobsen, Sigurd H; Lavenius, Jan; Baudette, Maxime; Gómez-López, Francisco J

    2017-04-01

    This article presents an open data repository, the methodology used to generate it, and the associated data processing software developed to consolidate an hourly snapshot historical data set for the year 2015 into an equivalent Nordic power grid model (aka Nordic 44). The consolidation was achieved by matching the model's physical response with respect to historical power flow records in the bidding regions of the Nordic grid that are available from the Nordic electricity market agent, Nord Pool. The model is made available in the form of CIM v14, Modelica and PSS/E (Siemens PTI) files. The Nordic 44 model in Modelica and PSS/E was first presented in the paper titled "iTesla Power Systems Library (iPSL): A Modelica library for phasor time-domain simulations" (Vanfretti et al., 2016) [1] for a single snapshot. In the digital repository being made available with the submission of this paper (SmarTSLab_Nordic44 Repository at Github, 2016) [2], a total of 8760 snapshots (for the year 2015) are provided that can be used to initialize and execute dynamic simulations using tools compatible with CIM v14, the Modelica language and the proprietary PSS/E tool. The Python scripts used to generate the snapshots (processed data) are also available, together with all the data, in the GitHub repository (SmarTSLab_Nordic44 Repository at Github, 2016) [2]. This Nordic 44 equivalent model was also used in the iTesla project (iTesla) [3] to carry out simulations within a dynamic security assessment toolset (iTesla, 2016) [4], and has been further enhanced during the ITEA3 OpenCPS project (iTEA3) [5]. The raw data, processed data and output models utilized within the iTesla platform (iTesla, 2016) [4] are also available in the repository. The CIM and Modelica snapshots of the "Nordic 44" model for the year 2015 are available in a Zenodo repository.

  17. Optimal Packed String Matching

    DEFF Research Database (Denmark)

    Ben-Kiki, Oren; Bille, Philip; Breslauer, Dany

    2011-01-01

    In the packed string matching problem, each machine word accommodates α characters, thus an n-character text occupies n/α memory words. We extend the Crochemore-Perrin constant-space O(n)-time string matching algorithm to run in optimal O(n/α) time and even in real-time, achieving a factor α speed...

  18. Ontology Matching Across Domains

    Science.gov (United States)

    2010-05-01

    Techniques for ontology matching include GMO [1], Anchor-Prompt [2], and Similarity Flooding [3]. GMO is an iterative structural matcher which uses RDF bipartite graphs to... AFRL under contract# FA8750-09-C-0058. References: [1] Hu, W., Jian, N., Qu, Y., Wang, Y., "GMO: a graph matching for ontologies", in: Proceedings of

  19. Use of artificial intelligence as an innovative donor-recipient matching model for liver transplantation: results from a multicenter Spanish study.

    Science.gov (United States)

    Briceño, Javier; Cruz-Ramírez, Manuel; Prieto, Martín; Navasa, Miguel; Ortiz de Urbina, Jorge; Orti, Rafael; Gómez-Bravo, Miguel-Ángel; Otero, Alejandra; Varo, Evaristo; Tomé, Santiago; Clemente, Gerardo; Bañares, Rafael; Bárcena, Rafael; Cuervas-Mons, Valentín; Solórzano, Guillermo; Vinaixa, Carmen; Rubín, Angel; Colmenero, Jordi; Valdivieso, Andrés; Ciria, Rubén; Hervás-Martínez, César; de la Mata, Manuel

    2014-11-01

    There is an increasing discrepancy between the number of potential liver graft recipients and the number of organs available. Organ allocation should follow the concept of survival benefit, avoiding innate human subjectivity. The aim of this study is to use artificial neural networks (ANNs) for donor-recipient (D-R) matching in liver transplantation (LT) and to compare their accuracy with validated scores (MELD, D-MELD, DRI, P-SOFT, SOFT, and BAR) of graft survival. 64 donor and recipient variables from a set of 1003 LTs from a multicenter study including 11 Spanish centres were included. For each D-R pair, common statistics (simple and multiple regression models) and ANN formulae for two non-complementary probability models of 3-month graft survival and loss were calculated: a positive-survival (NN-CCR) and a negative-loss (NN-MS) model. The NN models were obtained by using the Neural Net Evolutionary Programming (NNEP) algorithm. Additionally, receiver operating curves (ROC) were performed to validate the ANNs against the other scores. Optimal results for the NN-CCR and NN-MS models were obtained, with the best performance in predicting the probability of graft survival (90.79%) and loss (71.42%) for each D-R pair, significantly improving on results from multiple regressions. ROC curves for 3-month graft survival and loss predictions were significantly more accurate for the ANN than for the other scores in both NN-CCR (AUROC-ANN=0.80 vs. -MELD=0.50; -D-MELD=0.54; -P-SOFT=0.54; -SOFT=0.55; -BAR=0.67 and -DRI=0.42) and NN-MS (AUROC-ANN=0.82 vs. -MELD=0.41; -D-MELD=0.47; -P-SOFT=0.43; -SOFT=0.57, -BAR=0.61 and -DRI=0.48). ANNs may be considered a powerful decision-making technology for this dataset, optimizing the principles of justice, efficiency and equity. This may be a useful tool for predicting the 3-month outcome and a potential research area for future D-R matching models. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights

  20. Sequence Matching Analysis for Curriculum Development

    Directory of Open Access Journals (Sweden)

    Liem Yenny Bendatu

    2015-06-01

    Full Text Available Many organizations apply information technologies to support their business processes. Using these information technologies, actual events are recorded and checked for conformance with a predefined model. Conformance checking is an approach to measure the fitness and appropriateness between a process model and actual events. However, when there are multiple events with the same timestamp, the traditional approach is unfit to produce such measures. This study attempts to develop a sequence matching analysis. Taking conformance checking as its basis, the proposed approach utilizes current control-flow techniques from the process mining domain. A case study in the field of educational processes has been conducted. This study also proposes a curriculum analysis framework to test the proposed approach. By considering the learning sequence of students, it yields measurements useful for curriculum development. Finally, the result of the proposed approach has been verified by relevant instructors for further development.

  1. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without an apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
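
    The standard-curve logic of the Materials and Methods can be sketched numerically: fit known Ca(OH)2 masses against titration readings, then invert the fit for treated roots. All numbers below are invented for illustration and are not the study's data.

        import numpy as np

        # Hypothetical standard curve: known Ca(OH)2 masses (mg) versus the
        # titrant volume (mL) needed to neutralize the recovered hydroxide.
        known_mass = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
        titrant_ml = np.array([0.05, 0.42, 0.81, 1.18, 1.62, 1.98])

        # Least-squares standard curve: mass as a function of titrant volume.
        slope, intercept = np.polyfit(titrant_ml, known_mass, 1)

        def residual_mass(measured_ml):
            # Interpolate residual Ca(OH)2 from a titration measurement.
            return slope * measured_ml + intercept

        # A treated root requiring 0.37 mL of titrant:
        print(f"residual Ca(OH)2 ~ {residual_mass(0.37):.2f} mg")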

  2. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  3. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating, from failure-on-demand data, the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low-failure-probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best
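
    Method (1), matching moments of the prior, is simple enough to sketch: treat the observed failure fractions as draws from the beta prior and solve for (alpha, beta) from their sample mean and variance. The failure counts below are hypothetical.

        import numpy as np

        def beta_prior_mom(failures, demands):
            # Treat observed failure fractions as draws from the beta prior
            # and match its first two moments (requires var < mu * (1 - mu)).
            p = failures / demands
            mu, var = p.mean(), p.var(ddof=1)
            common = mu * (1 - mu) / var - 1
            return mu * common, (1 - mu) * common

        # Hypothetical demands/failures for a standby component.
        k = np.array([0, 1, 0, 2, 0, 1, 0, 0])
        n = np.array([50, 60, 45, 80, 55, 70, 40, 65])
        alpha, beta = beta_prior_mom(k, n)
        print(f"alpha={alpha:.3f}, beta={beta:.3f}, "
              f"mean p={alpha / (alpha + beta):.4f}")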

  4. Application of nonlinear reduction techniques in chemical process modeling: a review

    International Nuclear Information System (INIS)

    Muhaimin, Z; Aziz, N.; Abd Shukor, S.R.

    2006-01-01

    Model reduction techniques have been used widely in engineering fields, for electrical, mechanical as well as chemical engineering. The basic idea of a reduction technique is to replace the original system by an approximating system of much smaller state-space dimension. A reduced-order model is more beneficial to process and industrial fields in terms of control purposes. This paper provides a review of the application of nonlinear reduction techniques in chemical processes. The advantages and disadvantages of each technique reviewed are also highlighted

  5. Nuclear fuel cycle optimization - methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book is aimed at presenting methods applicable in the analysis of fuel cycle logistics and optimization as well as in evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, comprising also other reactor types than the light water reactor, are confined to cost minimization. In the final Chapter, the integration of nuclear capacity within a generating system is examined. (author)

  6. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    2000-10-01

    Full Text Available A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images from Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by a non-hydrostatic meteorological model, ARPS, for daily weather prediction, and their results are compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results. Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general) – General (new fields)

  7. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    Full Text Available A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images from Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by a non-hydrostatic meteorological model, ARPS, for daily weather prediction, and their results are compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results.

    Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general) – General (new fields)

  8. Fuel element transfer cask modelling using MCNP technique

    International Nuclear Information System (INIS)

    Rosli Darmawan

    2009-01-01

    Full text: After operating for more than 25 years, some of the Reaktor TRIGA PUSPATI (RTP) fuel elements will have been depleted. A few fuel addition and reconfiguration exercises have to be conducted in order to maintain RTP capacity. Presently, RTP spent fuels are stored in the storage area inside the RTP tank. The need to transfer fuel elements out of the RTP tank may become prevalent in the near future, and preparation shall start now. A fuel element transfer cask has been designed according to the recommendations of the fuel manufacturer and the experience of other countries. Modelling using the MCNP code has been conducted to analyse the design. The result shows that the transfer cask design is safe for handling fuel elements outside the RTP tank according to recent regulatory requirements. (author)

  9. Fuel Element Transfer Cask Modelling Using MCNP Technique

    International Nuclear Information System (INIS)

    Darmawan, Rosli; Topah, Budiman Naim

    2010-01-01

    After operating for more than 25 years, some of the Reaktor TRIGA Puspati (RTP) fuel elements will have been depleted. A few fuel addition and reconfiguration exercises have to be conducted in order to maintain RTP capacity. Presently, RTP spent fuels are stored in the storage area inside the RTP tank. The need to transfer fuel elements out of the RTP tank may become prevalent in the near future, and preparation shall start now. A fuel element transfer cask has been designed according to the recommendations of the fuel manufacturer and the experience of other countries. Modelling using the MCNP code has been conducted to analyse the design. The result shows that the transfer cask design is safe for handling fuel elements outside the RTP tank according to recent regulatory requirements.

  10. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, one that is more powerful and potentially dangerous than any other type. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
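
    As a rough illustration of the epidemiological modeling direction proposed above, the sketch below integrates a classic SEIR system (a simplified stand-in for the MSEIR model, without the jump-diffusion component) to trace the fraction of active bots over time. The rate constants are invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        def seir(t, y, beta, sigma, gamma):
            s, e, i, r = y
            return [-beta * s * i,             # hosts being probed/exploited
                    beta * s * i - sigma * e,  # compromised, not yet active
                    sigma * e - gamma * i,     # active, C2-driven bots
                    gamma * i]                 # cleaned or patched hosts

        sol = solve_ivp(seir, (0, 120), [0.999, 0.0, 0.001, 0.0],
                        args=(0.35, 0.2, 0.05), dense_output=True)
        t = np.linspace(0, 120, 7)
        print(np.round(sol.sol(t)[2], 3))      # active-bot fraction over time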

  11. Multivariate moment closure techniques for stochastic kinetic models

    International Nuclear Information System (INIS)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-01-01

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay arises between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs

  12. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay arises between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  13. High-efficiency resonant coupled wireless power transfer via tunable impedance matching

    Science.gov (United States)

    Anowar, Tanbir Ibne; Barman, Surajit Das; Wasif Reza, Ahmed; Kumar, Narendra

    2017-10-01

    For magnetic resonant coupled wireless power transfer (WPT), the axial movement of near-field coupled coils degrades the power transfer efficiency (PTE) of the system and often creates sub-resonance. This paper presents a tunable impedance matching technique based on optimum coupling tuning to enhance the efficiency of a resonant coupled WPT system. The optimum power transfer model is analysed from the equivalent circuit model via the reflected load principle, and adequate matching is achieved through optimum tuning of the coupling coefficients at both the transmitting and receiving ends of the system. Both simulations and experiments are performed to evaluate the theoretical model of the proposed matching technique, which yields a PTE over 80% at close coil proximity without shifting the original resonant frequency. Compared to fixed coupled WPT, the extracted efficiency shows 15.1% and 19.9% improvements at centre-to-centre misalignments of 10 and 70 cm, respectively. Applying this technique, the extracted S21 parameter shows more than 10 dB improvement at both strong and weak couplings. Through the developed model, optimum coupling tuning also significantly outperforms matching techniques using frequency tracking and tunable matching circuits.
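
    The reflected-load reasoning can be illustrated with the standard two-coil link-efficiency expression at resonance under optimal load matching, where the figure of merit is U^2 = k^2 * Q1 * Q2. This textbook formula is used here as an illustrative model only, not as the paper's exact derivation; the quality factors below are assumed values.

        import numpy as np

        def pte_optimal_load(k, q1, q2):
            # Link efficiency at resonance with an optimally matched load;
            # U^2 = k^2 * Q1 * Q2 is the usual figure of merit.
            u2 = k ** 2 * q1 * q2
            return u2 / (1 + np.sqrt(1 + u2)) ** 2

        for k in (0.30, 0.10, 0.02):           # strong to weak coupling
            print(f"k={k:.2f}: PTE = {pte_optimal_load(k, 300, 300):.1%}")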

  14. Pediatric MATCH Infographic

    Science.gov (United States)

    Infographic explaining NCI-COG Pediatric MATCH, a cancer treatment clinical trial for children and adolescents, from 1 to 21 years of age, that is testing the use of precision medicine for pediatric cancers.

  15. Data Matching Imputation System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The DMIS dataset is a flat file record of the matching of several data set collections. Primarily it consists of VTRs, dealer records, Observer data in conjunction...

  16. Pre-analysis techniques applied to area-based correlation aiming Digital Terrain Model generation

    Directory of Open Access Journals (Sweden)

    Maurício Galo

    2005-12-01

    Full Text Available Area-based matching is a useful procedure in some photogrammetric processes, and its results are of crucial importance in applications such as relative orientation, phototriangulation and Digital Terrain Model generation. The successful determination of correspondence depends on radiometric and geometric factors. Considering these aspects, the use of procedures that previously estimate the quality of the parameters to be computed is a relevant issue. This paper describes these procedures and shows that the quality prediction can be computed before performing matching by correlation, through analysis of the reference window. This procedure can be incorporated into the correspondence process for Digital Terrain Model generation and phototriangulation. The proposed approach comprises the estimation of the variance matrix of the translations from the gray levels in the reference window, and the reduction of the search space using knowledge of the epipolar geometry. As a consequence, the correlation process becomes more reliable, avoiding the application of matching procedures in doubtful areas. Some experiments with simulated and real data are presented, evidencing the efficiency of the studied strategy.
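
    The pre-analysis idea, estimating the achievable precision of the translation parameters from the reference window alone, can be sketched via the inverse of the grey-level structure tensor. The noise variance and the skip-matching policy below are illustrative assumptions, not the authors' formulation.

        import numpy as np

        def translation_covariance(window, noise_var=4.0):
            # Covariance of the (dx, dy) translation estimated from the
            # grey-level gradients: inverse structure tensor times noise.
            gy, gx = np.gradient(window.astype(float))
            n = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                          [np.sum(gx * gy), np.sum(gy * gy)]])
            return noise_var * np.linalg.inv(n)

        rng = np.random.default_rng(5)
        textured = rng.integers(0, 255, (21, 21))              # good texture
        flat = 128 + rng.normal(0, 1, (21, 21))                # low contrast

        for name, w in (("textured", textured), ("flat", flat)):
            sd = np.sqrt(np.diag(translation_covariance(w)))
            # Matching would be skipped when the predicted precision is poor.
            print(name, "predicted sigma(dx, dy):", sd.round(3))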

  17. Magnetic safety matches

    Science.gov (United States)

    Lindén, J.; Lindberg, M.; Greggas, A.; Jylhävuori, N.; Norrgrann, H.; Lill, J. O.

    2017-07-01

    In addition to the main ingredients (sulfur, potassium chlorate and carbon), ordinary safety matches contain various dyes, glues etc., giving the head of the match an even texture and an appealing color. Among the common reddish-brown matches there are several types which, after ignition, can be attracted by a strong magnet. Before ignition the match head is generally not attracted by the magnet. An elemental analysis based on proton-induced X-ray emission was performed to single out iron as the element responsible for the observed magnetism. 57Fe Mössbauer spectroscopy was used to identify the iron compounds, present before and after ignition, responsible for the macroscopic magnetism: Fe2O3 before and Fe3O4 after. The reaction was verified by mixing the main chemicals of the match head with Fe2O3 in glue and mounting the mixture on a match stick. The ash residue after igniting the mixture was magnetic.

  18. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Energy Technology Data Exchange (ETDEWEB)

    Amicarelli, A; Pelliccioni, A [ISPESL - Dipartimento Insediamenti Produttivi e Interazione con l' Ambiente, Via Fontana Candida, 1 00040 Monteporzio Catone (RM) Italy (Italy); Finardi, S; Silibello, C [ARIANET, via Gilino 9, 20128 Milano (Italy); Gariazzo, C

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.
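
    The nudging idea itself is compact: an additional relaxation term pulls the model state toward the observations. A minimal sketch, with toy decay dynamics standing in for the RAMS model equations:

```python
import numpy as np

# Minimal sketch of Newtonian relaxation ("nudging"): an extra term relaxes
# the model state toward the observed value with strength G. The toy decay
# dynamics below are an assumption for illustration, not the RAMS equations.
def step(x, x_obs, dt=30.0, G=0.01):
    tendency = -0.01 * x            # placeholder model dynamics
    nudge = G * (x_obs - x)         # relaxation toward the observation
    return x + dt * (tendency + nudge)

x, x_obs = 10.0, 4.0
for _ in range(100):
    x = step(x, x_obs)
# The state settles between the free-model equilibrium (0) and the
# observation, reflecting the compromise the nudging term enforces.
print(f"nudged state: {x:.2f} (observation: {x_obs})")
```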

  19. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Science.gov (United States)

    Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  20. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    International Nuclear Information System (INIS)

    Amicarelli, A; Pelliccioni, A; Finardi, S; Silibello, C; Gariazzo, C

    2008-01-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  1. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  2. ECMOR 4. 4th European conference on the mathematics of oil recovery. Topic E: History match and recovery optimization. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1994-01-01

    The report collects the proceedings of a conference on the mathematics of oil recovery, with a focus on history matching and recovery optimization. The topics of the proceedings are as follows: calculating optimal parameters for history matching; a new technique to improve the efficiency of history matching of full-field models; flow-constrained reservoir characterization using Bayesian inversion; analysis of multi-well pressure transient data; a new approach combining neural networks and simulated annealing for solving petroleum inverse problems; automatic history matching by use of response surfaces and experimental design; and determining the optimum location of a production well in oil reservoirs. Seven papers are included. 108 refs., 45 figs., 12 tabs.
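
    One listed topic, automatic history matching by use of response surfaces and experimental design, can be sketched briefly: evaluate the (assumed expensive) simulator mismatch at a few design points, fit a cheap surrogate, and minimize the surrogate instead. All functions and values below are hypothetical stand-ins for a reservoir simulator.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of history matching with a response surface. The
# "simulator" below is a hypothetical stand-in for an expensive reservoir
# model whose mismatch to production history we want to minimize.
def simulator_mismatch(perm_md):
    return (np.log(perm_md) - np.log(150.0)) ** 2   # truth: 150 mD

design = np.array([50.0, 100.0, 200.0, 400.0])      # experimental design points
y = np.array([simulator_mismatch(p) for p in design])
coeffs = np.polyfit(np.log(design), y, 2)           # quadratic response surface
res = minimize(lambda z: float(np.polyval(coeffs, z[0])), x0=[np.log(100.0)])
print(f"history-matched permeability ~ {np.exp(res.x[0]):.0f} mD")
```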

  3. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  4. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.; Hoteit, Ibrahim; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A.; Schumacher, M.; Pattiaratchi, C.

    2017-01-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques

  5. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    This paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond Graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a technique for generating a genetic design from the tree-structured transfer function obtained from a Bond Graph. The work combines Bond Graphs for model representation with Genetic Programming for exploring the design space: the tree-structured transfer function results from replacing each typical Bond Graph element with its impedance equivalent, specifying impedance laws for the Bond Graph multiport, and the tree-structured form thus obtained is used to generate the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards an effective and efficient design tool for synthesizing electrical system designs. In the first phase, the system is modeled using the Bond Graph technique; its system response and transfer function are analyzed with the conventional and the Bond Graph methods, and an approach towards model order reduction is then considered. The suggested algorithm and other known modern model order reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model order reduction technique developed in this paper has the smallest reduction errors, and the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph methods are compared.
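
    For readers unfamiliar with model order reduction, the sketch below shows one standard method, balanced truncation, applied to a random stable state-space model. It is a generic illustration of order reduction, not the Bond Graph/Genetic Programming procedure of the paper.

```python
import numpy as np
from scipy import linalg

# Generic sketch of balanced truncation: balance the controllability and
# observability Gramians, then keep the states with the largest Hankel
# singular values. Shown on a random stable system, not the paper's filter.
def balanced_truncation(A, B, C, r):
    Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    L = linalg.cholesky(Wc, lower=True)
    U, s, _ = np.linalg.svd(L.T @ Wo @ L)                 # s = squared Hankel SVs
    T = (L @ U) / s ** 0.25                               # balancing transform
    Ti = np.linalg.inv(T)
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r]               # keep r dominant states

rng = np.random.default_rng(1)
n = 8
M = rng.normal(size=(n, n))
A = M - (np.linalg.eigvals(M).real.max() + 1.0) * np.eye(n)  # shift to stability
B, C = rng.normal(size=(n, 1)), rng.normal(size=(1, n))
Ar, Br, Cr = balanced_truncation(A, B, C, r=3)
dc_full = (C @ np.linalg.solve(-A, B)).item()
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
print(f"DC gain: full-order {dc_full:.4f}, reduced-order {dc_red:.4f}")
```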

  6. Efficient generation of patient-matched malignant and normal primary cell cultures from clear cell renal cell carcinoma patients: clinically relevant models for research and personalized medicine

    International Nuclear Information System (INIS)

    Lobo, Nazleen C.; Gedye, Craig; Apostoli, Anthony J.; Brown, Kevin R.; Paterson, Joshua; Stickle, Natalie; Robinette, Michael; Fleshner, Neil; Hamilton, Robert J.; Kulkarni, Girish; Zlotta, Alexandre; Evans, Andrew; Finelli, Antonio; Moffat, Jason; Jewett, Michael A. S.; Ailles, Laurie

    2016-01-01

    Patients with clear cell renal cell carcinoma (ccRCC) have few therapeutic options, as ccRCC is unresponsive to chemotherapy and is highly resistant to radiation. Recently targeted therapies have extended progression-free survival, but responses are variable and no significant overall survival benefit has been achieved. Commercial ccRCC cell lines are often used as model systems to develop novel therapeutic approaches, but these do not accurately recapitulate primary ccRCC tumors at the genomic and transcriptional levels. Furthermore, ccRCC exhibits significant intertumor genetic heterogeneity, and the limited cell lines available fail to represent this aspect of ccRCC. Our objective was to generate accurate preclinical in vitro models of ccRCC using tumor tissues from ccRCC patients. ccRCC primary single cell suspensions were cultured in fetal bovine serum (FBS)-containing media or defined serum-free media. Established cultures were characterized by genomic verification of mutations present in the primary tumors, expression of renal epithelial markers, and transcriptional profiling. The apparent efficiency of primary cell culture establishment was high in both culture conditions, but genotyping revealed that the majority of cultures contained normal, not cancer cells. ccRCC characteristically shows biallelic loss of the von Hippel Lindau (VHL) gene, leading to accumulation of hypoxia-inducible factor (HIF) and expression of HIF target genes. Purification of cells based on expression of carbonic anhydrase IX (CA9), a cell surface HIF target, followed by culture in FBS enabled establishment of ccRCC cell cultures with an efficiency of >80 %. Culture in serum-free conditions selected for growth of normal renal proximal tubule epithelial cells. Transcriptional profiling of ccRCC and matched normal cell cultures identified up- and down-regulated networks in ccRCC and comparison to The Cancer Genome Atlas confirmed the clinical validity of our cell cultures. The ability

  7. Latent palmprint matching.

    Science.gov (United States)

    Jain, Anil K; Feng, Jianjiang

    2009-06-01

    The evidential value of palmprints in forensic applications is clear as about 30 percent of the latents recovered from crime scenes are from palms. While biometric systems for palmprint-based personal authentication in access control type of applications have been developed, they mostly deal with low-resolution (about 100 ppi) palmprints and only perform full-to-full palmprint matching. We propose a latent-to-full palmprint matching system that is needed in forensic applications. Our system deals with palmprints captured at 500 ppi (the current standard in forensic applications) or higher resolution and uses minutiae as features to be compatible with the methodology used by latent experts. Latent palmprint matching is a challenging problem because latent prints lifted at crime scenes are of poor image quality, cover only a small area of the palm, and have a complex background. Other difficulties include a large number of minutiae in full prints (about 10 times as many as fingerprints), and the presence of many creases in latents and full prints. A robust algorithm to reliably estimate the local ridge direction and frequency in palmprints is developed. This facilitates the extraction of ridge and minutiae features even in poor quality palmprints. A fixed-length minutia descriptor, MinutiaCode, is utilized to capture distinctive information around each minutia and an alignment-based minutiae matching algorithm is used to match two palmprints. Two sets of partial palmprints (150 live-scan partial palmprints and 100 latent palmprints) are matched to a background database of 10,200 full palmprints to test the proposed system. Despite the inherent difficulty of latent-to-full palmprint matching, rank-1 recognition rates of 78.7 and 69 percent, respectively, were achieved in searching live-scan partial palmprints and latent palmprints against the background database.
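
    A toy version of alignment-based minutiae matching conveys the core idea: each minutia is a point with an orientation, every reference/query pair is tried as an alignment hypothesis, and the score is the largest number of minutiae brought into agreement. This is a schematic stand-in for, not a reproduction of, the MinutiaCode-based matcher.

```python
import numpy as np

# Toy alignment-based minutiae matcher: minutiae are (x, y, orientation).
# Each reference/query pair defines a candidate rigid alignment; the score
# is the number of minutiae that then agree within tolerances.
def match_score(ref, qry, tol_xy=8.0, tol_th=np.deg2rad(15)):
    best = 0
    for r in ref:
        for q in qry:
            dth = r[2] - q[2]                            # hypothesized rotation
            c, s = np.cos(dth), np.sin(dth)
            rot = np.array([[c, -s], [s, c]])
            pts = (qry[:, :2] - q[:2]) @ rot.T + r[:2]   # align query onto ref
            ths = qry[:, 2] + dth
            taken = np.zeros(len(ref), bool)
            hits = 0
            for p, th in zip(pts, ths):
                d = np.hypot(*(ref[:, :2] - p).T)
                a = np.abs((ref[:, 2] - th + np.pi) % (2 * np.pi) - np.pi)
                ok = np.where((d < tol_xy) & (a < tol_th) & ~taken)[0]
                if ok.size:
                    taken[ok[0]] = True
                    hits += 1
            best = max(best, hits)
    return best

rng = np.random.default_rng(0)
ref = np.c_[rng.uniform(0, 500, (30, 2)), rng.uniform(0, 2 * np.pi, 30)]
theta = 0.4                                       # unknown rotation
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
qry = ref[:12].copy()                             # a partial "latent" print
qry[:, :2] = qry[:, :2] @ rot.T + [40.0, -25.0] + rng.normal(0, 1.0, (12, 2))
qry[:, 2] = (qry[:, 2] + theta) % (2 * np.pi)
print("matched minutiae:", match_score(ref, qry))  # close to 12 for a true mate
```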

  8. University Reactor Matching Grants Program

    International Nuclear Information System (INIS)

    John Valentine; Farzad Rahnema; Said Abdel-Khalik

    2003-01-01

    During the 2002 Fiscal year, funds from the DOE matching grant program, along with matching funds from the industrial sponsors, have been used to support research in the area of thermal-hydraulics. Both experimental and numerical research projects have been performed. Experimental research focused on two areas: (1) Identification of the root cause mechanism for axial offset anomaly in pressurized water reactors under prototypical reactor conditions, and (2) Fluid dynamic aspects of thin liquid film protection schemes for inertial fusion reactor chambers. Numerical research focused on two areas: (1) Multi-fluid modeling of both two-phase and two-component flows for steam conditioning and mist cooling applications, and (2) Modeling of bounded Rayleigh-Taylor instability with interfacial mass transfer and fluid injection through a porous wall simulating the "wetted wall" protection scheme in inertial fusion reactor chambers. Details of activities in these areas are given

  9. Sabots, Obturator and Gas-In-Launch Tube Techniques for Heat Flux Models in Ballistic Ranges

    Science.gov (United States)

    Bogdanoff, David W.; Wilder, Michael C.

    2013-01-01

    For thermal protection system (heat shield) design for space vehicle entry into Earth and other planetary atmospheres, it is essential to know the augmentation of the heat flux due to vehicle surface roughness. At the NASA Ames Hypervelocity Free Flight Aerodynamic Facility (HFFAF) ballistic range, a campaign of heat flux studies on rough models, using infrared camera techniques, has been initiated. Several phenomena can interfere with obtaining good heat flux data when using this measuring technique. These include leakage of the hot drive gas in the gun barrel through joints in the sabot (model carrier), which creates spurious thermal imprints on the model forebody; deposition of sabot material on the model forebody, thereby changing the thermal properties of the model surface; and unknown in-barrel heating of the model. This report presents developments in launch techniques to greatly reduce or eliminate these problems. The techniques include the use of obturator cups behind the launch package, enclosed versus open-front sabot designs, and the use of hydrogen gas in the launch tube. Attention also had to be paid to the problem of the obturator drafting behind the model and impacting it. Of the techniques presented, the obturator cups and hydrogen in the launch tube were successful when properly implemented.

  10. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of, and interaction between, normal and sterile E. saccharina moths in a temporally variable but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified, and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
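
    A minimal difference-equation sketch conveys the mechanism being modelled: released (partially) sterile insects dilute the fraction of fertile matings and suppress population growth. The parameter values are illustrative assumptions, not those fitted for E. saccharina.

```python
# Minimal difference-equation sketch of the sterile insect technique (SIT):
# sterile releases dilute the fraction of fertile matings, suppressing
# logistic growth. Parameters are illustrative, not fitted values.
def simulate(gens=40, R=3.0, K=1000.0, release=600.0, residual_fertility=0.2):
    N = 50.0                                   # initial fertile population
    for _ in range(gens):
        effective_sterile = release * (1.0 - residual_fertility)
        fertile_frac = N / (N + effective_sterile)   # chance of a fertile mating
        N = R * N * fertile_frac * max(0.0, 1.0 - N / K)  # logistic reproduction
    return N

print(f"no releases:   N = {simulate(release=0.0):.1f}")   # settles near 2K/3
print(f"with releases: N = {simulate():.1f}")              # collapses toward 0
```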

  11. Real-time eSports Match Result Prediction

    OpenAIRE

    Yang, Yifan; Qin, Tian; Lei, Yu-Heng

    2016-01-01

    In this paper, we try to predict the winning team of a match in the multiplayer eSports game Dota 2. To address the weaknesses of previous work, we consider more aspects of prior (pre-match) features from individual players' match history, as well as real-time (during-match) features at each minute as the match progresses. We use logistic regression, the proposed Attribute Sequence Model, and their combinations as the prediction models. In a dataset of 78362 matches where 20631 matches contai...
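
    The prediction setup can be sketched with a logistic regression that combines prior (pre-match) and real-time (during-match) features; the data below are synthetic placeholders, not the paper's Dota 2 dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the prediction setup: prior features (e.g. players' historical
# win rates) plus a real-time feature (e.g. gold lead at a given minute),
# fed to a logistic regression for the winner. Synthetic data throughout.
rng = np.random.default_rng(0)
n = 2000
prior = rng.normal(0, 1, (n, 2))     # e.g. skill gap, historical win-rate gap
live = rng.normal(0, 1, (n, 1))      # e.g. gold lead at minute 20
X = np.hstack([prior, live])
logit = 1.2 * X[:, 0] + 0.4 * X[:, 1] + 1.8 * X[:, 2]   # assumed true effects
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X[:1500], y[:1500])
print(f"held-out accuracy: {model.score(X[1500:], y[1500:]):.2f}")
```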

  12. Mix-and-match holography

    KAUST Repository

    Peng, Yifan; Dun, Xiong; Sun, Qilin; Heidrich, Wolfgang

    2017-01-01

    Our approach, which we call mix-and-match holography, encodes multiple target images into pairs of front and rear phase-distorting surfaces. Different target holograms can be decoded by mixing and matching different front and rear surfaces under specific geometric alignments. We derive a detailed image formation model for the setting of holographic projection displays, as well as a multiplexing method based on a combination of phase retrieval methods and complex matrix factorization. We demonstrate several application scenarios in both simulation and physical prototypes.

  13. Equilibrium and matching under price controls

    NARCIS (Netherlands)

    Herings, P.J.J.

    2015-01-01

    The paper considers a one-to-one matching with contracts model in the presence of price controls. This set-up contains two important streams in the matching literature, those with and those without monetary transfers, as special cases and allows for intermediate cases with some restrictions on the

  14. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process which synergises the use of a selection of public-domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.
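
    The kind of continuous simulation model that a causal-loop diagram is ultimately translated into (for example in iThink) can be sketched as a small stock-and-flow simulation; the stocks, flows and inventory policy below are illustrative guesses, not the composite-bearing case study.

```python
import numpy as np

# Minimal stock-and-flow sketch of a continuous simulation model of a
# production system. Stocks: work-in-progress and finished inventory;
# the balancing policy loop and all rates are illustrative assumptions.
dt, T = 0.25, 60.0                     # time step and horizon (days)
wip, inventory = 0.0, 0.0              # stocks
demand, capacity = 40.0, 50.0          # units/day
for _ in np.arange(0.0, T, dt):
    start_rate = min(capacity, demand + 0.2 * (100.0 - inventory))  # policy loop
    completion = wip / 5.0             # 5-day average cycle time
    shipments = min(demand, inventory / dt)
    wip += dt * (start_rate - completion)
    inventory += dt * (completion - shipments)
print(f"after {T:.0f} days: WIP = {wip:.1f}, inventory = {inventory:.1f}")
```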

  15. MR angiography with a matched filter

    International Nuclear Information System (INIS)

    De Castro, J.B.; Riederer, S.J.; Lee, J.N.

    1987-01-01

    The technique of matched filtering was applied to a series of cine MR images. The filter was devised to yield a subtraction angiographic image in which direct current components present in the cine series are removed and the signal-to-noise ratio (S/N) of the vascular structures is optimized. The S/N of a matched filter was compared with that of a simple subtraction, in which an image with high flow is subtracted from one with low flow. Experimentally, a range of results from minimal improvement to significant (60%) improvement in S/N was seen in the comparisons of matched filtered subtraction with simple subtraction
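
    The filter construction can be sketched in a few lines: weight each cine frame by a zero-mean, unit-norm copy of the expected flow waveform, so static anatomy cancels while vascular signal adds coherently. The waveform, image dimensions and signal levels below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of matched-filter subtraction across a cine series: frames
# are combined with weights proportional to the expected flow waveform,
# made zero-mean (removing DC/static anatomy) and unit-norm. All values
# are illustrative, not from the study.
rng = np.random.default_rng(0)
n_frames = 16
flow = np.sin(2 * np.pi * np.arange(n_frames) / n_frames) ** 2  # flow waveform

w = flow - flow.mean()              # zero mean: static tissue cancels
w /= np.linalg.norm(w)              # unit norm: matched-filter weights

frames = rng.normal(100.0, 5.0, (n_frames, 64, 64))   # static background + noise
frames[:, 30:34, :] += 20.0 * flow[:, None, None]     # a flowing "vessel"
angio = np.tensordot(w, frames, axes=(0, 0))          # weighted combination
print(f"vessel rows mean: {angio[30:34].mean():.1f}, "
      f"background mean: {angio[:20].mean():.1f}")
```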

  16. THE Economics of Match-Fixing

    OpenAIRE

    Caruso, Raul

    2007-01-01

    The phenomenon of match-fixing constitutes a constant element of sport contests. This paper presents a simple formal model in order to explain it. The intuition behind it is that an asymmetry in the evaluation of the stake is the key factor leading to match-fixing. In sum, this paper considers a partial equilibrium model of contest where two asymmetric, rational and risk-neutral opponents evaluate a contested stake differently. Unlike common contest models, agents have the option ...

  17. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media
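
    The recursion at the heart of the reflectivity technique is compact. The sketch below builds a normal-incidence, frequency-domain reflection response for a layer stack from interface reflection coefficients and intra-layer phase delays; the layer permittivities and thicknesses are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a recursive (Airy-type) reflectivity computation for a
# stack of horizontal layers at normal incidence. Nonmagnetic media are
# assumed, so the refractive index is sqrt(relative permittivity).
def reflectivity(freqs, eps_r, d):
    c = 3e8
    n_idx = np.sqrt(np.asarray(eps_r, float))   # refractive index per layer
    r = np.zeros(len(freqs), dtype=complex)     # response below the stack
    for k in range(len(eps_r) - 2, -1, -1):
        rk = (n_idx[k] - n_idx[k + 1]) / (n_idx[k] + n_idx[k + 1])
        phase = np.exp(-4j * np.pi * freqs * d[k + 1] * n_idx[k + 1] / c)
        r = (rk + r * phase) / (1 + rk * r * phase)   # combine with layer below
    return r

freqs = np.linspace(1e8, 1e9, 512)              # 100 MHz to 1 GHz sweep
resp = reflectivity(freqs, eps_r=[1.0, 9.0, 25.0], d=[0.0, 0.5, 0.0])
print(f"|R| range: {abs(resp).min():.2f} to {abs(resp).max():.2f}")
```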

  18. Myocardium tracking via matching distributions.

    Science.gov (United States)

    Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali

    2009-01-01

    The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on the pixelwise information as in existing approaches, distribution matching compares intensity distributions, and consequently, is well-suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve to the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high gradient of intensity. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, and efficiently implemented via level-set. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods. The algorithm also yields a satisfying reproducibility. Distribution matching leads to a myocardium tracking which is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., does not
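
    The Bhattacharyya similarity at the heart of the method is easy to illustrate: intensities sampled from a candidate region are binned and compared with the model distribution from the previous frame. The data below are synthetic placeholders, not cardiac MR intensities.

```python
import numpy as np

# Sketch of the Bhattacharyya coefficient as a distribution-matching
# similarity between a candidate region and the model distribution
# estimated from the previous frame. Synthetic data throughout.
def bhattacharyya(p, q):
    return np.sum(np.sqrt(p * q))       # 1.0 for identical distributions

def normalized_hist(samples, bins=32):
    h, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

rng = np.random.default_rng(0)
model = normalized_hist(rng.beta(2, 5, 5000))    # "previous frame" myocardium
inside = normalized_hist(rng.beta(2, 5, 2000))   # candidate region, same tissue
outside = normalized_hist(rng.beta(5, 2, 2000))  # background intensities
print(f"similarity to model, inside:  {bhattacharyya(model, inside):.3f}")
print(f"similarity to model, outside: {bhattacharyya(model, outside):.3f}")
```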

  19. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date

  20. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    Science.gov (United States)

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.

  1. Presentation Technique

    International Nuclear Information System (INIS)

    Froejmark, M.

    1992-10-01

    The report presents a wide, easily understandable description of presentation technique and man-machine communication. General fundamentals for the man-machine interface are illustrated, and the factors that affect the interface are described. A model is presented for describing the operator's work situation, based on three different levels of operator behaviour. The operator reacts routinely in the face of simple, known problems, and reacts in accordance with predetermined plans in the face of more complex, recognizable problems. Deep fundamental knowledge is necessary for truly complex questions. Today's technical status and future development have been studied. In the future, the operator interface will be based on standard software. Functions such as zooming, integration of video pictures, and sound reproduction will become common. Video walls may be expected to come into use in situations in which several persons simultaneously need access to the same information. A summary of the fundamental rules for the design of good picture ergonomics and design requirements for control rooms is included in the report. In conclusion, the report describes a presentation technique within the Distribution Automation and Demand Side Management area and analyses the know-how requirements within Vattenfall. If different systems are integrated, such as geographical information systems and operation monitoring systems, strict demands are made on the expertise of the users for achieving a user-friendly technique which is matched to the needs of the human being. (3 figs.)

  2. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
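
    The separable (variable projection) idea can be illustrated on a toy sum-of-exponentials model: the linear amplitudes are eliminated by a linear least-squares solve inside the residual, so the nonlinear search runs only over the rates. This is a hedged sketch of the general technique, not the paper's multi-tracer formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable least-squares (variable projection) sketch: for
# y(t) = sum_i a_i * exp(-k_i t), solve the amplitudes a_i linearly inside
# the residual so the nonlinear fit runs only over the rates k_i. The toy
# two-exponential signal stands in for multi-tracer time-activity curves.
t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
y = 3.0 * np.exp(-0.3 * t) + 1.5 * np.exp(-2.0 * t) + rng.normal(0, 0.02, t.size)

def residual(k):
    basis = np.exp(-np.outer(t, k))                  # columns: exp(-k_i t)
    a, *_ = np.linalg.lstsq(basis, y, rcond=None)    # linear params eliminated
    return basis @ a - y

fit = least_squares(residual, x0=[0.1, 1.0])
print("recovered rates:", np.sort(fit.x))            # ~ [0.3, 2.0]
```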

  3. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independently of specific machine models. Here we discuss techniques to achieve this goal, especially Principal Component Analysis and Independent Component Analysis.
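
    A minimal sketch of mode extraction with Principal Component Analysis: an SVD of the mean-removed turn-by-turn data matrix from synthetic beam position monitor histories isolates the dominant physical mode as a singular-vector pair.

```python
import numpy as np

# PCA sketch on synthetic BPM histories: a single betatron-like oscillation
# appears as a sine/cosine pair, i.e. two large singular values above the
# noise floor. The tune and phases below are arbitrary choices.
rng = np.random.default_rng(0)
turns, bpms = 1024, 40
phase = np.linspace(0.0, 2.0 * np.pi, bpms, endpoint=False)  # phase advance
tune = 0.31
t = np.arange(turns)[:, None]
X = np.cos(2.0 * np.pi * tune * t + phase)       # one oscillation mode
X += 0.05 * rng.normal(size=(turns, bpms))       # monitor noise
X -= X.mean(axis=0)                              # remove closed-orbit offset

U, s, Vt = np.linalg.svd(X, full_matrices=False)
print("leading singular values:", np.round(s[:4], 1))  # two dominate
```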

  4. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  5. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the models that were developed were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters gives better outcomes than the other ANN models.

  6. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with Te = 21914 K (θe = 0.23) and log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)

  7. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    International Nuclear Information System (INIS)

    Andrei, Petru; Oniciuc, Liviu; Stancu, Alexandru; Stoleriu, Laurentiu

    2007-01-01

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented

  8. Quantification of intervertebral displacement with a novel MRI-based modeling technique: Assessing measurement bias and reliability with a porcine spine model.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Goubeaux, Craig; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian C

    2017-05-01

    The purpose of this study was to develop a novel magnetic resonance imaging (MRI)-based modeling technique for measuring intervertebral displacements. Here, we present the measurement bias and reliability of the developmental work using a porcine spine model. Porcine lumbar vertebral segments were fitted in a custom-built apparatus placed within an externally calibrated imaging volume of an open-MRI scanner. The apparatus allowed movement of the vertebrae through pre-assigned magnitudes of sagittal and coronal translation and rotation. The induced displacements were imaged with static (T1) and fast dynamic (2D HYCE S) pulse sequences. These images were imported into animation software, in which they formed a background 'scene'. Three-dimensional models of vertebrae were created using static axial scans from the specimen and then transferred into the animation environment. There, the user manually moved the models (rotoscoping) to perform model-to-'scene' matching, fitting the models to their image silhouettes, and assigned anatomical joint axes to the motion segments. The animation protocol quantified the experimental translation and rotation displacements between the vertebral models. Accuracy of the technique was calculated as 'bias' using a linear mixed effects model, average percentage error and root mean square errors. Between-session reliability was examined by computing intra-class correlation coefficients (ICC) and coefficients of variation (CV). For translation trials, a constant bias (β0) of 0.35 (±0.11) mm was detected for the 2D HYCE S sequence (p = 0.01). The model did not demonstrate significant additional bias with each mm increase in experimental translation (β1 = 0.01 mm; p = 0.69). Using the T1 sequence for the same assessments did not significantly change the bias (p > 0.05). ICC values for the T1 and 2D HYCE S pulse sequences were 0.98 and 0.97, respectively. For rotation trials, a constant bias (

  9. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics learning achievement, controlling for students' intelligence. This is an experimental study; the sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics achievement of students taught with the learning model based on environmental utilization is higher than that of students taught with animated simulations, after controlling for student intelligence. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, lectures should use the environment-based thermodynamics learning model together with project assessment techniques.

  10. Dewey Concentration Match.

    Science.gov (United States)

    School Library Media Activities Monthly, 1996

    1996-01-01

    Giving students a chance to associate numbers with subjects can be useful in speeding their location of desired print or nonprint materials and helping students feel independent when browsing. A matching game for helping students learn the Dewey numbers is presented. Instructions for the library media specialist or teacher, instructions for…

  11. Polytypic pattern matching

    NARCIS (Netherlands)

    Jeuring, J.T.

    1995-01-01

    The pattern matching problem can be informally specified as follows: given a pattern and a text, find all occurrences of the pattern in the text. The pattern and the text may both be lists, or they may both be trees, or they may both be multi-dimensional arrays, etc. This paper describes a general

  12. Is Matching Innate?

    Science.gov (United States)

    Gallistel, C. R.; King, Adam Philip; Gottlieb, Daniel; Balci, Fuat; Papachristos, Efstathios B.; Szalecki, Matthew; Carbone, Kimberly S.

    2007-01-01

    Experimentally naive mice matched the proportions of their temporal investments (visit durations) in two feeding hoppers to the proportions of the food income (pellets per unit session time) derived from them in three experiments that varied the coupling between the behavioral investment and food income, from no coupling to strict coupling.…

  13. Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering

    Science.gov (United States)

    Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.

    2018-04-01

    Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
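
    The essence of the method can be sketched in one dimension: cross-correlate the data with a unit-norm template of the expected line response and read off the peak in units of the filter-response noise. This toy stands in for the visibility-domain filter implemented in the VISIBLE package linked above.

```python
import numpy as np

# 1-D matched-filter sketch for weak-line detection: correlate noisy data
# with a unit-norm template of the expected line shape. The line amplitude
# and width below are illustrative; VISIBLE applies the same idea to
# interferometric visibilities rather than a 1-D spectrum.
rng = np.random.default_rng(0)
n = 4096
x = np.arange(n)
data = 0.6 * np.exp(-0.5 * ((x - 2000) / 40.0) ** 2)  # line at only 0.6 sigma
data += rng.normal(0.0, 1.0, n)                       # per-channel noise

kx = np.arange(-128, 129)
kernel = np.exp(-0.5 * (kx / 40.0) ** 2)              # expected line shape
kernel /= np.linalg.norm(kernel)                      # unit-norm filter

response = np.correlate(data, kernel, mode="same")    # matched-filter response
snr = response / np.std(response)
print(f"matched-filter peak: {snr.max():.1f} sigma at channel {snr.argmax()}")
```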

  14. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  15. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should

  16. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example the control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system quickly running, and then continue to develop it in an agile manner. This article consists of two parts; the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which is appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on the allowed states and the transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents the outline of the structure of the model, and gives formal definitions for a substantial part of the model.

  17. Matching Supernovae to Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2016-12-01

    A new automated algorithm has been developed for matching supernovae to their host galaxies. The work builds on currently existing algorithms, makes use of information about the nearby galaxies, accounts for the uncertainty of the match, and even includes a machine learning component to improve the matching accuracy. Gupta and collaborators test their matching algorithm on catalogs of galaxies and simulated supernova events to quantify how well the algorithm is able to accurately recover the true hosts. [Figure: the matching algorithm's accuracy (purity) as a function of the true supernova-host separation, the supernova redshift, the true host's brightness, and the true host's size; Gupta et al. 2016.] The authors find that when the basic algorithm is run on catalog data, it matches supernovae to their hosts with 91% accuracy. Including the machine learning component, which is run after the initial matching algorithm, improves the matching accuracy to 97%. The encouraging results of this work, which was intended as a proof of concept, suggest that methods similar to this could prove very practical for tackling future survey data. The method explored here also has uses beyond matching supernovae to their host galaxies: it could be applied to other extragalactic transients, such as gamma-ray bursts, tidal disruption events, or electromagnetic counterparts to gravitational-wave detections. Citation: Ravi R. Gupta et al 2016 AJ 152 154. doi:10.3847/0004-6256/152/6/154

  18. Application of data assimilation technique for flow field simulation for Kaiga site using TAPM model

    International Nuclear Information System (INIS)

    Shrivastava, R.; Oza, R.B.; Puranik, V.D.; Hegde, M.N.; Kushwaha, H.S.

    2008-01-01

    The data assimilation techniques are becoming popular nowadays to get realistic flow field simulation for the site under consideration. The present paper describes data assimilation technique for flow field simulation for Kaiga site using the air pollution model (TAPM) developed by CSIRO, Australia. In this, the TAPM model was run for Kaiga site for a period of one month (Nov. 2004) using the analysed meteorological data supplied with the model for Central Asian (CAS) region and the model solutions were nudged with the observed wind speed and wind direction data available for the site. The model was run with 4 nested grids with grid spacing varying from 30km, 10km, 3km and 1km respectively. The models generated results with and without nudging are statistically compared with the observations. (author)

  19. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of a fault-tolerant technique from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to variation of diverse factors. - Abstract: With the improvement of digital technologies, digital protection systems (DPS) have more, and more sophisticated, fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of repairable components of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability

  20. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

    To realize propensity score matching in the PS Matching module of SPSS and to interpret the analysis results. The R software, the plug-in linking it with the corresponding version of SPSS, and the propensity score matching package were installed. A PS Matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest-neighbor matching were achieved with the PS Matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in the form of matching graphs. Propensity score matching can be accomplished conveniently using SPSS software.
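
    For readers outside SPSS, the logic that the PS Matching module automates can be sketched in Python: fit a propensity model, then greedily match each treated unit to the nearest control on the logit of the propensity score. The data are synthetic and the implementation is a minimal illustration, not the module's actual algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal propensity score matching sketch: greedy 1:1 nearest-neighbor
# matching on the logit of the estimated propensity score. Synthetic data;
# x drives treatment assignment, so matching should balance it.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))                                  # confounders
p_treat = 1.0 / (1.0 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
treated = rng.random(n) < p_treat

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
logit = np.log(ps / (1.0 - ps))

pairs, used = [], set()
for i in np.where(treated)[0]:
    candidates = [j for j in np.where(~treated)[0] if j not in used]
    if not candidates:
        break                                    # controls exhausted
    j = min(candidates, key=lambda j: abs(logit[i] - logit[j]))
    used.add(j)
    pairs.append((i, j))

ti, ci = map(np.array, zip(*pairs))
print(f"x0 means: treated {x[treated, 0].mean():+.2f}, "
      f"all controls {x[~treated, 0].mean():+.2f}, "
      f"matched controls {x[ci, 0].mean():+.2f}")
```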

  1. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  2. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    OpenAIRE

    Ma, Yuanyuan; Yang, Yi; Mai, Xiaoping; Qiu, Chongjian; Long, Xiao; Wang, Chenghai

    2016-01-01

    To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the performances of the two nudging techniques (analysis nudging and s...

  3. Using a Geospatial Model to Relate Fluvial Geomorphology to Macroinvertebrate Habitat in a Prairie River—Part 2: Matching Family-Level Indices to Geomorphological Response Units (GRUs

    Directory of Open Access Journals (Sweden)

    Anna Grace Nostbakken Meissner

    2016-03-01

    Many rivers are intensely managed due to anthropogenic influences such as dams, channelization, and water provision for municipalities, agriculture, and industry. With this growing pressure on fluvial systems comes a greater need to evaluate the state of their ecosystems. The purpose of this research is to use a geospatial model of the Qu'Appelle River in Saskatchewan to distinguish instream macroinvertebrate habitats at the family level. River geomorphology was assessed through the use of ArcGIS and digital elevation models; with these tools, the sinuosity, slope, fractal dimension, and stream width of the river were processed. Subsequently, Principal Component Analysis, a clustering technique, revealed areas with similar sets of geomorphological characteristics. These similar typology sequences were then grouped into geomorphological response units (GRUs), designated a color, and mapped into a geospatial model. Macroinvertebrate data were then incorporated to reveal several relationships to the model. For instance, certain GRUs contained more highly sensitive species and healthier diversity levels than others. Future possibilities for expanding on this project include incorporating stable isotope data to evaluate the food-web structure within the river basin. Although GRUs have been very successful in identifying fish habitats in other studies, the macroinvertebrates may be too sessile and their habitat too localized to be identified by such large river units. Units may need to be much shorter (250 m) to better identify macroinvertebrate habitat.

  4. INFORMATION SYSTEMS AUDIT CURRICULA CONTENT MATCHING

    OpenAIRE

    Vasile-Daniel CARDOȘ; Ildikó Réka CARDOȘ

    2014-01-01

    Financial and internal auditors must cope with the challenge of performing their mission in a technology-enhanced environment. In this article we match the information technology descriptions found in the International Federation of Accountants (IFAC) and the Institute of Internal Auditors (IIA) curricula against the Model Curriculum issued by the Information Systems Audit and Control Association (ISACA). By reviewing these three curricula, we matched the content in the ISACA Model Curriculum wi...

  5. An experimental technique for the modelling of air flow movements in nuclear plant

    International Nuclear Information System (INIS)

    Ainsworth, R.W.; Hallas, N.J.

    1986-01-01

    This paper describes an experimental technique developed at Harwell to model ventilation flows in plant at 1/5th scale. The technique achieves dynamic similarity not only for forced convection imposed by the plant ventilation system, but also for the interaction between natural convection (from heated objects) and forced convection. The use of a scale model to study flow of fluids is a well established technique, relying upon various criteria, expressed in terms of dimensionless numbers, to achieve dynamic similarity. For forced convective flows, simulation of Reynolds number is sufficient, but to model natural convection and its interaction with forced convection, the Rayleigh, Grashof and Prandtl numbers must be simulated at the same time. This paper describes such a technique, used in experiments on a hypothetical glove box cell to study the interaction between forced and natural convection. The model contained features typically present in a cell, such as a man, motor, stairs, glove box, etc. The aim of the experiment was to study the overall flow patterns, especially around the model man 'working' at the glove box. The cell ventilation was theoretically designed to produce a downward flow over the face of the man working at the glove box. However, the results have shown that the flow velocities produced an upwards flow over the face of the man. The work has indicated the viability of modelling simultaneously the forced and natural convection processes in a cell. It has also demonstrated that simplistic assumptions cannot be made about ventilation flow patterns. (author)
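
    For reference, the dimensionless groups named above can be computed directly, as in the sketch below (fluid properties are illustrative values for air at room temperature). The comments also hint at why simultaneously matching the forced- and natural-convection criteria at 1/5 scale requires a special technique.

    ```python
    # Similarity criteria for a scale model: Reynolds (forced convection),
    # Grashof and Rayleigh (natural convection). Property values are
    # illustrative for air; the experiment's working conditions may differ.
    def reynolds(U, L, nu=1.5e-5):
        return U * L / nu

    def grashof(dT, L, beta=3.4e-3, nu=1.5e-5, g=9.81):
        return g * beta * dT * L**3 / nu**2

    def rayleigh(dT, L, Pr=0.71, **kw):
        return grashof(dT, L, **kw) * Pr

    # At 1/5 scale with the same fluid, matching Re forces a 5x higher
    # velocity, while matching Gr forces a 125x larger temperature
    # difference -- hence the need for a dedicated modelling technique.
    print(reynolds(1.0, 2.5), grashof(10.0, 2.5))
    ```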

  6. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
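
    The MC validation step can be conveyed with a much-reduced sketch: two units in series, each alternating between up and down with exponential failure and repair times, the system being up only while both units are up. The paper's three-state deterioration and repair cases are richer; the rates below are arbitrary.

    ```python
    # Monte Carlo estimate of steady-state availability for a two-unit
    # series system with exponential failure (lam) and repair (mu) rates.
    import random

    def mc_availability(lam=(0.01, 0.02), mu=(0.1, 0.1), horizon=1e4, runs=100):
        total_up = 0.0
        for _ in range(runs):
            t, up_time = 0.0, 0.0
            state = [True, True]                  # both units start "up"
            while t < horizon:
                # next event per unit: failure if up, repair if down
                rates = [lam[i] if state[i] else mu[i] for i in range(2)]
                waits = [random.expovariate(r) for r in rates]
                dt = min(waits)
                if all(state):
                    up_time += dt                 # series system needs both up
                state[waits.index(dt)] = not state[waits.index(dt)]
                t += dt
            total_up += up_time / horizon
        return total_up / runs

    # Compare with the analytical product of unit availabilities,
    # (mu1/(lam1+mu1)) * (mu2/(lam2+mu2)) ~= 0.758 for the defaults.
    print(mc_availability())
    ```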

  7. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    -UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty...... techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...... the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For ill-posed water quality model the differences between the results were much wider; and the paper...
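
    As background, the Bayesian branch of such comparisons rests on Markov Chain Monte Carlo sampling of the parameter posterior. A bare-bones Metropolis sampler is sketched below under assumed names; MICA and the other tools compared above implement considerably more elaborate variants.

    ```python
    # Minimal Metropolis MCMC sketch: random-walk proposals accepted with
    # probability min(1, posterior ratio). `log_post` is a stand-in for the
    # model's log-likelihood plus log-prior.
    import numpy as np

    def metropolis(log_post, theta0, n=5000, step=0.1):
        rng = np.random.default_rng(0)
        chain = [np.asarray(theta0, dtype=float)]
        lp = log_post(chain[0])
        for _ in range(n):
            prop = chain[-1] + step * rng.standard_normal(len(chain[-1]))
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                chain.append(prop); lp = lp_prop
            else:
                chain.append(chain[-1])
        return np.array(chain)

    # Toy Gaussian posterior centered at (1, 1):
    samples = metropolis(lambda th: -0.5 * np.sum((th - 1.0) ** 2), [0.0, 0.0])
    print(samples.mean(axis=0))
    ```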

  8. Adaptive Discrete Hypergraph Matching.

    Science.gov (United States)

    Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao

    2018-02-01

    This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an m-circle sequence under moderate conditions, where m is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
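
    The core iteration can be sketched for the second-order (pairwise) case, where the affinity is a matrix K and the gradient at the current discrete solution is projected back to a permutation by linear assignment. The adaptive relaxation that escapes cycles is omitted here, and all shapes and data are illustrative.

    ```python
    # Iterative discrete gradient assignment for pairwise graph matching:
    # maximize x^T K x over permutations, projecting the gradient K x back
    # onto permutations with the Hungarian algorithm each step.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def discrete_gradient_assignment(K, n, iters=50):
        x = np.eye(n).reshape(-1)              # start from identity matching
        for _ in range(iters):
            grad = K @ x                       # gradient of x^T K x (up to 2x)
            row, col = linear_sum_assignment(-grad.reshape(n, n))
            x_new = np.zeros((n, n))
            x_new[row, col] = 1.0
            x_new = x_new.reshape(-1)
            if np.allclose(x_new, x):          # fixed point reached
                break                          # (cycles need the relaxation)
            x = x_new
        return x.reshape(n, n)

    n = 4
    K = np.random.rand(n * n, n * n)
    K = 0.5 * (K + K.T)                        # symmetric affinity matrix
    print(discrete_gradient_assignment(K, n))
    ```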

  9. Electromagnetic wave matching device

    International Nuclear Information System (INIS)

    Hirata, Yosuke; Mitsunaka, Yoshika; Hayashi, Ken-ichi; Ito, Yasuyuki.

    1997-01-01

    The present invention provides a matching device capable of maximizing the efficiency of combining the beams of electromagnetic waves output from the window of a gyrotron, which is expected to be used for plasma heating of a thermonuclear reactor, with an electromagnetic wave transmission system. Namely, the electromagnetic wave matching device reflects beams of electromagnetic waves incident from an inlet by a plurality of phase correction mirrors and combines them into an external transmission system through an exit. In this case, the phase correction mirrors change the phase of the incident beams of electromagnetic waves by a predetermined amount corresponding to the position on the reflection mirrors. Then, the beams of electromagnetic waves output, for example, from a gyrotron can be shaped to the desired intensity and phase. As a result, the efficiency of combination with the transmission system can be increased. (I.S.)

  10. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical and polarized light microscopy, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  11. Applications of the soft computing in the automated history matching

    Energy Technology Data Exchange (ETDEWEB)

    Silva, P.C.; Maschio, C.; Schiozer, D.J. [Unicamp (Brazil)

    2006-07-01

    Reservoir management is a research field in petroleum engineering that optimizes reservoir performance based on environmental, political, economic and technological criteria. Reservoir simulation is based on geological models that simulate fluid flow. Models must be constantly corrected to reproduce the observed production behaviour. The process of history matching is controlled by the comparison of production data, well test data and measured data from simulations. Parametrization, objective function analysis, sensitivity analysis and uncertainty analysis are important steps in history matching. One of the main challenges facing automated history matching is to develop algorithms that find the optimal solution in multidimensional search spaces. Optimization algorithms are either global optimizers, which can handle noisy multi-modal functions, or local optimizers, which cannot. The problem with global optimizers is the very large number of function calls they require, which is an inconvenience given the long reservoir simulation times. For that reason, techniques such as least squares, thin plate splines, kriging and artificial neural networks (ANNs) have been used as substitutes for reservoir simulators. This paper described the use of optimization algorithms to find optimal solutions in automated history matching. Several ANNs were used, including the generalized regression neural network, a fuzzy system with subtractive clustering and a radial basis network. The UNIPAR soft computing method was used along with a modified Hooke-Jeeves optimization method. Two case studies with synthetic and real reservoirs are examined. It was concluded that the combination of global and local optimization has the potential to improve the history matching process and that the use of substitute models can reduce computational effort. 15 refs., 11 figs.
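
    As an illustration of the local-search side, a compact Hooke-Jeeves pattern search is sketched below. In the workflow described above the objective would be the history-matching misfit evaluated by the simulator or by an ANN proxy; here it is a toy quadratic, and all parameters are illustrative.

    ```python
    # Hooke-Jeeves pattern search: exploratory coordinate probes followed by
    # a pattern move along the improving direction; the mesh shrinks when no
    # improvement is found.
    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            # Exploratory move: probe each coordinate in both directions.
            x_new, f_new = x.copy(), fx
            for i in range(len(x)):
                for d in (step, -step):
                    trial = x_new.copy()
                    trial[i] += d
                    ft = f(trial)
                    if ft < f_new:
                        x_new, f_new = trial, ft
                        break
            if f_new < fx:
                # Pattern move: jump further along the improving direction.
                pattern = x_new + (x_new - x)
                fp = f(pattern)
                x, fx = (pattern, fp) if fp < f_new else (x_new, f_new)
            else:
                step *= shrink                 # no improvement: refine mesh
                if step < tol:
                    break
        return x, fx

    xopt, fopt = hooke_jeeves(lambda p: ((p - 3.0) ** 2).sum(), np.zeros(2))
    ```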

  12. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed that is capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using the ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition performed using SOM.
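
    Point (c) rests on an idea that is easy to make concrete: on the falling limb, flow decays roughly geometrically, so a linear-reservoir recession model needs only one parameter. The sketch below shows one common formulation, not necessarily the paper's exact variant; the flow values are invented.

    ```python
    # Linear-reservoir recession: Q_t = Q_0 * k^t, with k estimated from
    # successive flows on observed recession limbs.
    import numpy as np

    def fit_recession_k(q):
        """Estimate k from ratios of successive recession flows."""
        q = np.asarray(q, dtype=float)
        return float(np.median(q[1:] / q[:-1]))

    def forecast_recession(q0, k, steps):
        return q0 * k ** np.arange(1, steps + 1)

    k = fit_recession_k([120.0, 96.0, 78.0, 64.0, 52.0])
    print(forecast_recession(52.0, k, 3))   # flows for the next three days
    ```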

  13. Exploiting Best-Match Equations for Efficient Reinforcement Learning

    NARCIS (Netherlands)

    van Seijen, Harm; Whiteson, Shimon; van Hasselt, Hado; Wiering, Marco

    This article presents and evaluates best-match learning, a new approach to reinforcement learning that trades off the sample efficiency of model-based methods with the space efficiency of model-free methods. Best-match learning works by approximating the solution to a set of best-match equations,

  14. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated by data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) Clustering by using the k-means clustering approach; (II) Employing the Apriori algorithm to discover the association rules; (III) Forecasting the wind speed according to the chaotic time series forecasting model; and (IV) Correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of wind speed forecasting, and that the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are highly effective in correcting the forecasted wind speed values when those values do not match the discovered classifications.
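
    Step (IV) can be illustrated schematically: when the forecasted speed is inconsistent with the wind class implied by the mined rules, it is pulled back toward that class. The class bounds, blending rule, and rule lookup below are hypothetical placeholders rather than the paper's formulation.

    ```python
    # Schematic correction of a forecast using a rule-implied wind class.
    # Classes and bounds are invented; in the paper they come from k-means
    # clustering and Apriori-mined rules over meteorological factors.
    class_bounds = {0: (0.0, 3.0), 1: (3.0, 6.0), 2: (6.0, 12.0)}

    def apply_rule_correction(forecast, rule_class):
        lo, hi = class_bounds[rule_class]
        if lo <= forecast < hi:
            return forecast                 # consistent with the rules
        center = 0.5 * (lo + hi)
        return 0.5 * (forecast + center)    # blend toward the rule's class

    # e.g., a rule such as "high pressure & low temperature => class 0":
    print(apply_rule_correction(5.2, rule_class=0))
    ```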

  15. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences, as well as in network intrusion detection, big data analytics, etc. Various solutions have been provided by researchers in which the pattern is matched directly over the uncompressed text; such solutions require a lot of space and consume a lot of time when handling big data. Many researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the trend of data sizes increasing exponentially day by day, CPM has become a desirable capability. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
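
    The word-based family of techniques shares one core trick that a toy sketch can convey: assign each distinct word a codeword, compress both text and pattern into code sequences, and search the codes without decompressing. Real systems search the byte stream with BM/KMP-style algorithms over actual Huffman codes; the scan below is deliberately naive.

    ```python
    # Toy word-based CPM: integer codewords stand in for Huffman codes, and
    # the pattern is located directly in the compressed sequence.
    def build_dictionary(words):
        return {w: i for i, w in enumerate(dict.fromkeys(words))}

    def compress(words, dic):
        return [dic[w] for w in words]

    text = "the cat sat on the mat the cat ran".split()
    dic = build_dictionary(text)
    codes = compress(text, dic)
    pattern = compress("the cat".split(), dic)

    # Naive scan over the compressed sequence -- no decompression needed.
    hits = [i for i in range(len(codes) - len(pattern) + 1)
            if codes[i:i + len(pattern)] == pattern]
    print(hits)   # word offsets of the matches: [0, 6]
    ```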

  16. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural networks (ANN) and fuzzy logic approaches have proven efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining the two approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of the neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to model the river flow of the Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results showed that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation, etc. It was observed that the ANFIS model fully preserves the potential of the ANN approach and eases the model building process.

  17. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem... The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four...
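
    For concreteness, a single hidden-layer feed-forward autoregressive neural network amounts to regressing the next value of a series on its own lags through one hidden layer, as in this sketch (the lag order, hidden size, and test series are arbitrary choices, not the paper's).

    ```python
    # AR-NN sketch: inputs are the last p values of the series, the output
    # is the one-step-ahead value, with a single hidden layer in between.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def ar_nn_forecast(series, p=12, hidden=5):
        y = np.asarray(series, dtype=float)
        X = np.stack([y[i:i + p] for i in range(len(y) - p)])  # lag matrix
        t = y[p:]                                              # targets
        model = MLPRegressor(hidden_layer_sizes=(hidden,),
                             max_iter=2000).fit(X, t)
        return model.predict(y[-p:].reshape(1, -1))[0]

    series = np.sin(np.linspace(0, 20, 200))    # stand-in monthly series
    print(ar_nn_forecast(series))
    ```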

  18. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare the three techniques and to find the appropriate data representation. The ML techniques are applied to a case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
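
    A skeletal version of such a comparison, on stand-in data, might look as follows; the attribute layout, class count, and the use of Cohen's kappa as the agreement measure are assumptions for illustration, not the study's exact setup.

    ```python
    # Compare DT, NN, and SVM for land-use-change classification: per-cell
    # attributes at time t predict the land use class at time t+1.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import cohen_kappa_score

    X = np.random.rand(1000, 6)            # cell attributes at time t
    y = np.random.randint(0, 9, 1000)      # land-use class at time t+1
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {"DT": DecisionTreeClassifier(),
              "NN": MLPClassifier(max_iter=500),
              "SVM": SVC()}
    for name, m in models.items():
        # Kappa measures agreement of the predicted changes.
        print(name, cohen_kappa_score(yte, m.fit(Xtr, ytr).predict(Xte)))
    ```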

  19. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, the electricity price is a complex volatile signal with many spikes. Most electricity price forecasting techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of price spike value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed the candidate inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
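
    The relevancy/redundancy idea can be sketched as a greedy mutual-information filter in the style of mRMR; the paper's exact information measure and thresholds may differ, and the histogram discretization below is a crude stand-in. `X` holds the candidate inputs (wavelet, time-domain, calendar) and `y` the spike/no-spike label.

    ```python
    # Greedy relevancy-minus-redundancy feature selection: pick the input
    # with the highest MI with the target, then repeatedly add the input
    # whose relevance most exceeds its mean redundancy with those chosen.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import mutual_info_score

    def select_features(X, y, k=10, bins=8):
        # Discretize columns so pairwise MI between inputs is computable.
        Xd = np.stack([np.digitize(c, np.histogram(c, bins)[1][:-1])
                       for c in X.T]).T
        relevance = mutual_info_classif(X, y)      # MI with the target
        selected = [int(np.argmax(relevance))]
        rest = [j for j in range(X.shape[1]) if j != selected[0]]
        while len(selected) < k and rest:
            def score(j):
                red = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                               for s in selected])
                return relevance[j] - red
            best = max(rest, key=score)
            selected.append(best)
            rest.remove(best)
        return selected
    ```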

  20. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model, the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species, and the models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.